Merge lp:~vishvananda/nova/volume-cleanup-2 into lp:~hudson-openstack/nova/trunk

Proposed by Vish Ishaya
Status: Needs review
Proposed branch: lp:~vishvananda/nova/volume-cleanup-2
Merge into: lp:~hudson-openstack/nova/trunk
Prerequisite: lp:~mcgrue/nova/volume-cleanup
Diff against target: 3176 lines (+1006/-933)
36 files modified
Authors (+1/-0)
bin/nova-manage (+2/-3)
doc/source/runnova/getting.started.rst (+0/-1)
nova/compute/api.py (+3/-2)
nova/compute/manager.py (+146/-82)
nova/compute/utils.py (+0/-29)
nova/db/api.py (+1/-35)
nova/db/sqlalchemy/api.py (+5/-55)
nova/db/sqlalchemy/migrate_repo/versions/048_kill_export_devices.py (+51/-0)
nova/db/sqlalchemy/migrate_repo/versions/049_add_connection_info_to_block_device_mapping.py (+35/-0)
nova/db/sqlalchemy/models.py (+2/-15)
nova/exception.py (+4/-4)
nova/rpc/common.py (+4/-5)
nova/tests/api/ec2/test_cloud.py (+11/-10)
nova/tests/fake_flags.py (+0/-4)
nova/tests/integrated/test_volumes.py (+5/-5)
nova/tests/scheduler/test_scheduler.py (+3/-2)
nova/tests/test_compute.py (+95/-227)
nova/tests/test_libvirt.py (+127/-13)
nova/tests/test_virt_drivers.py (+5/-3)
nova/tests/test_volume.py (+2/-80)
nova/tests/test_xenapi.py (+20/-4)
nova/virt/driver.py (+6/-5)
nova/virt/fake.py (+19/-4)
nova/virt/hyperv.py (+4/-3)
nova/virt/libvirt.xml.template (+7/-15)
nova/virt/libvirt/connection.py (+91/-48)
nova/virt/libvirt/volume.py (+149/-0)
nova/virt/vmwareapi_conn.py (+4/-3)
nova/virt/xenapi/volume_utils.py (+8/-7)
nova/virt/xenapi/volumeops.py (+7/-4)
nova/virt/xenapi_conn.py (+10/-7)
nova/volume/api.py (+40/-4)
nova/volume/driver.py (+110/-221)
nova/volume/manager.py (+29/-30)
nova/volume/san.py (+0/-3)
To merge this branch: bzr merge lp:~vishvananda/nova/volume-cleanup-2
Reviewer Review Type Date Requested Status
Thierry Carrez (community) ffe Abstain
Christopher MacGown (community) Needs Resubmitting
Review via email: mp+72270@code.launchpad.net

Description of the change

This is an initial proposal just to get on the radar and potentially start collecting feedback. I'm trying to decouple the interactions between compute and volume and to allow new drivers to be written for each hypervisor. This code is not expected to run or pass tests yet. The goal is to allow volumes to be used generically and to easily support other services like Lunr and VSA. I'm still cleaning it up, but here is the current progress:

 * Removes discover_volume and undiscover_volume
 * Implements a generic driver model for libvirt volume attachment (the same could be done for Xen as well, but right now it only supports iSCSI)
 * Adds initialize_connection and terminate_connection, which prepare a volume to be attached from another machine (a sketch of the idea follows below)

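For context while reading the preview diff: compute no longer discovers volumes itself. It asks the volume API to initialize_connection for this host, stores the returned connection info in the block device mapping, and hands that info to a per-transport handler in the virt layer, selected by the connection type. Below is a minimal standalone sketch of that dispatch idea; the class names and dict keys are illustrative assumptions, not the exact code in nova/virt/libvirt/volume.py (VolumeDriverNotFound, though, is the real exception the branch adds):

    # Hypothetical sketch only: the real handlers live in
    # nova/virt/libvirt/volume.py; these names are illustrative.
    class VolumeDriverNotFound(Exception):
        """No handler is registered for this connection type."""
        pass


    class ISCSIVolumeDriver(object):
        def connect_volume(self, data):
            # A real driver would log in to the target (e.g. with iscsiadm)
            # and return the local device path for the hypervisor to attach.
            return '/dev/disk/by-path/ip-%s-iscsi-%s' % (data['target_portal'],
                                                         data['target_iqn'])

        def disconnect_volume(self, data):
            # A real driver would log out of the target here.
            pass


    # connection_info['driver_volume_type'] picks the handler, so supporting
    # a new transport (rbd, sheepdog, ...) is just a new entry here.
    VOLUME_DRIVERS = {'iscsi': ISCSIVolumeDriver()}


    def attach(connection_info):
        driver_type = connection_info['driver_volume_type']
        if driver_type not in VOLUME_DRIVERS:
            raise VolumeDriverNotFound(driver_type)
        return VOLUME_DRIVERS[driver_type].connect_volume(
            connection_info['data'])


    if __name__ == '__main__':
        info = {'driver_volume_type': 'iscsi',
                'data': {'target_portal': '10.0.0.1:3260',
                         'target_iqn': 'iqn.2011-09.org.example:volume-1'}}
        print attach(info)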
Revision history for this message
Thierry Carrez (ttx) wrote :

This should wait for Essex, based on the meeting we had on 2011-08-23

review: Disapprove
Revision history for this message
Thierry Carrez (ttx) :
review: Disapprove (ffe)
Revision history for this message
Christopher MacGown (0x44) wrote :

If the FFE is refused, we should put this back into WIP.

review: Needs Resubmitting
Revision history for this message
Thierry Carrez (ttx) wrote :

Essex is open

review: Abstain (ffe)

Unmerged revisions

1402. By Vish Ishaya

make it work when we are on the same host

1401. By Vish Ishaya

use tuples for login and logout

1400. By Vish Ishaya

renumber migrations

1399. By Vish Ishaya

merged trunk

1398. By Vish Ishaya

fix rescan and messed up permissions

1397. By Vish Ishaya

compare as string instead of converting to int

1396. By Vish Ishaya

fix integrated attach volume test

1395. By Vish Ishaya

fix scheduler test

1394. By Vish Ishaya

changes from volume api

1393. By Vish Ishaya

pull in changes from manager

Preview Diff

=== modified file 'Authors'
--- Authors 2011-09-20 03:21:10 +0000
+++ Authors 2011-09-20 16:57:39 +0000
@@ -10,6 +10,7 @@
 Antony Messerli <ant@openstack.org>
 Armando Migliaccio <Armando.Migliaccio@eu.citrix.com>
 Arvind Somya <asomya@cisco.com>
+Ben McGraw <ben@pistoncloud.com>
 Bilal Akhtar <bilalakhtar@ubuntu.com>
 Brad Hall <brad@nicira.com>
 Brad McConnell <bmcconne@rackspace.com>
=== modified file 'bin/nova-manage'
--- bin/nova-manage 2011-09-20 06:50:27 +0000
+++ bin/nova-manage 2011-09-20 16:57:39 +0000
@@ -962,9 +962,8 @@
             msg = _('Only KVM and QEmu are supported for now. Sorry!')
             raise exception.Error(msg)

-        if (FLAGS.volume_driver != 'nova.volume.driver.AOEDriver' and \
-                FLAGS.volume_driver != 'nova.volume.driver.ISCSIDriver'):
-            msg = _("Support only AOEDriver and ISCSIDriver. Sorry!")
+        if FLAGS.volume_driver != 'nova.volume.driver.ISCSIDriver':
+            msg = _("Support only ISCSIDriver. Sorry!")
             raise exception.Error(msg)

         rpc.call(ctxt,
=== modified file 'bin/nova-spoolsentry' (properties changed: -x to +x)
=== modified file 'builddeb.sh' (properties changed: +x to -x)
=== modified file 'contrib/nova.sh' (properties changed: +x to -x)
=== modified file 'doc/find_autodoc_modules.sh' (properties changed: +x to -x)
=== modified file 'doc/generate_autodoc_index.sh' (properties changed: +x to -x)
=== modified file 'doc/source/image_src/zones_distsched_illustrations.odp' (properties changed: +x to -x)
=== modified file 'doc/source/images/nova.compute.api.create.png' (properties changed: +x to -x)
=== modified file 'doc/source/images/nova.compute.api.create_all_at_once.png' (properties changed: +x to -x)
=== modified file 'doc/source/images/zone_aware_overview.png' (properties changed: +x to -x)
=== modified file 'doc/source/images/zone_overview.png' (properties changed: +x to -x)
=== modified file 'doc/source/runnova/getting.started.rst'
--- doc/source/runnova/getting.started.rst 2011-02-21 20:30:20 +0000
+++ doc/source/runnova/getting.started.rst 2011-09-20 16:57:39 +0000
@@ -73,7 +73,6 @@
 * dnsmasq
 * vlan
 * open-iscsi and iscsitarget (if you use iscsi volumes)
-* aoetools and vblade-persist (if you use aoe-volumes)

 Nova uses cutting-edge versions of many packages. There are ubuntu packages in
 the nova-core trunk ppa. You can use add this ppa to your sources list on an
=== modified file 'nova/CA/geninter.sh' (properties changed: +x to -x)
=== modified file 'nova/CA/genrootca.sh' (properties changed: +x to -x)
=== modified file 'nova/CA/genvpn.sh' (properties changed: +x to -x)
=== modified file 'nova/auth/opendj.sh' (properties changed: +x to -x)
=== modified file 'nova/auth/slap.sh' (properties changed: +x to -x)
=== modified file 'nova/cloudpipe/bootscript.template' (properties changed: +x to -x)
=== modified file 'nova/compute/api.py'
--- nova/compute/api.py 2011-09-19 21:53:17 +0000
+++ nova/compute/api.py 2011-09-20 16:57:39 +0000
@@ -37,7 +37,6 @@
 from nova.compute import power_state
 from nova.compute import task_states
 from nova.compute import vm_states
-from nova.compute.utils import terminate_volumes
 from nova.scheduler import api as scheduler_api
 from nova.db import base

@@ -770,7 +769,9 @@
             self._cast_compute_message('terminate_instance', context,
                                        instance_id, host)
         else:
-            terminate_volumes(self.db, context, instance_id)
+            for bdm in self.db.block_device_mapping_get_all_by_instance(
+                    context, instance_id):
+                self.db.block_device_mapping_destroy(context, bdm['id'])
             self.db.instance_destroy(context, instance_id)

     @scheduler_api.reroute_compute("stop")
=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py 2011-09-19 15:25:00 +0000
+++ nova/compute/manager.py 2011-09-20 16:57:39 +0000
@@ -30,8 +30,6 @@
 :instances_path:  Where instances are kept on disk
 :compute_driver:  Name of class that is used to handle virtualization, loaded
                   by :func:`nova.utils.import_object`
-:volume_manager:  Name of class that handles persistent storage, loaded by
-                  :func:`nova.utils.import_object`

 """

@@ -59,7 +57,6 @@
 from nova.compute import task_states
 from nova.compute import vm_states
 from nova.notifier import api as notifier
-from nova.compute.utils import terminate_volumes
 from nova.virt import driver


@@ -144,7 +141,6 @@

         self.network_api = network.API()
         self.network_manager = utils.import_object(FLAGS.network_manager)
-        self.volume_manager = utils.import_object(FLAGS.volume_manager)
         self._last_host_check = 0
         super(ComputeManager, self).__init__(service_name="compute",
                                              *args, **kwargs)
@@ -282,8 +278,8 @@
             if not ((bdm['snapshot_id'] is None) or
                     (bdm['volume_id'] is not None)):
                 LOG.error(_('corrupted state of block device mapping '
-                            'id: %(id)s '
-                            'snapshot: %(snapshot_id) volume: %(vollume_id)') %
+                            'id: %(id)s snapshot: %(snapshot_id)s '
+                            'volume: %(volume_id)s') %
                           {'id': bdm['id'],
                            'snapshot_id': bdm['snapshot'],
                            'volume_id': bdm['volume_id']})
@@ -293,10 +289,13 @@
             if bdm['volume_id'] is not None:
                 volume_api.check_attach(context,
                                         volume_id=bdm['volume_id'])
-                dev_path = self._attach_volume_boot(context, instance_id,
-                                                    bdm['volume_id'],
-                                                    bdm['device_name'])
-                block_device_mapping.append({'device_path': dev_path,
+                cinfo = self._attach_volume_boot(context, instance_id,
+                                                 bdm['volume_id'],
+                                                 bdm['device_name'])
+                self.db.block_device_mapping_update(
+                        context, bdm['id'],
+                        {'connection_info': utils.dumps(cinfo)})
+                block_device_mapping.append({'connection_info': cinfo,
                                              'mount_device':
                                              bdm['device_name']})

@@ -450,6 +449,23 @@
         # be fixed once we have no-db-messaging
         pass

+    def _get_instance_volume_bdms(self, context, instance_id):
+        bdms = self.db.block_device_mapping_get_all_by_instance(context,
+                                                                instance_id)
+        return [bdm for bdm in bdms if bdm['volume_id']]
+
+    def _get_instance_volume_block_device_info(self, context, instance_id):
+        bdms = self._get_instance_volume_bdms(context, instance_id)
+        block_device_mapping = []
+        for bdm in bdms:
+            cinfo = utils.loads(bdm['connection_info'])
+            block_device_mapping.append({'connection_info': cinfo,
+                                         'mount_device':
+                                         bdm['device_name']})
+        ## NOTE(vish): The mapping is passed in so the driver can disconnect
+        ## from remote volumes if necessary
+        return {'block_device_mapping': block_device_mapping}
+
     @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
     def run_instance(self, context, instance_id, **kwargs):
         self._run_instance(context, instance_id, **kwargs)
@@ -460,9 +476,11 @@
         """Starting an instance on this host."""
         # TODO(yamahata): injected_files isn't supported.
         #                 Anyway OSAPI doesn't support stop/start yet
+        # FIXME(vish): I've kept the files during stop instance, but
+        #              I think start will fail due to the files still
         self._run_instance(context, instance_id)

-    def _shutdown_instance(self, context, instance_id, action_str):
+    def _shutdown_instance(self, context, instance_id, action_str, cleanup):
         """Shutdown an instance on this host."""
         context = context.elevated()
         instance = self.db.instance_get(context, instance_id)
@@ -474,24 +492,37 @@
         if not FLAGS.stub_network:
             self.network_api.deallocate_for_instance(context, instance)

-        volumes = instance.get('volumes') or []
-        for volume in volumes:
-            self._detach_volume(context, instance_id, volume['id'], False)
+        for bdm in self._get_instance_volume_bdms(context, instance_id):
+            volume_id = bdm['volume_id']
+            try:
+                self._detach_volume(context, instance_id, volume_id)
+            except exception.DiskNotFound as exc:
+                LOG.warn(_("Ignoring DiskNotFound: %s") % exc)

         if instance['power_state'] == power_state.SHUTOFF:
             self.db.instance_destroy(context, instance_id)
             raise exception.Error(_('trying to destroy already destroyed'
                                     ' instance: %s') % instance_id)
-        self.driver.destroy(instance, network_info)
+        block_device_info = self._get_instance_volume_block_device_info(
+            context, instance_id)
+        self.driver.destroy(instance, network_info, block_device_info, cleanup)

-        if action_str == 'Terminating':
-            terminate_volumes(self.db, context, instance_id)
+    def _cleanup_volumes(self, context, instance_id):
+        volume_api = volume.API()
+        bdms = self.db.block_device_mapping_get_all_by_instance(context,
+                                                                instance_id)
+        for bdm in bdms:
+            LOG.debug(_("terminating bdm %s") % bdm)
+            if bdm['volume_id'] and bdm['delete_on_termination']:
+                volume_api.delete(context, bdm['volume_id'])
+        # NOTE(vish): bdms will be deleted on instance destroy

     @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
     @checks_instance_lock
     def terminate_instance(self, context, instance_id):
         """Terminate an instance on this host."""
-        self._shutdown_instance(context, instance_id, 'Terminating')
+        self._shutdown_instance(context, instance_id, 'Terminating', True)
+        self._cleanup_volumes(context, instance_id)
         instance = self.db.instance_get(context.elevated(), instance_id)
         self._instance_update(context,
                               instance_id,
@@ -510,7 +541,11 @@
     @checks_instance_lock
     def stop_instance(self, context, instance_id):
         """Stopping an instance on this host."""
-        self._shutdown_instance(context, instance_id, 'Stopping')
+        # FIXME(vish): I've kept the files during stop instance, but
+        #              I think start will fail due to the files still
+        #              existing. I don't really know what the purpose of
+        #              stop and start are when compared to pause and unpause
+        self._shutdown_instance(context, instance_id, 'Stopping', False)
         self._instance_update(context,
                               instance_id,
                               vm_state=vm_states.STOPPED,
@@ -558,7 +593,6 @@
                               instance_id,
                               vm_state=vm_states.REBUILDING,
                               task_state=task_states.SPAWNING)
-
         # pull in new password here since the original password isn't in the db
         instance_ref.admin_pass = kwargs.get('new_pass',
                 utils.generate_password(FLAGS.password_length))
@@ -1226,17 +1260,17 @@
         """Attach a volume to an instance at boot time. So actual attach
         is done by instance creation"""

-        # TODO(yamahata):
-        # should move check_attach to volume manager?
-        volume.API().check_attach(context, volume_id)
-
         context = context.elevated()
         LOG.audit(_("instance %(instance_id)s: booting with "
                     "volume %(volume_id)s at %(mountpoint)s") %
                   locals(), context=context)
-        dev_path = self.volume_manager.setup_compute_volume(context, volume_id)
-        self.db.volume_attached(context, volume_id, instance_id, mountpoint)
-        return dev_path
+        address = FLAGS.my_ip
+        volume_api = volume.API()
+        connection_info = volume_api.initialize_connection(context,
+                                                           volume_id,
+                                                           address)
+        volume_api.attach(context, volume_id, instance_id, mountpoint)
+        return connection_info

     @checks_instance_lock
     def attach_volume(self, context, instance_id, volume_id, mountpoint):
@@ -1245,56 +1279,73 @@
         instance_ref = self.db.instance_get(context, instance_id)
         LOG.audit(_("instance %(instance_id)s: attaching volume %(volume_id)s"
                     " to %(mountpoint)s") % locals(), context=context)
-        dev_path = self.volume_manager.setup_compute_volume(context,
-                                                            volume_id)
+        volume_api = volume.API()
+        address = FLAGS.my_ip
+        connection_info = volume_api.initialize_connection(context,
+                                                           volume_id,
+                                                           address)
         try:
-            self.driver.attach_volume(instance_ref['name'],
-                                      dev_path,
+            self.driver.attach_volume(connection_info,
+                                      instance_ref['name'],
                                       mountpoint)
-            self.db.volume_attached(context,
-                                    volume_id,
-                                    instance_id,
-                                    mountpoint)
-            values = {
-                'instance_id': instance_id,
-                'device_name': mountpoint,
-                'delete_on_termination': False,
-                'virtual_name': None,
-                'snapshot_id': None,
-                'volume_id': volume_id,
-                'volume_size': None,
-                'no_device': None}
-            self.db.block_device_mapping_create(context, values)
-        except Exception as exc:  # pylint: disable=W0702
+        except Exception:  # pylint: disable=W0702
+            exc = sys.exc_info()
             # NOTE(vish): The inline callback eats the exception info so we
             #             log the traceback here and reraise the same
             #             ecxception below.
             LOG.exception(_("instance %(instance_id)s: attach failed"
                     " %(mountpoint)s, removing") % locals(), context=context)
-            self.volume_manager.remove_compute_volume(context,
-                                                      volume_id)
+            volume_api.terminate_connection(context, volume_id, address)
             raise exc

+        volume_api.attach(context, volume_id, instance_id, mountpoint)
+        values = {
+            'instance_id': instance_id,
+            'connection_info': utils.dumps(connection_info),
+            'device_name': mountpoint,
+            'delete_on_termination': False,
+            'virtual_name': None,
+            'snapshot_id': None,
+            'volume_id': volume_id,
+            'volume_size': None,
+            'no_device': None}
+        self.db.block_device_mapping_create(context, values)
         return True

     @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
     @checks_instance_lock
-    def _detach_volume(self, context, instance_id, volume_id, destroy_bdm):
+    def _detach_volume(self, context, instance_id, volume_id,
+                       destroy_bdm=False, mark_detached=True,
+                       force_detach=False):
         """Detach a volume from an instance."""
         context = context.elevated()
         instance_ref = self.db.instance_get(context, instance_id)
-        volume_ref = self.db.volume_get(context, volume_id)
-        mp = volume_ref['mountpoint']
+        bdms = self.db.block_device_mapping_get_all_by_instance(
+            context, instance_id)
+        for item in bdms:
+            # NOTE(vish): Comparing as strings because the os_api doesn't
+            #             convert to integer and we may wish to support uuids
+            #             in the future.
+            if str(item['volume_id']) == str(volume_id):
+                bdm = item
+                break
+        mp = bdm['device_name']
+
         LOG.audit(_("Detach volume %(volume_id)s from mountpoint %(mp)s"
                   " on instance %(instance_id)s") % locals(), context=context)
-        if instance_ref['name'] not in self.driver.list_instances():
+        volume_api = volume.API()
+        if (instance_ref['name'] not in self.driver.list_instances() and
+                not force_detach):
             LOG.warn(_("Detaching volume from unknown instance %s"),
                      instance_id, context=context)
         else:
-            self.driver.detach_volume(instance_ref['name'],
-                                      volume_ref['mountpoint'])
-        self.volume_manager.remove_compute_volume(context, volume_id)
-        self.db.volume_detached(context, volume_id)
+            self.driver.detach_volume(utils.loads(bdm['connection_info']),
+                                      instance_ref['name'],
+                                      bdm['device_name'])
+        address = FLAGS.my_ip
+        volume_api.terminate_connection(context, volume_id, address)
+        if mark_detached:
+            volume_api.detach(context, volume_id)
         if destroy_bdm:
             self.db.block_device_mapping_destroy_by_instance_and_volume(
                 context, instance_id, volume_id)
@@ -1304,13 +1355,17 @@
         """Detach a volume from an instance."""
         return self._detach_volume(context, instance_id, volume_id, True)

-    def remove_volume(self, context, volume_id):
-        """Remove volume on compute host.
-
-        :param context: security context
-        :param volume_id: volume ID
-        """
-        self.volume_manager.remove_compute_volume(context, volume_id)
+    @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
+    def remove_volume_connection(self, context, instance_id, volume_id):
+        """Detach a volume from an instance.,"""
+        # NOTE(vish): We don't want to actually mark the volume
+        #             detached, or delete the bdm, just remove the
+        #             connection from this host.
+        try:
+            self._detach_volume(context, instance_id, volume_id,
+                                False, False, True)
+        except exception.NotFound:
+            pass

     @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id())
     def compare_cpu(self, context, cpu_info):
@@ -1393,14 +1448,14 @@

         # Getting instance info
         instance_ref = self.db.instance_get(context, instance_id)
-        hostname = instance_ref['hostname']

         # If any volume is mounted, prepare here.
-        if not instance_ref['volumes']:
-            LOG.info(_("%s has no volume."), hostname)
-        else:
-            for v in instance_ref['volumes']:
-                self.volume_manager.setup_compute_volume(context, v['id'])
+        block_device_info = \
+            self._get_instance_volume_block_device_info(context, instance_id)
+        if not block_device_info['block_device_mapping']:
+            LOG.info(_("%s has no volume."), instance_ref.name)
+
+        self.driver.pre_live_migration(block_device_info)

         # Bridge settings.
         # Call this method prior to ensure_filtering_rules_for_instance,
@@ -1436,7 +1491,7 @@
         # In addition, this method is creating filtering rule
         # onto destination host.
         self.driver.ensure_filtering_rules_for_instance(instance_ref,
-                                                        network_info)
+                                                         network_info)

         # Preparation for block migration
         if block_migration:
@@ -1460,7 +1515,7 @@
         try:
             # Checking volume node is working correctly when any volumes
             # are attached to instances.
-            if instance_ref['volumes']:
+            if self._get_instance_volume_bdms(context, instance_id):
                 rpc.call(context,
                          FLAGS.volume_topic,
                          {"method": "check_for_export",
@@ -1480,12 +1535,13 @@
                                  'disk': disk}})

         except Exception:
+            exc = sys.exc_info()
             i_name = instance_ref.name
             msg = _("Pre live migration for %(i_name)s failed at %(dest)s")
-            LOG.error(msg % locals())
+            LOG.exception(msg % locals())
             self.rollback_live_migration(context, instance_ref,
                                          dest, block_migration)
-            raise
+            raise exc

         # Executing live migration
         # live_migration might raises exceptions, but
@@ -1513,11 +1569,12 @@
         instance_id = instance_ref['id']

         # Detaching volumes.
-        try:
-            for vol in self.db.volume_get_all_by_instance(ctxt, instance_id):
-                self.volume_manager.remove_compute_volume(ctxt, vol['id'])
-        except exception.NotFound:
-            pass
+        for bdm in self._get_instance_volume_bdms(ctxt, instance_id):
+            # NOTE(vish): We don't want to actually mark the volume
+            #             detached, or delete the bdm, just remove the
+            #             connection from this host.
+            self.remove_volume_connection(ctxt, instance_id,
+                                          bdm['volume_id'])

         # Releasing vlan.
         # (not necessary in current implementation?)
@@ -1616,10 +1673,11 @@
                               vm_state=vm_states.ACTIVE,
                               task_state=None)

-        for volume_ref in instance_ref['volumes']:
-            volume_id = volume_ref['id']
+        for bdm in self._get_instance_volume_bdms(context, instance_ref['id']):
+            volume_id = bdm['volume_id']
             self.db.volume_update(context, volume_id, {'status': 'in-use'})
-            volume.API().remove_from_compute(context, volume_id, dest)
+            volume.API().remove_from_compute(context, instance_ref['id'],
+                                             volume_id, dest)

         # Block migration needs empty image at destination host
         # before migration starts, so if any failure occurs,
@@ -1636,9 +1694,15 @@
         :param context: security context
         :param instance_id: nova.db.sqlalchemy.models.Instance.Id
         """
-        instances_ref = self.db.instance_get(context, instance_id)
-        network_info = self._get_instance_nw_info(context, instances_ref)
-        self.driver.destroy(instances_ref, network_info)
+        instance_ref = self.db.instance_get(context, instance_id)
+        network_info = self._get_instance_nw_info(context, instance_ref)
+
+        # NOTE(vish): The mapping is passed in so the driver can disconnect
+        #             from remote volumes if necessary
+        block_device_info = \
+            self._get_instance_volume_block_device_info(context, instance_id)
+        instance = instance_ref['name']
+        self.driver.destroy(instance, network_info, block_device_info, True)

     def periodic_tasks(self, context=None):
         """Tasks to be run at a periodic interval."""
=== removed file 'nova/compute/utils.py'
--- nova/compute/utils.py 2011-06-15 15:32:03 +0000
+++ nova/compute/utils.py 1970-01-01 00:00:00 +0000
@@ -1,29 +0,0 @@
-# vim: tabstop=4 shiftwidth=4 softtabstop=4
-
-# Copyright (c) 2011 VA Linux Systems Japan K.K
-# Copyright (c) 2011 Isaku Yamahata
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from nova import volume
-
-
-def terminate_volumes(db, context, instance_id):
-    """delete volumes of delete_on_termination=True in block device mapping"""
-    volume_api = volume.API()
-    for bdm in db.block_device_mapping_get_all_by_instance(context,
-                                                           instance_id):
-        #LOG.debug(_("terminating bdm %s") % bdm)
-        if bdm['volume_id'] and bdm['delete_on_termination']:
-            volume_api.delete(context, bdm['volume_id'])
-        db.block_device_mapping_destroy(context, bdm['id'])
=== modified file 'nova/db/api.py'
--- nova/db/api.py 2011-09-19 22:32:45 +0000
+++ nova/db/api.py 2011-09-20 16:57:39 +0000
@@ -56,18 +56,13 @@
                   sqlalchemy='nova.db.sqlalchemy.api')


-class NoMoreBlades(exception.Error):
-    """No more available blades."""
-    pass
-
-
 class NoMoreNetworks(exception.Error):
     """No more available networks."""
     pass


 class NoMoreTargets(exception.Error):
-    """No more available blades"""
+    """No more available targets"""
     pass


@@ -804,25 +799,6 @@
 ###################


-def export_device_count(context):
-    """Return count of export devices."""
-    return IMPL.export_device_count(context)
-
-
-def export_device_create_safe(context, values):
-    """Create an export_device from the values dictionary.
-
-    The device is not returned. If the create violates the unique
-    constraints because the shelf_id and blade_id already exist,
-    no exception is raised.
-
-    """
-    return IMPL.export_device_create_safe(context, values)
-
-
-###################
-
-
 def iscsi_target_count_by_host(context, host):
     """Return count of export devices."""
     return IMPL.iscsi_target_count_by_host(context, host)
@@ -898,11 +874,6 @@
 ###################


-def volume_allocate_shelf_and_blade(context, volume_id):
-    """Atomically allocate a free shelf and blade from the pool."""
-    return IMPL.volume_allocate_shelf_and_blade(context, volume_id)
-
-
 def volume_allocate_iscsi_target(context, volume_id, host):
     """Atomically allocate a free iscsi_target from the pool."""
     return IMPL.volume_allocate_iscsi_target(context, volume_id, host)
@@ -968,11 +939,6 @@
     return IMPL.volume_get_instance(context, volume_id)


-def volume_get_shelf_and_blade(context, volume_id):
-    """Get the shelf and blade allocated to the volume."""
-    return IMPL.volume_get_shelf_and_blade(context, volume_id)
-
-
 def volume_get_iscsi_target_num(context, volume_id):
     """Get the target num (tid) allocated to the volume."""
     return IMPL.volume_get_iscsi_target_num(context, volume_id)
=== modified file 'nova/db/sqlalchemy/api.py'
--- nova/db/sqlalchemy/api.py 2011-09-19 22:32:45 +0000
+++ nova/db/sqlalchemy/api.py 2011-09-20 16:57:39 +0000
@@ -1127,6 +1127,11 @@
             update({'deleted': True,
                     'deleted_at': utils.utcnow(),
                     'updated_at': literal_column('updated_at')})
+    session.query(models.BlockDeviceMapping).\
+            filter_by(instance_id=instance_id).\
+            update({'deleted': True,
+                    'deleted_at': utils.utcnow(),
+                    'updated_at': literal_column('updated_at')})


 @require_context
@@ -1954,28 +1959,6 @@


 @require_admin_context
-def export_device_count(context):
-    session = get_session()
-    return session.query(models.ExportDevice).\
-                   filter_by(deleted=can_read_deleted(context)).\
-                   count()
-
-
-@require_admin_context
-def export_device_create_safe(context, values):
-    export_device_ref = models.ExportDevice()
-    export_device_ref.update(values)
-    try:
-        export_device_ref.save()
-        return export_device_ref
-    except IntegrityError:
-        return None
-
-
-###################
-
-
-@require_admin_context
 def iscsi_target_count_by_host(context, host):
     session = get_session()
     return session.query(models.IscsiTarget).\
@@ -2111,24 +2094,6 @@


 @require_admin_context
-def volume_allocate_shelf_and_blade(context, volume_id):
-    session = get_session()
-    with session.begin():
-        export_device = session.query(models.ExportDevice).\
-                                filter_by(volume=None).\
-                                filter_by(deleted=False).\
-                                with_lockmode('update').\
-                                first()
-        # NOTE(vish): if with_lockmode isn't supported, as in sqlite,
-        #             then this has concurrency issues
-        if not export_device:
-            raise db.NoMoreBlades()
-        export_device.volume_id = volume_id
-        session.add(export_device)
-        return (export_device.shelf_id, export_device.blade_id)
-
-
-@require_admin_context
 def volume_allocate_iscsi_target(context, volume_id, host):
     session = get_session()
     with session.begin():
@@ -2194,9 +2159,6 @@
         update({'deleted': True,
                 'deleted_at': utils.utcnow(),
                 'updated_at': literal_column('updated_at')})
-    session.query(models.ExportDevice).\
-            filter_by(volume_id=volume_id).\
-            update({'volume_id': None})
     session.query(models.IscsiTarget).\
             filter_by(volume_id=volume_id).\
             update({'volume_id': None})
@@ -2316,18 +2278,6 @@


 @require_admin_context
-def volume_get_shelf_and_blade(context, volume_id):
-    session = get_session()
-    result = session.query(models.ExportDevice).\
-                     filter_by(volume_id=volume_id).\
-                     first()
-    if not result:
-        raise exception.ExportDeviceNotFoundForVolume(volume_id=volume_id)
-
-    return (result.shelf_id, result.blade_id)
-
-
-@require_admin_context
 def volume_get_iscsi_target_num(context, volume_id):
     session = get_session()
     result = session.query(models.IscsiTarget).\
=== added file 'nova/db/sqlalchemy/migrate_repo/versions/048_kill_export_devices.py'
--- nova/db/sqlalchemy/migrate_repo/versions/048_kill_export_devices.py 1970-01-01 00:00:00 +0000
+++ nova/db/sqlalchemy/migrate_repo/versions/048_kill_export_devices.py 2011-09-20 16:57:39 +0000
@@ -0,0 +1,51 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2011 University of Southern California
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from sqlalchemy import Boolean, Column, DateTime, ForeignKey, Integer
+from sqlalchemy import MetaData, String, Table
+from nova import log as logging
+
+meta = MetaData()
+
+# Table definition
+export_devices = Table('export_devices', meta,
+        Column('created_at', DateTime(timezone=False)),
+        Column('updated_at', DateTime(timezone=False)),
+        Column('deleted_at', DateTime(timezone=False)),
+        Column('deleted', Boolean(create_constraint=True, name=None)),
+        Column('id', Integer(), primary_key=True, nullable=False),
+        Column('shelf_id', Integer()),
+        Column('blade_id', Integer()),
+        Column('volume_id',
+               Integer(),
+               ForeignKey('volumes.id'),
+               nullable=True),
+        )
+
+
+def downgrade(migrate_engine):
+    meta.bind = migrate_engine
+    try:
+        export_devices.create()
+    except Exception:
+        logging.info(repr(export_devices))
+        logging.exception('Exception while creating table')
+        raise
+
+
+def upgrade(migrate_engine):
+    meta.bind = migrate_engine
+    export_devices.drop()
=== added file 'nova/db/sqlalchemy/migrate_repo/versions/049_add_connection_info_to_block_device_mapping.py'
--- nova/db/sqlalchemy/migrate_repo/versions/049_add_connection_info_to_block_device_mapping.py 1970-01-01 00:00:00 +0000
+++ nova/db/sqlalchemy/migrate_repo/versions/049_add_connection_info_to_block_device_mapping.py 2011-09-20 16:57:39 +0000
@@ -0,0 +1,35 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright 2011 OpenStack LLC.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.from sqlalchemy import *
+
+from sqlalchemy import Column, MetaData, Table, Text
+
+
+meta = MetaData()
+
+new_column = Column('connection_info', Text())
+
+
+def upgrade(migrate_engine):
+    meta.bind = migrate_engine
+    table = Table('block_device_mapping', meta, autoload=True)
+    table.create_column(new_column)
+
+
+def downgrade(migrate_engine):
+    meta.bind = migrate_engine
+    table = Table('block_device_mapping', meta, autoload=True)
+    table.c.connection_info.drop()
=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py 2011-09-14 15:19:03 +0000
+++ nova/db/sqlalchemy/models.py 2011-09-20 16:57:39 +0000
@@ -467,21 +467,8 @@
     # for no device to suppress devices.
     no_device = Column(Boolean, nullable=True)

-
-class ExportDevice(BASE, NovaBase):
-    """Represates a shelf and blade that a volume can be exported on."""
-    __tablename__ = 'export_devices'
-    __table_args__ = (schema.UniqueConstraint("shelf_id", "blade_id"),
-                      {'mysql_engine': 'InnoDB'})
-    id = Column(Integer, primary_key=True)
-    shelf_id = Column(Integer)
-    blade_id = Column(Integer)
-    volume_id = Column(Integer, ForeignKey('volumes.id'), nullable=True)
-    volume = relationship(Volume,
-                          backref=backref('export_device', uselist=False),
-                          foreign_keys=volume_id,
-                          primaryjoin='and_(ExportDevice.volume_id==Volume.id,'
-                                      'ExportDevice.deleted==False)')
+    # dur, it's information about the connection!
+    connection_info = Column(Text, nullable=True)


 class IscsiTarget(BASE, NovaBase):
=== modified file 'nova/exception.py'
--- nova/exception.py 2011-09-15 21:58:22 +0000
+++ nova/exception.py 2011-09-20 16:57:39 +0000
@@ -374,10 +374,6 @@
     message = _("deleting volume %(volume_name)s that has snapshot")


-class ExportDeviceNotFoundForVolume(NotFound):
-    message = _("No export device found for volume %(volume_id)s.")
-
-
 class ISCSITargetNotFoundForVolume(NotFound):
     message = _("No target id found for volume %(volume_id)s.")

@@ -386,6 +382,10 @@
     message = _("No disk at %(location)s")


+class VolumeDriverNotFound(NotFound):
+    message = _("Could not find a handler for %(driver_type)s volume.")
+
+
 class InvalidImageRef(Invalid):
     message = _("Invalid image href %(image_href)s.")

=== modified file 'nova/rpc/common.py'
--- nova/rpc/common.py 2011-08-29 02:22:53 +0000
+++ nova/rpc/common.py 2011-09-20 16:57:39 +0000
@@ -10,7 +10,7 @@
                   'Size of RPC connection pool')


-class RemoteError(exception.Error):
+class RemoteError(exception.NovaException):
     """Signifies that a remote class has raised an exception.

     Containes a string representation of the type of the original exception,
@@ -19,11 +19,10 @@
     contains all of the relevent info.

     """
+    message = _("Remote error: %(exc_type)s %(value)s\n%(traceback)s.")

-    def __init__(self, exc_type, value, traceback):
+    def __init__(self, exc_type=None, value=None, traceback=None):
         self.exc_type = exc_type
         self.value = value
         self.traceback = traceback
-        super(RemoteError, self).__init__('%s %s\n%s' % (exc_type,
-                                                         value,
-                                                         traceback))
+        super(RemoteError, self).__init__(**self.__dict__)
=== modified file 'nova/tests/api/ec2/test_cloud.py'
--- nova/tests/api/ec2/test_cloud.py 2011-09-16 15:17:34 +0000
+++ nova/tests/api/ec2/test_cloud.py 2011-09-20 16:57:39 +0000
@@ -1218,7 +1218,7 @@
             LOG.debug(info)
             if predicate(info):
                 break
-            greenthread.sleep(1)
+            greenthread.sleep(0.5)

     def _wait_for_running(self, instance_id):
         def is_running(info):
@@ -1237,6 +1237,16 @@
     def _wait_for_terminate(self, instance_id):
         def is_deleted(info):
             return info['deleted']
+        id = ec2utils.ec2_id_to_id(instance_id)
+        # NOTE(vish): Wait for InstanceNotFound, then verify that
+        #             the instance is actually deleted.
+        while True:
+            try:
+                self.cloud.compute_api.get(self.context, instance_id=id)
+            except exception.InstanceNotFound:
+                break
+            greenthread.sleep(0.1)
+
         elevated = self.context.elevated(read_deleted=True)
         self._wait_for_state(elevated, instance_id, is_deleted)

@@ -1252,26 +1262,21 @@

         # a running instance can't be started. It is just ignored.
         result = self.cloud.start_instances(self.context, [instance_id])
-        greenthread.sleep(0.3)
         self.assertTrue(result)

         result = self.cloud.stop_instances(self.context, [instance_id])
-        greenthread.sleep(0.3)
         self.assertTrue(result)
         self._wait_for_stopped(instance_id)

         result = self.cloud.start_instances(self.context, [instance_id])
-        greenthread.sleep(0.3)
         self.assertTrue(result)
         self._wait_for_running(instance_id)

         result = self.cloud.stop_instances(self.context, [instance_id])
-        greenthread.sleep(0.3)
         self.assertTrue(result)
         self._wait_for_stopped(instance_id)

         result = self.cloud.terminate_instances(self.context, [instance_id])
-        greenthread.sleep(0.3)
         self.assertTrue(result)

         self._restart_compute_service()
@@ -1483,24 +1488,20 @@
         self.assertTrue(vol2_id)

         self.cloud.terminate_instances(self.context, [ec2_instance_id])
-        greenthread.sleep(0.3)
         self._wait_for_terminate(ec2_instance_id)

-        greenthread.sleep(0.3)
         admin_ctxt = context.get_admin_context(read_deleted=False)
         vol = db.volume_get(admin_ctxt, vol1_id)
         self._assert_volume_detached(vol)
         self.assertFalse(vol['deleted'])
         db.volume_destroy(self.context, vol1_id)

-        greenthread.sleep(0.3)
         admin_ctxt = context.get_admin_context(read_deleted=True)
         vol = db.volume_get(admin_ctxt, vol2_id)
         self.assertTrue(vol['deleted'])

         for snapshot_id in (ec2_snapshot1_id, ec2_snapshot2_id):
             self.cloud.delete_snapshot(self.context, snapshot_id)
-            greenthread.sleep(0.3)
             db.volume_destroy(self.context, vol['id'])

     def test_create_image(self):
=== modified file 'nova/tests/fake_flags.py'
--- nova/tests/fake_flags.py 2011-07-27 16:44:14 +0000
+++ nova/tests/fake_flags.py 2011-09-20 16:57:39 +0000
@@ -33,11 +33,7 @@
 FLAGS['num_networks'].SetDefault(2)
 FLAGS['fake_network'].SetDefault(True)
 FLAGS['image_service'].SetDefault('nova.image.fake.FakeImageService')
-flags.DECLARE('num_shelves', 'nova.volume.driver')
-flags.DECLARE('blades_per_shelf', 'nova.volume.driver')
 flags.DECLARE('iscsi_num_targets', 'nova.volume.driver')
-FLAGS['num_shelves'].SetDefault(2)
-FLAGS['blades_per_shelf'].SetDefault(4)
 FLAGS['iscsi_num_targets'].SetDefault(8)
 FLAGS['verbose'].SetDefault(True)
 FLAGS['sqlite_db'].SetDefault("tests.sqlite")
=== modified file 'nova/tests/integrated/test_volumes.py'
--- nova/tests/integrated/test_volumes.py 2011-08-24 16:18:53 +0000
+++ nova/tests/integrated/test_volumes.py 2011-09-20 16:57:39 +0000
@@ -262,22 +262,22 @@

         LOG.debug("Logs: %s" % driver.LoggingVolumeDriver.all_logs())

-        # Discover_volume and undiscover_volume are called from compute
+        # prepare_attach and prepare_detach are called from compute
         # on attach/detach

         disco_moves = driver.LoggingVolumeDriver.logs_like(
-            'discover_volume',
+            'initialize_connection',
             id=volume_id)
-        LOG.debug("discover_volume actions: %s" % disco_moves)
+        LOG.debug("initialize_connection actions: %s" % disco_moves)

         self.assertEquals(1, len(disco_moves))
         disco_move = disco_moves[0]
         self.assertEquals(disco_move['id'], volume_id)

         last_days_of_disco_moves = driver.LoggingVolumeDriver.logs_like(
-            'undiscover_volume',
+            'terminate_connection',
             id=volume_id)
-        LOG.debug("undiscover_volume actions: %s" % last_days_of_disco_moves)
+        LOG.debug("terminate_connection actions: %s" % last_days_of_disco_moves)

         self.assertEquals(1, len(last_days_of_disco_moves))
         undisco_move = last_days_of_disco_moves[0]
=== modified file 'nova/tests/scheduler/test_scheduler.py'
--- nova/tests/scheduler/test_scheduler.py 2011-09-08 08:09:22 +0000
+++ nova/tests/scheduler/test_scheduler.py 2011-09-20 16:57:39 +0000
@@ -919,7 +919,8 @@
         rpc.call(mox.IgnoreArg(), mox.IgnoreArg(),
                  {"method": 'compare_cpu',
                   "args": {'cpu_info': s_ref2['compute_node'][0]['cpu_info']}}).\
-                 AndRaise(rpc.RemoteError("doesn't have compatibility to", "", ""))
+                 AndRaise(rpc.RemoteError(exception.InvalidCPUInfo,
+                                          exception.InvalidCPUInfo(reason='fake')))

         self.mox.ReplayAll()
         try:
@@ -928,7 +929,7 @@
                                               dest,
                                               False)
         except rpc.RemoteError, e:
-            c = (e.message.find(_("doesn't have compatibility to")) >= 0)
+            c = (e.exc_type == exception.InvalidCPUInfo)

         self.assertTrue(c)
         db.instance_destroy(self.context, instance_id)
=== modified file 'nova/tests/test_compute.py'
--- nova/tests/test_compute.py 2011-09-19 21:53:17 +0000
+++ nova/tests/test_compute.py 2011-09-20 16:57:39 +0000
@@ -20,6 +20,7 @@
20Tests For Compute20Tests For Compute
21"""21"""
2222
23import mox
23from nova import compute24from nova import compute
24from nova import context25from nova import context
25from nova import db26from nova import db
@@ -120,21 +121,6 @@
120 'project_id': self.project_id}121 'project_id': self.project_id}
121 return db.security_group_create(self.context, values)122 return db.security_group_create(self.context, values)
122123
123 def _get_dummy_instance(self):
124 """Get mock-return-value instance object
125 Use this when any testcase executed later than test_run_terminate
126 """
127 vol1 = models.Volume()
128 vol1['id'] = 1
129 vol2 = models.Volume()
130 vol2['id'] = 2
131 instance_ref = models.Instance()
132 instance_ref['id'] = 1
133 instance_ref['volumes'] = [vol1, vol2]
134 instance_ref['hostname'] = 'hostname-1'
135 instance_ref['host'] = 'dummy'
136 return instance_ref
137
138 def test_create_instance_defaults_display_name(self):124 def test_create_instance_defaults_display_name(self):
139 """Verify that an instance cannot be created without a display_name."""125 """Verify that an instance cannot be created without a display_name."""
140 cases = [dict(), dict(display_name=None)]126 cases = [dict(), dict(display_name=None)]
@@ -657,235 +643,123 @@
 
     def test_pre_live_migration_instance_has_no_fixed_ip(self):
         """Confirm raising exception if instance doesn't have fixed_ip."""
-        instance_ref = self._get_dummy_instance()
+        # creating instance testdata
+        instance_id = self._create_instance({'host': 'dummy'})
         c = context.get_admin_context()
-        i_id = instance_ref['id']
-
-        dbmock = self.mox.CreateMock(db)
-        dbmock.instance_get(c, i_id).AndReturn(instance_ref)
-
-        self.compute.db = dbmock
-        self.mox.ReplayAll()
-        self.assertRaises(exception.NotFound,
+        inst_ref = db.instance_get(c, instance_id)
+        topic = db.queue_get_for(c, FLAGS.compute_topic, inst_ref['host'])
+
+        # start test
+        self.assertRaises(exception.FixedIpNotFoundForInstance,
                           self.compute.pre_live_migration,
-                          c, instance_ref['id'], time=FakeTime())
+                          c, inst_ref['id'], time=FakeTime())
+        # cleanup
+        db.instance_destroy(c, instance_id)
 
-    def test_pre_live_migration_instance_has_volume(self):
+    def test_pre_live_migration_works_correctly(self):
         """Confirm setup_compute_volume is called when volume is mounted."""
-        def fake_nw_info(*args, **kwargs):
-            return [(0, {'ips':['dummy']})]
-
-        i_ref = self._get_dummy_instance()
-        c = context.get_admin_context()
-
-        self._setup_other_managers()
-        dbmock = self.mox.CreateMock(db)
-        volmock = self.mox.CreateMock(self.volume_manager)
-        drivermock = self.mox.CreateMock(self.compute_driver)
-
-        dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref)
-        for i in range(len(i_ref['volumes'])):
-            vid = i_ref['volumes'][i]['id']
-            volmock.setup_compute_volume(c, vid).InAnyOrder('g1')
-        drivermock.plug_vifs(i_ref, fake_nw_info())
-        drivermock.ensure_filtering_rules_for_instance(i_ref, fake_nw_info())
-
-        self.stubs.Set(self.compute, '_get_instance_nw_info', fake_nw_info)
-        self.compute.db = dbmock
-        self.compute.volume_manager = volmock
-        self.compute.driver = drivermock
-
-        self.mox.ReplayAll()
-        ret = self.compute.pre_live_migration(c, i_ref['id'])
-        self.assertEqual(ret, None)
-
-    def test_pre_live_migration_instance_has_no_volume(self):
-        """Confirm log meg when instance doesn't mount any volumes."""
-        def fake_nw_info(*args, **kwargs):
-            return [(0, {'ips':['dummy']})]
-
-        i_ref = self._get_dummy_instance()
-        i_ref['volumes'] = []
-        c = context.get_admin_context()
-
-        self._setup_other_managers()
-        dbmock = self.mox.CreateMock(db)
-        drivermock = self.mox.CreateMock(self.compute_driver)
-
-        dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref)
-        self.mox.StubOutWithMock(compute_manager.LOG, 'info')
-        compute_manager.LOG.info(_("%s has no volume."), i_ref['hostname'])
-        drivermock.plug_vifs(i_ref, fake_nw_info())
-        drivermock.ensure_filtering_rules_for_instance(i_ref, fake_nw_info())
-
-        self.stubs.Set(self.compute, '_get_instance_nw_info', fake_nw_info)
-        self.compute.db = dbmock
-        self.compute.driver = drivermock
-
-        self.mox.ReplayAll()
-        ret = self.compute.pre_live_migration(c, i_ref['id'], time=FakeTime())
-        self.assertEqual(ret, None)
-
-    def test_pre_live_migration_setup_compute_node_fail(self):
-        """Confirm operation setup_compute_network() fails.
-
-        It retries and raise exception when timeout exceeded.
-
-        """
-        def fake_nw_info(*args, **kwargs):
-            return [(0, {'ips':['dummy']})]
-
-        i_ref = self._get_dummy_instance()
-        c = context.get_admin_context()
-
-        self._setup_other_managers()
-        dbmock = self.mox.CreateMock(db)
-        netmock = self.mox.CreateMock(self.network_manager)
-        volmock = self.mox.CreateMock(self.volume_manager)
-        drivermock = self.mox.CreateMock(self.compute_driver)
-
-        dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref)
-        for i in range(len(i_ref['volumes'])):
-            volmock.setup_compute_volume(c, i_ref['volumes'][i]['id'])
-        for i in range(FLAGS.live_migration_retry_count):
-            drivermock.plug_vifs(i_ref, fake_nw_info()).\
-                       AndRaise(exception.ProcessExecutionError())
-
-        self.stubs.Set(self.compute, '_get_instance_nw_info', fake_nw_info)
-        self.compute.db = dbmock
-        self.compute.network_manager = netmock
-        self.compute.volume_manager = volmock
-        self.compute.driver = drivermock
-
-        self.mox.ReplayAll()
-        self.assertRaises(exception.ProcessExecutionError,
-                          self.compute.pre_live_migration,
-                          c, i_ref['id'], time=FakeTime())
-
-    def test_live_migration_works_correctly_with_volume(self):
-        """Confirm check_for_export to confirm volume health check."""
-        i_ref = self._get_dummy_instance()
-        c = context.get_admin_context()
-        topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host'])
-
-        dbmock = self.mox.CreateMock(db)
-        dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref)
-        self.mox.StubOutWithMock(rpc, 'call')
-        rpc.call(c, FLAGS.volume_topic, {"method": "check_for_export",
-                                         "args": {'instance_id': i_ref['id']}})
-        dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\
-                             AndReturn(topic)
-        rpc.call(c, topic, {"method": "pre_live_migration",
-                            "args": {'instance_id': i_ref['id'],
-                                     'block_migration': False,
-                                     'disk': None}})
-
-        self.mox.StubOutWithMock(self.compute.driver, 'live_migration')
-        self.compute.driver.live_migration(c, i_ref, i_ref['host'],
-                                           self.compute.post_live_migration,
-                                           self.compute.rollback_live_migration,
-                                           False)
-
-        self.compute.db = dbmock
-        self.mox.ReplayAll()
-        ret = self.compute.live_migration(c, i_ref['id'], i_ref['host'])
-        self.assertEqual(ret, None)
+        # creating instance testdata
+        instance_id = self._create_instance({'host': 'dummy'})
+        c = context.get_admin_context()
+        inst_ref = db.instance_get(c, instance_id)
+        topic = db.queue_get_for(c, FLAGS.compute_topic, inst_ref['host'])
+
+        # creating mocks
+        self.mox.StubOutWithMock(self.compute.db,
+                                 'instance_get_fixed_addresses')
+        self.compute.db.instance_get_fixed_addresses(c, instance_id
+                                                     ).AndReturn(['1.1.1.1'])
+        self.mox.StubOutWithMock(self.compute.driver, 'pre_live_migration')
+        self.compute.driver.pre_live_migration({'block_device_mapping': []})
+        self.mox.StubOutWithMock(self.compute.driver, 'plug_vifs')
+        self.compute.driver.plug_vifs(mox.IsA(inst_ref), [])
+        self.mox.StubOutWithMock(self.compute.driver,
+                                 'ensure_filtering_rules_for_instance')
+        self.compute.driver.ensure_filtering_rules_for_instance(
+            mox.IsA(inst_ref), [])
+
+        # start test
+        self.mox.ReplayAll()
+        ret = self.compute.pre_live_migration(c, inst_ref['id'])
+        self.assertEqual(ret, None)
+
+        # cleanup
+        db.instance_destroy(c, instance_id)
 
     def test_live_migration_dest_raises_exception(self):
         """Confirm exception when pre_live_migration fails."""
-        i_ref = self._get_dummy_instance()
+        # creating instance testdata
+        instance_id = self._create_instance({'host': 'dummy'})
         c = context.get_admin_context()
-        topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host'])
+        inst_ref = db.instance_get(c, instance_id)
+        topic = db.queue_get_for(c, FLAGS.compute_topic, inst_ref['host'])
+        # creating volume testdata
+        volume_id = 1
+        db.volume_create(c, {'id': volume_id})
+        values = {'instance_id': instance_id, 'device_name': '/dev/vdc',
+                  'delete_on_termination': False, 'volume_id': volume_id}
+        db.block_device_mapping_create(c, values)
 
-        dbmock = self.mox.CreateMock(db)
-        dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref)
+        # creating mocks
         self.mox.StubOutWithMock(rpc, 'call')
         rpc.call(c, FLAGS.volume_topic, {"method": "check_for_export",
-                                         "args": {'instance_id': i_ref['id']}})
-        dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\
-                             AndReturn(topic)
+                                         "args": {'instance_id': instance_id}})
         rpc.call(c, topic, {"method": "pre_live_migration",
-                            "args": {'instance_id': i_ref['id'],
-                                     'block_migration': False,
+                            "args": {'instance_id': instance_id,
+                                     'block_migration': True,
                                      'disk': None}}).\
-                            AndRaise(rpc.RemoteError('', '', ''))
-        dbmock.instance_update(c, i_ref['id'], {'vm_state': vm_states.ACTIVE,
-                                                'task_state': None,
-                                                'host': i_ref['host']})
-        for v in i_ref['volumes']:
-            dbmock.volume_update(c, v['id'], {'status': 'in-use'})
-            # mock for volume_api.remove_from_compute
-            rpc.call(c, topic, {"method": "remove_volume",
-                                "args": {'volume_id': v['id']}})
-
-        self.compute.db = dbmock
-        self.mox.ReplayAll()
-        self.assertRaises(rpc.RemoteError,
-                          self.compute.live_migration,
-                          c, i_ref['id'], i_ref['host'])
-
-    def test_live_migration_dest_raises_exception_no_volume(self):
-        """Same as above test(input pattern is different) """
-        i_ref = self._get_dummy_instance()
-        i_ref['volumes'] = []
-        c = context.get_admin_context()
-        topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host'])
-
-        dbmock = self.mox.CreateMock(db)
-        dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref)
-        dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\
-                             AndReturn(topic)
-        self.mox.StubOutWithMock(rpc, 'call')
-        rpc.call(c, topic, {"method": "pre_live_migration",
-                            "args": {'instance_id': i_ref['id'],
-                                     'block_migration': False,
-                                     'disk': None}}).\
-                            AndRaise(rpc.RemoteError('', '', ''))
-        dbmock.instance_update(c, i_ref['id'], {'vm_state': vm_states.ACTIVE,
-                                                'task_state': None,
-                                                'host': i_ref['host']})
-
-        self.compute.db = dbmock
-        self.mox.ReplayAll()
-        self.assertRaises(rpc.RemoteError,
-                          self.compute.live_migration,
-                          c, i_ref['id'], i_ref['host'])
-
-    def test_live_migration_works_correctly_no_volume(self):
+                            AndRaise(rpc.common.RemoteError('', '', ''))
+        # mocks for rollback
+        rpc.call(c, topic, {"method": "remove_volume_connection",
+                            "args": {'instance_id': instance_id,
+                                     'volume_id': volume_id}})
+        rpc.cast(c, topic, {"method": "rollback_live_migration_at_destination",
+                            "args": {'instance_id': inst_ref['id']}})
+
+        # start test
+        self.mox.ReplayAll()
+        self.assertRaises(rpc.RemoteError,
+                          self.compute.live_migration,
+                          c, instance_id, inst_ref['host'], True)
+
+        # cleanup
+        for bdms in db.block_device_mapping_get_all_by_instance(c, instance_id):
+            db.block_device_mapping_destroy(c, bdms['id'])
+        db.volume_destroy(c, volume_id)
+        db.instance_destroy(c, instance_id)
+
+    def test_live_migration_works_correctly(self):
         """Confirm live_migration() works as expected correctly."""
-        i_ref = self._get_dummy_instance()
-        i_ref['volumes'] = []
+        # creating instance testdata
+        instance_id = self._create_instance({'host': 'dummy'})
         c = context.get_admin_context()
-        topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host'])
+        inst_ref = db.instance_get(c, instance_id)
+        topic = db.queue_get_for(c, FLAGS.compute_topic, inst_ref['host'])
 
-        dbmock = self.mox.CreateMock(db)
-        dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref)
+        # create
         self.mox.StubOutWithMock(rpc, 'call')
-        dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\
-                             AndReturn(topic)
         rpc.call(c, topic, {"method": "pre_live_migration",
-                            "args": {'instance_id': i_ref['id'],
+                            "args": {'instance_id': instance_id,
                                      'block_migration': False,
                                      'disk': None}})
-        self.mox.StubOutWithMock(self.compute.driver, 'live_migration')
-        self.compute.driver.live_migration(c, i_ref, i_ref['host'],
-                                           self.compute.post_live_migration,
-                                           self.compute.rollback_live_migration,
-                                           False)
 
-        self.compute.db = dbmock
+        # start test
         self.mox.ReplayAll()
-        ret = self.compute.live_migration(c, i_ref['id'], i_ref['host'])
+        ret = self.compute.live_migration(c, inst_ref['id'], inst_ref['host'])
         self.assertEqual(ret, None)
 
+        # cleanup
+        db.instance_destroy(c, instance_id)
+
     def test_post_live_migration_working_correctly(self):
         """Confirm post_live_migration() works as expected correctly."""
         dest = 'desthost'
         flo_addr = '1.2.1.2'
 
-        # Preparing datas
+        # creating testdata
         c = context.get_admin_context()
-        instance_id = self._create_instance()
+        instance_id = self._create_instance({'state_description': 'migrating',
+                                             'state': power_state.PAUSED})
         i_ref = db.instance_get(c, instance_id)
         db.instance_update(c, i_ref['id'], {'vm_state': vm_states.MIGRATING,
                                             'power_state': power_state.PAUSED})
@@ -895,14 +769,8 @@
         fix_ref = db.fixed_ip_get_by_address(c, fix_addr)
         flo_ref = db.floating_ip_create(c, {'address': flo_addr,
                                             'fixed_ip_id': fix_ref['id']})
-        # reload is necessary before setting mocks
-        i_ref = db.instance_get(c, instance_id)
 
-        # Preparing mocks
-        self.mox.StubOutWithMock(self.compute.volume_manager,
-                                 'remove_compute_volume')
-        for v in i_ref['volumes']:
-            self.compute.volume_manager.remove_compute_volume(c, v['id'])
+        # creating mocks
         self.mox.StubOutWithMock(self.compute.driver, 'unfilter_instance')
         self.compute.driver.unfilter_instance(i_ref, [])
         self.mox.StubOutWithMock(rpc, 'call')
@@ -910,18 +778,18 @@
910 {"method": "post_live_migration_at_destination",778 {"method": "post_live_migration_at_destination",
911 "args": {'instance_id': i_ref['id'], 'block_migration': False}})779 "args": {'instance_id': i_ref['id'], 'block_migration': False}})
912780
913 # executing781 # start test
914 self.mox.ReplayAll()782 self.mox.ReplayAll()
915 ret = self.compute.post_live_migration(c, i_ref, dest)783 ret = self.compute.post_live_migration(c, i_ref, dest)
916784
917 # make sure every data is rewritten to dest785 # make sure every data is rewritten to destinatioin hostname.
918 i_ref = db.instance_get(c, i_ref['id'])786 i_ref = db.instance_get(c, i_ref['id'])
919 c1 = (i_ref['host'] == dest)787 c1 = (i_ref['host'] == dest)
920 flo_refs = db.floating_ip_get_all_by_host(c, dest)788 flo_refs = db.floating_ip_get_all_by_host(c, dest)
921 c2 = (len(flo_refs) != 0 and flo_refs[0]['address'] == flo_addr)789 c2 = (len(flo_refs) != 0 and flo_refs[0]['address'] == flo_addr)
922
923 # post operaton
924 self.assertTrue(c1 and c2)790 self.assertTrue(c1 and c2)
791
792 # cleanup
925 db.instance_destroy(c, instance_id)793 db.instance_destroy(c, instance_id)
926 db.volume_destroy(c, v_ref['id'])794 db.volume_destroy(c, v_ref['id'])
927 db.floating_ip_destroy(c, flo_addr)795 db.floating_ip_destroy(c, flo_addr)
928796
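
The rewritten compute tests all share one shape: real instance and block-device-mapping rows go into the test database, the manager call runs against them, and the rows are destroyed afterwards. A condensed sketch of that shape with an in-memory dict standing in for nova's db layer (all names below are illustrative, not nova's API):

    # toy stand-in for the block_device_mapping db calls used above
    _bdms = {}


    def bdm_create(instance_id, values):
        _bdms.setdefault(instance_id, []).append(values)


    def bdm_get_all_by_instance(instance_id):
        return _bdms.get(instance_id, [])


    def bdm_destroy_all(instance_id):
        _bdms.pop(instance_id, None)


    # creating testdata
    bdm_create(1, {'volume_id': 1, 'device_name': '/dev/vdc',
                   'delete_on_termination': False})
    # start test
    assert len(bdm_get_all_by_instance(1)) == 1
    # cleanup
    bdm_destroy_all(1)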
=== modified file 'nova/tests/test_libvirt.py'
--- nova/tests/test_libvirt.py 2011-09-19 14:22:34 +0000
+++ nova/tests/test_libvirt.py 2011-09-20 16:57:39 +0000
@@ -30,6 +30,7 @@
 from nova import db
 from nova import exception
 from nova import flags
+from nova import log as logging
 from nova import test
 from nova import utils
 from nova.api.ec2 import cloud
@@ -38,10 +39,13 @@
 from nova.virt import driver
 from nova.virt.libvirt import connection
 from nova.virt.libvirt import firewall
+from nova.virt.libvirt import volume
+from nova.volume import driver as volume_driver
 from nova.tests import fake_network
 
 libvirt = None
 FLAGS = flags.FLAGS
+LOG = logging.getLogger('nova.tests.test_libvirt')
 
 _fake_network_info = fake_network.fake_get_instance_nw_info
 _ipv4_like = fake_network.ipv4_like
@@ -87,6 +91,72 @@
         return self._fake_dom_xml
 
 
+class LibvirtVolumeTestCase(test.TestCase):
+
+    @staticmethod
+    def fake_execute(*cmd, **kwargs):
+        LOG.debug("FAKE EXECUTE: %s" % ' '.join(cmd))
+        return None, None
+
+    def setUp(self):
+        super(LibvirtVolumeTestCase, self).setUp()
+        self.stubs.Set(utils, 'execute', self.fake_execute)
+
+    def test_libvirt_iscsi_driver(self):
+        # NOTE(vish): exists is to make driver assume connecting worked
+        self.stubs.Set(os.path, 'exists', lambda x: True)
+        vol_driver = volume_driver.ISCSIDriver()
+        libvirt_driver = volume.LibvirtISCSIVolumeDriver('fake')
+        name = 'volume-00000001'
+        vol = {'id': 1,
+               'name': name,
+               'provider_auth': None,
+               'provider_location': '10.0.2.15:3260,fake '
+                                    'iqn.2010-10.org.openstack:volume-00000001'}
+        address = '127.0.0.1'
+        connection_info = vol_driver.initialize_connection(vol, address)
+        mount_device = "vde"
+        xml = libvirt_driver.connect_volume(connection_info, mount_device)
+        tree = xml_to_tree(xml)
+        dev_str = '/dev/disk/by-path/ip-10.0.2.15:3260-iscsi-iqn.' \
+                  '2010-10.org.openstack:%s-lun-0' % name
+        self.assertEqual(tree.get('type'), 'block')
+        self.assertEqual(tree.find('./source').get('dev'), dev_str)
+        libvirt_driver.disconnect_volume(connection_info, mount_device)
+
+
+    def test_libvirt_sheepdog_driver(self):
+        vol_driver = volume_driver.SheepdogDriver()
+        libvirt_driver = volume.LibvirtNetVolumeDriver('fake')
+        name = 'volume-00000001'
+        vol = {'id': 1, 'name': name}
+        address = '127.0.0.1'
+        connection_info = vol_driver.initialize_connection(vol, address)
+        mount_device = "vde"
+        xml = libvirt_driver.connect_volume(connection_info, mount_device)
+        tree = xml_to_tree(xml)
+        self.assertEqual(tree.get('type'), 'network')
+        self.assertEqual(tree.find('./source').get('protocol'), 'sheepdog')
+        self.assertEqual(tree.find('./source').get('name'), name)
+        libvirt_driver.disconnect_volume(connection_info, mount_device)
+
+    def test_libvirt_rbd_driver(self):
+        vol_driver = volume_driver.RBDDriver()
+        libvirt_driver = volume.LibvirtNetVolumeDriver('fake')
+        name = 'volume-00000001'
+        vol = {'id': 1, 'name': name}
+        address = '127.0.0.1'
+        connection_info = vol_driver.initialize_connection(vol, address)
+        mount_device = "vde"
+        xml = libvirt_driver.connect_volume(connection_info, mount_device)
+        tree = xml_to_tree(xml)
+        self.assertEqual(tree.get('type'), 'network')
+        self.assertEqual(tree.find('./source').get('protocol'), 'rbd')
+        rbd_name = '%s/%s' % (FLAGS.rbd_pool, name)
+        self.assertEqual(tree.find('./source').get('name'), rbd_name)
+        libvirt_driver.disconnect_volume(connection_info, mount_device)
+
+
 class CacheConcurrencyTestCase(test.TestCase):
     def setUp(self):
         super(CacheConcurrencyTestCase, self).setUp()
@@ -145,6 +215,20 @@
             eventlet.sleep(0)
 
 
+class FakeVolumeDriver(object):
+    def __init__(self, *args, **kwargs):
+        pass
+
+    def attach_volume(self, *args):
+        pass
+
+    def detach_volume(self, *args):
+        pass
+
+    def get_xml(self, *args):
+        return ""
+
+
 class LibvirtConnTestCase(test.TestCase):
 
     def setUp(self):
@@ -192,14 +276,14 @@
             return FakeVirtDomain()
 
         # Creating mocks
+        volume_driver = 'iscsi=nova.tests.test_libvirt.FakeVolumeDriver'
+        self.flags(libvirt_volume_drivers=[volume_driver])
         fake = FakeLibvirtConnection()
         # Customizing above fake if necessary
         for key, val in kwargs.items():
             fake.__setattr__(key, val)
 
         self.flags(image_service='nova.image.fake.FakeImageService')
-        fw_driver = "nova.tests.fake_network.FakeIptablesFirewallDriver"
-        self.flags(firewall_driver=fw_driver)
         self.flags(libvirt_vif_driver="nova.tests.fake_network.FakeVIFDriver")
 
         self.mox.StubOutWithMock(connection.LibvirtConnection, '_conn')
@@ -382,14 +466,16 @@
         self.assertEquals(snapshot['status'], 'active')
         self.assertEquals(snapshot['name'], snapshot_name)
 
-    def test_attach_invalid_device(self):
+    def test_attach_invalid_volume_type(self):
         self.create_fake_libvirt_mock()
         connection.LibvirtConnection._conn.lookupByName = self.fake_lookup
         self.mox.ReplayAll()
         conn = connection.LibvirtConnection(False)
-        self.assertRaises(exception.InvalidDevicePath,
+        self.assertRaises(exception.VolumeDriverNotFound,
                           conn.attach_volume,
-                          "fake", "bad/device/path", "/dev/fake")
+                          {"driver_volume_type": "badtype"},
+                          "fake",
+                          "/dev/fake")
 
     def test_multi_nic(self):
         instance_data = dict(self.test_instance)
@@ -637,9 +723,15 @@
         self.mox.ReplayAll()
         try:
             conn = connection.LibvirtConnection(False)
-            conn.firewall_driver.setattr('setup_basic_filtering', fake_none)
-            conn.firewall_driver.setattr('prepare_instance_filter', fake_none)
-            conn.firewall_driver.setattr('instance_filter_exists', fake_none)
+            self.stubs.Set(conn.firewall_driver,
+                           'setup_basic_filtering',
+                           fake_none)
+            self.stubs.Set(conn.firewall_driver,
+                           'prepare_instance_filter',
+                           fake_none)
+            self.stubs.Set(conn.firewall_driver,
+                           'instance_filter_exists',
+                           fake_none)
             conn.ensure_filtering_rules_for_instance(instance_ref,
                                                      network_info,
                                                      time=fake_timer)
@@ -684,10 +776,7 @@
             return vdmock
 
         self.create_fake_libvirt_mock(lookupByName=fake_lookup)
-#        self.mox.StubOutWithMock(self.compute, "recover_live_migration")
         self.mox.StubOutWithMock(self.compute, "rollback_live_migration")
-#        self.compute.recover_live_migration(self.context, instance_ref,
-#                                            dest='dest')
         self.compute.rollback_live_migration(self.context, instance_ref,
                                              'dest', False)
 
@@ -708,6 +797,27 @@
         db.volume_destroy(self.context, volume_ref['id'])
         db.instance_destroy(self.context, instance_ref['id'])
 
+    def test_pre_live_migration_works_correctly(self):
+        """Confirms pre_live_migration works correctly."""
+        # Creating testdata
+        vol = {'block_device_mapping': [
+            {'connection_info': 'dummy', 'mount_device': '/dev/sda'},
+            {'connection_info': 'dummy', 'mount_device': '/dev/sdb'}]}
+        conn = connection.LibvirtConnection(False)
+
+        # Creating mocks
+        self.mox.StubOutWithMock(driver, "block_device_info_get_mapping")
+        driver.block_device_info_get_mapping(vol
+            ).AndReturn(vol['block_device_mapping'])
+        self.mox.StubOutWithMock(conn, "volume_driver_method")
+        for v in vol['block_device_mapping']:
+            conn.volume_driver_method('connect_volume',
+                                      v['connection_info'], v['mount_device'])
+
+        # Starting test
+        self.mox.ReplayAll()
+        self.assertEqual(conn.pre_live_migration(vol), None)
+
     def test_pre_block_migration_works_correctly(self):
         """Confirms pre_block_migration works correctly."""
 
@@ -812,8 +922,12 @@
         # Start test
         self.mox.ReplayAll()
         conn = connection.LibvirtConnection(False)
-        conn.firewall_driver.setattr('setup_basic_filtering', fake_none)
-        conn.firewall_driver.setattr('prepare_instance_filter', fake_none)
+        self.stubs.Set(conn.firewall_driver,
+                       'setup_basic_filtering',
+                       fake_none)
+        self.stubs.Set(conn.firewall_driver,
+                       'prepare_instance_filter',
+                       fake_none)
 
         network_info = _fake_network_info(self.stubs, 1)
 
 
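
The new LibvirtVolumeTestCase pins down the connection_info contract between the volume drivers and the libvirt attachment code. Roughly, the dict looks like this (field names as exercised by the tests; exact contents vary per driver type):

    # iSCSI: produced by ISCSIDriver.initialize_connection, consumed by
    # LibvirtISCSIVolumeDriver.connect_volume (values are examples)
    iscsi_info = {
        'driver_volume_type': 'iscsi',
        'data': {
            'target_portal': '10.0.2.15:3260',
            'target_iqn': 'iqn.2010-10.org.openstack:volume-00000001',
        },
    }

    # network-backed volumes (rbd, sheepdog) only carry a protocol and a name
    net_info = {
        'driver_volume_type': 'sheepdog',
        'data': {'name': 'volume-00000001'},
    }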
=== modified file 'nova/tests/test_virt_drivers.py'
--- nova/tests/test_virt_drivers.py 2011-09-15 19:09:14 +0000
+++ nova/tests/test_virt_drivers.py 2011-09-20 16:57:39 +0000
@@ -253,9 +253,11 @@
         network_info = test_utils.get_test_network_info()
         instance_ref = test_utils.get_test_instance()
         self.connection.spawn(self.ctxt, instance_ref, network_info)
-        self.connection.attach_volume(instance_ref['name'],
-                                      '/dev/null', '/mnt/nova/something')
-        self.connection.detach_volume(instance_ref['name'],
+        self.connection.attach_volume({'driver_volume_type': 'fake'},
+                                      instance_ref['name'],
+                                      '/mnt/nova/something')
+        self.connection.detach_volume({'driver_volume_type': 'fake'},
+                                      instance_ref['name'],
                                       '/mnt/nova/something')
 
     @catch_notimplementederror
 
=== modified file 'nova/tests/test_volume.py'
--- nova/tests/test_volume.py 2011-08-05 14:23:48 +0000
+++ nova/tests/test_volume.py 2011-09-20 16:57:39 +0000
@@ -257,7 +257,7 @@
 
 class DriverTestCase(test.TestCase):
     """Base Test class for Drivers."""
-    driver_name = "nova.volume.driver.FakeAOEDriver"
+    driver_name = "nova.volume.driver.FakeBaseDriver"
 
     def setUp(self):
         super(DriverTestCase, self).setUp()
@@ -295,83 +295,6 @@
         self.volume.delete_volume(self.context, volume_id)
 
 
-class AOETestCase(DriverTestCase):
-    """Test Case for AOEDriver"""
-    driver_name = "nova.volume.driver.AOEDriver"
-
-    def setUp(self):
-        super(AOETestCase, self).setUp()
-
-    def tearDown(self):
-        super(AOETestCase, self).tearDown()
-
-    def _attach_volume(self):
-        """Attach volumes to an instance. This function also sets
-           a fake log message."""
-        volume_id_list = []
-        for index in xrange(3):
-            vol = {}
-            vol['size'] = 0
-            volume_id = db.volume_create(self.context,
-                                         vol)['id']
-            self.volume.create_volume(self.context, volume_id)
-
-            # each volume has a different mountpoint
-            mountpoint = "/dev/sd" + chr((ord('b') + index))
-            db.volume_attached(self.context, volume_id, self.instance_id,
-                               mountpoint)
-
-            (shelf_id, blade_id) = db.volume_get_shelf_and_blade(self.context,
-                                                                 volume_id)
-            self.output += "%s %s eth0 /dev/nova-volumes/vol-foo auto run\n" \
-                           % (shelf_id, blade_id)
-
-            volume_id_list.append(volume_id)
-
-        return volume_id_list
-
-    def test_check_for_export_with_no_volume(self):
-        """No log message when no volume is attached to an instance."""
-        self.stream.truncate(0)
-        self.volume.check_for_export(self.context, self.instance_id)
-        self.assertEqual(self.stream.getvalue(), '')
-
-    def test_check_for_export_with_all_vblade_processes(self):
-        """No log message when all the vblade processes are running."""
-        volume_id_list = self._attach_volume()
-
-        self.stream.truncate(0)
-        self.volume.check_for_export(self.context, self.instance_id)
-        self.assertEqual(self.stream.getvalue(), '')
-
-        self._detach_volume(volume_id_list)
-
-    def test_check_for_export_with_vblade_process_missing(self):
-        """Output a warning message when some vblade processes aren't
-           running."""
-        volume_id_list = self._attach_volume()
-
-        # the first vblade process isn't running
-        self.output = self.output.replace("run", "down", 1)
-        (shelf_id, blade_id) = db.volume_get_shelf_and_blade(self.context,
-                                                             volume_id_list[0])
-
-        msg_is_match = False
-        self.stream.truncate(0)
-        try:
-            self.volume.check_for_export(self.context, self.instance_id)
-        except exception.ProcessExecutionError, e:
-            volume_id = volume_id_list[0]
-            msg = _("Cannot confirm exported volume id:%(volume_id)s. "
-                    "vblade process for e%(shelf_id)s.%(blade_id)s "
-                    "isn't running.") % locals()
-
-            msg_is_match = (0 <= e.message.find(msg))
-
-        self.assertTrue(msg_is_match)
-        self._detach_volume(volume_id_list)
-
-
 class ISCSITestCase(DriverTestCase):
     """Test Case for ISCSIDriver"""
     driver_name = "nova.volume.driver.ISCSIDriver"
@@ -408,7 +331,7 @@
         self.assertEqual(self.stream.getvalue(), '')
 
     def test_check_for_export_with_all_volume_exported(self):
-        """No log message when all the vblade processes are running."""
+        """No log message when all the processes are running."""
         volume_id_list = self._attach_volume()
 
         self.mox.StubOutWithMock(self.volume.driver, '_execute')
@@ -431,7 +354,6 @@
            by ietd."""
         volume_id_list = self._attach_volume()
 
-        # the first vblade process isn't running
         tid = db.volume_get_iscsi_target_num(self.context, volume_id_list[0])
         self.mox.StubOutWithMock(self.volume.driver, '_execute')
         self.volume.driver._execute("ietadm", "--op", "show",
 
=== modified file 'nova/tests/test_xenapi.py'
--- nova/tests/test_xenapi.py 2011-09-13 20:33:34 +0000
+++ nova/tests/test_xenapi.py 2011-09-20 16:57:39 +0000
@@ -98,6 +98,20 @@
         vol['attach_status'] = "detached"
         return db.volume_create(self.context, vol)
 
+    @staticmethod
+    def _make_info():
+        return {
+            'driver_volume_type': 'iscsi',
+            'data': {
+                'volume_id': 1,
+                'target_iqn': 'iqn.2010-10.org.openstack:volume-00000001',
+                'target_portal': '127.0.0.1:3260,fake',
+                'auth_method': 'CHAP',
+                'auth_username': 'fake',
+                'auth_password': 'fake',
+            }
+        }
+
     def test_create_iscsi_storage(self):
         """This shows how to test helper classes' methods."""
         stubs.stubout_session(self.stubs, stubs.FakeSessionForVolumeTests)
@@ -105,7 +119,7 @@
         helper = volume_utils.VolumeHelper
         helper.XenAPI = session.get_imported_xenapi()
         vol = self._create_volume()
-        info = helper.parse_volume_info(vol['id'], '/dev/sdc')
+        info = helper.parse_volume_info(self._make_info(), '/dev/sdc')
         label = 'SR-%s' % vol['id']
         description = 'Test-SR'
         sr_ref = helper.create_iscsi_storage(session, info, label, description)
@@ -123,8 +137,9 @@
         # oops, wrong mount point!
         self.assertRaises(volume_utils.StorageError,
                           helper.parse_volume_info,
-                          vol['id'],
-                          '/dev/sd')
+                          self._make_info(),
+                          'dev/sd'
+                          )
         db.volume_destroy(context.get_admin_context(), vol['id'])
 
     def test_attach_volume(self):
@@ -134,7 +149,8 @@
         volume = self._create_volume()
         instance = db.instance_create(self.context, self.values)
         vm = xenapi_fake.create_vm(instance.name, 'Running')
-        result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc')
+        result = conn.attach_volume(self._make_info(),
+                                    instance.name, '/dev/sdc')
 
         def check():
             # check that the VM has a VBD attached to it
 
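
_make_info() doubles as documentation of the payload the XenAPI helper now receives: parse_volume_info reads volume_id and target_portal out of the 'data' member instead of re-deriving them from a device path. A rough sketch of the extraction (the host split at the end is an assumption for illustration, not nova's code):

    info = {'driver_volume_type': 'iscsi',
            'data': {'volume_id': 1,
                     'target_iqn': 'iqn.2010-10.org.openstack:volume-00000001',
                     'target_portal': '127.0.0.1:3260,fake',
                     'auth_method': 'CHAP',
                     'auth_username': 'fake',
                     'auth_password': 'fake'}}
    data = info['data']
    volume_id = data['volume_id']
    target_portal = data['target_portal']
    # hypothetical: strip the port and portal group tag to get the host
    target_host = target_portal.split(':')[0]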
=== modified file 'nova/virt/driver.py'
--- nova/virt/driver.py 2011-09-15 18:44:49 +0000
+++ nova/virt/driver.py 2011-09-20 16:57:39 +0000
@@ -149,7 +149,8 @@
149 """149 """
150 raise NotImplementedError()150 raise NotImplementedError()
151151
152 def destroy(self, instance, network_info, cleanup=True):152 def destroy(self, instance, network_info, block_device_info=None,
153 cleanup=True):
153 """Destroy (shutdown and delete) the specified instance.154 """Destroy (shutdown and delete) the specified instance.
154155
155 If the instance is not found (for example if networking failed), this156 If the instance is not found (for example if networking failed), this
@@ -203,12 +204,12 @@
         # TODO(Vek): Need to pass context in for access to auth_token
         raise NotImplementedError()
 
-    def attach_volume(self, context, instance_id, volume_id, mountpoint):
-        """Attach the disk at device_path to the instance at mountpoint"""
+    def attach_volume(self, connection_info, instance_name, mountpoint):
+        """Attach the disk to the instance at mountpoint using info"""
         raise NotImplementedError()
 
-    def detach_volume(self, context, instance_id, volume_id):
-        """Detach the disk attached to the instance at mountpoint"""
+    def detach_volume(self, connection_info, instance_name, mountpoint):
+        """Detach the disk attached to the instance"""
         raise NotImplementedError()
 
     def compare_cpu(self, cpu_info):
 
=== modified file 'nova/virt/fake.py'
--- nova/virt/fake.py 2011-09-15 18:44:49 +0000
+++ nova/virt/fake.py 2011-09-20 16:57:39 +0000
@@ -92,6 +92,10 @@
             info_list.append(self._map_to_instance_info(instance))
         return info_list
 
+    def plug_vifs(self, instance, network_info):
+        """Plug VIFs into networks."""
+        pass
+
     def spawn(self, context, instance,
               network_info=None, block_device_info=None):
         name = instance.name
@@ -148,7 +152,8 @@
     def resume(self, instance, callback):
         pass
 
-    def destroy(self, instance, network_info, cleanup=True):
+    def destroy(self, instance, network_info, block_device_info=None,
+                cleanup=True):
         key = instance['name']
         if key in self.instances:
             del self.instances[key]
@@ -156,13 +161,15 @@
             LOG.warning("Key '%s' not in instances '%s'" %
                         (key, self.instances))
 
-    def attach_volume(self, instance_name, device_path, mountpoint):
+    def attach_volume(self, connection_info, instance_name, mountpoint):
+        """Attach the disk to the instance at mountpoint using info"""
         if not instance_name in self._mounts:
             self._mounts[instance_name] = {}
-        self._mounts[instance_name][mountpoint] = device_path
+        self._mounts[instance_name][mountpoint] = connection_info
         return True
 
-    def detach_volume(self, instance_name, mountpoint):
+    def detach_volume(self, connection_info, instance_name, mountpoint):
+        """Detach the disk attached to the instance"""
         try:
             del self._mounts[instance_name][mountpoint]
         except KeyError:
@@ -233,11 +240,19 @@
233 """This method is supported only by libvirt."""240 """This method is supported only by libvirt."""
234 raise NotImplementedError('This method is supported only by libvirt.')241 raise NotImplementedError('This method is supported only by libvirt.')
235242
243 def get_instance_disk_info(self, ctxt, instance_ref):
244 """This method is supported only by libvirt."""
245 return
246
236 def live_migration(self, context, instance_ref, dest,247 def live_migration(self, context, instance_ref, dest,
237 post_method, recover_method, block_migration=False):248 post_method, recover_method, block_migration=False):
238 """This method is supported only by libvirt."""249 """This method is supported only by libvirt."""
239 return250 return
240251
252 def pre_live_migration(self, block_device_info):
253 """This method is supported only by libvirt."""
254 return
255
241 def unfilter_instance(self, instance_ref, network_info):256 def unfilter_instance(self, instance_ref, network_info):
242 """This method is supported only by libvirt."""257 """This method is supported only by libvirt."""
243 raise NotImplementedError('This method is supported only by libvirt.')258 raise NotImplementedError('This method is supported only by libvirt.')
244259
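
Because the fake driver keys its bookkeeping by instance name and mountpoint, tests can assert on attachment state without touching a hypervisor. A self-contained sketch of the same bookkeeping (standalone functions rather than nova's class, for illustration):

    mounts = {}


    def attach_volume(connection_info, instance_name, mountpoint):
        mounts.setdefault(instance_name, {})[mountpoint] = connection_info
        return True


    def detach_volume(connection_info, instance_name, mountpoint):
        del mounts[instance_name][mountpoint]


    attach_volume({'driver_volume_type': 'fake'}, 'instance-1', '/dev/vdc')
    assert '/dev/vdc' in mounts['instance-1']
    detach_volume({'driver_volume_type': 'fake'}, 'instance-1', '/dev/vdc')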
=== modified file 'nova/virt/hyperv.py'
--- nova/virt/hyperv.py 2011-09-15 18:44:49 +0000
+++ nova/virt/hyperv.py 2011-09-20 16:57:39 +0000
@@ -374,7 +374,8 @@
             raise exception.InstanceNotFound(instance_id=instance.id)
         self._set_vm_state(instance.name, 'Reboot')
 
-    def destroy(self, instance, network_info, cleanup=True):
+    def destroy(self, instance, network_info, block_device_info=None,
+                cleanup=True):
         """Destroy the VM. Also destroy the associated VHD disk files"""
         LOG.debug(_("Got request to destroy vm %s"), instance.name)
         vm = self._lookup(instance.name)
@@ -474,12 +475,12 @@
             LOG.error(msg)
             raise Exception(msg)
 
-    def attach_volume(self, instance_name, device_path, mountpoint):
+    def attach_volume(self, connection_info, instance_name, mountpoint):
         vm = self._lookup(instance_name)
         if vm is None:
             raise exception.InstanceNotFound(instance_id=instance_name)
 
-    def detach_volume(self, instance_name, mountpoint):
+    def detach_volume(self, connection_info, instance_name, mountpoint):
         vm = self._lookup(instance_name)
         if vm is None:
             raise exception.InstanceNotFound(instance_id=instance_name)
 
=== modified file 'nova/virt/libvirt.xml.template'
--- nova/virt/libvirt.xml.template 2011-08-24 23:48:04 +0000
+++ nova/virt/libvirt.xml.template 2011-09-20 16:57:39 +0000
@@ -80,30 +80,22 @@
         <target dev='${local_device}' bus='${disk_bus}'/>
     </disk>
 #end if
 #for $eph in $ephemerals
     <disk type='block'>
         <driver type='${driver_type}'/>
         <source dev='${basepath}/${eph.device_path}'/>
         <target dev='${eph.device}' bus='${disk_bus}'/>
     </disk>
 #end for
 #if $getVar('swap_device', False)
     <disk type='file'>
         <driver type='${driver_type}'/>
         <source file='${basepath}/disk.swap'/>
         <target dev='${swap_device}' bus='${disk_bus}'/>
     </disk>
 #end if
 #for $vol in $volumes
-    <disk type='${vol.type}'>
-        <driver type='raw'/>
-    #if $vol.type == 'network'
-        <source protocol='${vol.protocol}' name='${vol.name}'/>
-    #else
-        <source dev='${vol.device_path}'/>
-    #end if
-        <target dev='${vol.mount_device}' bus='${disk_bus}'/>
-    </disk>
+    ${vol}
 #end for
 #end if
 #if $getVar('config_drive', False)
 
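
Each entry in $volumes is now a pre-rendered <disk> element handed over by the libvirt volume driver, so the template only splices it in. For example, the block-volume XML built by LibvirtVolumeDriver.connect_volume comes out as (values illustrative):

    device_path = ('/dev/disk/by-path/ip-10.0.2.15:3260-iscsi-'
                   'iqn.2010-10.org.openstack:volume-00000001-lun-0')
    mount_device = 'vdc'
    vol = """<disk type='block'>
                 <driver name='qemu' type='raw'/>
                 <source dev='%s'/>
                 <target dev='%s' bus='virtio'/>
             </disk>""" % (device_path, mount_device)
    print(vol)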
=== modified file 'nova/virt/libvirt/connection.py'
--- nova/virt/libvirt/connection.py 2011-09-20 10:12:01 +0000
+++ nova/virt/libvirt/connection.py 2011-09-20 16:57:39 +0000
@@ -134,6 +134,12 @@
 flags.DEFINE_string('libvirt_vif_driver',
                     'nova.virt.libvirt.vif.LibvirtBridgeDriver',
                     'The libvirt VIF driver to configure the VIFs.')
+flags.DEFINE_list('libvirt_volume_drivers',
+                  ['iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver',
+                   'local=nova.virt.libvirt.volume.LibvirtVolumeDriver',
+                   'rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver',
+                   'sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver'],
+                  'Libvirt handlers for remote volumes.')
 flags.DEFINE_string('default_local_format',
                     None,
                     'The default format a local_volume will be formatted with '
@@ -184,6 +190,11 @@
         fw_class = utils.import_class(FLAGS.firewall_driver)
         self.firewall_driver = fw_class(get_connection=self._get_connection)
         self.vif_driver = utils.import_object(FLAGS.libvirt_vif_driver)
+        self.volume_drivers = {}
+        for driver_str in FLAGS.libvirt_volume_drivers:
+            driver_type, _sep, driver = driver_str.partition('=')
+            driver_class = utils.import_class(driver)
+            self.volume_drivers[driver_type] = driver_class(self)
 
     def init_host(self, host):
         # NOTE(nsokolov): moved instance restarting to ComputeManager
@@ -261,7 +272,8 @@
         for (network, mapping) in network_info:
             self.vif_driver.plug(instance, network, mapping)
 
-    def destroy(self, instance, network_info, cleanup=True):
+    def destroy(self, instance, network_info, block_device_info=None,
+                cleanup=True):
         instance_name = instance['name']
 
         try:
@@ -292,21 +304,21 @@
                         locals())
                 raise
 
         try:
             # NOTE(justinsb): We remove the domain definition. We probably
             # would do better to keep it if cleanup=False (e.g. volumes?)
             # (e.g. #2 - not losing machines on failure)
             virt_dom.undefine()
         except libvirt.libvirtError as e:
             errcode = e.get_error_code()
             LOG.warning(_("Error from libvirt during undefine of "
                           "%(instance_name)s. Code=%(errcode)s "
                           "Error=%(e)s") %
                         locals())
             raise
 
         for (network, mapping) in network_info:
             self.vif_driver.unplug(instance, network, mapping)
 
         def _wait_for_destroy():
             """Called at an interval until the VM is gone."""
@@ -325,6 +337,15 @@
             self.firewall_driver.unfilter_instance(instance,
                                                    network_info=network_info)
 
+        # NOTE(vish): we disconnect from volumes regardless
+        block_device_mapping = driver.block_device_info_get_mapping(
+            block_device_info)
+        for vol in block_device_mapping:
+            connection_info = vol['connection_info']
+            mountpoint = vol['mount_device']
+            xml = self.volume_driver_method('disconnect_volume',
+                                            connection_info,
+                                            mountpoint)
         if cleanup:
             self._cleanup(instance)
 
@@ -340,24 +361,22 @@
         if os.path.exists(target):
             shutil.rmtree(target)
 
+    def volume_driver_method(self, method_name, connection_info,
+                             *args, **kwargs):
+        driver_type = connection_info.get('driver_volume_type')
+        if not driver_type in self.volume_drivers:
+            raise exception.VolumeDriverNotFound(driver_type=driver_type)
+        driver = self.volume_drivers[driver_type]
+        method = getattr(driver, method_name)
+        return method(connection_info, *args, **kwargs)
+
     @exception.wrap_exception()
-    def attach_volume(self, instance_name, device_path, mountpoint):
+    def attach_volume(self, connection_info, instance_name, mountpoint):
         virt_dom = self._lookup_by_name(instance_name)
         mount_device = mountpoint.rpartition("/")[2]
-        (type, protocol, name) = \
-            self._get_volume_device_info(device_path)
-        if type == 'block':
-            xml = """<disk type='block'>
-                         <driver name='qemu' type='raw'/>
-                         <source dev='%s'/>
-                         <target dev='%s' bus='virtio'/>
-                     </disk>""" % (device_path, mount_device)
-        elif type == 'network':
-            xml = """<disk type='network'>
-                         <driver name='qemu' type='raw'/>
-                         <source protocol='%s' name='%s'/>
-                         <target dev='%s' bus='virtio'/>
-                     </disk>""" % (protocol, name, mount_device)
+        xml = self.volume_driver_method('connect_volume',
+                                        connection_info,
+                                        mount_device)
         virt_dom.attachDevice(xml)
 
     def _get_disk_xml(self, xml, device):
@@ -380,14 +399,24 @@
             if doc is not None:
                 doc.freeDoc()
 
+
     @exception.wrap_exception()
-    def detach_volume(self, instance_name, mountpoint):
-        virt_dom = self._lookup_by_name(instance_name)
+    def detach_volume(self, connection_info, instance_name, mountpoint):
         mount_device = mountpoint.rpartition("/")[2]
-        xml = self._get_disk_xml(virt_dom.XMLDesc(0), mount_device)
-        if not xml:
-            raise exception.DiskNotFound(location=mount_device)
-        virt_dom.detachDevice(xml)
+        try:
+            # NOTE(vish): This is called to cleanup volumes after live
+            #             migration, so we should still logout even if
+            #             the instance doesn't exist here anymore.
+            virt_dom = self._lookup_by_name(instance_name)
+            xml = self._get_disk_xml(virt_dom.XMLDesc(0), mount_device)
+            if not xml:
+                raise exception.DiskNotFound(location=mount_device)
+            virt_dom.detachDevice(xml)
+        finally:
+            self.volume_driver_method('disconnect_volume',
+                                      connection_info,
+                                      mount_device)
+
 
     @exception.wrap_exception()
     def snapshot(self, context, instance, image_href):
@@ -1047,14 +1076,6 @@
         LOG.debug(_("block_device_list %s"), block_device_list)
         return block_device.strip_dev(mount_device) in block_device_list
 
-    def _get_volume_device_info(self, device_path):
-        if device_path.startswith('/dev/'):
-            return ('block', None, None)
-        elif ':' in device_path:
-            (protocol, name) = device_path.split(':')
-            return ('network', protocol, name)
-        else:
-            raise exception.InvalidDevicePath(path=device_path)
 
     def _prepare_xml_info(self, instance, network_info, rescue,
                           block_device_info=None):
@@ -1073,10 +1094,14 @@
         else:
             driver_type = 'raw'
 
+        volumes = []
         for vol in block_device_mapping:
-            vol['mount_device'] = block_device.strip_dev(vol['mount_device'])
-            (vol['type'], vol['protocol'], vol['name']) = \
-                self._get_volume_device_info(vol['device_path'])
+            connection_info = vol['connection_info']
+            mountpoint = vol['mount_device']
+            xml = self.volume_driver_method('connect_volume',
+                                            connection_info,
+                                            mountpoint)
+            volumes.append(xml)
 
         ebs_root = self._volume_in_mapping(self.default_root_device,
                                            block_device_info)
@@ -1109,7 +1134,7 @@
             'nics': nics,
             'ebs_root': ebs_root,
             'local_device': local_device,
-            'volumes': block_device_mapping,
+            'volumes': volumes,
             'use_virtio_for_bridges':
                 FLAGS.libvirt_use_virtio_for_bridges,
             'ephemerals': ephemerals}
@@ -1705,6 +1730,24 @@
         timer.f = wait_for_live_migration
         timer.start(interval=0.5, now=True)
 
+    def pre_live_migration(self, block_device_info):
+        """Preparation for live migration.
+
+        :params block_device_info:
+            It must be the result of _get_instance_volume_bdms()
+            at compute manager.
+        """
+
+        # Establishing connection to volume server.
+        block_device_mapping = driver.block_device_info_get_mapping(
+            block_device_info)
+        for vol in block_device_mapping:
+            connection_info = vol['connection_info']
+            mountpoint = vol['mount_device']
+            xml = self.volume_driver_method('connect_volume',
+                                            connection_info,
+                                            mountpoint)
+
     def pre_block_migration(self, ctxt, instance_ref, disk_info_json):
         """Preparation block migration.
 
 
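
volume_driver_method boils down to a dispatch table keyed by driver_volume_type, built once from the 'type=module.Class' flag entries. A self-contained sketch of the mechanism (FakeISCSIDriver and the globals() lookup stand in for utils.import_class):

    class FakeISCSIDriver(object):
        def __init__(self, connection):
            self.connection = connection

        def connect_volume(self, connection_info, mount_device):
            return "<disk type='block'/>"


    volume_drivers = {}
    for driver_str in ['iscsi=FakeISCSIDriver']:
        driver_type, _sep, driver = driver_str.partition('=')
        volume_drivers[driver_type] = globals()[driver](None)


    def volume_driver_method(method_name, connection_info, *args, **kwargs):
        driver_type = connection_info.get('driver_volume_type')
        if driver_type not in volume_drivers:
            raise LookupError('no volume driver for %r' % driver_type)
        method = getattr(volume_drivers[driver_type], method_name)
        return method(connection_info, *args, **kwargs)


    print(volume_driver_method('connect_volume',
                               {'driver_volume_type': 'iscsi'}, 'vdc'))

An unknown type fails fast, which is what the renamed test_attach_invalid_volume_type test checks via VolumeDriverNotFound.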
=== added file 'nova/virt/libvirt/volume.py'
--- nova/virt/libvirt/volume.py 1970-01-01 00:00:00 +0000
+++ nova/virt/libvirt/volume.py 2011-09-20 16:57:39 +0000
@@ -0,0 +1,149 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2011 OpenStack LLC.
4# All Rights Reserved.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18"""Volume drivers for libvirt."""
19
20import os
21import time
22
23from nova import exception
from nova import flags
from nova import log as logging
from nova import utils

LOG = logging.getLogger('nova.virt.libvirt.volume')

FLAGS = flags.FLAGS
flags.DECLARE('num_iscsi_scan_tries', 'nova.volume.driver')


class LibvirtVolumeDriver(object):
    """Base class for volume drivers."""
    def __init__(self, connection):
        self.connection = connection

    def connect_volume(self, connection_info, mount_device):
        """Connect the volume. Returns xml for libvirt."""
        device_path = connection_info['data']['device_path']
        xml = """<disk type='block'>
                     <driver name='qemu' type='raw'/>
                     <source dev='%s'/>
                     <target dev='%s' bus='virtio'/>
                 </disk>""" % (device_path, mount_device)
        return xml

    def disconnect_volume(self, connection_info, mount_device):
        """Disconnect the volume."""
        pass


class LibvirtNetVolumeDriver(LibvirtVolumeDriver):
    """Driver to attach network volumes to libvirt."""

    def connect_volume(self, connection_info, mount_device):
        protocol = connection_info['driver_volume_type']
        name = connection_info['data']['name']
        xml = """<disk type='network'>
                     <driver name='qemu' type='raw'/>
                     <source protocol='%s' name='%s'/>
                     <target dev='%s' bus='virtio'/>
                 </disk>""" % (protocol, name, mount_device)
        return xml


class LibvirtISCSIVolumeDriver(LibvirtVolumeDriver):
    """Driver to attach iSCSI volumes to libvirt."""

    def _run_iscsiadm(self, iscsi_properties, iscsi_command):
        (out, err) = utils.execute('iscsiadm', '-m', 'node', '-T',
                                   iscsi_properties['target_iqn'],
                                   '-p', iscsi_properties['target_portal'],
                                   *iscsi_command, run_as_root=True)
        LOG.debug("iscsiadm %s: stdout=%s stderr=%s" %
                  (iscsi_command, out, err))
        return (out, err)

    def _iscsiadm_update(self, iscsi_properties, property_key,
                         property_value):
        iscsi_command = ('--op', 'update', '-n', property_key,
                         '-v', property_value)
        return self._run_iscsiadm(iscsi_properties, iscsi_command)

    def connect_volume(self, connection_info, mount_device):
        """Connect the iSCSI volume. Returns xml for libvirt."""
        iscsi_properties = connection_info['data']
        try:
            # NOTE(vish): if we are on the same host as nova volume, the
            #             discovery makes the target so we don't need to
            #             run --op new
            self._run_iscsiadm(iscsi_properties, ())
        except exception.ProcessExecutionError:
            self._run_iscsiadm(iscsi_properties, ('--op', 'new'))

        if iscsi_properties.get('auth_method'):
            self._iscsiadm_update(iscsi_properties,
                                  "node.session.auth.authmethod",
                                  iscsi_properties['auth_method'])
            self._iscsiadm_update(iscsi_properties,
                                  "node.session.auth.username",
                                  iscsi_properties['auth_username'])
            self._iscsiadm_update(iscsi_properties,
                                  "node.session.auth.password",
                                  iscsi_properties['auth_password'])

        self._run_iscsiadm(iscsi_properties, ("--login",))

        self._iscsiadm_update(iscsi_properties, "node.startup", "automatic")

        host_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" %
                       (iscsi_properties['target_portal'],
                        iscsi_properties['target_iqn']))

        # The /dev/disk/by-path/... node is not always present immediately
        # TODO(justinsb): This retry-with-delay is a pattern, move to utils?
        tries = 0
        while not os.path.exists(host_device):
            if tries >= FLAGS.num_iscsi_scan_tries:
                raise exception.Error(_("iSCSI device not found at %s") %
                                      (host_device))

            LOG.warn(_("iSCSI volume not yet found at: %(host_device)s. "
                       "Will rescan & retry. Try number: %(tries)s") %
                     locals())

            # The rescan isn't documented as being necessary(?), but it helps
            self._run_iscsiadm(iscsi_properties, ("--rescan",))

            tries = tries + 1
            if not os.path.exists(host_device):
                time.sleep(tries ** 2)

        if tries != 0:
            LOG.debug(_("Found iSCSI node %(host_device)s "
                        "(after %(tries)s rescans)") %
                      locals())

        connection_info['data']['device_path'] = host_device
        sup = super(LibvirtISCSIVolumeDriver, self)
        return sup.connect_volume(connection_info, mount_device)

    def disconnect_volume(self, connection_info, mount_device):
        """Disconnect the iSCSI volume."""
        sup = super(LibvirtISCSIVolumeDriver, self)
        sup.disconnect_volume(connection_info, mount_device)
        iscsi_properties = connection_info['data']
        self._iscsiadm_update(iscsi_properties, "node.startup", "manual")
        self._run_iscsiadm(iscsi_properties, ("--logout",))
        self._run_iscsiadm(iscsi_properties, ('--op', 'delete'))
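
To make the intended flow concrete for reviewers, here is a rough sketch of how a compute host would drive these classes. The connection_info dict mirrors what ISCSIDriver.initialize_connection returns further down; the portal, IQN and device names are made-up values for illustration only:

    # Illustrative values only; a real connection_info comes back from
    # the volume host via initialize_connection().
    connection_info = {
        'driver_volume_type': 'iscsi',
        'data': {
            'volume_id': 1,
            'target_iqn': 'iqn.2010-10.org.openstack:volume-00000001',
            'target_portal': '192.168.1.10:3260,1',
            'target_discovered': False,
        },
    }

    driver = LibvirtISCSIVolumeDriver(connection=None)
    # Logs in to the target, waits for the /dev/disk/by-path node to
    # appear, records it in connection_info['data']['device_path'], and
    # returns the <disk> xml for the domain definition.
    disk_xml = driver.connect_volume(connection_info, 'vdc')
    # ... guest uses the volume ...
    driver.disconnect_volume(connection_info, 'vdc')
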
=== modified file 'nova/virt/vmwareapi_conn.py'
--- nova/virt/vmwareapi_conn.py 2011-09-08 21:10:03 +0000
+++ nova/virt/vmwareapi_conn.py 2011-09-20 16:57:39 +0000
@@ -137,7 +137,8 @@
137 """Reboot VM instance."""137 """Reboot VM instance."""
138 self._vmops.reboot(instance, network_info)138 self._vmops.reboot(instance, network_info)
139139
140 def destroy(self, instance, network_info, cleanup=True):140 def destroy(self, instance, network_info, block_device_info=None,
141 cleanup=True):
141 """Destroy VM instance."""142 """Destroy VM instance."""
142 self._vmops.destroy(instance, network_info)143 self._vmops.destroy(instance, network_info)
143144
@@ -173,11 +174,11 @@
173 """Return link to instance's ajax console."""174 """Return link to instance's ajax console."""
174 return self._vmops.get_ajax_console(instance)175 return self._vmops.get_ajax_console(instance)
175176
176 def attach_volume(self, instance_name, device_path, mountpoint):177 def attach_volume(self, connection_info, instance_name, mountpoint):
177 """Attach volume storage to VM instance."""178 """Attach volume storage to VM instance."""
178 pass179 pass
179180
180 def detach_volume(self, instance_name, mountpoint):181 def detach_volume(self, connection_info, instance_name, mountpoint):
181 """Detach volume storage to VM instance."""182 """Detach volume storage to VM instance."""
182 pass183 pass
183184
184185
=== modified file 'nova/virt/xenapi/volume_utils.py'
--- nova/virt/xenapi/volume_utils.py 2011-08-05 14:23:48 +0000
+++ nova/virt/xenapi/volume_utils.py 2011-09-20 16:57:39 +0000
@@ -147,7 +147,7 @@
                          % sr_ref)
 
     @classmethod
-    def parse_volume_info(cls, device_path, mountpoint):
+    def parse_volume_info(cls, connection_info, mountpoint):
         """
         Parse device_path and mountpoint as they can be used by XenAPI.
         In particular, the mountpoint (e.g. /dev/sdc) must be translated
@@ -161,11 +161,12 @@
         the iscsi driver to set them.
         """
         device_number = VolumeHelper.mountpoint_to_number(mountpoint)
-        volume_id = _get_volume_id(device_path)
-        (iscsi_name, iscsi_portal) = _get_target(volume_id)
-        target_host = _get_target_host(iscsi_portal)
-        target_port = _get_target_port(iscsi_portal)
-        target_iqn = _get_iqn(iscsi_name, volume_id)
+        data = connection_info['data']
+        volume_id = data['volume_id']
+        target_portal = data['target_portal']
+        target_host = _get_target_host(target_portal)
+        target_port = _get_target_port(target_portal)
+        target_iqn = data['target_iqn']
         LOG.debug('(vol_id,number,host,port,iqn): (%s,%s,%s,%s)',
                   volume_id, target_host, target_port, target_iqn)
         if (device_number < 0) or \
@@ -173,7 +174,7 @@
             (target_host is None) or \
             (target_iqn is None):
             raise StorageError(_('Unable to obtain target information'
-                ' %(device_path)s, %(mountpoint)s') % locals())
+                ' %(data)s, %(mountpoint)s') % locals())
         volume_info = {}
         volume_info['deviceNumber'] = device_number
         volume_info['volumeId'] = volume_id
 
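
As a sketch of the contract this now assumes (illustrative values; the keys match what the iSCSI driver below puts into connection_info['data']):

    connection_info = {
        'driver_volume_type': 'iscsi',
        'data': {'volume_id': 1,
                 'target_iqn': 'iqn.2010-10.org.openstack:volume-00000001',
                 'target_portal': '192.168.1.10:3260,1'},
    }
    vol_rec = VolumeHelper.parse_volume_info(connection_info, '/dev/sdc')
    # vol_rec holds deviceNumber, volumeId, targetHost, targetPort, targetIQN
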
=== modified file 'nova/virt/xenapi/volumeops.py'
--- nova/virt/xenapi/volumeops.py 2011-04-21 19:50:04 +0000
+++ nova/virt/xenapi/volumeops.py 2011-09-20 16:57:39 +0000
@@ -40,18 +40,21 @@
         VolumeHelper.XenAPI = self.XenAPI
         VMHelper.XenAPI = self.XenAPI
 
-    def attach_volume(self, instance_name, device_path, mountpoint):
+    def attach_volume(self, connection_info, instance_name, mountpoint):
         """Attach volume storage to VM instance"""
         # Before we start, check that the VM exists
         vm_ref = VMHelper.lookup(self._session, instance_name)
         if vm_ref is None:
             raise exception.InstanceNotFound(instance_id=instance_name)
         # NOTE: No Resource Pool concept so far
-        LOG.debug(_("Attach_volume: %(instance_name)s, %(device_path)s,"
+        LOG.debug(_("Attach_volume: %(connection_info)s, %(instance_name)s,"
                     " %(mountpoint)s") % locals())
+        driver_type = connection_info['driver_volume_type']
+        if driver_type != 'iscsi':
+            raise exception.VolumeDriverNotFound(driver_type=driver_type)
         # Create the iSCSI SR, and the PDB through which hosts access SRs.
         # But first, retrieve target info, like Host, IQN, LUN and SCSIID
-        vol_rec = VolumeHelper.parse_volume_info(device_path, mountpoint)
+        vol_rec = VolumeHelper.parse_volume_info(connection_info, mountpoint)
         label = 'SR-%s' % vol_rec['volumeId']
         description = 'Disk-for:%s' % instance_name
         # Create SR
@@ -92,7 +95,7 @@
         LOG.info(_('Mountpoint %(mountpoint)s attached to'
                  ' instance %(instance_name)s') % locals())
 
-    def detach_volume(self, instance_name, mountpoint):
+    def detach_volume(self, connection_info, instance_name, mountpoint):
         """Detach volume storage to VM instance"""
         # Before we start, check that the VM exists
         vm_ref = VMHelper.lookup(self._session, instance_name)
 
=== modified file 'nova/virt/xenapi_conn.py'
--- nova/virt/xenapi_conn.py 2011-09-15 18:44:49 +0000
+++ nova/virt/xenapi_conn.py 2011-09-20 16:57:39 +0000
@@ -217,7 +217,8 @@
217 """217 """
218 self._vmops.inject_file(instance, b64_path, b64_contents)218 self._vmops.inject_file(instance, b64_path, b64_contents)
219219
220 def destroy(self, instance, network_info, cleanup=True):220 def destroy(self, instance, network_info, block_device_info=None,
221 cleanup=True):
221 """Destroy VM instance"""222 """Destroy VM instance"""
222 self._vmops.destroy(instance, network_info)223 self._vmops.destroy(instance, network_info)
223224
@@ -289,15 +290,17 @@
289 xs_url = urlparse.urlparse(FLAGS.xenapi_connection_url)290 xs_url = urlparse.urlparse(FLAGS.xenapi_connection_url)
290 return xs_url.netloc291 return xs_url.netloc
291292
292 def attach_volume(self, instance_name, device_path, mountpoint):293 def attach_volume(self, connection_info, instance_name, mountpoint):
293 """Attach volume storage to VM instance"""294 """Attach volume storage to VM instance"""
294 return self._volumeops.attach_volume(instance_name,295 return self._volumeops.attach_volume(connection_info,
295 device_path,296 instance_name,
296 mountpoint)297 mountpoint)
297298
298 def detach_volume(self, instance_name, mountpoint):299 def detach_volume(self, connection_info, instance_name, mountpoint):
299 """Detach volume storage to VM instance"""300 """Detach volume storage to VM instance"""
300 return self._volumeops.detach_volume(instance_name, mountpoint)301 return self._volumeops.detach_volume(connection_info,
302 instance_name,
303 mountpoint)
301304
302 def get_console_pool_info(self, console_type):305 def get_console_pool_info(self, console_type):
303 xs_url = urlparse.urlparse(FLAGS.xenapi_connection_url)306 xs_url = urlparse.urlparse(FLAGS.xenapi_connection_url)
304307
=== modified file 'nova/volume/api.py'
--- nova/volume/api.py 2011-08-26 01:38:35 +0000
+++ nova/volume/api.py 2011-09-20 16:57:39 +0000
@@ -23,7 +23,6 @@
 
 from eventlet import greenthread
 
-from nova import db
 from nova import exception
 from nova import flags
 from nova import log as logging
@@ -180,12 +179,49 @@
         if volume['status'] == "available":
             raise exception.ApiError(_("Volume is already detached"))
 
-    def remove_from_compute(self, context, volume_id, host):
+    def remove_from_compute(self, context, instance_id, volume_id, host):
         """Remove volume from specified compute host."""
         rpc.call(context,
                  self.db.queue_get_for(context, FLAGS.compute_topic, host),
-                 {"method": "remove_volume",
-                  "args": {'volume_id': volume_id}})
+                 {"method": "remove_volume_connection",
+                  "args": {'instance_id': instance_id,
                           'volume_id': volume_id}})
+
+    def attach(self, context, volume_id, instance_id, mountpoint):
+        volume = self.get(context, volume_id)
+        host = volume['host']
+        queue = self.db.queue_get_for(context, FLAGS.volume_topic, host)
+        return rpc.call(context, queue,
+                        {"method": "attach_volume",
+                         "args": {"volume_id": volume_id,
+                                  "instance_id": instance_id,
+                                  "mountpoint": mountpoint}})
+
+    def detach(self, context, volume_id):
+        volume = self.get(context, volume_id)
+        host = volume['host']
+        queue = self.db.queue_get_for(context, FLAGS.volume_topic, host)
+        return rpc.call(context, queue,
+                        {"method": "detach_volume",
+                         "args": {"volume_id": volume_id}})
+
+    def initialize_connection(self, context, volume_id, address):
+        volume = self.get(context, volume_id)
+        host = volume['host']
+        queue = self.db.queue_get_for(context, FLAGS.volume_topic, host)
+        return rpc.call(context, queue,
+                        {"method": "initialize_connection",
+                         "args": {"volume_id": volume_id,
                                  "address": address}})
+
+    def terminate_connection(self, context, volume_id, address):
+        volume = self.get(context, volume_id)
+        host = volume['host']
+        queue = self.db.queue_get_for(context, FLAGS.volume_topic, host)
+        return rpc.call(context, queue,
+                        {"method": "terminate_connection",
+                         "args": {"volume_id": volume_id,
                                  "address": address}})
 
     def _create_snapshot(self, context, volume_id, name, description,
                          force=False):
 
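
The intended call order for these methods from the compute side, as a sketch (volume_api, ctxt and compute_ip are stand-in names, and error handling is omitted):

    # Ask the volume host to export the volume to this compute host and
    # return the connection details the virt driver needs.
    connection_info = volume_api.initialize_connection(ctxt, volume_id,
                                                       compute_ip)
    # The virt driver connects using connection_info, then the
    # attachment is recorded in the db.
    volume_api.attach(ctxt, volume_id, instance_id, mountpoint)

    # Detach runs the same steps in reverse:
    volume_api.detach(ctxt, volume_id)
    volume_api.terminate_connection(ctxt, volume_id, compute_ip)
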
=== modified file 'nova/volume/driver.py'
--- nova/volume/driver.py 2011-09-13 21:32:24 +0000
+++ nova/volume/driver.py 2011-09-20 16:57:39 +0000
@@ -20,8 +20,8 @@
 
 """
 
+import os
 import time
-import os
 from xml.etree import ElementTree
 
 from nova import exception
@@ -35,25 +35,17 @@
 FLAGS = flags.FLAGS
 flags.DEFINE_string('volume_group', 'nova-volumes',
                     'Name for the VG that will contain exported volumes')
-flags.DEFINE_string('aoe_eth_dev', 'eth0',
-                    'Which device to export the volumes on')
 flags.DEFINE_string('num_shell_tries', 3,
                     'number of times to attempt to run flakey shell commands')
 flags.DEFINE_string('num_iscsi_scan_tries', 3,
                     'number of times to rescan iSCSI target to find volume')
-flags.DEFINE_integer('num_shelves',
-                     100,
-                     'Number of vblade shelves')
-flags.DEFINE_integer('blades_per_shelf',
-                     16,
-                     'Number of vblade blades per shelf')
 flags.DEFINE_integer('iscsi_num_targets',
                      100,
                      'Number of iscsi target ids per host')
 flags.DEFINE_string('iscsi_target_prefix', 'iqn.2010-10.org.openstack:',
                     'prefix for iscsi volumes')
-flags.DEFINE_string('iscsi_ip_prefix', '$my_ip',
-                    'discover volumes on the ip that starts with this prefix')
+flags.DEFINE_string('iscsi_ip_address', '$my_ip',
+                    'use this ip for iscsi')
 flags.DEFINE_string('rbd_pool', 'rbd',
                     'the rbd pool in which volumes are stored')
 
@@ -202,146 +194,24 @@
         """Removes an export for a logical volume."""
         raise NotImplementedError()
 
-    def discover_volume(self, context, volume):
-        """Discover volume on a remote host."""
-        raise NotImplementedError()
-
-    def undiscover_volume(self, volume):
-        """Undiscover volume on a remote host."""
-        raise NotImplementedError()
-
     def check_for_export(self, context, volume_id):
         """Make sure volume is exported."""
         raise NotImplementedError()
 
+    def initialize_connection(self, volume, address):
+        """Allow connection to ip and return connection info."""
+        raise NotImplementedError()
+
+    def terminate_connection(self, volume, address):
+        """Disallow connection from ip"""
+        raise NotImplementedError()
+
     def get_volume_stats(self, refresh=False):
         """Return the current state of the volume service. If 'refresh' is
         True, run the update first."""
         return None
 
 
-class AOEDriver(VolumeDriver):
-    """WARNING! Deprecated. This driver will be removed in Essex. Its use
-    is not recommended.
-
-    Implements AOE specific volume commands."""
-
-    def __init__(self, *args, **kwargs):
-        LOG.warn(_("AOEDriver is deprecated and will be removed in Essex"))
-        super(AOEDriver, self).__init__(*args, **kwargs)
-
-    def ensure_export(self, context, volume):
-        # NOTE(vish): we depend on vblade-persist for recreating exports
-        pass
-
-    def _ensure_blades(self, context):
-        """Ensure that blades have been created in datastore."""
-        total_blades = FLAGS.num_shelves * FLAGS.blades_per_shelf
-        if self.db.export_device_count(context) >= total_blades:
-            return
-        for shelf_id in xrange(FLAGS.num_shelves):
-            for blade_id in xrange(FLAGS.blades_per_shelf):
-                dev = {'shelf_id': shelf_id, 'blade_id': blade_id}
-                self.db.export_device_create_safe(context, dev)
-
-    def create_export(self, context, volume):
-        """Creates an export for a logical volume."""
-        self._ensure_blades(context)
-        (shelf_id,
-         blade_id) = self.db.volume_allocate_shelf_and_blade(context,
-                                                             volume['id'])
-        self._try_execute(
-                'vblade-persist', 'setup',
-                shelf_id,
-                blade_id,
-                FLAGS.aoe_eth_dev,
-                "/dev/%s/%s" %
-                (FLAGS.volume_group,
-                 volume['name']),
-                run_as_root=True)
-        # NOTE(vish): The standard _try_execute does not work here
-        #             because these methods throw errors if other
-        #             volumes on this host are in the process of
-        #             being created. The good news is the command
-        #             still works for the other volumes, so we
-        #             just wait a bit for the current volume to
-        #             be ready and ignore any errors.
-        time.sleep(2)
-        self._execute('vblade-persist', 'auto', 'all',
-                      check_exit_code=False, run_as_root=True)
-        self._execute('vblade-persist', 'start', 'all',
-                      check_exit_code=False, run_as_root=True)
-
-    def remove_export(self, context, volume):
-        """Removes an export for a logical volume."""
-        (shelf_id,
-         blade_id) = self.db.volume_get_shelf_and_blade(context,
-                                                        volume['id'])
-        self._try_execute('vblade-persist', 'stop',
-                          shelf_id, blade_id, run_as_root=True)
-        self._try_execute('vblade-persist', 'destroy',
-                          shelf_id, blade_id, run_as_root=True)
-
-    def discover_volume(self, context, _volume):
-        """Discover volume on a remote host."""
-        (shelf_id,
-         blade_id) = self.db.volume_get_shelf_and_blade(context,
-                                                        _volume['id'])
-        self._execute('aoe-discover', run_as_root=True)
-        out, err = self._execute('aoe-stat', check_exit_code=False,
-                                 run_as_root=True)
-        device_path = 'e%(shelf_id)d.%(blade_id)d' % locals()
-        if out.find(device_path) >= 0:
-            return "/dev/etherd/%s" % device_path
-        else:
-            return
-
-    def undiscover_volume(self, _volume):
-        """Undiscover volume on a remote host."""
-        pass
-
-    def check_for_export(self, context, volume_id):
-        """Make sure volume is exported."""
-        (shelf_id,
-         blade_id) = self.db.volume_get_shelf_and_blade(context,
-                                                        volume_id)
-        cmd = ('vblade-persist', 'ls', '--no-header')
-        out, _err = self._execute(*cmd, run_as_root=True)
-        exported = False
-        for line in out.split('\n'):
-            param = line.split(' ')
-            if len(param) == 6 and param[0] == str(shelf_id) \
-                    and param[1] == str(blade_id) and param[-1] == "run":
-                exported = True
-                break
-        if not exported:
-            # Instance will be terminated in this case.
-            desc = _("Cannot confirm exported volume id:%(volume_id)s. "
-                     "vblade process for e%(shelf_id)s.%(blade_id)s "
-                     "isn't running.") % locals()
-            raise exception.ProcessExecutionError(out, _err, cmd=cmd,
-                                                  description=desc)
-
-
-class FakeAOEDriver(AOEDriver):
-    """Logs calls instead of executing."""
-
-    def __init__(self, *args, **kwargs):
-        super(FakeAOEDriver, self).__init__(execute=self.fake_execute,
-                                            sync_exec=self.fake_execute,
-                                            *args, **kwargs)
-
-    def check_for_setup_error(self):
-        """No setup necessary in fake mode."""
-        pass
-
-    @staticmethod
-    def fake_execute(cmd, *_args, **_kwargs):
-        """Execute that simply logs the command."""
-        LOG.debug(_("FAKE AOE: %s"), cmd)
-        return (None, None)
-
-
 class ISCSIDriver(VolumeDriver):
     """Executes commands relating to ISCSI volumes.
 
@@ -445,7 +315,7 @@
                                   '-t', 'sendtargets', '-p', volume['host'],
                                   run_as_root=True)
         for target in out.splitlines():
-            if FLAGS.iscsi_ip_prefix in target and volume_name in target:
+            if FLAGS.iscsi_ip_address in target and volume_name in target:
                 return target
         return None
 
@@ -462,6 +332,8 @@
 
         :target_portal:    the portal of the iSCSI target
 
+        :volume_id:    the id of the volume (currently used by xen)
+
         :auth_method:, :auth_username:, :auth_password:
 
             the authentication details. Right now, either auth_method is not
@@ -491,6 +363,7 @@
 
         iscsi_portal = iscsi_target.split(",")[0]
 
+        properties['volume_id'] = volume['id']
         properties['target_iqn'] = iscsi_name
         properties['target_portal'] = iscsi_portal
 
@@ -519,64 +392,17 @@
                          '-v', property_value)
         return self._run_iscsiadm(iscsi_properties, iscsi_command)
 
-    def discover_volume(self, context, volume):
-        """Discover volume on a remote host."""
-        iscsi_properties = self._get_iscsi_properties(volume)
-
-        if not iscsi_properties['target_discovered']:
-            self._run_iscsiadm(iscsi_properties, ('--op', 'new'))
-
-        if iscsi_properties.get('auth_method'):
-            self._iscsiadm_update(iscsi_properties,
-                                  "node.session.auth.authmethod",
-                                  iscsi_properties['auth_method'])
-            self._iscsiadm_update(iscsi_properties,
-                                  "node.session.auth.username",
-                                  iscsi_properties['auth_username'])
-            self._iscsiadm_update(iscsi_properties,
-                                  "node.session.auth.password",
-                                  iscsi_properties['auth_password'])
-
-        self._run_iscsiadm(iscsi_properties, ("--login", ))
-
-        self._iscsiadm_update(iscsi_properties, "node.startup", "automatic")
-
-        mount_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" %
-                        (iscsi_properties['target_portal'],
-                         iscsi_properties['target_iqn']))
-
-        # The /dev/disk/by-path/... node is not always present immediately
-        # TODO(justinsb): This retry-with-delay is a pattern, move to utils?
-        tries = 0
-        while not os.path.exists(mount_device):
-            if tries >= FLAGS.num_iscsi_scan_tries:
-                raise exception.Error(_("iSCSI device not found at %s") %
-                                      (mount_device))
-
-            LOG.warn(_("ISCSI volume not yet found at: %(mount_device)s. "
-                       "Will rescan & retry. Try number: %(tries)s") %
-                     locals())
-
-            # The rescan isn't documented as being necessary(?), but it helps
-            self._run_iscsiadm(iscsi_properties, ("--rescan", ))
-
-            tries = tries + 1
-            if not os.path.exists(mount_device):
-                time.sleep(tries ** 2)
-
-        if tries != 0:
-            LOG.debug(_("Found iSCSI node %(mount_device)s "
-                        "(after %(tries)s rescans)") %
-                      locals())
-
-        return mount_device
-
-    def undiscover_volume(self, volume):
-        """Undiscover volume on a remote host."""
-        iscsi_properties = self._get_iscsi_properties(volume)
-        self._iscsiadm_update(iscsi_properties, "node.startup", "manual")
-        self._run_iscsiadm(iscsi_properties, ("--logout", ))
-        self._run_iscsiadm(iscsi_properties, ('--op', 'delete'))
+    def initialize_connection(self, volume, address):
+        iscsi_properties = self._get_iscsi_properties(volume)
+        return {
+            'driver_volume_type': 'iscsi',
+            'data': iscsi_properties
+        }
+
+    def terminate_connection(self, volume, address):
+        pass
 
     def check_for_export(self, context, volume_id):
         """Make sure volume is exported."""
@@ -605,12 +431,13 @@
         """No setup necessary in fake mode."""
         pass
 
-    def discover_volume(self, context, volume):
-        """Discover volume on a remote host."""
-        return "/dev/disk/by-path/volume-id-%d" % volume['id']
+    def initialize_connection(self, volume, address):
+        return {
+            'driver_volume_type': 'iscsi',
+            'data': {}
+        }
 
-    def undiscover_volume(self, volume):
-        """Undiscover volume on a remote host."""
+    def terminate_connection(self, volume, address):
         pass
 
     @staticmethod
@@ -675,12 +502,16 @@
         """Removes an export for a logical volume"""
         pass
 
-    def discover_volume(self, context, volume):
-        """Discover volume on a remote host"""
-        return "rbd:%s/%s" % (FLAGS.rbd_pool, volume['name'])
-
-    def undiscover_volume(self, volume):
-        """Undiscover volume on a remote host"""
+    def initialize_connection(self, volume, address):
+        return {
+            'driver_volume_type': 'rbd',
+            'data': {
                'name': '%s/%s' % (FLAGS.rbd_pool, volume['name'])
            }
        }
+
+    def terminate_connection(self, volume, address):
         pass
 
 
@@ -738,12 +569,15 @@
         """Removes an export for a logical volume"""
         pass
 
-    def discover_volume(self, context, volume):
-        """Discover volume on a remote host"""
-        return "sheepdog:%s" % volume['name']
+    def initialize_connection(self, volume, address):
+        return {
+            'driver_volume_type': 'sheepdog',
+            'data': {
                'name': volume['name']
            }
        }
 
-    def undiscover_volume(self, volume):
-        """Undiscover volume on a remote host"""
+    def terminate_connection(self, volume, address):
         pass
 
 
@@ -772,11 +606,11 @@
     def remove_export(self, context, volume):
         self.log_action('remove_export', volume)
 
-    def discover_volume(self, context, volume):
-        self.log_action('discover_volume', volume)
+    def initialize_connection(self, volume, address):
+        self.log_action('initialize_connection', volume)
 
-    def undiscover_volume(self, volume):
-        self.log_action('undiscover_volume', volume)
+    def terminate_connection(self, volume, address):
+        self.log_action('terminate_connection', volume)
 
     def check_for_export(self, context, volume_id):
         self.log_action('check_for_export', volume_id)
@@ -906,6 +740,58 @@
 
         LOG.debug(_("VSA BE delete_volume for %s suceeded"), volume['name'])
 
+    def _discover_volume(self, context, volume):
+        """Discover volume on a remote host."""
+        iscsi_properties = self._get_iscsi_properties(volume)
+
+        if not iscsi_properties['target_discovered']:
+            self._run_iscsiadm(iscsi_properties, ('--op', 'new'))
+
+        if iscsi_properties.get('auth_method'):
+            self._iscsiadm_update(iscsi_properties,
+                                  "node.session.auth.authmethod",
+                                  iscsi_properties['auth_method'])
+            self._iscsiadm_update(iscsi_properties,
+                                  "node.session.auth.username",
+                                  iscsi_properties['auth_username'])
+            self._iscsiadm_update(iscsi_properties,
+                                  "node.session.auth.password",
+                                  iscsi_properties['auth_password'])
+
+        self._run_iscsiadm(iscsi_properties, ("--login", ))
+
+        self._iscsiadm_update(iscsi_properties, "node.startup", "automatic")
+
+        mount_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" %
+                        (iscsi_properties['target_portal'],
+                         iscsi_properties['target_iqn']))
+
+        # The /dev/disk/by-path/... node is not always present immediately
+        # TODO(justinsb): This retry-with-delay is a pattern, move to utils?
+        tries = 0
+        while not os.path.exists(mount_device):
+            if tries >= FLAGS.num_iscsi_scan_tries:
+                raise exception.Error(_("iSCSI device not found at %s") %
+                                      (mount_device))
+
+            LOG.warn(_("ISCSI volume not yet found at: %(mount_device)s. "
+                       "Will rescan & retry. Try number: %(tries)s") %
+                     locals())
+
+            # The rescan isn't documented as being necessary(?), but it helps
+            self._run_iscsiadm(iscsi_properties, ("--rescan", ))
+
+            tries = tries + 1
+            if not os.path.exists(mount_device):
+                time.sleep(tries ** 2)
+
+        if tries != 0:
+            LOG.debug(_("Found iSCSI node %(mount_device)s "
+                        "(after %(tries)s rescans)") %
+                      locals())
+
+        return mount_device
+
     def local_path(self, volume):
         if self._not_vsa_volume_or_drive(volume):
             return super(ZadaraBEDriver, self).local_path(volume)
@@ -913,7 +799,10 @@
         if self._is_vsa_volume(volume):
             LOG.debug(_("\tFE VSA Volume %s local path call - call discover"),
                       volume['name'])
-            return super(ZadaraBEDriver, self).discover_volume(None, volume)
+            # NOTE(vish): Copied discover from iscsi_driver since it is used
+            #             but this should probably be refactored into a common
+            #             area because it is used in libvirt driver.
+            return self._discover_volume(None, volume)
 
         raise exception.Error(_("local_path not supported"))
 
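
For other backend services, the whole remote-attach contract now reduces to these two driver methods. A minimal sketch with placeholder target values, not part of this branch:

    class ExampleBackendDriver(VolumeDriver):
        """Sketch of a third-party iscsi backend on the new interface."""

        def initialize_connection(self, volume, address):
            # Export the volume to `address`, then describe the connection
            # in terms a hypervisor-side driver (e.g. the libvirt volume
            # drivers above) can consume.
            return {'driver_volume_type': 'iscsi',
                    'data': {'volume_id': volume['id'],
                             'target_iqn': 'iqn.example:%s' % volume['name'],
                             'target_portal': '192.168.1.10:3260,1'}}

        def terminate_connection(self, volume, address):
            # Revoke the export for `address`.
            pass
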
=== modified file 'nova/volume/manager.py'
--- nova/volume/manager.py 2011-08-26 20:55:43 +0000
+++ nova/volume/manager.py 2011-09-20 16:57:39 +0000
@@ -28,20 +28,17 @@
 :volume_topic:  What :mod:`rpc` topic to listen to (default: `volume`).
 :volume_manager:  The module name of a class derived from
                   :class:`manager.Manager` (default:
-                  :class:`nova.volume.manager.AOEManager`).
+                  :class:`nova.volume.manager.Manager`).
 :storage_availability_zone:  Defaults to `nova`.
-:volume_driver:  Used by :class:`AOEManager`. Defaults to
-                 :class:`nova.volume.driver.AOEDriver`.
-:num_shelves:  Number of shelves for AoE (default: 100).
-:num_blades:  Number of vblades per shelf to allocate AoE storage from
-              (default: 16).
+:volume_driver:  Used by :class:`Manager`. Defaults to
+                 :class:`nova.volume.driver.ISCSIDriver`.
 :volume_group:  Name of the group that will contain exported volumes (default:
                 `nova-volumes`)
-:aoe_eth_dev:  Device name the volumes will be exported on (default: `eth0`).
-:num_shell_tries:  Number of times to attempt to run AoE commands (default: 3)
+:num_shell_tries:  Number of times to attempt to run commands (default: 3)
 
 """
 
+import sys
 
 from nova import context
 from nova import exception
@@ -126,10 +123,11 @@
             if model_update:
                 self.db.volume_update(context, volume_ref['id'], model_update)
         except Exception:
+            exc_info = sys.exc_info()
             self.db.volume_update(context,
                                   volume_ref['id'], {'status': 'error'})
             self._notify_vsa(context, volume_ref, 'error')
-            raise
+            raise exc_info[0], exc_info[1], exc_info[2]
 
         now = utils.utcnow()
         self.db.volume_update(context,
@@ -181,10 +179,11 @@
                                       {'status': 'available'})
             return True
         except Exception:
+            exc_info = sys.exc_info()
            self.db.volume_update(context,
                                  volume_ref['id'],
                                  {'status': 'error_deleting'})
-            raise
+            raise exc_info[0], exc_info[1], exc_info[2]
 
         self.db.volume_destroy(context, volume_id)
         LOG.debug(_("volume %s: deleted successfully"), volume_ref['name'])
@@ -233,26 +232,26 @@
         LOG.debug(_("snapshot %s: deleted successfully"), snapshot_ref['name'])
         return True
 
-    def setup_compute_volume(self, context, volume_id):
-        """Setup remote volume on compute host.
-
-        Returns path to device."""
-        context = context.elevated()
-        volume_ref = self.db.volume_get(context, volume_id)
-        if volume_ref['host'] == self.host and FLAGS.use_local_volumes:
-            path = self.driver.local_path(volume_ref)
-        else:
-            path = self.driver.discover_volume(context, volume_ref)
-        return path
-
-    def remove_compute_volume(self, context, volume_id):
-        """Remove remote volume on compute host."""
-        context = context.elevated()
-        volume_ref = self.db.volume_get(context, volume_id)
-        if volume_ref['host'] == self.host and FLAGS.use_local_volumes:
-            return True
-        else:
-            self.driver.undiscover_volume(volume_ref)
+    def attach_volume(self, context, volume_id, instance_id, mountpoint):
+        """Updates db to show volume is attached."""
+        # TODO(vish): refactor this into a more general "reserve"
+        self.db.volume_attached(context,
+                                volume_id,
+                                instance_id,
+                                mountpoint)
+
+    def detach_volume(self, context, volume_id):
+        """Updates db to show volume is detached."""
+        # TODO(vish): refactor this into a more general "unreserve"
+        self.db.volume_detached(context, volume_id)
+
+    def initialize_connection(self, context, volume_id, address):
+        volume_ref = self.db.volume_get(context, volume_id)
+        return self.driver.initialize_connection(volume_ref, address)
+
+    def terminate_connection(self, context, volume_id, address):
+        volume_ref = self.db.volume_get(context, volume_id)
+        self.driver.terminate_connection(volume_ref, address)
 
     def check_for_export(self, context, instance_id):
         """Make sure whether volume is exported."""
 
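
The exc_info capture added above is presumably there because the db call in the error path can handle exceptions of its own, which in python 2 would make a bare `raise` re-raise the wrong thing; saving the exception first preserves the original type, value and traceback. The idiom as a standalone sketch (python 2 three-argument raise; the function names are placeholders):

    import sys

    try:
        do_work()                     # placeholder for the driver call
    except Exception:
        exc_info = sys.exc_info()     # capture before anything else runs
        mark_error_in_db()            # may swallow/replace exception state
        raise exc_info[0], exc_info[1], exc_info[2]
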
=== modified file 'nova/volume/san.py'
--- nova/volume/san.py 2011-08-26 01:38:35 +0000
+++ nova/volume/san.py 2011-09-20 16:57:39 +0000
@@ -61,9 +61,6 @@
     def _build_iscsi_target_name(self, volume):
         return "%s%s" % (FLAGS.iscsi_target_prefix, volume['name'])
 
-    # discover_volume is still OK
-    # undiscover_volume is still OK
-
     def _connect_to_ssh(self):
         ssh = paramiko.SSHClient()
         #TODO(justinsb): We need a better SSH key policy
 
=== modified file 'tools/euca-get-ajax-console' (properties changed: +x to -x)