Merge lp:~vishvananda/nova/volume-cleanup-2 into lp:~hudson-openstack/nova/trunk
Status: Needs review
Proposed branch: lp:~vishvananda/nova/volume-cleanup-2
Merge into: lp:~hudson-openstack/nova/trunk
Prerequisite: lp:~mcgrue/nova/volume-cleanup
Diff against target:
3176 lines (+1006/-933), 36 files modified:
- Authors (+1/-0)
- bin/nova-manage (+2/-3)
- doc/source/runnova/getting.started.rst (+0/-1)
- nova/compute/api.py (+3/-2)
- nova/compute/manager.py (+146/-82)
- nova/compute/utils.py (+0/-29)
- nova/db/api.py (+1/-35)
- nova/db/sqlalchemy/api.py (+5/-55)
- nova/db/sqlalchemy/migrate_repo/versions/048_kill_export_devices.py (+51/-0)
- nova/db/sqlalchemy/migrate_repo/versions/049_add_connection_info_to_block_device_mapping.py (+35/-0)
- nova/db/sqlalchemy/models.py (+2/-15)
- nova/exception.py (+4/-4)
- nova/rpc/common.py (+4/-5)
- nova/tests/api/ec2/test_cloud.py (+11/-10)
- nova/tests/fake_flags.py (+0/-4)
- nova/tests/integrated/test_volumes.py (+5/-5)
- nova/tests/scheduler/test_scheduler.py (+3/-2)
- nova/tests/test_compute.py (+95/-227)
- nova/tests/test_libvirt.py (+127/-13)
- nova/tests/test_virt_drivers.py (+5/-3)
- nova/tests/test_volume.py (+2/-80)
- nova/tests/test_xenapi.py (+20/-4)
- nova/virt/driver.py (+6/-5)
- nova/virt/fake.py (+19/-4)
- nova/virt/hyperv.py (+4/-3)
- nova/virt/libvirt.xml.template (+7/-15)
- nova/virt/libvirt/connection.py (+91/-48)
- nova/virt/libvirt/volume.py (+149/-0)
- nova/virt/vmwareapi_conn.py (+4/-3)
- nova/virt/xenapi/volume_utils.py (+8/-7)
- nova/virt/xenapi/volumeops.py (+7/-4)
- nova/virt/xenapi_conn.py (+10/-7)
- nova/volume/api.py (+40/-4)
- nova/volume/driver.py (+110/-221)
- nova/volume/manager.py (+29/-30)
- nova/volume/san.py (+0/-3)
To merge this branch: bzr merge lp:~vishvananda/nova/volume-cleanup-2
Related bugs:
Related blueprints: Volume Code cleanup (Medium)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Thierry Carrez (community) | ffe | | Abstain
Christopher MacGown (community) | | | Needs Resubmitting

Review via email: mp+72270@code.launchpad.net
Commit message
Description of the change
This is an initial proposal just to get on the radar and potentially start collecting feedback. I'm trying to decouple the interactions between compute and volume and allow new drivers to be written for each hypervisor. This code is not expected to run or pass tests yet. The goal is to allow volumes to be used generically and to easily support other services like Lunr and VSA. I'm still cleaning it up, but here is the current progress:
* Removes discover and undiscover volume
* Implements a generic driver model for libvirt volume attachment (something similar can be done for Xen as well, but right now it only supports iSCSI)
* Adds initialize_
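The generic driver model for libvirt volume attachment can be sketched roughly like this (a minimal illustration, assuming a registry keyed by the connection's `driver_volume_type`; the class and registry names here are placeholders, not the actual contents of nova/virt/libvirt/volume.py):

```python
# Sketch: the volume service returns connection_info describing how to reach
# a volume, and compute dispatches to a matching attachment driver.

class LibvirtVolumeDriver(object):
    """Base class: turn connection_info into a device the guest can use."""

    def connect_volume(self, connection_info, mount_device):
        raise NotImplementedError


class FakeISCSIDriver(LibvirtVolumeDriver):
    """Stand-in for an iSCSI driver; a real one would log in via iscsiadm."""

    def connect_volume(self, connection_info, mount_device):
        iqn = connection_info['data']['target_iqn']
        return {'type': 'block',
                'device_path': '/dev/disk/by-path/%s' % iqn,
                'mount_device': mount_device}


# Hypothetical registry keyed by driver_volume_type.
VOLUME_DRIVERS = {'iscsi': FakeISCSIDriver}


def attach_volume(connection_info, mount_device):
    driver_cls = VOLUME_DRIVERS[connection_info['driver_volume_type']]
    return driver_cls().connect_volume(connection_info, mount_device)


conf = attach_volume(
    {'driver_volume_type': 'iscsi',
     'data': {'target_iqn': 'iqn.2010-10.org.openstack:volume-0001'}},
    '/dev/vdb')
print(conf['device_path'])
```

New volume types (for example for Lunr or a VSA backend) would then only need a new driver class registered under their own `driver_volume_type`.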
Christopher MacGown (0x44) wrote:
If this FFE is refused, we should put it back into WIP.
Thierry Carrez (ttx) wrote:
Essex is open
Unmerged revisions
- 1402. By Vish Ishaya: make it work when we are on the same host
- 1401. By Vish Ishaya: use tuples for login and logout
- 1400. By Vish Ishaya: renumber migrations
- 1399. By Vish Ishaya: merged trunk
- 1398. By Vish Ishaya: fix rescan and messed up permissions
- 1397. By Vish Ishaya: compare as string instead of converting to int
- 1396. By Vish Ishaya: fix integrated attach volume test
- 1395. By Vish Ishaya: fix scheduler test
- 1394. By Vish Ishaya: changes from volume api
- 1393. By Vish Ishaya: pull in changes from manager
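In miniature, the compute-side bookkeeping this branch introduces works like this: `connection_info` from the volume service is serialized into the `block_device_mapping` row at attach time, so detach and migration can recover it later without going through a volume manager. A hedged sketch, with an in-memory list standing in for nova.db.api:

```python
import json

# Stand-in for the block_device_mapping table.
BDM_TABLE = []


def block_device_mapping_create(values):
    BDM_TABLE.append(dict(values))


def attach_volume(connection_info, instance_id, volume_id, mountpoint):
    # connection_info is stored as JSON text on the bdm row, mirroring the
    # utils.dumps() call in ComputeManager.attach_volume.
    block_device_mapping_create({
        'instance_id': instance_id,
        'connection_info': json.dumps(connection_info),
        'device_name': mountpoint,
        'delete_on_termination': False,
        'volume_id': volume_id,
    })


def detach_volume(instance_id, volume_id):
    # NOTE: compare as strings, as the diff does, because the OS API does not
    # convert to integer and uuids may be supported in the future.
    for bdm in BDM_TABLE:
        if str(bdm['volume_id']) == str(volume_id):
            return json.loads(bdm['connection_info'])
    raise LookupError('no bdm for volume %s' % volume_id)


attach_volume({'driver_volume_type': 'iscsi', 'data': {}}, 1, 7, '/dev/vdb')
print(detach_volume(1, 7)['driver_volume_type'])  # iscsi
```

The stored mapping is what `_get_instance_volume_block_device_info` hands to the virt driver on destroy, so the hypervisor can disconnect from remote volumes even when the volume service is unreachable.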
Preview Diff
1 | === modified file 'Authors' |
2 | --- Authors 2011-09-20 03:21:10 +0000 |
3 | +++ Authors 2011-09-20 16:57:39 +0000 |
4 | @@ -10,6 +10,7 @@ |
5 | Antony Messerli <ant@openstack.org> |
6 | Armando Migliaccio <Armando.Migliaccio@eu.citrix.com> |
7 | Arvind Somya <asomya@cisco.com> |
8 | +Ben McGraw <ben@pistoncloud.com> |
9 | Bilal Akhtar <bilalakhtar@ubuntu.com> |
10 | Brad Hall <brad@nicira.com> |
11 | Brad McConnell <bmcconne@rackspace.com> |
12 | |
13 | === modified file 'bin/nova-manage' |
14 | --- bin/nova-manage 2011-09-20 06:50:27 +0000 |
15 | +++ bin/nova-manage 2011-09-20 16:57:39 +0000 |
16 | @@ -962,9 +962,8 @@ |
17 | msg = _('Only KVM and QEmu are supported for now. Sorry!') |
18 | raise exception.Error(msg) |
19 | |
20 | - if (FLAGS.volume_driver != 'nova.volume.driver.AOEDriver' and \ |
21 | - FLAGS.volume_driver != 'nova.volume.driver.ISCSIDriver'): |
22 | - msg = _("Support only AOEDriver and ISCSIDriver. Sorry!") |
23 | + if FLAGS.volume_driver != 'nova.volume.driver.ISCSIDriver': |
24 | + msg = _("Support only ISCSIDriver. Sorry!") |
25 | raise exception.Error(msg) |
26 | |
27 | rpc.call(ctxt, |
28 | |
29 | === modified file 'bin/nova-spoolsentry' (properties changed: -x to +x) |
30 | === modified file 'builddeb.sh' (properties changed: +x to -x) |
31 | === modified file 'contrib/nova.sh' (properties changed: +x to -x) |
32 | === modified file 'doc/find_autodoc_modules.sh' (properties changed: +x to -x) |
33 | === modified file 'doc/generate_autodoc_index.sh' (properties changed: +x to -x) |
34 | === modified file 'doc/source/image_src/zones_distsched_illustrations.odp' (properties changed: +x to -x) |
35 | === modified file 'doc/source/images/nova.compute.api.create.png' (properties changed: +x to -x) |
36 | === modified file 'doc/source/images/nova.compute.api.create_all_at_once.png' (properties changed: +x to -x) |
37 | === modified file 'doc/source/images/zone_aware_overview.png' (properties changed: +x to -x) |
38 | === modified file 'doc/source/images/zone_overview.png' (properties changed: +x to -x) |
39 | === modified file 'doc/source/runnova/getting.started.rst' |
40 | --- doc/source/runnova/getting.started.rst 2011-02-21 20:30:20 +0000 |
41 | +++ doc/source/runnova/getting.started.rst 2011-09-20 16:57:39 +0000 |
42 | @@ -73,7 +73,6 @@ |
43 | * dnsmasq |
44 | * vlan |
45 | * open-iscsi and iscsitarget (if you use iscsi volumes) |
46 | -* aoetools and vblade-persist (if you use aoe-volumes) |
47 | |
48 | Nova uses cutting-edge versions of many packages. There are ubuntu packages in |
49 | the nova-core trunk ppa. You can use add this ppa to your sources list on an |
50 | |
51 | === modified file 'nova/CA/geninter.sh' (properties changed: +x to -x) |
52 | === modified file 'nova/CA/genrootca.sh' (properties changed: +x to -x) |
53 | === modified file 'nova/CA/genvpn.sh' (properties changed: +x to -x) |
54 | === modified file 'nova/auth/opendj.sh' (properties changed: +x to -x) |
55 | === modified file 'nova/auth/slap.sh' (properties changed: +x to -x) |
56 | === modified file 'nova/cloudpipe/bootscript.template' (properties changed: +x to -x) |
57 | === modified file 'nova/compute/api.py' |
58 | --- nova/compute/api.py 2011-09-19 21:53:17 +0000 |
59 | +++ nova/compute/api.py 2011-09-20 16:57:39 +0000 |
60 | @@ -37,7 +37,6 @@ |
61 | from nova.compute import power_state |
62 | from nova.compute import task_states |
63 | from nova.compute import vm_states |
64 | -from nova.compute.utils import terminate_volumes |
65 | from nova.scheduler import api as scheduler_api |
66 | from nova.db import base |
67 | |
68 | @@ -770,7 +769,9 @@ |
69 | self._cast_compute_message('terminate_instance', context, |
70 | instance_id, host) |
71 | else: |
72 | - terminate_volumes(self.db, context, instance_id) |
73 | + for bdm in self.db.block_device_mapping_get_all_by_instance( |
74 | + context, instance_id): |
75 | + self.db.block_device_mapping_destroy(context, bdm['id']) |
76 | self.db.instance_destroy(context, instance_id) |
77 | |
78 | @scheduler_api.reroute_compute("stop") |
79 | |
80 | === modified file 'nova/compute/manager.py' |
81 | --- nova/compute/manager.py 2011-09-19 15:25:00 +0000 |
82 | +++ nova/compute/manager.py 2011-09-20 16:57:39 +0000 |
83 | @@ -30,8 +30,6 @@ |
84 | :instances_path: Where instances are kept on disk |
85 | :compute_driver: Name of class that is used to handle virtualization, loaded |
86 | by :func:`nova.utils.import_object` |
87 | -:volume_manager: Name of class that handles persistent storage, loaded by |
88 | - :func:`nova.utils.import_object` |
89 | |
90 | """ |
91 | |
92 | @@ -59,7 +57,6 @@ |
93 | from nova.compute import task_states |
94 | from nova.compute import vm_states |
95 | from nova.notifier import api as notifier |
96 | -from nova.compute.utils import terminate_volumes |
97 | from nova.virt import driver |
98 | |
99 | |
100 | @@ -144,7 +141,6 @@ |
101 | |
102 | self.network_api = network.API() |
103 | self.network_manager = utils.import_object(FLAGS.network_manager) |
104 | - self.volume_manager = utils.import_object(FLAGS.volume_manager) |
105 | self._last_host_check = 0 |
106 | super(ComputeManager, self).__init__(service_name="compute", |
107 | *args, **kwargs) |
108 | @@ -282,8 +278,8 @@ |
109 | if not ((bdm['snapshot_id'] is None) or |
110 | (bdm['volume_id'] is not None)): |
111 | LOG.error(_('corrupted state of block device mapping ' |
112 | - 'id: %(id)s ' |
113 | - 'snapshot: %(snapshot_id) volume: %(vollume_id)') % |
114 | + 'id: %(id)s snapshot: %(snapshot_id)s ' |
115 | + 'volume: %(volume_id)s') % |
116 | {'id': bdm['id'], |
117 | 'snapshot_id': bdm['snapshot'], |
118 | 'volume_id': bdm['volume_id']}) |
119 | @@ -293,10 +289,13 @@ |
120 | if bdm['volume_id'] is not None: |
121 | volume_api.check_attach(context, |
122 | volume_id=bdm['volume_id']) |
123 | - dev_path = self._attach_volume_boot(context, instance_id, |
124 | + cinfo = self._attach_volume_boot(context, instance_id, |
125 | bdm['volume_id'], |
126 | bdm['device_name']) |
127 | - block_device_mapping.append({'device_path': dev_path, |
128 | + self.db.block_device_mapping_update( |
129 | + context, bdm['id'], |
130 | + {'connection_info': utils.dumps(cinfo)}) |
131 | + block_device_mapping.append({'connection_info': cinfo, |
132 | 'mount_device': |
133 | bdm['device_name']}) |
134 | |
135 | @@ -450,6 +449,23 @@ |
136 | # be fixed once we have no-db-messaging |
137 | pass |
138 | |
139 | + def _get_instance_volume_bdms(self, context, instance_id): |
140 | + bdms = self.db.block_device_mapping_get_all_by_instance(context, |
141 | + instance_id) |
142 | + return [bdm for bdm in bdms if bdm['volume_id']] |
143 | + |
144 | + def _get_instance_volume_block_device_info(self, context, instance_id): |
145 | + bdms = self._get_instance_volume_bdms(context, instance_id) |
146 | + block_device_mapping = [] |
147 | + for bdm in bdms: |
148 | + cinfo = utils.loads(bdm['connection_info']) |
149 | + block_device_mapping.append({'connection_info': cinfo, |
150 | + 'mount_device': |
151 | + bdm['device_name']}) |
152 | + ## NOTE(vish): The mapping is passed in so the driver can disconnect |
153 | + ## from remote volumes if necessary |
154 | + return {'block_device_mapping': block_device_mapping} |
155 | + |
156 | @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id()) |
157 | def run_instance(self, context, instance_id, **kwargs): |
158 | self._run_instance(context, instance_id, **kwargs) |
159 | @@ -460,9 +476,11 @@ |
160 | """Starting an instance on this host.""" |
161 | # TODO(yamahata): injected_files isn't supported. |
162 | # Anyway OSAPI doesn't support stop/start yet |
163 | + # FIXME(vish): I've kept the files during stop instance, but |
164 | + # I think start will fail due to the files still |
165 | self._run_instance(context, instance_id) |
166 | |
167 | - def _shutdown_instance(self, context, instance_id, action_str): |
168 | + def _shutdown_instance(self, context, instance_id, action_str, cleanup): |
169 | """Shutdown an instance on this host.""" |
170 | context = context.elevated() |
171 | instance = self.db.instance_get(context, instance_id) |
172 | @@ -474,24 +492,37 @@ |
173 | if not FLAGS.stub_network: |
174 | self.network_api.deallocate_for_instance(context, instance) |
175 | |
176 | - volumes = instance.get('volumes') or [] |
177 | - for volume in volumes: |
178 | - self._detach_volume(context, instance_id, volume['id'], False) |
179 | + for bdm in self._get_instance_volume_bdms(context, instance_id): |
180 | + volume_id = bdm['volume_id'] |
181 | + try: |
182 | + self._detach_volume(context, instance_id, volume_id) |
183 | + except exception.DiskNotFound as exc: |
184 | + LOG.warn(_("Ignoring DiskNotFound: %s") % exc) |
185 | |
186 | if instance['power_state'] == power_state.SHUTOFF: |
187 | self.db.instance_destroy(context, instance_id) |
188 | raise exception.Error(_('trying to destroy already destroyed' |
189 | ' instance: %s') % instance_id) |
190 | - self.driver.destroy(instance, network_info) |
191 | + block_device_info = self._get_instance_volume_block_device_info( |
192 | + context, instance_id) |
193 | + self.driver.destroy(instance, network_info, block_device_info, cleanup) |
194 | |
195 | - if action_str == 'Terminating': |
196 | - terminate_volumes(self.db, context, instance_id) |
197 | + def _cleanup_volumes(self, context, instance_id): |
198 | + volume_api = volume.API() |
199 | + bdms = self.db.block_device_mapping_get_all_by_instance(context, |
200 | + instance_id) |
201 | + for bdm in bdms: |
202 | + LOG.debug(_("terminating bdm %s") % bdm) |
203 | + if bdm['volume_id'] and bdm['delete_on_termination']: |
204 | + volume_api.delete(context, bdm['volume_id']) |
205 | + # NOTE(vish): bdms will be deleted on instance destroy |
206 | |
207 | @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id()) |
208 | @checks_instance_lock |
209 | def terminate_instance(self, context, instance_id): |
210 | """Terminate an instance on this host.""" |
211 | - self._shutdown_instance(context, instance_id, 'Terminating') |
212 | + self._shutdown_instance(context, instance_id, 'Terminating', True) |
213 | + self._cleanup_volumes(context, instance_id) |
214 | instance = self.db.instance_get(context.elevated(), instance_id) |
215 | self._instance_update(context, |
216 | instance_id, |
217 | @@ -510,7 +541,11 @@ |
218 | @checks_instance_lock |
219 | def stop_instance(self, context, instance_id): |
220 | """Stopping an instance on this host.""" |
221 | - self._shutdown_instance(context, instance_id, 'Stopping') |
222 | + # FIXME(vish): I've kept the files during stop instance, but |
223 | + # I think start will fail due to the files still |
224 | + # existing. I don't really know what the purpose of |
225 | + # stop and start are when compared to pause and unpause |
226 | + self._shutdown_instance(context, instance_id, 'Stopping', False) |
227 | self._instance_update(context, |
228 | instance_id, |
229 | vm_state=vm_states.STOPPED, |
230 | @@ -558,7 +593,6 @@ |
231 | instance_id, |
232 | vm_state=vm_states.REBUILDING, |
233 | task_state=task_states.SPAWNING) |
234 | - |
235 | # pull in new password here since the original password isn't in the db |
236 | instance_ref.admin_pass = kwargs.get('new_pass', |
237 | utils.generate_password(FLAGS.password_length)) |
238 | @@ -1226,17 +1260,17 @@ |
239 | """Attach a volume to an instance at boot time. So actual attach |
240 | is done by instance creation""" |
241 | |
242 | - # TODO(yamahata): |
243 | - # should move check_attach to volume manager? |
244 | - volume.API().check_attach(context, volume_id) |
245 | - |
246 | context = context.elevated() |
247 | LOG.audit(_("instance %(instance_id)s: booting with " |
248 | "volume %(volume_id)s at %(mountpoint)s") % |
249 | locals(), context=context) |
250 | - dev_path = self.volume_manager.setup_compute_volume(context, volume_id) |
251 | - self.db.volume_attached(context, volume_id, instance_id, mountpoint) |
252 | - return dev_path |
253 | + address = FLAGS.my_ip |
254 | + volume_api = volume.API() |
255 | + connection_info = volume_api.initialize_connection(context, |
256 | + volume_id, |
257 | + address) |
258 | + volume_api.attach(context, volume_id, instance_id, mountpoint) |
259 | + return connection_info |
260 | |
261 | @checks_instance_lock |
262 | def attach_volume(self, context, instance_id, volume_id, mountpoint): |
263 | @@ -1245,56 +1279,73 @@ |
264 | instance_ref = self.db.instance_get(context, instance_id) |
265 | LOG.audit(_("instance %(instance_id)s: attaching volume %(volume_id)s" |
266 | " to %(mountpoint)s") % locals(), context=context) |
267 | - dev_path = self.volume_manager.setup_compute_volume(context, |
268 | - volume_id) |
269 | + volume_api = volume.API() |
270 | + address = FLAGS.my_ip |
271 | + connection_info = volume_api.initialize_connection(context, |
272 | + volume_id, |
273 | + address) |
274 | try: |
275 | - self.driver.attach_volume(instance_ref['name'], |
276 | - dev_path, |
277 | + self.driver.attach_volume(connection_info, |
278 | + instance_ref['name'], |
279 | mountpoint) |
280 | - self.db.volume_attached(context, |
281 | - volume_id, |
282 | - instance_id, |
283 | - mountpoint) |
284 | - values = { |
285 | - 'instance_id': instance_id, |
286 | - 'device_name': mountpoint, |
287 | - 'delete_on_termination': False, |
288 | - 'virtual_name': None, |
289 | - 'snapshot_id': None, |
290 | - 'volume_id': volume_id, |
291 | - 'volume_size': None, |
292 | - 'no_device': None} |
293 | - self.db.block_device_mapping_create(context, values) |
294 | - except Exception as exc: # pylint: disable=W0702 |
295 | + except Exception: # pylint: disable=W0702 |
296 | + exc = sys.exc_info() |
297 | # NOTE(vish): The inline callback eats the exception info so we |
298 | # log the traceback here and reraise the same |
299 | # ecxception below. |
300 | LOG.exception(_("instance %(instance_id)s: attach failed" |
301 | " %(mountpoint)s, removing") % locals(), context=context) |
302 | - self.volume_manager.remove_compute_volume(context, |
303 | - volume_id) |
304 | + volume_api.terminate_connection(context, volume_id, address) |
305 | raise exc |
306 | |
307 | + volume_api.attach(context, volume_id, instance_id, mountpoint) |
308 | + values = { |
309 | + 'instance_id': instance_id, |
310 | + 'connection_info': utils.dumps(connection_info), |
311 | + 'device_name': mountpoint, |
312 | + 'delete_on_termination': False, |
313 | + 'virtual_name': None, |
314 | + 'snapshot_id': None, |
315 | + 'volume_id': volume_id, |
316 | + 'volume_size': None, |
317 | + 'no_device': None} |
318 | + self.db.block_device_mapping_create(context, values) |
319 | return True |
320 | |
321 | @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id()) |
322 | @checks_instance_lock |
323 | - def _detach_volume(self, context, instance_id, volume_id, destroy_bdm): |
324 | + def _detach_volume(self, context, instance_id, volume_id, |
325 | + destroy_bdm=False, mark_detached=True, |
326 | + force_detach=False): |
327 | """Detach a volume from an instance.""" |
328 | context = context.elevated() |
329 | instance_ref = self.db.instance_get(context, instance_id) |
330 | - volume_ref = self.db.volume_get(context, volume_id) |
331 | - mp = volume_ref['mountpoint'] |
332 | + bdms = self.db.block_device_mapping_get_all_by_instance( |
333 | + context, instance_id) |
334 | + for item in bdms: |
335 | + # NOTE(vish): Comparing as strings because the os_api doesn't |
336 | + # convert to integer and we may wish to support uuids |
337 | + # in the future. |
338 | + if str(item['volume_id']) == str(volume_id): |
339 | + bdm = item |
340 | + break |
341 | + mp = bdm['device_name'] |
342 | + |
343 | LOG.audit(_("Detach volume %(volume_id)s from mountpoint %(mp)s" |
344 | " on instance %(instance_id)s") % locals(), context=context) |
345 | - if instance_ref['name'] not in self.driver.list_instances(): |
346 | + volume_api = volume.API() |
347 | + if (instance_ref['name'] not in self.driver.list_instances() and |
348 | + not force_detach): |
349 | LOG.warn(_("Detaching volume from unknown instance %s"), |
350 | instance_id, context=context) |
351 | else: |
352 | - self.driver.detach_volume(instance_ref['name'], |
353 | - volume_ref['mountpoint']) |
354 | - self.volume_manager.remove_compute_volume(context, volume_id) |
355 | - self.db.volume_detached(context, volume_id) |
356 | + self.driver.detach_volume(utils.loads(bdm['connection_info']), |
357 | + instance_ref['name'], |
358 | + bdm['device_name']) |
359 | + address = FLAGS.my_ip |
360 | + volume_api.terminate_connection(context, volume_id, address) |
361 | + if mark_detached: |
362 | + volume_api.detach(context, volume_id) |
363 | if destroy_bdm: |
364 | self.db.block_device_mapping_destroy_by_instance_and_volume( |
365 | context, instance_id, volume_id) |
366 | @@ -1304,13 +1355,17 @@ |
367 | """Detach a volume from an instance.""" |
368 | return self._detach_volume(context, instance_id, volume_id, True) |
369 | |
370 | - def remove_volume(self, context, volume_id): |
371 | - """Remove volume on compute host. |
372 | - |
373 | - :param context: security context |
374 | - :param volume_id: volume ID |
375 | - """ |
376 | - self.volume_manager.remove_compute_volume(context, volume_id) |
377 | + @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id()) |
378 | + def remove_volume_connection(self, context, instance_id, volume_id): |
379 | + """Detach a volume from an instance.,""" |
380 | + # NOTE(vish): We don't want to actually mark the volume |
381 | + # detached, or delete the bdm, just remove the |
382 | + # connection from this host. |
383 | + try: |
384 | + self._detach_volume(context, instance_id, volume_id, |
385 | + False, False, True) |
386 | + except exception.NotFound: |
387 | + pass |
388 | |
389 | @exception.wrap_exception(notifier=notifier, publisher_id=publisher_id()) |
390 | def compare_cpu(self, context, cpu_info): |
391 | @@ -1393,14 +1448,14 @@ |
392 | |
393 | # Getting instance info |
394 | instance_ref = self.db.instance_get(context, instance_id) |
395 | - hostname = instance_ref['hostname'] |
396 | |
397 | # If any volume is mounted, prepare here. |
398 | - if not instance_ref['volumes']: |
399 | - LOG.info(_("%s has no volume."), hostname) |
400 | - else: |
401 | - for v in instance_ref['volumes']: |
402 | - self.volume_manager.setup_compute_volume(context, v['id']) |
403 | + block_device_info = \ |
404 | + self._get_instance_volume_block_device_info(context, instance_id) |
405 | + if not block_device_info['block_device_mapping']: |
406 | + LOG.info(_("%s has no volume."), instance_ref.name) |
407 | + |
408 | + self.driver.pre_live_migration(block_device_info) |
409 | |
410 | # Bridge settings. |
411 | # Call this method prior to ensure_filtering_rules_for_instance, |
412 | @@ -1436,7 +1491,7 @@ |
413 | # In addition, this method is creating filtering rule |
414 | # onto destination host. |
415 | self.driver.ensure_filtering_rules_for_instance(instance_ref, |
416 | - network_info) |
417 | + network_info) |
418 | |
419 | # Preparation for block migration |
420 | if block_migration: |
421 | @@ -1460,7 +1515,7 @@ |
422 | try: |
423 | # Checking volume node is working correctly when any volumes |
424 | # are attached to instances. |
425 | - if instance_ref['volumes']: |
426 | + if self._get_instance_volume_bdms(context, instance_id): |
427 | rpc.call(context, |
428 | FLAGS.volume_topic, |
429 | {"method": "check_for_export", |
430 | @@ -1480,12 +1535,13 @@ |
431 | 'disk': disk}}) |
432 | |
433 | except Exception: |
434 | + exc = sys.exc_info() |
435 | i_name = instance_ref.name |
436 | msg = _("Pre live migration for %(i_name)s failed at %(dest)s") |
437 | - LOG.error(msg % locals()) |
438 | + LOG.exception(msg % locals()) |
439 | self.rollback_live_migration(context, instance_ref, |
440 | dest, block_migration) |
441 | - raise |
442 | + raise exc |
443 | |
444 | # Executing live migration |
445 | # live_migration might raises exceptions, but |
446 | @@ -1513,11 +1569,12 @@ |
447 | instance_id = instance_ref['id'] |
448 | |
449 | # Detaching volumes. |
450 | - try: |
451 | - for vol in self.db.volume_get_all_by_instance(ctxt, instance_id): |
452 | - self.volume_manager.remove_compute_volume(ctxt, vol['id']) |
453 | - except exception.NotFound: |
454 | - pass |
455 | + for bdm in self._get_instance_volume_bdms(ctxt, instance_id): |
456 | + # NOTE(vish): We don't want to actually mark the volume |
457 | + # detached, or delete the bdm, just remove the |
458 | + # connection from this host. |
459 | + self.remove_volume_connection(ctxt, instance_id, |
460 | + bdm['volume_id']) |
461 | |
462 | # Releasing vlan. |
463 | # (not necessary in current implementation?) |
464 | @@ -1616,10 +1673,11 @@ |
465 | vm_state=vm_states.ACTIVE, |
466 | task_state=None) |
467 | |
468 | - for volume_ref in instance_ref['volumes']: |
469 | - volume_id = volume_ref['id'] |
470 | + for bdm in self._get_instance_volume_bdms(context, instance_ref['id']): |
471 | + volume_id = bdm['volume_id'] |
472 | self.db.volume_update(context, volume_id, {'status': 'in-use'}) |
473 | - volume.API().remove_from_compute(context, volume_id, dest) |
474 | + volume.API().remove_from_compute(context, instance_ref['id'], |
475 | + volume_id, dest) |
476 | |
477 | # Block migration needs empty image at destination host |
478 | # before migration starts, so if any failure occurs, |
479 | @@ -1636,9 +1694,15 @@ |
480 | :param context: security context |
481 | :param instance_id: nova.db.sqlalchemy.models.Instance.Id |
482 | """ |
483 | - instances_ref = self.db.instance_get(context, instance_id) |
484 | - network_info = self._get_instance_nw_info(context, instances_ref) |
485 | - self.driver.destroy(instances_ref, network_info) |
486 | + instance_ref = self.db.instance_get(context, instance_id) |
487 | + network_info = self._get_instance_nw_info(context, instance_ref) |
488 | + |
489 | + # NOTE(vish): The mapping is passed in so the driver can disconnect |
490 | + # from remote volumes if necessary |
491 | + block_device_info = \ |
492 | + self._get_instance_volume_block_device_info(context, instance_id) |
493 | + instance = instance_ref['name'] |
494 | + self.driver.destroy(instance, network_info, block_device_info, True) |
495 | |
496 | def periodic_tasks(self, context=None): |
497 | """Tasks to be run at a periodic interval.""" |
498 | |
499 | === removed file 'nova/compute/utils.py' |
500 | --- nova/compute/utils.py 2011-06-15 15:32:03 +0000 |
501 | +++ nova/compute/utils.py 1970-01-01 00:00:00 +0000 |
502 | @@ -1,29 +0,0 @@ |
503 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
504 | - |
505 | -# Copyright (c) 2011 VA Linux Systems Japan K.K |
506 | -# Copyright (c) 2011 Isaku Yamahata |
507 | -# |
508 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
509 | -# not use this file except in compliance with the License. You may obtain |
510 | -# a copy of the License at |
511 | -# |
512 | -# http://www.apache.org/licenses/LICENSE-2.0 |
513 | -# |
514 | -# Unless required by applicable law or agreed to in writing, software |
515 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
516 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
517 | -# License for the specific language governing permissions and limitations |
518 | -# under the License. |
519 | - |
520 | -from nova import volume |
521 | - |
522 | - |
523 | -def terminate_volumes(db, context, instance_id): |
524 | - """delete volumes of delete_on_termination=True in block device mapping""" |
525 | - volume_api = volume.API() |
526 | - for bdm in db.block_device_mapping_get_all_by_instance(context, |
527 | - instance_id): |
528 | - #LOG.debug(_("terminating bdm %s") % bdm) |
529 | - if bdm['volume_id'] and bdm['delete_on_termination']: |
530 | - volume_api.delete(context, bdm['volume_id']) |
531 | - db.block_device_mapping_destroy(context, bdm['id']) |
532 | |
533 | === modified file 'nova/db/api.py' |
534 | --- nova/db/api.py 2011-09-19 22:32:45 +0000 |
535 | +++ nova/db/api.py 2011-09-20 16:57:39 +0000 |
536 | @@ -56,18 +56,13 @@ |
537 | sqlalchemy='nova.db.sqlalchemy.api') |
538 | |
539 | |
540 | -class NoMoreBlades(exception.Error): |
541 | - """No more available blades.""" |
542 | - pass |
543 | - |
544 | - |
545 | class NoMoreNetworks(exception.Error): |
546 | """No more available networks.""" |
547 | pass |
548 | |
549 | |
550 | class NoMoreTargets(exception.Error): |
551 | - """No more available blades""" |
552 | + """No more available targets""" |
553 | pass |
554 | |
555 | |
556 | @@ -804,25 +799,6 @@ |
557 | ################### |
558 | |
559 | |
560 | -def export_device_count(context): |
561 | - """Return count of export devices.""" |
562 | - return IMPL.export_device_count(context) |
563 | - |
564 | - |
565 | -def export_device_create_safe(context, values): |
566 | - """Create an export_device from the values dictionary. |
567 | - |
568 | - The device is not returned. If the create violates the unique |
569 | - constraints because the shelf_id and blade_id already exist, |
570 | - no exception is raised. |
571 | - |
572 | - """ |
573 | - return IMPL.export_device_create_safe(context, values) |
574 | - |
575 | - |
576 | -################### |
577 | - |
578 | - |
579 | def iscsi_target_count_by_host(context, host): |
580 | """Return count of export devices.""" |
581 | return IMPL.iscsi_target_count_by_host(context, host) |
582 | @@ -898,11 +874,6 @@ |
583 | ################### |
584 | |
585 | |
586 | -def volume_allocate_shelf_and_blade(context, volume_id): |
587 | - """Atomically allocate a free shelf and blade from the pool.""" |
588 | - return IMPL.volume_allocate_shelf_and_blade(context, volume_id) |
589 | - |
590 | - |
591 | def volume_allocate_iscsi_target(context, volume_id, host): |
592 | """Atomically allocate a free iscsi_target from the pool.""" |
593 | return IMPL.volume_allocate_iscsi_target(context, volume_id, host) |
594 | @@ -968,11 +939,6 @@ |
595 | return IMPL.volume_get_instance(context, volume_id) |
596 | |
597 | |
598 | -def volume_get_shelf_and_blade(context, volume_id): |
599 | - """Get the shelf and blade allocated to the volume.""" |
600 | - return IMPL.volume_get_shelf_and_blade(context, volume_id) |
601 | - |
602 | - |
603 | def volume_get_iscsi_target_num(context, volume_id): |
604 | """Get the target num (tid) allocated to the volume.""" |
605 | return IMPL.volume_get_iscsi_target_num(context, volume_id) |
606 | |
607 | === modified file 'nova/db/sqlalchemy/api.py' |
608 | --- nova/db/sqlalchemy/api.py 2011-09-19 22:32:45 +0000 |
609 | +++ nova/db/sqlalchemy/api.py 2011-09-20 16:57:39 +0000 |
610 | @@ -1127,6 +1127,11 @@ |
611 | update({'deleted': True, |
612 | 'deleted_at': utils.utcnow(), |
613 | 'updated_at': literal_column('updated_at')}) |
614 | + session.query(models.BlockDeviceMapping).\ |
615 | + filter_by(instance_id=instance_id).\ |
616 | + update({'deleted': True, |
617 | + 'deleted_at': utils.utcnow(), |
618 | + 'updated_at': literal_column('updated_at')}) |
619 | |
620 | |
621 | @require_context |
622 | @@ -1954,28 +1959,6 @@ |
623 | |
624 | |
625 | @require_admin_context |
626 | -def export_device_count(context): |
627 | - session = get_session() |
628 | - return session.query(models.ExportDevice).\ |
629 | - filter_by(deleted=can_read_deleted(context)).\ |
630 | - count() |
631 | - |
632 | - |
633 | -@require_admin_context |
634 | -def export_device_create_safe(context, values): |
635 | - export_device_ref = models.ExportDevice() |
636 | - export_device_ref.update(values) |
637 | - try: |
638 | - export_device_ref.save() |
639 | - return export_device_ref |
640 | - except IntegrityError: |
641 | - return None |
642 | - |
643 | - |
644 | -################### |
645 | - |
646 | - |
647 | -@require_admin_context |
648 | def iscsi_target_count_by_host(context, host): |
649 | session = get_session() |
650 | return session.query(models.IscsiTarget).\ |
651 | @@ -2111,24 +2094,6 @@ |
652 | |
653 | |
654 | @require_admin_context |
655 | -def volume_allocate_shelf_and_blade(context, volume_id): |
656 | - session = get_session() |
657 | - with session.begin(): |
658 | - export_device = session.query(models.ExportDevice).\ |
659 | - filter_by(volume=None).\ |
660 | - filter_by(deleted=False).\ |
661 | - with_lockmode('update').\ |
662 | - first() |
663 | - # NOTE(vish): if with_lockmode isn't supported, as in sqlite, |
664 | - # then this has concurrency issues |
665 | - if not export_device: |
666 | - raise db.NoMoreBlades() |
667 | - export_device.volume_id = volume_id |
668 | - session.add(export_device) |
669 | - return (export_device.shelf_id, export_device.blade_id) |
670 | - |
671 | - |
672 | -@require_admin_context |
673 | def volume_allocate_iscsi_target(context, volume_id, host): |
674 | session = get_session() |
675 | with session.begin(): |
676 | @@ -2194,9 +2159,6 @@ |
677 | update({'deleted': True, |
678 | 'deleted_at': utils.utcnow(), |
679 | 'updated_at': literal_column('updated_at')}) |
680 | - session.query(models.ExportDevice).\ |
681 | - filter_by(volume_id=volume_id).\ |
682 | - update({'volume_id': None}) |
683 | session.query(models.IscsiTarget).\ |
684 | filter_by(volume_id=volume_id).\ |
685 | update({'volume_id': None}) |
686 | @@ -2316,18 +2278,6 @@ |
687 | |
688 | |
689 | @require_admin_context |
690 | -def volume_get_shelf_and_blade(context, volume_id): |
691 | - session = get_session() |
692 | - result = session.query(models.ExportDevice).\ |
693 | - filter_by(volume_id=volume_id).\ |
694 | - first() |
695 | - if not result: |
696 | - raise exception.ExportDeviceNotFoundForVolume(volume_id=volume_id) |
697 | - |
698 | - return (result.shelf_id, result.blade_id) |
699 | - |
700 | - |
701 | -@require_admin_context |
702 | def volume_get_iscsi_target_num(context, volume_id): |
703 | session = get_session() |
704 | result = session.query(models.IscsiTarget).\ |
705 | |
706 | === added file 'nova/db/sqlalchemy/migrate_repo/versions/048_kill_export_devices.py' |
707 | --- nova/db/sqlalchemy/migrate_repo/versions/048_kill_export_devices.py 1970-01-01 00:00:00 +0000 |
708 | +++ nova/db/sqlalchemy/migrate_repo/versions/048_kill_export_devices.py 2011-09-20 16:57:39 +0000 |
709 | @@ -0,0 +1,51 @@ |
710 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
711 | + |
712 | +# Copyright 2011 University of Southern California |
713 | +# |
714 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
715 | +# not use this file except in compliance with the License. You may obtain |
716 | +# a copy of the License at |
717 | +# |
718 | +# http://www.apache.org/licenses/LICENSE-2.0 |
719 | +# |
720 | +# Unless required by applicable law or agreed to in writing, software |
721 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
722 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
723 | +# License for the specific language governing permissions and limitations |
724 | +# under the License. |
725 | + |
726 | +from sqlalchemy import Boolean, Column, DateTime, ForeignKey, Integer |
727 | +from sqlalchemy import MetaData, String, Table |
728 | +from nova import log as logging |
729 | + |
730 | +meta = MetaData() |
731 | + |
732 | +# Table definition |
733 | +export_devices = Table('export_devices', meta, |
734 | + Column('created_at', DateTime(timezone=False)), |
735 | + Column('updated_at', DateTime(timezone=False)), |
736 | + Column('deleted_at', DateTime(timezone=False)), |
737 | + Column('deleted', Boolean(create_constraint=True, name=None)), |
738 | + Column('id', Integer(), primary_key=True, nullable=False), |
739 | + Column('shelf_id', Integer()), |
740 | + Column('blade_id', Integer()), |
741 | + Column('volume_id', |
742 | + Integer(), |
743 | + ForeignKey('volumes.id'), |
744 | + nullable=True), |
745 | + ) |
746 | + |
747 | + |
748 | +def downgrade(migrate_engine): |
749 | + meta.bind = migrate_engine |
750 | + try: |
751 | + export_devices.create() |
752 | + except Exception: |
753 | + logging.info(repr(export_devices)) |
754 | + logging.exception('Exception while creating table') |
755 | + raise |
756 | + |
757 | + |
758 | +def upgrade(migrate_engine): |
759 | + meta.bind = migrate_engine |
760 | + export_devices.drop() |
761 | |
762 | === added file 'nova/db/sqlalchemy/migrate_repo/versions/049_add_connection_info_to_block_device_mapping.py' |
763 | --- nova/db/sqlalchemy/migrate_repo/versions/049_add_connection_info_to_block_device_mapping.py 1970-01-01 00:00:00 +0000 |
764 | +++ nova/db/sqlalchemy/migrate_repo/versions/049_add_connection_info_to_block_device_mapping.py 2011-09-20 16:57:39 +0000 |
765 | @@ -0,0 +1,35 @@ |
766 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
767 | + |
768 | +# Copyright 2011 OpenStack LLC. |
769 | +# All Rights Reserved. |
770 | +# |
771 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
772 | +# not use this file except in compliance with the License. You may obtain |
773 | +# a copy of the License at |
774 | +# |
775 | +# http://www.apache.org/licenses/LICENSE-2.0 |
776 | +# |
777 | +# Unless required by applicable law or agreed to in writing, software |
778 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
779 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
780 | +# License for the specific language governing permissions and limitations |
781 | +# under the License. |
782 | + |
783 | +from sqlalchemy import Column, MetaData, Table, Text |
784 | + |
785 | + |
786 | +meta = MetaData() |
787 | + |
788 | +new_column = Column('connection_info', Text()) |
789 | + |
790 | + |
791 | +def upgrade(migrate_engine): |
792 | + meta.bind = migrate_engine |
793 | + table = Table('block_device_mapping', meta, autoload=True) |
794 | + table.create_column(new_column) |
795 | + |
796 | + |
797 | +def downgrade(migrate_engine): |
798 | + meta.bind = migrate_engine |
799 | + table = Table('block_device_mapping', meta, autoload=True) |
800 | + table.c.connection_info.drop() |
801 | |
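Migration 049 above adds a nullable `Text` column to `block_device_mapping` via sqlalchemy-migrate. As a minimal, self-contained sketch (not part of this branch, using only the stdlib `sqlite3` module rather than sqlalchemy-migrate), the same schema change looks like this; the table and column names mirror the diff, everything else is illustrative:

```python
import sqlite3

# Stand-in for the pre-migration schema; only the table/column names
# are taken from the diff.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE block_device_mapping (id INTEGER PRIMARY KEY)")

# upgrade(): the raw-SQL equivalent of
#   table.create_column(Column('connection_info', Text()))
conn.execute(
    "ALTER TABLE block_device_mapping ADD COLUMN connection_info TEXT")

# Verify the column now exists (PRAGMA table_info rows are
# (cid, name, type, notnull, dflt_value, pk)).
cols = [row[1] for row in
        conn.execute("PRAGMA table_info(block_device_mapping)")]
print(cols)  # -> ['id', 'connection_info']
conn.close()
```

Note that the downgrade path (`table.c.connection_info.drop()`) is harder to express in plain SQLite, which historically could not drop columns directly; sqlalchemy-migrate handles that by rebuilding the table.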
802 | === modified file 'nova/db/sqlalchemy/models.py' |
803 | --- nova/db/sqlalchemy/models.py 2011-09-14 15:19:03 +0000 |
804 | +++ nova/db/sqlalchemy/models.py 2011-09-20 16:57:39 +0000 |
805 | @@ -467,21 +467,8 @@ |
806 | # for no device to suppress devices. |
807 | no_device = Column(Boolean, nullable=True) |
808 | |
809 | - |
810 | -class ExportDevice(BASE, NovaBase): |
811 | - """Represates a shelf and blade that a volume can be exported on.""" |
812 | - __tablename__ = 'export_devices' |
813 | - __table_args__ = (schema.UniqueConstraint("shelf_id", "blade_id"), |
814 | - {'mysql_engine': 'InnoDB'}) |
815 | - id = Column(Integer, primary_key=True) |
816 | - shelf_id = Column(Integer) |
817 | - blade_id = Column(Integer) |
818 | - volume_id = Column(Integer, ForeignKey('volumes.id'), nullable=True) |
819 | - volume = relationship(Volume, |
820 | - backref=backref('export_device', uselist=False), |
821 | - foreign_keys=volume_id, |
822 | - primaryjoin='and_(ExportDevice.volume_id==Volume.id,' |
823 | - 'ExportDevice.deleted==False)') |
824 | + # NOTE: serialized connection information for the attached volume |
825 | + connection_info = Column(Text, nullable=True) |
826 | |
827 | |
828 | class IscsiTarget(BASE, NovaBase): |
829 | |
830 | === modified file 'nova/exception.py' |
831 | --- nova/exception.py 2011-09-15 21:58:22 +0000 |
832 | +++ nova/exception.py 2011-09-20 16:57:39 +0000 |
833 | @@ -374,10 +374,6 @@ |
834 | message = _("deleting volume %(volume_name)s that has snapshot") |
835 | |
836 | |
837 | -class ExportDeviceNotFoundForVolume(NotFound): |
838 | - message = _("No export device found for volume %(volume_id)s.") |
839 | - |
840 | - |
841 | class ISCSITargetNotFoundForVolume(NotFound): |
842 | message = _("No target id found for volume %(volume_id)s.") |
843 | |
844 | @@ -386,6 +382,10 @@ |
845 | message = _("No disk at %(location)s") |
846 | |
847 | |
848 | +class VolumeDriverNotFound(NotFound): |
849 | + message = _("Could not find a handler for %(driver_type)s volume.") |
850 | + |
851 | + |
852 | class InvalidImageRef(Invalid): |
853 | message = _("Invalid image href %(image_href)s.") |
854 | |
855 | |
856 | === modified file 'nova/rpc/common.py' |
857 | --- nova/rpc/common.py 2011-08-29 02:22:53 +0000 |
858 | +++ nova/rpc/common.py 2011-09-20 16:57:39 +0000 |
859 | @@ -10,7 +10,7 @@ |
860 | 'Size of RPC connection pool') |
861 | |
862 | |
863 | -class RemoteError(exception.Error): |
864 | +class RemoteError(exception.NovaException): |
865 | """Signifies that a remote class has raised an exception. |
866 | |
867 | Contains a string representation of the type of the original exception, |
868 | @@ -19,11 +19,10 @@ |
869 | contains all of the relevant info. |
870 | |
871 | """ |
872 | + message = _("Remote error: %(exc_type)s %(value)s\n%(traceback)s.") |
873 | |
874 | - def __init__(self, exc_type, value, traceback): |
875 | + def __init__(self, exc_type=None, value=None, traceback=None): |
876 | self.exc_type = exc_type |
877 | self.value = value |
878 | self.traceback = traceback |
879 | - super(RemoteError, self).__init__('%s %s\n%s' % (exc_type, |
880 | - value, |
881 | - traceback)) |
882 | + super(RemoteError, self).__init__(**self.__dict__) |
883 | |
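The change above moves `RemoteError` from positional string concatenation onto the keyword-interpolating pattern of `NovaException`: subclasses declare a `message` template and the base class formats it from keyword arguments, which is why `__init__` can simply pass `**self.__dict__` upward. A hedged, self-contained sketch of that pattern (`FormattedError` is a stand-in for `nova.exception.NovaException`, not the real class):

```python
class FormattedError(Exception):
    """Base class that interpolates a message template from kwargs."""
    message = "An unknown exception occurred."

    def __init__(self, **kwargs):
        # The template's %(name)s placeholders are filled from kwargs.
        super(FormattedError, self).__init__(self.message % kwargs)


class RemoteError(FormattedError):
    message = "Remote error: %(exc_type)s %(value)s\n%(traceback)s."

    def __init__(self, exc_type=None, value=None, traceback=None):
        self.exc_type = exc_type
        self.value = value
        self.traceback = traceback
        # self.__dict__ holds exactly the three keys the template needs.
        super(RemoteError, self).__init__(**self.__dict__)


err = RemoteError(exc_type="InvalidCPUInfo", value="fake", traceback="...")
print(str(err))  # -> Remote error: InvalidCPUInfo fake ...
```

A side effect visible in the scheduler test change later in this diff: callers can now assert on the structured `exc_type` attribute instead of grepping the formatted message string.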
884 | === modified file 'nova/tests/api/ec2/test_cloud.py' |
885 | --- nova/tests/api/ec2/test_cloud.py 2011-09-16 15:17:34 +0000 |
886 | +++ nova/tests/api/ec2/test_cloud.py 2011-09-20 16:57:39 +0000 |
887 | @@ -1218,7 +1218,7 @@ |
888 | LOG.debug(info) |
889 | if predicate(info): |
890 | break |
891 | - greenthread.sleep(1) |
892 | + greenthread.sleep(0.5) |
893 | |
894 | def _wait_for_running(self, instance_id): |
895 | def is_running(info): |
896 | @@ -1237,6 +1237,16 @@ |
897 | def _wait_for_terminate(self, instance_id): |
898 | def is_deleted(info): |
899 | return info['deleted'] |
900 | + id = ec2utils.ec2_id_to_id(instance_id) |
901 | + # NOTE(vish): Wait for InstanceNotFound, then verify that |
902 | + # the instance is actually deleted. |
903 | + while True: |
904 | + try: |
905 | + self.cloud.compute_api.get(self.context, instance_id=id) |
906 | + except exception.InstanceNotFound: |
907 | + break |
908 | + greenthread.sleep(0.1) |
909 | + |
910 | elevated = self.context.elevated(read_deleted=True) |
911 | self._wait_for_state(elevated, instance_id, is_deleted) |
912 | |
913 | @@ -1252,26 +1262,21 @@ |
914 | |
915 | # a running instance can't be started. It is just ignored. |
916 | result = self.cloud.start_instances(self.context, [instance_id]) |
917 | - greenthread.sleep(0.3) |
918 | self.assertTrue(result) |
919 | |
920 | result = self.cloud.stop_instances(self.context, [instance_id]) |
921 | - greenthread.sleep(0.3) |
922 | self.assertTrue(result) |
923 | self._wait_for_stopped(instance_id) |
924 | |
925 | result = self.cloud.start_instances(self.context, [instance_id]) |
926 | - greenthread.sleep(0.3) |
927 | self.assertTrue(result) |
928 | self._wait_for_running(instance_id) |
929 | |
930 | result = self.cloud.stop_instances(self.context, [instance_id]) |
931 | - greenthread.sleep(0.3) |
932 | self.assertTrue(result) |
933 | self._wait_for_stopped(instance_id) |
934 | |
935 | result = self.cloud.terminate_instances(self.context, [instance_id]) |
936 | - greenthread.sleep(0.3) |
937 | self.assertTrue(result) |
938 | |
939 | self._restart_compute_service() |
940 | @@ -1483,24 +1488,20 @@ |
941 | self.assertTrue(vol2_id) |
942 | |
943 | self.cloud.terminate_instances(self.context, [ec2_instance_id]) |
944 | - greenthread.sleep(0.3) |
945 | self._wait_for_terminate(ec2_instance_id) |
946 | |
947 | - greenthread.sleep(0.3) |
948 | admin_ctxt = context.get_admin_context(read_deleted=False) |
949 | vol = db.volume_get(admin_ctxt, vol1_id) |
950 | self._assert_volume_detached(vol) |
951 | self.assertFalse(vol['deleted']) |
952 | db.volume_destroy(self.context, vol1_id) |
953 | |
954 | - greenthread.sleep(0.3) |
955 | admin_ctxt = context.get_admin_context(read_deleted=True) |
956 | vol = db.volume_get(admin_ctxt, vol2_id) |
957 | self.assertTrue(vol['deleted']) |
958 | |
959 | for snapshot_id in (ec2_snapshot1_id, ec2_snapshot2_id): |
960 | self.cloud.delete_snapshot(self.context, snapshot_id) |
961 | - greenthread.sleep(0.3) |
962 | db.volume_destroy(self.context, vol['id']) |
963 | |
964 | def test_create_image(self): |
965 | |
966 | === modified file 'nova/tests/fake_flags.py' |
967 | --- nova/tests/fake_flags.py 2011-07-27 16:44:14 +0000 |
968 | +++ nova/tests/fake_flags.py 2011-09-20 16:57:39 +0000 |
969 | @@ -33,11 +33,7 @@ |
970 | FLAGS['num_networks'].SetDefault(2) |
971 | FLAGS['fake_network'].SetDefault(True) |
972 | FLAGS['image_service'].SetDefault('nova.image.fake.FakeImageService') |
973 | -flags.DECLARE('num_shelves', 'nova.volume.driver') |
974 | -flags.DECLARE('blades_per_shelf', 'nova.volume.driver') |
975 | flags.DECLARE('iscsi_num_targets', 'nova.volume.driver') |
976 | -FLAGS['num_shelves'].SetDefault(2) |
977 | -FLAGS['blades_per_shelf'].SetDefault(4) |
978 | FLAGS['iscsi_num_targets'].SetDefault(8) |
979 | FLAGS['verbose'].SetDefault(True) |
980 | FLAGS['sqlite_db'].SetDefault("tests.sqlite") |
981 | |
982 | === modified file 'nova/tests/integrated/test_volumes.py' |
983 | --- nova/tests/integrated/test_volumes.py 2011-08-24 16:18:53 +0000 |
984 | +++ nova/tests/integrated/test_volumes.py 2011-09-20 16:57:39 +0000 |
985 | @@ -262,22 +262,22 @@ |
986 | |
987 | LOG.debug("Logs: %s" % driver.LoggingVolumeDriver.all_logs()) |
988 | |
989 | - # Discover_volume and undiscover_volume are called from compute |
990 | + # prepare_attach and prepare_detach are called from compute |
991 | # on attach/detach |
992 | |
993 | disco_moves = driver.LoggingVolumeDriver.logs_like( |
994 | - 'discover_volume', |
995 | + 'initialize_connection', |
996 | id=volume_id) |
997 | - LOG.debug("discover_volume actions: %s" % disco_moves) |
998 | + LOG.debug("initialize_connection actions: %s" % disco_moves) |
999 | |
1000 | self.assertEquals(1, len(disco_moves)) |
1001 | disco_move = disco_moves[0] |
1002 | self.assertEquals(disco_move['id'], volume_id) |
1003 | |
1004 | last_days_of_disco_moves = driver.LoggingVolumeDriver.logs_like( |
1005 | - 'undiscover_volume', |
1006 | + 'terminate_connection', |
1007 | id=volume_id) |
1008 | - LOG.debug("undiscover_volume actions: %s" % last_days_of_disco_moves) |
1009 | + LOG.debug("terminate_connection actions: %s" % last_days_of_disco_moves) |
1010 | |
1011 | self.assertEquals(1, len(last_days_of_disco_moves)) |
1012 | undisco_move = last_days_of_disco_moves[0] |
1013 | |
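The integrated test above relies on a fake driver that records every call so the test can later query what compute invoked (`logs_like('initialize_connection', id=volume_id)`). A minimal sketch of that logging-driver test pattern; class and method names are illustrative, not the actual `nova.tests.integrated` implementation:

```python
class LoggingVolumeDriver:
    """Fake driver that records each call for later assertions."""
    _logs = []

    @classmethod
    def all_logs(cls):
        return list(cls._logs)

    @classmethod
    def logs_like(cls, action, **kwargs):
        # Return recorded entries matching the action and all given fields.
        return [entry for entry in cls._logs
                if entry["action"] == action
                and all(entry.get(k) == v for k, v in kwargs.items())]

    def _log(self, action, **kwargs):
        entry = {"action": action}
        entry.update(kwargs)
        self._logs.append(entry)

    def initialize_connection(self, volume_id):
        self._log("initialize_connection", id=volume_id)

    def terminate_connection(self, volume_id):
        self._log("terminate_connection", id=volume_id)


drv = LoggingVolumeDriver()
drv.initialize_connection(7)
drv.terminate_connection(7)
print(len(LoggingVolumeDriver.logs_like("initialize_connection", id=7)))  # -> 1
```

This keeps the integrated test decoupled from any real iSCSI/AoE backend: the rename in this branch only had to update the recorded action names.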
1014 | === modified file 'nova/tests/scheduler/test_scheduler.py' |
1015 | --- nova/tests/scheduler/test_scheduler.py 2011-09-08 08:09:22 +0000 |
1016 | +++ nova/tests/scheduler/test_scheduler.py 2011-09-20 16:57:39 +0000 |
1017 | @@ -919,7 +919,8 @@ |
1018 | rpc.call(mox.IgnoreArg(), mox.IgnoreArg(), |
1019 | {"method": 'compare_cpu', |
1020 | "args": {'cpu_info': s_ref2['compute_node'][0]['cpu_info']}}).\ |
1021 | - AndRaise(rpc.RemoteError("doesn't have compatibility to", "", "")) |
1022 | + AndRaise(rpc.RemoteError(exception.InvalidCPUInfo, |
1023 | + exception.InvalidCPUInfo(reason='fake'))) |
1024 | |
1025 | self.mox.ReplayAll() |
1026 | try: |
1027 | @@ -928,7 +929,7 @@ |
1028 | dest, |
1029 | False) |
1030 | except rpc.RemoteError, e: |
1031 | - c = (e.message.find(_("doesn't have compatibility to")) >= 0) |
1032 | + c = (e.exc_type == exception.InvalidCPUInfo) |
1033 | |
1034 | self.assertTrue(c) |
1035 | db.instance_destroy(self.context, instance_id) |
1036 | |
1037 | === modified file 'nova/tests/test_compute.py' |
1038 | --- nova/tests/test_compute.py 2011-09-19 21:53:17 +0000 |
1039 | +++ nova/tests/test_compute.py 2011-09-20 16:57:39 +0000 |
1040 | @@ -20,6 +20,7 @@ |
1041 | Tests For Compute |
1042 | """ |
1043 | |
1044 | +import mox |
1045 | from nova import compute |
1046 | from nova import context |
1047 | from nova import db |
1048 | @@ -120,21 +121,6 @@ |
1049 | 'project_id': self.project_id} |
1050 | return db.security_group_create(self.context, values) |
1051 | |
1052 | - def _get_dummy_instance(self): |
1053 | - """Get mock-return-value instance object |
1054 | - Use this when any testcase executed later than test_run_terminate |
1055 | - """ |
1056 | - vol1 = models.Volume() |
1057 | - vol1['id'] = 1 |
1058 | - vol2 = models.Volume() |
1059 | - vol2['id'] = 2 |
1060 | - instance_ref = models.Instance() |
1061 | - instance_ref['id'] = 1 |
1062 | - instance_ref['volumes'] = [vol1, vol2] |
1063 | - instance_ref['hostname'] = 'hostname-1' |
1064 | - instance_ref['host'] = 'dummy' |
1065 | - return instance_ref |
1066 | - |
1067 | def test_create_instance_defaults_display_name(self): |
1068 | """Verify that an instance cannot be created without a display_name.""" |
1069 | cases = [dict(), dict(display_name=None)] |
1070 | @@ -657,235 +643,123 @@ |
1071 | |
1072 | def test_pre_live_migration_instance_has_no_fixed_ip(self): |
1073 | """Confirm raising exception if instance doesn't have fixed_ip.""" |
1074 | - instance_ref = self._get_dummy_instance() |
1075 | + # creating instance testdata |
1076 | + instance_id = self._create_instance({'host': 'dummy'}) |
1077 | c = context.get_admin_context() |
1078 | - i_id = instance_ref['id'] |
1079 | - |
1080 | - dbmock = self.mox.CreateMock(db) |
1081 | - dbmock.instance_get(c, i_id).AndReturn(instance_ref) |
1082 | - |
1083 | - self.compute.db = dbmock |
1084 | - self.mox.ReplayAll() |
1085 | - self.assertRaises(exception.NotFound, |
1086 | + inst_ref = db.instance_get(c, instance_id) |
1087 | + topic = db.queue_get_for(c, FLAGS.compute_topic, inst_ref['host']) |
1088 | + |
1089 | + # start test |
1090 | + self.assertRaises(exception.FixedIpNotFoundForInstance, |
1091 | self.compute.pre_live_migration, |
1092 | - c, instance_ref['id'], time=FakeTime()) |
1093 | + c, inst_ref['id'], time=FakeTime()) |
1094 | + # cleanup |
1095 | + db.instance_destroy(c, instance_id) |
1096 | |
1097 | - def test_pre_live_migration_instance_has_volume(self): |
1098 | + def test_pre_live_migration_works_correctly(self): |
1099 | """Confirm setup_compute_volume is called when volume is mounted.""" |
1100 | - def fake_nw_info(*args, **kwargs): |
1101 | - return [(0, {'ips':['dummy']})] |
1102 | - |
1103 | - i_ref = self._get_dummy_instance() |
1104 | - c = context.get_admin_context() |
1105 | - |
1106 | - self._setup_other_managers() |
1107 | - dbmock = self.mox.CreateMock(db) |
1108 | - volmock = self.mox.CreateMock(self.volume_manager) |
1109 | - drivermock = self.mox.CreateMock(self.compute_driver) |
1110 | - |
1111 | - dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) |
1112 | - for i in range(len(i_ref['volumes'])): |
1113 | - vid = i_ref['volumes'][i]['id'] |
1114 | - volmock.setup_compute_volume(c, vid).InAnyOrder('g1') |
1115 | - drivermock.plug_vifs(i_ref, fake_nw_info()) |
1116 | - drivermock.ensure_filtering_rules_for_instance(i_ref, fake_nw_info()) |
1117 | - |
1118 | - self.stubs.Set(self.compute, '_get_instance_nw_info', fake_nw_info) |
1119 | - self.compute.db = dbmock |
1120 | - self.compute.volume_manager = volmock |
1121 | - self.compute.driver = drivermock |
1122 | - |
1123 | - self.mox.ReplayAll() |
1124 | - ret = self.compute.pre_live_migration(c, i_ref['id']) |
1125 | - self.assertEqual(ret, None) |
1126 | - |
1127 | - def test_pre_live_migration_instance_has_no_volume(self): |
1128 | - """Confirm log meg when instance doesn't mount any volumes.""" |
1129 | - def fake_nw_info(*args, **kwargs): |
1130 | - return [(0, {'ips':['dummy']})] |
1131 | - |
1132 | - i_ref = self._get_dummy_instance() |
1133 | - i_ref['volumes'] = [] |
1134 | - c = context.get_admin_context() |
1135 | - |
1136 | - self._setup_other_managers() |
1137 | - dbmock = self.mox.CreateMock(db) |
1138 | - drivermock = self.mox.CreateMock(self.compute_driver) |
1139 | - |
1140 | - dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) |
1141 | - self.mox.StubOutWithMock(compute_manager.LOG, 'info') |
1142 | - compute_manager.LOG.info(_("%s has no volume."), i_ref['hostname']) |
1143 | - drivermock.plug_vifs(i_ref, fake_nw_info()) |
1144 | - drivermock.ensure_filtering_rules_for_instance(i_ref, fake_nw_info()) |
1145 | - |
1146 | - self.stubs.Set(self.compute, '_get_instance_nw_info', fake_nw_info) |
1147 | - self.compute.db = dbmock |
1148 | - self.compute.driver = drivermock |
1149 | - |
1150 | - self.mox.ReplayAll() |
1151 | - ret = self.compute.pre_live_migration(c, i_ref['id'], time=FakeTime()) |
1152 | - self.assertEqual(ret, None) |
1153 | - |
1154 | - def test_pre_live_migration_setup_compute_node_fail(self): |
1155 | - """Confirm operation setup_compute_network() fails. |
1156 | - |
1157 | - It retries and raise exception when timeout exceeded. |
1158 | - |
1159 | - """ |
1160 | - def fake_nw_info(*args, **kwargs): |
1161 | - return [(0, {'ips':['dummy']})] |
1162 | - |
1163 | - i_ref = self._get_dummy_instance() |
1164 | - c = context.get_admin_context() |
1165 | - |
1166 | - self._setup_other_managers() |
1167 | - dbmock = self.mox.CreateMock(db) |
1168 | - netmock = self.mox.CreateMock(self.network_manager) |
1169 | - volmock = self.mox.CreateMock(self.volume_manager) |
1170 | - drivermock = self.mox.CreateMock(self.compute_driver) |
1171 | - |
1172 | - dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) |
1173 | - for i in range(len(i_ref['volumes'])): |
1174 | - volmock.setup_compute_volume(c, i_ref['volumes'][i]['id']) |
1175 | - for i in range(FLAGS.live_migration_retry_count): |
1176 | - drivermock.plug_vifs(i_ref, fake_nw_info()).\ |
1177 | - AndRaise(exception.ProcessExecutionError()) |
1178 | - |
1179 | - self.stubs.Set(self.compute, '_get_instance_nw_info', fake_nw_info) |
1180 | - self.compute.db = dbmock |
1181 | - self.compute.network_manager = netmock |
1182 | - self.compute.volume_manager = volmock |
1183 | - self.compute.driver = drivermock |
1184 | - |
1185 | - self.mox.ReplayAll() |
1186 | - self.assertRaises(exception.ProcessExecutionError, |
1187 | - self.compute.pre_live_migration, |
1188 | - c, i_ref['id'], time=FakeTime()) |
1189 | - |
1190 | - def test_live_migration_works_correctly_with_volume(self): |
1191 | - """Confirm check_for_export to confirm volume health check.""" |
1192 | - i_ref = self._get_dummy_instance() |
1193 | - c = context.get_admin_context() |
1194 | - topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host']) |
1195 | - |
1196 | - dbmock = self.mox.CreateMock(db) |
1197 | - dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) |
1198 | - self.mox.StubOutWithMock(rpc, 'call') |
1199 | - rpc.call(c, FLAGS.volume_topic, {"method": "check_for_export", |
1200 | - "args": {'instance_id': i_ref['id']}}) |
1201 | - dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\ |
1202 | - AndReturn(topic) |
1203 | - rpc.call(c, topic, {"method": "pre_live_migration", |
1204 | - "args": {'instance_id': i_ref['id'], |
1205 | - 'block_migration': False, |
1206 | - 'disk': None}}) |
1207 | - |
1208 | - self.mox.StubOutWithMock(self.compute.driver, 'live_migration') |
1209 | - self.compute.driver.live_migration(c, i_ref, i_ref['host'], |
1210 | - self.compute.post_live_migration, |
1211 | - self.compute.rollback_live_migration, |
1212 | - False) |
1213 | - |
1214 | - self.compute.db = dbmock |
1215 | - self.mox.ReplayAll() |
1216 | - ret = self.compute.live_migration(c, i_ref['id'], i_ref['host']) |
1217 | - self.assertEqual(ret, None) |
1218 | + # creating instance testdata |
1219 | + instance_id = self._create_instance({'host': 'dummy'}) |
1220 | + c = context.get_admin_context() |
1221 | + inst_ref = db.instance_get(c, instance_id) |
1222 | + topic = db.queue_get_for(c, FLAGS.compute_topic, inst_ref['host']) |
1223 | + |
1224 | + # creating mocks |
1225 | + self.mox.StubOutWithMock(self.compute.db, |
1226 | + 'instance_get_fixed_addresses') |
1227 | + self.compute.db.instance_get_fixed_addresses(c, instance_id |
1228 | + ).AndReturn(['1.1.1.1']) |
1229 | + self.mox.StubOutWithMock(self.compute.driver, 'pre_live_migration') |
1230 | + self.compute.driver.pre_live_migration({'block_device_mapping': []}) |
1231 | + self.mox.StubOutWithMock(self.compute.driver, 'plug_vifs') |
1232 | + self.compute.driver.plug_vifs(mox.IsA(inst_ref), []) |
1233 | + self.mox.StubOutWithMock(self.compute.driver, |
1234 | + 'ensure_filtering_rules_for_instance') |
1235 | + self.compute.driver.ensure_filtering_rules_for_instance( |
1236 | + mox.IsA(inst_ref), []) |
1237 | + |
1238 | + # start test |
1239 | + self.mox.ReplayAll() |
1240 | + ret = self.compute.pre_live_migration(c, inst_ref['id']) |
1241 | + self.assertEqual(ret, None) |
1242 | + |
1243 | + # cleanup |
1244 | + db.instance_destroy(c, instance_id) |
1245 | |
1246 | def test_live_migration_dest_raises_exception(self): |
1247 | """Confirm exception when pre_live_migration fails.""" |
1248 | - i_ref = self._get_dummy_instance() |
1249 | + # creating instance testdata |
1250 | + instance_id = self._create_instance({'host': 'dummy'}) |
1251 | c = context.get_admin_context() |
1252 | - topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host']) |
1253 | + inst_ref = db.instance_get(c, instance_id) |
1254 | + topic = db.queue_get_for(c, FLAGS.compute_topic, inst_ref['host']) |
1255 | + # creating volume testdata |
1256 | + volume_id = 1 |
1257 | + db.volume_create(c, {'id': volume_id}) |
1258 | + values = {'instance_id': instance_id, 'device_name': '/dev/vdc', |
1259 | + 'delete_on_termination': False, 'volume_id': volume_id} |
1260 | + db.block_device_mapping_create(c, values) |
1261 | |
1262 | - dbmock = self.mox.CreateMock(db) |
1263 | - dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) |
1264 | + # creating mocks |
1265 | self.mox.StubOutWithMock(rpc, 'call') |
1266 | rpc.call(c, FLAGS.volume_topic, {"method": "check_for_export", |
1267 | - "args": {'instance_id': i_ref['id']}}) |
1268 | - dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\ |
1269 | - AndReturn(topic) |
1270 | - rpc.call(c, topic, {"method": "pre_live_migration", |
1271 | - "args": {'instance_id': i_ref['id'], |
1272 | - 'block_migration': False, |
1273 | - 'disk': None}}).\ |
1274 | - AndRaise(rpc.RemoteError('', '', '')) |
1275 | - dbmock.instance_update(c, i_ref['id'], {'vm_state': vm_states.ACTIVE, |
1276 | - 'task_state': None, |
1277 | - 'host': i_ref['host']}) |
1278 | - for v in i_ref['volumes']: |
1279 | - dbmock.volume_update(c, v['id'], {'status': 'in-use'}) |
1280 | - # mock for volume_api.remove_from_compute |
1281 | - rpc.call(c, topic, {"method": "remove_volume", |
1282 | - "args": {'volume_id': v['id']}}) |
1283 | - |
1284 | - self.compute.db = dbmock |
1285 | - self.mox.ReplayAll() |
1286 | - self.assertRaises(rpc.RemoteError, |
1287 | - self.compute.live_migration, |
1288 | - c, i_ref['id'], i_ref['host']) |
1289 | - |
1290 | - def test_live_migration_dest_raises_exception_no_volume(self): |
1291 | - """Same as above test(input pattern is different) """ |
1292 | - i_ref = self._get_dummy_instance() |
1293 | - i_ref['volumes'] = [] |
1294 | - c = context.get_admin_context() |
1295 | - topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host']) |
1296 | - |
1297 | - dbmock = self.mox.CreateMock(db) |
1298 | - dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) |
1299 | - dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\ |
1300 | - AndReturn(topic) |
1301 | - self.mox.StubOutWithMock(rpc, 'call') |
1302 | - rpc.call(c, topic, {"method": "pre_live_migration", |
1303 | - "args": {'instance_id': i_ref['id'], |
1304 | - 'block_migration': False, |
1305 | - 'disk': None}}).\ |
1306 | - AndRaise(rpc.RemoteError('', '', '')) |
1307 | - dbmock.instance_update(c, i_ref['id'], {'vm_state': vm_states.ACTIVE, |
1308 | - 'task_state': None, |
1309 | - 'host': i_ref['host']}) |
1310 | - |
1311 | - self.compute.db = dbmock |
1312 | - self.mox.ReplayAll() |
1313 | - self.assertRaises(rpc.RemoteError, |
1314 | - self.compute.live_migration, |
1315 | - c, i_ref['id'], i_ref['host']) |
1316 | - |
1317 | - def test_live_migration_works_correctly_no_volume(self): |
1318 | + "args": {'instance_id': instance_id}}) |
1319 | + rpc.call(c, topic, {"method": "pre_live_migration", |
1320 | + "args": {'instance_id': instance_id, |
1321 | + 'block_migration': True, |
1322 | + 'disk': None}}).\ |
1323 | + AndRaise(rpc.common.RemoteError('', '', '')) |
1324 | + # mocks for rollback |
1325 | + rpc.call(c, topic, {"method": "remove_volume_connection", |
1326 | + "args": {'instance_id': instance_id, |
1327 | + 'volume_id': volume_id}}) |
1328 | + rpc.cast(c, topic, {"method": "rollback_live_migration_at_destination", |
1329 | + "args": {'instance_id': inst_ref['id']}}) |
1330 | + |
1331 | + # start test |
1332 | + self.mox.ReplayAll() |
1333 | + self.assertRaises(rpc.RemoteError, |
1334 | + self.compute.live_migration, |
1335 | + c, instance_id, inst_ref['host'], True) |
1336 | + |
1337 | + # cleanup |
1338 | + for bdms in db.block_device_mapping_get_all_by_instance(c, instance_id): |
1339 | + db.block_device_mapping_destroy(c, bdms['id']) |
1340 | + db.volume_destroy(c, volume_id) |
1341 | + db.instance_destroy(c, instance_id) |
1342 | + |
1343 | + def test_live_migration_works_correctly(self): |
1344 | """Confirm live_migration() works as expected correctly.""" |
1345 | - i_ref = self._get_dummy_instance() |
1346 | - i_ref['volumes'] = [] |
1347 | + # creating instance testdata |
1348 | + instance_id = self._create_instance({'host': 'dummy'}) |
1349 | c = context.get_admin_context() |
1350 | - topic = db.queue_get_for(c, FLAGS.compute_topic, i_ref['host']) |
1351 | + inst_ref = db.instance_get(c, instance_id) |
1352 | + topic = db.queue_get_for(c, FLAGS.compute_topic, inst_ref['host']) |
1353 | |
1354 | - dbmock = self.mox.CreateMock(db) |
1355 | - dbmock.instance_get(c, i_ref['id']).AndReturn(i_ref) |
1356 | + # create |
1357 | self.mox.StubOutWithMock(rpc, 'call') |
1358 | - dbmock.queue_get_for(c, FLAGS.compute_topic, i_ref['host']).\ |
1359 | - AndReturn(topic) |
1360 | rpc.call(c, topic, {"method": "pre_live_migration", |
1361 | - "args": {'instance_id': i_ref['id'], |
1362 | + "args": {'instance_id': instance_id, |
1363 | 'block_migration': False, |
1364 | 'disk': None}}) |
1365 | - self.mox.StubOutWithMock(self.compute.driver, 'live_migration') |
1366 | - self.compute.driver.live_migration(c, i_ref, i_ref['host'], |
1367 | - self.compute.post_live_migration, |
1368 | - self.compute.rollback_live_migration, |
1369 | - False) |
1370 | |
1371 | - self.compute.db = dbmock |
1372 | + # start test |
1373 | self.mox.ReplayAll() |
1374 | - ret = self.compute.live_migration(c, i_ref['id'], i_ref['host']) |
1375 | + ret = self.compute.live_migration(c, inst_ref['id'], inst_ref['host']) |
1376 | self.assertEqual(ret, None) |
1377 | |
1378 | + # cleanup |
1379 | + db.instance_destroy(c, instance_id) |
1380 | + |
1381 | def test_post_live_migration_working_correctly(self): |
1382 | """Confirm post_live_migration() works as expected correctly.""" |
1383 | dest = 'desthost' |
1384 | flo_addr = '1.2.1.2' |
1385 | |
1386 | - # Preparing datas |
1387 | + # creating testdata |
1388 | c = context.get_admin_context() |
1389 | - instance_id = self._create_instance() |
1390 | + instance_id = self._create_instance({'state_description': 'migrating', |
1391 | + 'state': power_state.PAUSED}) |
1392 | i_ref = db.instance_get(c, instance_id) |
1393 | db.instance_update(c, i_ref['id'], {'vm_state': vm_states.MIGRATING, |
1394 | 'power_state': power_state.PAUSED}) |
1395 | @@ -895,14 +769,8 @@ |
1396 | fix_ref = db.fixed_ip_get_by_address(c, fix_addr) |
1397 | flo_ref = db.floating_ip_create(c, {'address': flo_addr, |
1398 | 'fixed_ip_id': fix_ref['id']}) |
1399 | - # reload is necessary before setting mocks |
1400 | - i_ref = db.instance_get(c, instance_id) |
1401 | |
1402 | - # Preparing mocks |
1403 | - self.mox.StubOutWithMock(self.compute.volume_manager, |
1404 | - 'remove_compute_volume') |
1405 | - for v in i_ref['volumes']: |
1406 | - self.compute.volume_manager.remove_compute_volume(c, v['id']) |
1407 | + # creating mocks |
1408 | self.mox.StubOutWithMock(self.compute.driver, 'unfilter_instance') |
1409 | self.compute.driver.unfilter_instance(i_ref, []) |
1410 | self.mox.StubOutWithMock(rpc, 'call') |
1411 | @@ -910,18 +778,18 @@ |
1412 | {"method": "post_live_migration_at_destination", |
1413 | "args": {'instance_id': i_ref['id'], 'block_migration': False}}) |
1414 | |
1415 | - # executing |
1416 | + # start test |
1417 | self.mox.ReplayAll() |
1418 | ret = self.compute.post_live_migration(c, i_ref, dest) |
1419 | |
1420 | - # make sure every data is rewritten to dest |
1421 | + # make sure all data is rewritten to the destination hostname.
1422 | i_ref = db.instance_get(c, i_ref['id']) |
1423 | c1 = (i_ref['host'] == dest) |
1424 | flo_refs = db.floating_ip_get_all_by_host(c, dest) |
1425 | c2 = (len(flo_refs) != 0 and flo_refs[0]['address'] == flo_addr) |
1426 | - |
1427 | - # post operaton |
1428 | self.assertTrue(c1 and c2) |
1429 | + |
1430 | + # cleanup |
1431 | db.instance_destroy(c, instance_id) |
1432 | db.volume_destroy(c, v_ref['id']) |
1433 | db.floating_ip_destroy(c, flo_addr) |
1434 | |
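The compute tests above follow mox's record/replay cycle: `StubOutWithMock` replaces a collaborator, expectations are recorded, `ReplayAll` switches to replay mode, and verification happens on teardown. A rough sketch of the same discipline using the stdlib's `unittest.mock` instead of mox (all names here are illustrative, not taken from the branch):

```python
from unittest import mock


def rpc_call(ctxt, topic, msg):
    # stand-in for nova.rpc.call; the test below replaces it entirely
    raise RuntimeError("should be stubbed out in tests")


class Compute(object):
    def post_live_migration(self, ctxt, instance, dest):
        # production code notifies the destination host over rpc
        rpc_call(ctxt, dest,
                 {"method": "post_live_migration_at_destination",
                  "args": {"instance_id": instance["id"]}})


def run_test():
    compute = Compute()
    # the moral equivalent of StubOutWithMock + ReplayAll + VerifyAll:
    # patch the collaborator, run the code, then assert the recorded call
    with mock.patch(__name__ + ".rpc_call") as fake_rpc:
        compute.post_live_migration("ctx", {"id": 1}, "desthost")
        fake_rpc.assert_called_once_with(
            "ctx", "desthost",
            {"method": "post_live_migration_at_destination",
             "args": {"instance_id": 1}})
    return fake_rpc.call_count
```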
1435 | === modified file 'nova/tests/test_libvirt.py' |
1436 | --- nova/tests/test_libvirt.py 2011-09-19 14:22:34 +0000 |
1437 | +++ nova/tests/test_libvirt.py 2011-09-20 16:57:39 +0000 |
1438 | @@ -30,6 +30,7 @@ |
1439 | from nova import db |
1440 | from nova import exception |
1441 | from nova import flags |
1442 | +from nova import log as logging |
1443 | from nova import test |
1444 | from nova import utils |
1445 | from nova.api.ec2 import cloud |
1446 | @@ -38,10 +39,13 @@ |
1447 | from nova.virt import driver |
1448 | from nova.virt.libvirt import connection |
1449 | from nova.virt.libvirt import firewall |
1450 | +from nova.virt.libvirt import volume |
1451 | +from nova.volume import driver as volume_driver |
1452 | from nova.tests import fake_network |
1453 | |
1454 | libvirt = None |
1455 | FLAGS = flags.FLAGS |
1456 | +LOG = logging.getLogger('nova.tests.test_libvirt') |
1457 | |
1458 | _fake_network_info = fake_network.fake_get_instance_nw_info |
1459 | _ipv4_like = fake_network.ipv4_like |
1460 | @@ -87,6 +91,72 @@ |
1461 | return self._fake_dom_xml |
1462 | |
1463 | |
1464 | +class LibvirtVolumeTestCase(test.TestCase): |
1465 | + |
1466 | + @staticmethod |
1467 | + def fake_execute(*cmd, **kwargs): |
1468 | + LOG.debug("FAKE EXECUTE: %s" % ' '.join(cmd)) |
1469 | + return None, None |
1470 | + |
1471 | + def setUp(self): |
1472 | + super(LibvirtVolumeTestCase, self).setUp() |
1473 | + self.stubs.Set(utils, 'execute', self.fake_execute) |
1474 | + |
1475 | + def test_libvirt_iscsi_driver(self): |
1476 | + # NOTE(vish): exists is to make the driver assume connecting worked
1477 | + self.stubs.Set(os.path, 'exists', lambda x: True) |
1478 | + vol_driver = volume_driver.ISCSIDriver() |
1479 | + libvirt_driver = volume.LibvirtISCSIVolumeDriver('fake') |
1480 | + name = 'volume-00000001' |
1481 | + vol = {'id': 1, |
1482 | + 'name': name, |
1483 | + 'provider_auth': None, |
1484 | + 'provider_location': '10.0.2.15:3260,fake ' |
1485 | + 'iqn.2010-10.org.openstack:volume-00000001'} |
1486 | + address = '127.0.0.1' |
1487 | + connection_info = vol_driver.initialize_connection(vol, address) |
1488 | + mount_device = "vde" |
1489 | + xml = libvirt_driver.connect_volume(connection_info, mount_device) |
1490 | + tree = xml_to_tree(xml) |
1491 | + dev_str = '/dev/disk/by-path/ip-10.0.2.15:3260-iscsi-iqn.' \ |
1492 | + '2010-10.org.openstack:%s-lun-0' % name |
1493 | + self.assertEqual(tree.get('type'), 'block') |
1494 | + self.assertEqual(tree.find('./source').get('dev'), dev_str) |
1495 | + libvirt_driver.disconnect_volume(connection_info, mount_device) |
1496 | + |
1497 | + |
1498 | + def test_libvirt_sheepdog_driver(self): |
1499 | + vol_driver = volume_driver.SheepdogDriver() |
1500 | + libvirt_driver = volume.LibvirtNetVolumeDriver('fake') |
1501 | + name = 'volume-00000001' |
1502 | + vol = {'id': 1, 'name': name} |
1503 | + address = '127.0.0.1' |
1504 | + connection_info = vol_driver.initialize_connection(vol, address) |
1505 | + mount_device = "vde" |
1506 | + xml = libvirt_driver.connect_volume(connection_info, mount_device) |
1507 | + tree = xml_to_tree(xml) |
1508 | + self.assertEqual(tree.get('type'), 'network') |
1509 | + self.assertEqual(tree.find('./source').get('protocol'), 'sheepdog') |
1510 | + self.assertEqual(tree.find('./source').get('name'), name) |
1511 | + libvirt_driver.disconnect_volume(connection_info, mount_device) |
1512 | + |
1513 | + def test_libvirt_rbd_driver(self): |
1514 | + vol_driver = volume_driver.RBDDriver() |
1515 | + libvirt_driver = volume.LibvirtNetVolumeDriver('fake') |
1516 | + name = 'volume-00000001' |
1517 | + vol = {'id': 1, 'name': name} |
1518 | + address = '127.0.0.1' |
1519 | + connection_info = vol_driver.initialize_connection(vol, address) |
1520 | + mount_device = "vde" |
1521 | + xml = libvirt_driver.connect_volume(connection_info, mount_device) |
1522 | + tree = xml_to_tree(xml) |
1523 | + self.assertEqual(tree.get('type'), 'network') |
1524 | + self.assertEqual(tree.find('./source').get('protocol'), 'rbd') |
1525 | + rbd_name = '%s/%s' % (FLAGS.rbd_pool, name)
1526 | + self.assertEqual(tree.find('./source').get('name'), rbd_name) |
1527 | + libvirt_driver.disconnect_volume(connection_info, mount_device) |
1528 | + |
1529 | + |
1530 | class CacheConcurrencyTestCase(test.TestCase): |
1531 | def setUp(self): |
1532 | super(CacheConcurrencyTestCase, self).setUp() |
1533 | @@ -145,6 +215,20 @@ |
1534 | eventlet.sleep(0) |
1535 | |
1536 | |
1537 | +class FakeVolumeDriver(object): |
1538 | + def __init__(self, *args, **kwargs): |
1539 | + pass |
1540 | + |
1541 | + def attach_volume(self, *args): |
1542 | + pass |
1543 | + |
1544 | + def detach_volume(self, *args): |
1545 | + pass |
1546 | + |
1547 | + def get_xml(self, *args): |
1548 | + return "" |
1549 | + |
1550 | + |
1551 | class LibvirtConnTestCase(test.TestCase): |
1552 | |
1553 | def setUp(self): |
1554 | @@ -192,14 +276,14 @@ |
1555 | return FakeVirtDomain() |
1556 | |
1557 | # Creating mocks |
1558 | + volume_driver = 'iscsi=nova.tests.test_libvirt.FakeVolumeDriver' |
1559 | + self.flags(libvirt_volume_drivers=[volume_driver]) |
1560 | fake = FakeLibvirtConnection() |
1561 | # Customizing above fake if necessary |
1562 | for key, val in kwargs.items(): |
1563 | fake.__setattr__(key, val) |
1564 | |
1565 | self.flags(image_service='nova.image.fake.FakeImageService') |
1566 | - fw_driver = "nova.tests.fake_network.FakeIptablesFirewallDriver" |
1567 | - self.flags(firewall_driver=fw_driver) |
1568 | self.flags(libvirt_vif_driver="nova.tests.fake_network.FakeVIFDriver") |
1569 | |
1570 | self.mox.StubOutWithMock(connection.LibvirtConnection, '_conn') |
1571 | @@ -382,14 +466,16 @@ |
1572 | self.assertEquals(snapshot['status'], 'active') |
1573 | self.assertEquals(snapshot['name'], snapshot_name) |
1574 | |
1575 | - def test_attach_invalid_device(self): |
1576 | + def test_attach_invalid_volume_type(self): |
1577 | self.create_fake_libvirt_mock() |
1578 | connection.LibvirtConnection._conn.lookupByName = self.fake_lookup |
1579 | self.mox.ReplayAll() |
1580 | conn = connection.LibvirtConnection(False) |
1581 | - self.assertRaises(exception.InvalidDevicePath, |
1582 | + self.assertRaises(exception.VolumeDriverNotFound, |
1583 | conn.attach_volume, |
1584 | - "fake", "bad/device/path", "/dev/fake") |
1585 | + {"driver_volume_type": "badtype"}, |
1586 | + "fake", |
1587 | + "/dev/fake") |
1588 | |
1589 | def test_multi_nic(self): |
1590 | instance_data = dict(self.test_instance) |
1591 | @@ -637,9 +723,15 @@ |
1592 | self.mox.ReplayAll() |
1593 | try: |
1594 | conn = connection.LibvirtConnection(False) |
1595 | - conn.firewall_driver.setattr('setup_basic_filtering', fake_none) |
1596 | - conn.firewall_driver.setattr('prepare_instance_filter', fake_none) |
1597 | - conn.firewall_driver.setattr('instance_filter_exists', fake_none) |
1598 | + self.stubs.Set(conn.firewall_driver, |
1599 | + 'setup_basic_filtering', |
1600 | + fake_none) |
1601 | + self.stubs.Set(conn.firewall_driver, |
1602 | + 'prepare_instance_filter', |
1603 | + fake_none) |
1604 | + self.stubs.Set(conn.firewall_driver, |
1605 | + 'instance_filter_exists', |
1606 | + fake_none) |
1607 | conn.ensure_filtering_rules_for_instance(instance_ref, |
1608 | network_info, |
1609 | time=fake_timer) |
1610 | @@ -684,10 +776,7 @@ |
1611 | return vdmock |
1612 | |
1613 | self.create_fake_libvirt_mock(lookupByName=fake_lookup) |
1614 | -# self.mox.StubOutWithMock(self.compute, "recover_live_migration") |
1615 | self.mox.StubOutWithMock(self.compute, "rollback_live_migration") |
1616 | -# self.compute.recover_live_migration(self.context, instance_ref, |
1617 | -# dest='dest') |
1618 | self.compute.rollback_live_migration(self.context, instance_ref, |
1619 | 'dest', False) |
1620 | |
1621 | @@ -708,6 +797,27 @@ |
1622 | db.volume_destroy(self.context, volume_ref['id']) |
1623 | db.instance_destroy(self.context, instance_ref['id']) |
1624 | |
1625 | + def test_pre_live_migration_works_correctly(self): |
1626 | + """Confirms pre_live_migration works correctly."""
1627 | + # Creating testdata |
1628 | + vol = {'block_device_mapping': [
1629 | + {'connection_info': 'dummy', 'mount_device': '/dev/sda'}, |
1630 | + {'connection_info': 'dummy', 'mount_device': '/dev/sdb'}]} |
1631 | + conn = connection.LibvirtConnection(False) |
1632 | + |
1633 | + # Creating mocks |
1634 | + self.mox.StubOutWithMock(driver, "block_device_info_get_mapping") |
1635 | + driver.block_device_info_get_mapping(vol).AndReturn(
1636 | + vol['block_device_mapping'])
1637 | + self.mox.StubOutWithMock(conn, "volume_driver_method") |
1638 | + for v in vol['block_device_mapping']: |
1639 | + conn.volume_driver_method('connect_volume', |
1640 | + v['connection_info'], v['mount_device']) |
1641 | + |
1642 | + # Starting test |
1643 | + self.mox.ReplayAll() |
1644 | + self.assertEqual(conn.pre_live_migration(vol), None) |
1645 | + |
1646 | def test_pre_block_migration_works_correctly(self): |
1647 | """Confirms pre_block_migration works correctly.""" |
1648 | |
1649 | @@ -812,8 +922,12 @@ |
1650 | # Start test |
1651 | self.mox.ReplayAll() |
1652 | conn = connection.LibvirtConnection(False) |
1653 | - conn.firewall_driver.setattr('setup_basic_filtering', fake_none) |
1654 | - conn.firewall_driver.setattr('prepare_instance_filter', fake_none) |
1655 | + self.stubs.Set(conn.firewall_driver, |
1656 | + 'setup_basic_filtering', |
1657 | + fake_none) |
1658 | + self.stubs.Set(conn.firewall_driver, |
1659 | + 'prepare_instance_filter', |
1660 | + fake_none) |
1661 | |
1662 | network_info = _fake_network_info(self.stubs, 1) |
1663 | |
1664 | |
1665 | === modified file 'nova/tests/test_virt_drivers.py' |
1666 | --- nova/tests/test_virt_drivers.py 2011-09-15 19:09:14 +0000 |
1667 | +++ nova/tests/test_virt_drivers.py 2011-09-20 16:57:39 +0000 |
1668 | @@ -253,9 +253,11 @@ |
1669 | network_info = test_utils.get_test_network_info() |
1670 | instance_ref = test_utils.get_test_instance() |
1671 | self.connection.spawn(self.ctxt, instance_ref, network_info) |
1672 | - self.connection.attach_volume(instance_ref['name'], |
1673 | - '/dev/null', '/mnt/nova/something') |
1674 | - self.connection.detach_volume(instance_ref['name'], |
1675 | + self.connection.attach_volume({'driver_volume_type': 'fake'}, |
1676 | + instance_ref['name'], |
1677 | + '/mnt/nova/something') |
1678 | + self.connection.detach_volume({'driver_volume_type': 'fake'}, |
1679 | + instance_ref['name'], |
1680 | '/mnt/nova/something') |
1681 | |
1682 | @catch_notimplementederror |
1683 | |
1684 | === modified file 'nova/tests/test_volume.py' |
1685 | --- nova/tests/test_volume.py 2011-08-05 14:23:48 +0000 |
1686 | +++ nova/tests/test_volume.py 2011-09-20 16:57:39 +0000 |
1687 | @@ -257,7 +257,7 @@ |
1688 | |
1689 | class DriverTestCase(test.TestCase): |
1690 | """Base Test class for Drivers.""" |
1691 | - driver_name = "nova.volume.driver.FakeAOEDriver" |
1692 | + driver_name = "nova.volume.driver.FakeBaseDriver" |
1693 | |
1694 | def setUp(self): |
1695 | super(DriverTestCase, self).setUp() |
1696 | @@ -295,83 +295,6 @@ |
1697 | self.volume.delete_volume(self.context, volume_id) |
1698 | |
1699 | |
1700 | -class AOETestCase(DriverTestCase): |
1701 | - """Test Case for AOEDriver""" |
1702 | - driver_name = "nova.volume.driver.AOEDriver" |
1703 | - |
1704 | - def setUp(self): |
1705 | - super(AOETestCase, self).setUp() |
1706 | - |
1707 | - def tearDown(self): |
1708 | - super(AOETestCase, self).tearDown() |
1709 | - |
1710 | - def _attach_volume(self): |
1711 | - """Attach volumes to an instance. This function also sets |
1712 | - a fake log message.""" |
1713 | - volume_id_list = [] |
1714 | - for index in xrange(3): |
1715 | - vol = {} |
1716 | - vol['size'] = 0 |
1717 | - volume_id = db.volume_create(self.context, |
1718 | - vol)['id'] |
1719 | - self.volume.create_volume(self.context, volume_id) |
1720 | - |
1721 | - # each volume has a different mountpoint |
1722 | - mountpoint = "/dev/sd" + chr((ord('b') + index)) |
1723 | - db.volume_attached(self.context, volume_id, self.instance_id, |
1724 | - mountpoint) |
1725 | - |
1726 | - (shelf_id, blade_id) = db.volume_get_shelf_and_blade(self.context, |
1727 | - volume_id) |
1728 | - self.output += "%s %s eth0 /dev/nova-volumes/vol-foo auto run\n" \ |
1729 | - % (shelf_id, blade_id) |
1730 | - |
1731 | - volume_id_list.append(volume_id) |
1732 | - |
1733 | - return volume_id_list |
1734 | - |
1735 | - def test_check_for_export_with_no_volume(self): |
1736 | - """No log message when no volume is attached to an instance.""" |
1737 | - self.stream.truncate(0) |
1738 | - self.volume.check_for_export(self.context, self.instance_id) |
1739 | - self.assertEqual(self.stream.getvalue(), '') |
1740 | - |
1741 | - def test_check_for_export_with_all_vblade_processes(self): |
1742 | - """No log message when all the vblade processes are running.""" |
1743 | - volume_id_list = self._attach_volume() |
1744 | - |
1745 | - self.stream.truncate(0) |
1746 | - self.volume.check_for_export(self.context, self.instance_id) |
1747 | - self.assertEqual(self.stream.getvalue(), '') |
1748 | - |
1749 | - self._detach_volume(volume_id_list) |
1750 | - |
1751 | - def test_check_for_export_with_vblade_process_missing(self): |
1752 | - """Output a warning message when some vblade processes aren't |
1753 | - running.""" |
1754 | - volume_id_list = self._attach_volume() |
1755 | - |
1756 | - # the first vblade process isn't running |
1757 | - self.output = self.output.replace("run", "down", 1) |
1758 | - (shelf_id, blade_id) = db.volume_get_shelf_and_blade(self.context, |
1759 | - volume_id_list[0]) |
1760 | - |
1761 | - msg_is_match = False |
1762 | - self.stream.truncate(0) |
1763 | - try: |
1764 | - self.volume.check_for_export(self.context, self.instance_id) |
1765 | - except exception.ProcessExecutionError, e: |
1766 | - volume_id = volume_id_list[0] |
1767 | - msg = _("Cannot confirm exported volume id:%(volume_id)s. " |
1768 | - "vblade process for e%(shelf_id)s.%(blade_id)s " |
1769 | - "isn't running.") % locals() |
1770 | - |
1771 | - msg_is_match = (0 <= e.message.find(msg)) |
1772 | - |
1773 | - self.assertTrue(msg_is_match) |
1774 | - self._detach_volume(volume_id_list) |
1775 | - |
1776 | - |
1777 | class ISCSITestCase(DriverTestCase): |
1778 | """Test Case for ISCSIDriver""" |
1779 | driver_name = "nova.volume.driver.ISCSIDriver" |
1780 | @@ -408,7 +331,7 @@ |
1781 | self.assertEqual(self.stream.getvalue(), '') |
1782 | |
1783 | def test_check_for_export_with_all_volume_exported(self): |
1784 | - """No log message when all the vblade processes are running.""" |
1785 | + """No log message when all the processes are running.""" |
1786 | volume_id_list = self._attach_volume() |
1787 | |
1788 | self.mox.StubOutWithMock(self.volume.driver, '_execute') |
1789 | @@ -431,7 +354,6 @@ |
1790 | by ietd.""" |
1791 | volume_id_list = self._attach_volume() |
1792 | |
1793 | - # the first vblade process isn't running |
1794 | tid = db.volume_get_iscsi_target_num(self.context, volume_id_list[0]) |
1795 | self.mox.StubOutWithMock(self.volume.driver, '_execute') |
1796 | self.volume.driver._execute("ietadm", "--op", "show", |
1797 | |
1798 | === modified file 'nova/tests/test_xenapi.py' |
1799 | --- nova/tests/test_xenapi.py 2011-09-13 20:33:34 +0000 |
1800 | +++ nova/tests/test_xenapi.py 2011-09-20 16:57:39 +0000 |
1801 | @@ -98,6 +98,20 @@ |
1802 | vol['attach_status'] = "detached" |
1803 | return db.volume_create(self.context, vol) |
1804 | |
1805 | + @staticmethod |
1806 | + def _make_info(): |
1807 | + return { |
1808 | + 'driver_volume_type': 'iscsi', |
1809 | + 'data': { |
1810 | + 'volume_id': 1, |
1811 | + 'target_iqn': 'iqn.2010-10.org.openstack:volume-00000001', |
1812 | + 'target_portal': '127.0.0.1:3260,fake', |
1813 | + 'auth_method': 'CHAP', |
1814 | + 'auth_username': 'fake',
1815 | + 'auth_password': 'fake',
1816 | + } |
1817 | + } |
1818 | + |
1819 | def test_create_iscsi_storage(self): |
1820 | """This shows how to test helper classes' methods.""" |
1821 | stubs.stubout_session(self.stubs, stubs.FakeSessionForVolumeTests) |
1822 | @@ -105,7 +119,7 @@ |
1823 | helper = volume_utils.VolumeHelper |
1824 | helper.XenAPI = session.get_imported_xenapi() |
1825 | vol = self._create_volume() |
1826 | - info = helper.parse_volume_info(vol['id'], '/dev/sdc') |
1827 | + info = helper.parse_volume_info(self._make_info(), '/dev/sdc') |
1828 | label = 'SR-%s' % vol['id'] |
1829 | description = 'Test-SR' |
1830 | sr_ref = helper.create_iscsi_storage(session, info, label, description) |
1831 | @@ -123,8 +137,9 @@ |
1832 | # oops, wrong mount point! |
1833 | self.assertRaises(volume_utils.StorageError, |
1834 | helper.parse_volume_info, |
1835 | - vol['id'], |
1836 | - '/dev/sd') |
1837 | + self._make_info(), |
1838 | + 'dev/sd' |
1839 | + ) |
1840 | db.volume_destroy(context.get_admin_context(), vol['id']) |
1841 | |
1842 | def test_attach_volume(self): |
1843 | @@ -134,7 +149,8 @@ |
1844 | volume = self._create_volume() |
1845 | instance = db.instance_create(self.context, self.values) |
1846 | vm = xenapi_fake.create_vm(instance.name, 'Running') |
1847 | - result = conn.attach_volume(instance.name, volume['id'], '/dev/sdc') |
1848 | + result = conn.attach_volume(self._make_info(), |
1849 | + instance.name, '/dev/sdc') |
1850 | |
1851 | def check(): |
1852 | # check that the VM has a VBD attached to it |
1853 | |
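The `_make_info()` helper above shows the `connection_info` shape that the new `attach_volume`/`detach_volume` signatures carry end to end: a `driver_volume_type` key selecting the hypervisor-side driver plus a `data` payload only that driver interprets. A small builder sketching the iscsi case — the `auth_username`/`auth_password` key names are this sketch's assumption, not confirmed by the branch:

```python
def make_iscsi_connection_info(volume_id, iqn, portal, auth_method=None,
                               auth_username=None, auth_password=None):
    """Build an iscsi connection_info dict of the shape used by the new
    attach_volume/detach_volume signatures (field names hedged from the
    _make_info() test helper)."""
    data = {'volume_id': volume_id,
            'target_iqn': iqn,
            'target_portal': portal}
    if auth_method:
        # CHAP credentials are optional; assumed key names below
        data['auth_method'] = auth_method
        data['auth_username'] = auth_username
        data['auth_password'] = auth_password
    return {'driver_volume_type': 'iscsi', 'data': data}
```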
1854 | === modified file 'nova/virt/driver.py' |
1855 | --- nova/virt/driver.py 2011-09-15 18:44:49 +0000 |
1856 | +++ nova/virt/driver.py 2011-09-20 16:57:39 +0000 |
1857 | @@ -149,7 +149,8 @@ |
1858 | """ |
1859 | raise NotImplementedError() |
1860 | |
1861 | - def destroy(self, instance, network_info, cleanup=True): |
1862 | + def destroy(self, instance, network_info, block_device_info=None, |
1863 | + cleanup=True): |
1864 | """Destroy (shutdown and delete) the specified instance. |
1865 | |
1866 | If the instance is not found (for example if networking failed), this |
1867 | @@ -203,12 +204,12 @@ |
1868 | # TODO(Vek): Need to pass context in for access to auth_token |
1869 | raise NotImplementedError() |
1870 | |
1871 | - def attach_volume(self, context, instance_id, volume_id, mountpoint): |
1872 | - """Attach the disk at device_path to the instance at mountpoint""" |
1873 | + def attach_volume(self, connection_info, instance_name, mountpoint): |
1874 | + """Attach the disk to the instance at mountpoint using connection_info"""
1875 | raise NotImplementedError() |
1876 | |
1877 | - def detach_volume(self, context, instance_id, volume_id): |
1878 | - """Detach the disk attached to the instance at mountpoint""" |
1879 | + def detach_volume(self, connection_info, instance_name, mountpoint): |
1880 | + """Detach the disk attached to the instance""" |
1881 | raise NotImplementedError() |
1882 | |
1883 | def compare_cpu(self, cpu_info): |
1884 | |
1885 | === modified file 'nova/virt/fake.py' |
1886 | --- nova/virt/fake.py 2011-09-15 18:44:49 +0000 |
1887 | +++ nova/virt/fake.py 2011-09-20 16:57:39 +0000 |
1888 | @@ -92,6 +92,10 @@ |
1889 | info_list.append(self._map_to_instance_info(instance)) |
1890 | return info_list |
1891 | |
1892 | + def plug_vifs(self, instance, network_info): |
1893 | + """Plugin VIFs into networks.""" |
1894 | + pass |
1895 | + |
1896 | def spawn(self, context, instance, |
1897 | network_info=None, block_device_info=None): |
1898 | name = instance.name |
1899 | @@ -148,7 +152,8 @@ |
1900 | def resume(self, instance, callback): |
1901 | pass |
1902 | |
1903 | - def destroy(self, instance, network_info, cleanup=True): |
1904 | + def destroy(self, instance, network_info, block_device_info=None, |
1905 | + cleanup=True): |
1906 | key = instance['name'] |
1907 | if key in self.instances: |
1908 | del self.instances[key] |
1909 | @@ -156,13 +161,15 @@ |
1910 | LOG.warning("Key '%s' not in instances '%s'" % |
1911 | (key, self.instances)) |
1912 | |
1913 | - def attach_volume(self, instance_name, device_path, mountpoint): |
1914 | + def attach_volume(self, connection_info, instance_name, mountpoint): |
1915 | + """Attach the disk to the instance at mountpoint using connection_info"""
1916 | if not instance_name in self._mounts: |
1917 | self._mounts[instance_name] = {} |
1918 | - self._mounts[instance_name][mountpoint] = device_path |
1919 | + self._mounts[instance_name][mountpoint] = connection_info |
1920 | return True |
1921 | |
1922 | - def detach_volume(self, instance_name, mountpoint): |
1923 | + def detach_volume(self, connection_info, instance_name, mountpoint): |
1924 | + """Detach the disk attached to the instance""" |
1925 | try: |
1926 | del self._mounts[instance_name][mountpoint] |
1927 | except KeyError: |
1928 | @@ -233,11 +240,19 @@ |
1929 | """This method is supported only by libvirt.""" |
1930 | raise NotImplementedError('This method is supported only by libvirt.') |
1931 | |
1932 | + def get_instance_disk_info(self, ctxt, instance_ref): |
1933 | + """This method is supported only by libvirt.""" |
1934 | + return |
1935 | + |
1936 | def live_migration(self, context, instance_ref, dest, |
1937 | post_method, recover_method, block_migration=False): |
1938 | """This method is supported only by libvirt.""" |
1939 | return |
1940 | |
1941 | + def pre_live_migration(self, block_device_info): |
1942 | + """This method is supported only by libvirt.""" |
1943 | + return |
1944 | + |
1945 | def unfilter_instance(self, instance_ref, network_info): |
1946 | """This method is supported only by libvirt.""" |
1947 | raise NotImplementedError('This method is supported only by libvirt.') |
1948 | |
1949 | === modified file 'nova/virt/hyperv.py' |
1950 | --- nova/virt/hyperv.py 2011-09-15 18:44:49 +0000 |
1951 | +++ nova/virt/hyperv.py 2011-09-20 16:57:39 +0000 |
1952 | @@ -374,7 +374,8 @@ |
1953 | raise exception.InstanceNotFound(instance_id=instance.id) |
1954 | self._set_vm_state(instance.name, 'Reboot') |
1955 | |
1956 | - def destroy(self, instance, network_info, cleanup=True): |
1957 | + def destroy(self, instance, network_info, block_device_info=None, |
1958 | + cleanup=True): |
1959 | """Destroy the VM. Also destroy the associated VHD disk files""" |
1960 | LOG.debug(_("Got request to destroy vm %s"), instance.name) |
1961 | vm = self._lookup(instance.name) |
1962 | @@ -474,12 +475,12 @@ |
1963 | LOG.error(msg) |
1964 | raise Exception(msg) |
1965 | |
1966 | - def attach_volume(self, instance_name, device_path, mountpoint): |
1967 | + def attach_volume(self, connection_info, instance_name, mountpoint): |
1968 | vm = self._lookup(instance_name) |
1969 | if vm is None: |
1970 | raise exception.InstanceNotFound(instance_id=instance_name) |
1971 | |
1972 | - def detach_volume(self, instance_name, mountpoint): |
1973 | + def detach_volume(self, connection_info, instance_name, mountpoint): |
1974 | vm = self._lookup(instance_name) |
1975 | if vm is None: |
1976 | raise exception.InstanceNotFound(instance_id=instance_name) |
1977 | |
1978 | === modified file 'nova/virt/libvirt.xml.template' |
1979 | --- nova/virt/libvirt.xml.template 2011-08-24 23:48:04 +0000 |
1980 | +++ nova/virt/libvirt.xml.template 2011-09-20 16:57:39 +0000 |
1981 | @@ -80,30 +80,22 @@ |
1982 | <target dev='${local_device}' bus='${disk_bus}'/> |
1983 | </disk> |
1984 | #end if |
1985 | - #for $eph in $ephemerals |
1986 | - <disk type='block'> |
1987 | + #for $eph in $ephemerals |
1988 | + <disk type='block'> |
1989 | <driver type='${driver_type}'/> |
1990 | <source dev='${basepath}/${eph.device_path}'/> |
1991 | <target dev='${eph.device}' bus='${disk_bus}'/> |
1992 | - </disk> |
1993 | - #end for |
1994 | - #if $getVar('swap_device', False) |
1995 | + </disk> |
1996 | + #end for |
1997 | + #if $getVar('swap_device', False) |
1998 | <disk type='file'> |
1999 | <driver type='${driver_type}'/> |
2000 | <source file='${basepath}/disk.swap'/> |
2001 | <target dev='${swap_device}' bus='${disk_bus}'/> |
2002 | </disk> |
2003 | - #end if |
2004 | + #end if |
2005 | #for $vol in $volumes |
2006 | - <disk type='${vol.type}'> |
2007 | - <driver type='raw'/> |
2008 | - #if $vol.type == 'network' |
2009 | - <source protocol='${vol.protocol}' name='${vol.name}'/> |
2010 | - #else |
2011 | - <source dev='${vol.device_path}'/> |
2012 | - #end if |
2013 | - <target dev='${vol.mount_device}' bus='${disk_bus}'/> |
2014 | - </disk> |
2015 | + ${vol} |
2016 | #end for |
2017 | #end if |
2018 | #if $getVar('config_drive', False) |
2019 | |
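With the template's per-volume branching collapsed to `${vol}`, each libvirt volume driver now hands back a fully rendered `<disk>` element. A hedged sketch of what that rendering might look like; the real implementations live in the new nova/virt/libvirt/volume.py, which this sketch does not reproduce:

```python
def disk_xml(connection_info, mount_device):
    """Render the <disk> element the template's ${vol} placeholder now
    receives pre-built. Branching mirrors the removed template logic:
    network protocols (rbd, sheepdog) vs. a local block device."""
    vtype = connection_info['driver_volume_type']
    data = connection_info['data']
    if vtype in ('rbd', 'sheepdog'):
        dtype = 'network'
        source = "<source protocol='%s' name='%s'/>" % (vtype, data['name'])
    else:
        dtype = 'block'
        source = "<source dev='%s'/>" % data['device_path']
    return ("<disk type='%s'>"
            "<driver name='qemu' type='raw'/>"
            "%s"
            "<target dev='%s' bus='virtio'/>"
            "</disk>") % (dtype, source, mount_device)
```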
2020 | === modified file 'nova/virt/libvirt/connection.py' |
2021 | --- nova/virt/libvirt/connection.py 2011-09-20 10:12:01 +0000 |
2022 | +++ nova/virt/libvirt/connection.py 2011-09-20 16:57:39 +0000 |
2023 | @@ -134,6 +134,12 @@ |
2024 | flags.DEFINE_string('libvirt_vif_driver', |
2025 | 'nova.virt.libvirt.vif.LibvirtBridgeDriver', |
2026 | 'The libvirt VIF driver to configure the VIFs.') |
2027 | +flags.DEFINE_list('libvirt_volume_drivers', |
2028 | + ['iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver', |
2029 | + 'local=nova.virt.libvirt.volume.LibvirtVolumeDriver', |
2030 | + 'rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver',
2031 | + 'sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver'], |
2032 | + 'Libvirt handlers for remote volumes.') |
2033 | flags.DEFINE_string('default_local_format', |
2034 | None, |
2035 | 'The default format a local_volume will be formatted with ' |
2036 | @@ -184,6 +190,11 @@ |
2037 | fw_class = utils.import_class(FLAGS.firewall_driver) |
2038 | self.firewall_driver = fw_class(get_connection=self._get_connection) |
2039 | self.vif_driver = utils.import_object(FLAGS.libvirt_vif_driver) |
2040 | + self.volume_drivers = {} |
2041 | + for driver_str in FLAGS.libvirt_volume_drivers: |
2042 | + driver_type, _sep, driver = driver_str.partition('=') |
2043 | + driver_class = utils.import_class(driver) |
2044 | + self.volume_drivers[driver_type] = driver_class(self) |
2045 | |
2046 | def init_host(self, host): |
2047 | # NOTE(nsokolov): moved instance restarting to ComputeManager |
2048 | @@ -261,7 +272,8 @@ |
2049 | for (network, mapping) in network_info: |
2050 | self.vif_driver.plug(instance, network, mapping) |
2051 | |
2052 | - def destroy(self, instance, network_info, cleanup=True): |
2053 | + def destroy(self, instance, network_info, block_device_info=None, |
2054 | + cleanup=True): |
2055 | instance_name = instance['name'] |
2056 | |
2057 | try: |
2058 | @@ -292,21 +304,21 @@ |
2059 | locals()) |
2060 | raise |
2061 | |
2062 | - try: |
2063 | - # NOTE(justinsb): We remove the domain definition. We probably |
2064 | - # would do better to keep it if cleanup=False (e.g. volumes?) |
2065 | - # (e.g. #2 - not losing machines on failure) |
2066 | - virt_dom.undefine() |
2067 | - except libvirt.libvirtError as e: |
2068 | - errcode = e.get_error_code() |
2069 | - LOG.warning(_("Error from libvirt during undefine of " |
2070 | - "%(instance_name)s. Code=%(errcode)s " |
2071 | - "Error=%(e)s") % |
2072 | - locals()) |
2073 | - raise |
2074 | + try: |
2075 | + # NOTE(justinsb): We remove the domain definition. We probably |
2076 | + # would do better to keep it if cleanup=False (e.g. volumes?) |
2077 | + # (e.g. #2 - not losing machines on failure) |
2078 | + virt_dom.undefine() |
2079 | + except libvirt.libvirtError as e: |
2080 | + errcode = e.get_error_code() |
2081 | + LOG.warning(_("Error from libvirt during undefine of " |
2082 | + "%(instance_name)s. Code=%(errcode)s " |
2083 | + "Error=%(e)s") % |
2084 | + locals()) |
2085 | + raise |
2086 | |
2087 | - for (network, mapping) in network_info: |
2088 | - self.vif_driver.unplug(instance, network, mapping) |
2089 | + for (network, mapping) in network_info: |
2090 | + self.vif_driver.unplug(instance, network, mapping) |
2091 | |
2092 | def _wait_for_destroy(): |
2093 | """Called at an interval until the VM is gone.""" |
2094 | @@ -325,6 +337,15 @@ |
2095 | self.firewall_driver.unfilter_instance(instance, |
2096 | network_info=network_info) |
2097 | |
2098 | + # NOTE(vish): we disconnect from volumes regardless |
2099 | + block_device_mapping = driver.block_device_info_get_mapping( |
2100 | + block_device_info) |
2101 | + for vol in block_device_mapping: |
2102 | + connection_info = vol['connection_info'] |
2103 | + mountpoint = vol['mount_device'] |
2104 | + self.volume_driver_method('disconnect_volume',
2105 | + connection_info, |
2106 | + mountpoint) |
2107 | if cleanup: |
2108 | self._cleanup(instance) |
2109 | |
2110 | @@ -340,24 +361,22 @@ |
2111 | if os.path.exists(target): |
2112 | shutil.rmtree(target) |
2113 | |
2114 | + def volume_driver_method(self, method_name, connection_info, |
2115 | + *args, **kwargs): |
2116 | + driver_type = connection_info.get('driver_volume_type') |
2117 | + if not driver_type in self.volume_drivers: |
2118 | + raise exception.VolumeDriverNotFound(driver_type=driver_type) |
2119 | + driver = self.volume_drivers[driver_type] |
2120 | + method = getattr(driver, method_name) |
2121 | + return method(connection_info, *args, **kwargs) |
2122 | + |
2123 | @exception.wrap_exception() |
2124 | - def attach_volume(self, instance_name, device_path, mountpoint): |
2125 | + def attach_volume(self, connection_info, instance_name, mountpoint): |
2126 | virt_dom = self._lookup_by_name(instance_name) |
2127 | mount_device = mountpoint.rpartition("/")[2] |
2128 | - (type, protocol, name) = \ |
2129 | - self._get_volume_device_info(device_path) |
2130 | - if type == 'block': |
2131 | - xml = """<disk type='block'> |
2132 | - <driver name='qemu' type='raw'/> |
2133 | - <source dev='%s'/> |
2134 | - <target dev='%s' bus='virtio'/> |
2135 | - </disk>""" % (device_path, mount_device) |
2136 | - elif type == 'network': |
2137 | - xml = """<disk type='network'> |
2138 | - <driver name='qemu' type='raw'/> |
2139 | - <source protocol='%s' name='%s'/> |
2140 | - <target dev='%s' bus='virtio'/> |
2141 | - </disk>""" % (protocol, name, mount_device) |
2142 | + xml = self.volume_driver_method('connect_volume', |
2143 | + connection_info, |
2144 | + mount_device) |
2145 | virt_dom.attachDevice(xml) |
2146 | |
2147 | def _get_disk_xml(self, xml, device): |
2148 | @@ -380,14 +399,24 @@ |
2149 | if doc is not None: |
2150 | doc.freeDoc() |
2151 | |
2152 | + |
2153 | @exception.wrap_exception() |
2154 | - def detach_volume(self, instance_name, mountpoint): |
2155 | - virt_dom = self._lookup_by_name(instance_name) |
2156 | + def detach_volume(self, connection_info, instance_name, mountpoint): |
2157 | mount_device = mountpoint.rpartition("/")[2] |
2158 | - xml = self._get_disk_xml(virt_dom.XMLDesc(0), mount_device) |
2159 | - if not xml: |
2160 | - raise exception.DiskNotFound(location=mount_device) |
2161 | - virt_dom.detachDevice(xml) |
2162 | + try: |
2163 | + # NOTE(vish): This is called to clean up volumes after live
2164 | + # migration, so we should still log out even if the
2165 | + # instance no longer exists here.
2166 | + virt_dom = self._lookup_by_name(instance_name) |
2167 | + xml = self._get_disk_xml(virt_dom.XMLDesc(0), mount_device) |
2168 | + if not xml: |
2169 | + raise exception.DiskNotFound(location=mount_device) |
2170 | + virt_dom.detachDevice(xml) |
2171 | + finally: |
2172 | + self.volume_driver_method('disconnect_volume', |
2173 | + connection_info, |
2174 | + mount_device) |
2176 | |
2177 | @exception.wrap_exception() |
2178 | def snapshot(self, context, instance, image_href): |
2179 | @@ -1047,14 +1076,6 @@ |
2180 | LOG.debug(_("block_device_list %s"), block_device_list) |
2181 | return block_device.strip_dev(mount_device) in block_device_list |
2182 | |
2183 | - def _get_volume_device_info(self, device_path): |
2184 | - if device_path.startswith('/dev/'): |
2185 | - return ('block', None, None) |
2186 | - elif ':' in device_path: |
2187 | - (protocol, name) = device_path.split(':') |
2188 | - return ('network', protocol, name) |
2189 | - else: |
2190 | - raise exception.InvalidDevicePath(path=device_path) |
2191 | |
2192 | def _prepare_xml_info(self, instance, network_info, rescue, |
2193 | block_device_info=None): |
2194 | @@ -1073,10 +1094,14 @@ |
2195 | else: |
2196 | driver_type = 'raw' |
2197 | |
2198 | + volumes = [] |
2199 | for vol in block_device_mapping: |
2200 | - vol['mount_device'] = block_device.strip_dev(vol['mount_device']) |
2201 | - (vol['type'], vol['protocol'], vol['name']) = \ |
2202 | - self._get_volume_device_info(vol['device_path']) |
2203 | + connection_info = vol['connection_info'] |
2204 | + mountpoint = vol['mount_device'] |
2205 | + xml = self.volume_driver_method('connect_volume', |
2206 | + connection_info, |
2207 | + mountpoint) |
2208 | + volumes.append(xml) |
2209 | |
2210 | ebs_root = self._volume_in_mapping(self.default_root_device, |
2211 | block_device_info) |
2212 | @@ -1109,7 +1134,7 @@ |
2213 | 'nics': nics, |
2214 | 'ebs_root': ebs_root, |
2215 | 'local_device': local_device, |
2216 | - 'volumes': block_device_mapping, |
2217 | + 'volumes': volumes, |
2218 | 'use_virtio_for_bridges': |
2219 | FLAGS.libvirt_use_virtio_for_bridges, |
2220 | 'ephemerals': ephemerals} |
2221 | @@ -1705,6 +1730,24 @@ |
2222 | timer.f = wait_for_live_migration |
2223 | timer.start(interval=0.5, now=True) |
2224 | |
2225 | + def pre_live_migration(self, block_device_info): |
2226 | + """Prepare for live migration. |
2227 | + |
2228 | + :param block_device_info: |
2229 | + the result of _get_instance_volume_bdms() |
2230 | + in the compute manager. |
2231 | + """ |
2232 | + |
2233 | + # Establishing connection to volume server. |
2234 | + block_device_mapping = driver.block_device_info_get_mapping( |
2235 | + block_device_info) |
2236 | + for vol in block_device_mapping: |
2237 | + connection_info = vol['connection_info'] |
2238 | + mountpoint = vol['mount_device'] |
2239 | + self.volume_driver_method('connect_volume', |
2240 | + connection_info, |
2241 | + mountpoint) |
2242 | + |
2243 | def pre_block_migration(self, ctxt, instance_ref, disk_info_json): |
2244 | """Preparation block migration. |
2245 | |
2246 | |
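The hunk above replaces the inline block/network XML branching with a single call to `volume_driver_method`, which dispatches on `connection_info['driver_volume_type']`. The dispatcher itself is not part of this hunk; a minimal sketch of how such a table-driven dispatch can work (the class names and the registry below are illustrative, not code from the branch):

```python
class FakeVolumeDriver(object):
    """Stand-in for a LibvirtVolumeDriver subclass (illustrative only)."""

    def connect_volume(self, connection_info, mount_device):
        # Return the libvirt <disk> XML for this device.
        return ("<disk type='block'><target dev='%s' bus='virtio'/></disk>"
                % mount_device)

    def disconnect_volume(self, connection_info, mount_device):
        pass


class ConnectionSketch(object):
    """Hypothetical sketch of the volume_driver_method dispatch."""

    def __init__(self):
        # Map driver_volume_type -> driver instance.  The real connection
        # object presumably builds this table from configuration.
        self.volume_drivers = {'iscsi': FakeVolumeDriver(),
                               'local': FakeVolumeDriver()}

    def volume_driver_method(self, method_name, connection_info, *args):
        driver_type = connection_info['driver_volume_type']
        driver = self.volume_drivers[driver_type]
        return getattr(driver, method_name)(connection_info, *args)


conn = ConnectionSketch()
info = {'driver_volume_type': 'iscsi', 'data': {}}
xml = conn.volume_driver_method('connect_volume', info, 'vdb')
```

With this shape, attach_volume and detach_volume stay backend-agnostic: supporting a new backend means registering one more driver class rather than growing an if/elif chain.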
2247 | === added file 'nova/virt/libvirt/volume.py' |
2248 | --- nova/virt/libvirt/volume.py 1970-01-01 00:00:00 +0000 |
2249 | +++ nova/virt/libvirt/volume.py 2011-09-20 16:57:39 +0000 |
2250 | @@ -0,0 +1,149 @@ |
2251 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
2252 | + |
2253 | +# Copyright 2011 OpenStack LLC. |
2254 | +# All Rights Reserved. |
2255 | +# |
2256 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
2257 | +# not use this file except in compliance with the License. You may obtain |
2258 | +# a copy of the License at |
2259 | +# |
2260 | +# http://www.apache.org/licenses/LICENSE-2.0 |
2261 | +# |
2262 | +# Unless required by applicable law or agreed to in writing, software |
2263 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
2264 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
2265 | +# License for the specific language governing permissions and limitations |
2266 | +# under the License. |
2267 | + |
2268 | +"""Volume drivers for libvirt.""" |
2269 | + |
2270 | +import os |
2271 | +import time |
2272 | + |
2273 | +from nova import exception |
2274 | +from nova import flags |
2275 | +from nova import log as logging |
2276 | +from nova import utils |
2277 | + |
2278 | +LOG = logging.getLogger('nova.virt.libvirt.volume') |
2279 | + |
2280 | +FLAGS = flags.FLAGS |
2281 | +flags.DECLARE('num_iscsi_scan_tries', 'nova.volume.driver') |
2282 | + |
2283 | +class LibvirtVolumeDriver(object): |
2284 | + """Base class for volume drivers.""" |
2285 | + def __init__(self, connection): |
2286 | + self.connection = connection |
2287 | + |
2288 | + def connect_volume(self, connection_info, mount_device): |
2289 | + """Connect the volume. Returns xml for libvirt.""" |
2290 | + device_path = connection_info['data']['device_path'] |
2291 | + xml = """<disk type='block'> |
2292 | + <driver name='qemu' type='raw'/> |
2293 | + <source dev='%s'/> |
2294 | + <target dev='%s' bus='virtio'/> |
2295 | + </disk>""" % (device_path, mount_device) |
2296 | + return xml |
2297 | + |
2298 | + def disconnect_volume(self, connection_info, mount_device): |
2299 | + """Disconnect the volume""" |
2300 | + pass |
2301 | + |
2302 | + |
2303 | +class LibvirtNetVolumeDriver(LibvirtVolumeDriver): |
2304 | + """Driver to attach Network volumes to libvirt.""" |
2305 | + |
2306 | + def connect_volume(self, connection_info, mount_device): |
2307 | + protocol = connection_info['driver_volume_type'] |
2308 | + name = connection_info['data']['name'] |
2309 | + xml = """<disk type='network'> |
2310 | + <driver name='qemu' type='raw'/> |
2311 | + <source protocol='%s' name='%s'/> |
2312 | + <target dev='%s' bus='virtio'/> |
2313 | + </disk>""" % (protocol, name, mount_device) |
2314 | + return xml |
2315 | + |
2316 | + |
2317 | +class LibvirtISCSIVolumeDriver(LibvirtVolumeDriver): |
2318 | + """Driver to attach iSCSI volumes to libvirt.""" |
2319 | + |
2320 | + def _run_iscsiadm(self, iscsi_properties, iscsi_command): |
2321 | + (out, err) = utils.execute('iscsiadm', '-m', 'node', '-T', |
2322 | + iscsi_properties['target_iqn'], |
2323 | + '-p', iscsi_properties['target_portal'], |
2324 | + *iscsi_command, run_as_root=True) |
2325 | + LOG.debug("iscsiadm %s: stdout=%s stderr=%s" % |
2326 | + (iscsi_command, out, err)) |
2327 | + return (out, err) |
2328 | + |
2329 | + def _iscsiadm_update(self, iscsi_properties, property_key, property_value): |
2330 | + iscsi_command = ('--op', 'update', '-n', property_key, |
2331 | + '-v', property_value) |
2332 | + return self._run_iscsiadm(iscsi_properties, iscsi_command) |
2333 | + |
2334 | + def connect_volume(self, connection_info, mount_device): |
2335 | + """Log into the iSCSI target and return its libvirt disk XML.""" |
2336 | + iscsi_properties = connection_info['data'] |
2337 | + try: |
2338 | + # NOTE(vish): if we are on the same host as nova volume, the |
2339 | + # discovery makes the target so we don't need to |
2340 | + # run --op new |
2341 | + self._run_iscsiadm(iscsi_properties, ()) |
2342 | + except exception.ProcessExecutionError: |
2343 | + self._run_iscsiadm(iscsi_properties, ('--op', 'new')) |
2344 | + |
2345 | + if iscsi_properties.get('auth_method'): |
2346 | + self._iscsiadm_update(iscsi_properties, |
2347 | + "node.session.auth.authmethod", |
2348 | + iscsi_properties['auth_method']) |
2349 | + self._iscsiadm_update(iscsi_properties, |
2350 | + "node.session.auth.username", |
2351 | + iscsi_properties['auth_username']) |
2352 | + self._iscsiadm_update(iscsi_properties, |
2353 | + "node.session.auth.password", |
2354 | + iscsi_properties['auth_password']) |
2355 | + |
2356 | + self._run_iscsiadm(iscsi_properties, ("--login",)) |
2357 | + |
2358 | + self._iscsiadm_update(iscsi_properties, "node.startup", "automatic") |
2359 | + |
2360 | + host_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" % |
2361 | + (iscsi_properties['target_portal'], |
2362 | + iscsi_properties['target_iqn'])) |
2363 | + |
2364 | + # The /dev/disk/by-path/... node is not always present immediately |
2365 | + # TODO(justinsb): This retry-with-delay is a pattern, move to utils? |
2366 | + tries = 0 |
2367 | + while not os.path.exists(host_device): |
2368 | + if tries >= FLAGS.num_iscsi_scan_tries: |
2369 | + raise exception.Error(_("iSCSI device not found at %s") % |
2370 | + (host_device)) |
2371 | + |
2372 | + LOG.warn(_("ISCSI volume not yet found at: %(host_device)s. " |
2373 | + "Will rescan & retry. Try number: %(tries)s") % |
2374 | + locals()) |
2375 | + |
2376 | + # The rescan isn't documented as being necessary(?), but it helps |
2377 | + self._run_iscsiadm(iscsi_properties, ("--rescan",)) |
2378 | + |
2379 | + tries = tries + 1 |
2380 | + if not os.path.exists(host_device): |
2381 | + time.sleep(tries ** 2) |
2382 | + |
2383 | + if tries != 0: |
2384 | + LOG.debug(_("Found iSCSI node %(host_device)s " |
2385 | + "(after %(tries)s rescans)") % |
2386 | + locals()) |
2387 | + |
2388 | + connection_info['data']['device_path'] = host_device |
2389 | + sup = super(LibvirtISCSIVolumeDriver, self) |
2390 | + return sup.connect_volume(connection_info, mount_device) |
2391 | + |
2392 | + def disconnect_volume(self, connection_info, mount_device): |
2393 | + """Log out of the iSCSI target and delete the node record.""" |
2394 | + sup = super(LibvirtISCSIVolumeDriver, self) |
2395 | + sup.disconnect_volume(connection_info, mount_device) |
2396 | + iscsi_properties = connection_info['data'] |
2397 | + self._iscsiadm_update(iscsi_properties, "node.startup", "manual") |
2398 | + self._run_iscsiadm(iscsi_properties, ("--logout",)) |
2399 | + self._run_iscsiadm(iscsi_properties, ('--op', 'delete')) |
2400 | |
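The `TODO(justinsb)` comment in `connect_volume` notes that the retry-with-delay loop is a reusable pattern worth moving to utils. A sketch of what that extraction could look like (this helper does not exist in the branch; the quadratic back-off mirrors the `tries ** 2` sleep above):

```python
import time


def wait_for_device(path_exists, max_tries, sleep_fn=time.sleep):
    """Poll until path_exists() is true, backing off quadratically.

    path_exists is a callable so the helper can be exercised without
    touching the filesystem; in connect_volume it would be
    lambda: os.path.exists(host_device).
    """
    tries = 0
    while not path_exists():
        if tries >= max_tries:
            raise RuntimeError('device not found after %d tries' % tries)
        tries += 1
        # Re-check before sleeping, as the original loop does.
        if not path_exists():
            sleep_fn(tries ** 2)
    return tries


# Simulate a device node that shows up on the third existence check.
checks = iter([False, False, True])
tries = wait_for_device(lambda: next(checks), max_tries=5,
                        sleep_fn=lambda seconds: None)
```

Returning the try count preserves the "Found iSCSI node ... after N rescans" logging the caller does today.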
2401 | === modified file 'nova/virt/vmwareapi_conn.py' |
2402 | --- nova/virt/vmwareapi_conn.py 2011-09-08 21:10:03 +0000 |
2403 | +++ nova/virt/vmwareapi_conn.py 2011-09-20 16:57:39 +0000 |
2404 | @@ -137,7 +137,8 @@ |
2405 | """Reboot VM instance.""" |
2406 | self._vmops.reboot(instance, network_info) |
2407 | |
2408 | - def destroy(self, instance, network_info, cleanup=True): |
2409 | + def destroy(self, instance, network_info, block_device_info=None, |
2410 | + cleanup=True): |
2411 | """Destroy VM instance.""" |
2412 | self._vmops.destroy(instance, network_info) |
2413 | |
2414 | @@ -173,11 +174,11 @@ |
2415 | """Return link to instance's ajax console.""" |
2416 | return self._vmops.get_ajax_console(instance) |
2417 | |
2418 | - def attach_volume(self, instance_name, device_path, mountpoint): |
2419 | + def attach_volume(self, connection_info, instance_name, mountpoint): |
2420 | """Attach volume storage to VM instance.""" |
2421 | pass |
2422 | |
2423 | - def detach_volume(self, instance_name, mountpoint): |
2424 | + def detach_volume(self, connection_info, instance_name, mountpoint): |
2425 | """Detach volume storage to VM instance.""" |
2426 | pass |
2427 | |
2428 | |
2429 | === modified file 'nova/virt/xenapi/volume_utils.py' |
2430 | --- nova/virt/xenapi/volume_utils.py 2011-08-05 14:23:48 +0000 |
2431 | +++ nova/virt/xenapi/volume_utils.py 2011-09-20 16:57:39 +0000 |
2432 | @@ -147,7 +147,7 @@ |
2433 | % sr_ref) |
2434 | |
2435 | @classmethod |
2436 | - def parse_volume_info(cls, device_path, mountpoint): |
2437 | + def parse_volume_info(cls, connection_info, mountpoint): |
2438 | """ |
2439 | Parse device_path and mountpoint as they can be used by XenAPI. |
2440 | In particular, the mountpoint (e.g. /dev/sdc) must be translated |
2441 | @@ -161,11 +161,12 @@ |
2442 | the iscsi driver to set them. |
2443 | """ |
2444 | device_number = VolumeHelper.mountpoint_to_number(mountpoint) |
2445 | - volume_id = _get_volume_id(device_path) |
2446 | - (iscsi_name, iscsi_portal) = _get_target(volume_id) |
2447 | - target_host = _get_target_host(iscsi_portal) |
2448 | - target_port = _get_target_port(iscsi_portal) |
2449 | - target_iqn = _get_iqn(iscsi_name, volume_id) |
2450 | + data = connection_info['data'] |
2451 | + volume_id = data['volume_id'] |
2452 | + target_portal = data['target_portal'] |
2453 | + target_host = _get_target_host(target_portal) |
2454 | + target_port = _get_target_port(target_portal) |
2455 | + target_iqn = data['target_iqn'] |
2456 | LOG.debug('(vol_id,host,port,iqn): (%s,%s,%s,%s)', |
2457 | volume_id, target_host, target_port, target_iqn) |
2458 | if (device_number < 0) or \ |
2459 | @@ -173,7 +174,7 @@ |
2460 | (target_host is None) or \ |
2461 | (target_iqn is None): |
2462 | raise StorageError(_('Unable to obtain target information' |
2463 | - ' %(device_path)s, %(mountpoint)s') % locals()) |
2464 | + ' %(data)s, %(mountpoint)s') % locals()) |
2465 | volume_info = {} |
2466 | volume_info['deviceNumber'] = device_number |
2467 | volume_info['volumeId'] = volume_id |
2468 | |
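With this change `parse_volume_info` reads `volume_id`, `target_portal`, and `target_iqn` straight from `connection_info['data']`; only the host and port still have to be split out of the portal string by `_get_target_host` and `_get_target_port`, which are not shown in this hunk. A sketch of that split, assuming the portal has the usual `host:port` shape:

```python
def split_portal(target_portal, default_port='3260'):
    # '192.168.0.10:3260' -> ('192.168.0.10', '3260').  Falling back to
    # the standard iSCSI port when none is present is an assumption,
    # mirroring what _get_target_host/_get_target_port likely do.
    if ':' in target_portal:
        host, port = target_portal.rsplit(':', 1)
    else:
        host, port = target_portal, default_port
    return host, port


host, port = split_portal('192.168.0.10:3260')
```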
2469 | === modified file 'nova/virt/xenapi/volumeops.py' |
2470 | --- nova/virt/xenapi/volumeops.py 2011-04-21 19:50:04 +0000 |
2471 | +++ nova/virt/xenapi/volumeops.py 2011-09-20 16:57:39 +0000 |
2472 | @@ -40,18 +40,21 @@ |
2473 | VolumeHelper.XenAPI = self.XenAPI |
2474 | VMHelper.XenAPI = self.XenAPI |
2475 | |
2476 | - def attach_volume(self, instance_name, device_path, mountpoint): |
2477 | + def attach_volume(self, connection_info, instance_name, mountpoint): |
2478 | """Attach volume storage to VM instance""" |
2479 | # Before we start, check that the VM exists |
2480 | vm_ref = VMHelper.lookup(self._session, instance_name) |
2481 | if vm_ref is None: |
2482 | raise exception.InstanceNotFound(instance_id=instance_name) |
2483 | # NOTE: No Resource Pool concept so far |
2484 | - LOG.debug(_("Attach_volume: %(instance_name)s, %(device_path)s," |
2485 | + LOG.debug(_("Attach_volume: %(connection_info)s, %(instance_name)s," |
2486 | " %(mountpoint)s") % locals()) |
2487 | + driver_type = connection_info['driver_volume_type'] |
2488 | + if driver_type != 'iscsi': |
2489 | + raise exception.VolumeDriverNotFound(driver_type=driver_type) |
2490 | # Create the iSCSI SR, and the PDB through which hosts access SRs. |
2491 | # But first, retrieve target info, like Host, IQN, LUN and SCSIID |
2492 | - vol_rec = VolumeHelper.parse_volume_info(device_path, mountpoint) |
2493 | + vol_rec = VolumeHelper.parse_volume_info(connection_info, mountpoint) |
2494 | label = 'SR-%s' % vol_rec['volumeId'] |
2495 | description = 'Disk-for:%s' % instance_name |
2496 | # Create SR |
2497 | @@ -92,7 +95,7 @@ |
2498 | LOG.info(_('Mountpoint %(mountpoint)s attached to' |
2499 | ' instance %(instance_name)s') % locals()) |
2500 | |
2501 | - def detach_volume(self, instance_name, mountpoint): |
2502 | + def detach_volume(self, connection_info, instance_name, mountpoint): |
2503 | """Detach volume storage to VM instance""" |
2504 | # Before we start, check that the VM exists |
2505 | vm_ref = VMHelper.lookup(self._session, instance_name) |
2506 | |
2507 | === modified file 'nova/virt/xenapi_conn.py' |
2508 | --- nova/virt/xenapi_conn.py 2011-09-15 18:44:49 +0000 |
2509 | +++ nova/virt/xenapi_conn.py 2011-09-20 16:57:39 +0000 |
2510 | @@ -217,7 +217,8 @@ |
2511 | """ |
2512 | self._vmops.inject_file(instance, b64_path, b64_contents) |
2513 | |
2514 | - def destroy(self, instance, network_info, cleanup=True): |
2515 | + def destroy(self, instance, network_info, block_device_info=None, |
2516 | + cleanup=True): |
2517 | """Destroy VM instance""" |
2518 | self._vmops.destroy(instance, network_info) |
2519 | |
2520 | @@ -289,15 +290,17 @@ |
2521 | xs_url = urlparse.urlparse(FLAGS.xenapi_connection_url) |
2522 | return xs_url.netloc |
2523 | |
2524 | - def attach_volume(self, instance_name, device_path, mountpoint): |
2525 | + def attach_volume(self, connection_info, instance_name, mountpoint): |
2526 | """Attach volume storage to VM instance""" |
2527 | - return self._volumeops.attach_volume(instance_name, |
2528 | - device_path, |
2529 | - mountpoint) |
2530 | + return self._volumeops.attach_volume(connection_info, |
2531 | + instance_name, |
2532 | + mountpoint) |
2533 | |
2534 | - def detach_volume(self, instance_name, mountpoint): |
2535 | + def detach_volume(self, connection_info, instance_name, mountpoint): |
2536 | """Detach volume storage to VM instance""" |
2537 | - return self._volumeops.detach_volume(instance_name, mountpoint) |
2538 | + return self._volumeops.detach_volume(connection_info, |
2539 | + instance_name, |
2540 | + mountpoint) |
2541 | |
2542 | def get_console_pool_info(self, console_type): |
2543 | xs_url = urlparse.urlparse(FLAGS.xenapi_connection_url) |
2544 | |
2545 | === modified file 'nova/volume/api.py' |
2546 | --- nova/volume/api.py 2011-08-26 01:38:35 +0000 |
2547 | +++ nova/volume/api.py 2011-09-20 16:57:39 +0000 |
2548 | @@ -23,7 +23,6 @@ |
2549 | |
2550 | from eventlet import greenthread |
2551 | |
2552 | -from nova import db |
2553 | from nova import exception |
2554 | from nova import flags |
2555 | from nova import log as logging |
2556 | @@ -180,12 +179,49 @@ |
2557 | if volume['status'] == "available": |
2558 | raise exception.ApiError(_("Volume is already detached")) |
2559 | |
2560 | - def remove_from_compute(self, context, volume_id, host): |
2561 | + def remove_from_compute(self, context, instance_id, volume_id, host): |
2562 | """Remove volume from specified compute host.""" |
2563 | rpc.call(context, |
2564 | self.db.queue_get_for(context, FLAGS.compute_topic, host), |
2565 | - {"method": "remove_volume", |
2566 | - "args": {'volume_id': volume_id}}) |
2567 | + {"method": "remove_volume_connection", |
2568 | + "args": {'instance_id': instance_id, |
2569 | + 'volume_id': volume_id}}) |
2570 | + |
2571 | + def attach(self, context, volume_id, instance_id, mountpoint): |
2572 | + volume = self.get(context, volume_id) |
2573 | + host = volume['host'] |
2574 | + queue = self.db.queue_get_for(context, FLAGS.volume_topic, host) |
2575 | + return rpc.call(context, queue, |
2576 | + {"method": "attach_volume", |
2577 | + "args": {"volume_id": volume_id, |
2578 | + "instance_id": instance_id, |
2579 | + "mountpoint": mountpoint}}) |
2580 | + |
2581 | + def detach(self, context, volume_id): |
2582 | + volume = self.get(context, volume_id) |
2583 | + host = volume['host'] |
2584 | + queue = self.db.queue_get_for(context, FLAGS.volume_topic, host) |
2585 | + return rpc.call(context, queue, |
2586 | + {"method": "detach_volume", |
2587 | + "args": {"volume_id": volume_id}}) |
2588 | + |
2589 | + def initialize_connection(self, context, volume_id, address): |
2590 | + volume = self.get(context, volume_id) |
2591 | + host = volume['host'] |
2592 | + queue = self.db.queue_get_for(context, FLAGS.volume_topic, host) |
2593 | + return rpc.call(context, queue, |
2594 | + {"method": "initialize_connection", |
2595 | + "args": {"volume_id": volume_id, |
2596 | + "address": address}}) |
2597 | + |
2598 | + def terminate_connection(self, context, volume_id, address): |
2599 | + volume = self.get(context, volume_id) |
2600 | + host = volume['host'] |
2601 | + queue = self.db.queue_get_for(context, FLAGS.volume_topic, host) |
2602 | + return rpc.call(context, queue, |
2603 | + {"method": "terminate_connection", |
2604 | + "args": {"volume_id": volume_id, |
2605 | + "address": address}}) |
2606 | |
2607 | def _create_snapshot(self, context, volume_id, name, description, |
2608 | force=False): |
2609 | |
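All four new methods follow the same pattern: resolve the volume's host, derive its queue with `queue_get_for`, and make a synchronous `rpc.call`. The payload `initialize_connection` eventually hands to the compute node is a plain two-key dict; for the iSCSI backend it carries the properties built by `_get_iscsi_properties` (field names match this branch, the values below are invented). The `driver_volume_type` key is also what lets the libvirt side choose a volume driver, e.g.:

```python
# Illustrative connection_info as returned for an iSCSI volume.
connection_info = {
    'driver_volume_type': 'iscsi',
    'data': {
        'volume_id': 1,
        'target_iqn': 'iqn.2010-10.org.openstack:volume-00000001',
        'target_portal': '192.168.0.10:3260',
        'auth_method': 'CHAP',
        'auth_username': 'user',
        'auth_password': 'secret',
    },
}


def pick_libvirt_driver(connection_info):
    # Hypothetical registry: iscsi payloads yield a device path after
    # login, while rbd/sheepdog payloads carry a 'name' consumed by
    # LibvirtNetVolumeDriver's network-disk XML.
    mapping = {'iscsi': 'LibvirtISCSIVolumeDriver',
               'rbd': 'LibvirtNetVolumeDriver',
               'sheepdog': 'LibvirtNetVolumeDriver'}
    return mapping.get(connection_info['driver_volume_type'],
                       'LibvirtVolumeDriver')
```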
2610 | === modified file 'nova/volume/driver.py' |
2611 | --- nova/volume/driver.py 2011-09-13 21:32:24 +0000 |
2612 | +++ nova/volume/driver.py 2011-09-20 16:57:39 +0000 |
2613 | @@ -20,8 +20,8 @@ |
2614 | |
2615 | """ |
2616 | |
2617 | +import os |
2618 | import time |
2619 | -import os |
2620 | from xml.etree import ElementTree |
2621 | |
2622 | from nova import exception |
2623 | @@ -35,25 +35,17 @@ |
2624 | FLAGS = flags.FLAGS |
2625 | flags.DEFINE_string('volume_group', 'nova-volumes', |
2626 | 'Name for the VG that will contain exported volumes') |
2627 | -flags.DEFINE_string('aoe_eth_dev', 'eth0', |
2628 | - 'Which device to export the volumes on') |
2629 | flags.DEFINE_string('num_shell_tries', 3, |
2630 | 'number of times to attempt to run flakey shell commands') |
2631 | flags.DEFINE_string('num_iscsi_scan_tries', 3, |
2632 | 'number of times to rescan iSCSI target to find volume') |
2633 | -flags.DEFINE_integer('num_shelves', |
2634 | - 100, |
2635 | - 'Number of vblade shelves') |
2636 | -flags.DEFINE_integer('blades_per_shelf', |
2637 | - 16, |
2638 | - 'Number of vblade blades per shelf') |
2639 | flags.DEFINE_integer('iscsi_num_targets', |
2640 | 100, |
2641 | 'Number of iscsi target ids per host') |
2642 | flags.DEFINE_string('iscsi_target_prefix', 'iqn.2010-10.org.openstack:', |
2643 | 'prefix for iscsi volumes') |
2644 | -flags.DEFINE_string('iscsi_ip_prefix', '$my_ip', |
2645 | - 'discover volumes on the ip that starts with this prefix') |
2646 | +flags.DEFINE_string('iscsi_ip_address', '$my_ip', |
2647 | + 'use this ip for iscsi') |
2648 | flags.DEFINE_string('rbd_pool', 'rbd', |
2649 | 'the rbd pool in which volumes are stored') |
2650 | |
2651 | @@ -202,146 +194,24 @@ |
2652 | """Removes an export for a logical volume.""" |
2653 | raise NotImplementedError() |
2654 | |
2655 | - def discover_volume(self, context, volume): |
2656 | - """Discover volume on a remote host.""" |
2657 | - raise NotImplementedError() |
2658 | - |
2659 | - def undiscover_volume(self, volume): |
2660 | - """Undiscover volume on a remote host.""" |
2661 | - raise NotImplementedError() |
2662 | - |
2663 | def check_for_export(self, context, volume_id): |
2664 | """Make sure volume is exported.""" |
2665 | raise NotImplementedError() |
2666 | |
2667 | + def initialize_connection(self, volume, address): |
2668 | + """Allow connection to ip and return connection info.""" |
2669 | + raise NotImplementedError() |
2670 | + |
2671 | + def terminate_connection(self, volume, address): |
2672 | + """Disallow connection from ip""" |
2673 | + raise NotImplementedError() |
2674 | + |
2675 | def get_volume_stats(self, refresh=False): |
2676 | """Return the current state of the volume service. If 'refresh' is |
2677 | True, run the update first.""" |
2678 | return None |
2679 | |
2680 | |
2681 | -class AOEDriver(VolumeDriver): |
2682 | - """WARNING! Deprecated. This driver will be removed in Essex. Its use |
2683 | - is not recommended. |
2684 | - |
2685 | - Implements AOE specific volume commands.""" |
2686 | - |
2687 | - def __init__(self, *args, **kwargs): |
2688 | - LOG.warn(_("AOEDriver is deprecated and will be removed in Essex")) |
2689 | - super(AOEDriver, self).__init__(*args, **kwargs) |
2690 | - |
2691 | - def ensure_export(self, context, volume): |
2692 | - # NOTE(vish): we depend on vblade-persist for recreating exports |
2693 | - pass |
2694 | - |
2695 | - def _ensure_blades(self, context): |
2696 | - """Ensure that blades have been created in datastore.""" |
2697 | - total_blades = FLAGS.num_shelves * FLAGS.blades_per_shelf |
2698 | - if self.db.export_device_count(context) >= total_blades: |
2699 | - return |
2700 | - for shelf_id in xrange(FLAGS.num_shelves): |
2701 | - for blade_id in xrange(FLAGS.blades_per_shelf): |
2702 | - dev = {'shelf_id': shelf_id, 'blade_id': blade_id} |
2703 | - self.db.export_device_create_safe(context, dev) |
2704 | - |
2705 | - def create_export(self, context, volume): |
2706 | - """Creates an export for a logical volume.""" |
2707 | - self._ensure_blades(context) |
2708 | - (shelf_id, |
2709 | - blade_id) = self.db.volume_allocate_shelf_and_blade(context, |
2710 | - volume['id']) |
2711 | - self._try_execute( |
2712 | - 'vblade-persist', 'setup', |
2713 | - shelf_id, |
2714 | - blade_id, |
2715 | - FLAGS.aoe_eth_dev, |
2716 | - "/dev/%s/%s" % |
2717 | - (FLAGS.volume_group, |
2718 | - volume['name']), |
2719 | - run_as_root=True) |
2720 | - # NOTE(vish): The standard _try_execute does not work here |
2721 | - # because these methods throw errors if other |
2722 | - # volumes on this host are in the process of |
2723 | - # being created. The good news is the command |
2724 | - # still works for the other volumes, so we |
2725 | - # just wait a bit for the current volume to |
2726 | - # be ready and ignore any errors. |
2727 | - time.sleep(2) |
2728 | - self._execute('vblade-persist', 'auto', 'all', |
2729 | - check_exit_code=False, run_as_root=True) |
2730 | - self._execute('vblade-persist', 'start', 'all', |
2731 | - check_exit_code=False, run_as_root=True) |
2732 | - |
2733 | - def remove_export(self, context, volume): |
2734 | - """Removes an export for a logical volume.""" |
2735 | - (shelf_id, |
2736 | - blade_id) = self.db.volume_get_shelf_and_blade(context, |
2737 | - volume['id']) |
2738 | - self._try_execute('vblade-persist', 'stop', |
2739 | - shelf_id, blade_id, run_as_root=True) |
2740 | - self._try_execute('vblade-persist', 'destroy', |
2741 | - shelf_id, blade_id, run_as_root=True) |
2742 | - |
2743 | - def discover_volume(self, context, _volume): |
2744 | - """Discover volume on a remote host.""" |
2745 | - (shelf_id, |
2746 | - blade_id) = self.db.volume_get_shelf_and_blade(context, |
2747 | - _volume['id']) |
2748 | - self._execute('aoe-discover', run_as_root=True) |
2749 | - out, err = self._execute('aoe-stat', check_exit_code=False, |
2750 | - run_as_root=True) |
2751 | - device_path = 'e%(shelf_id)d.%(blade_id)d' % locals() |
2752 | - if out.find(device_path) >= 0: |
2753 | - return "/dev/etherd/%s" % device_path |
2754 | - else: |
2755 | - return |
2756 | - |
2757 | - def undiscover_volume(self, _volume): |
2758 | - """Undiscover volume on a remote host.""" |
2759 | - pass |
2760 | - |
2761 | - def check_for_export(self, context, volume_id): |
2762 | - """Make sure volume is exported.""" |
2763 | - (shelf_id, |
2764 | - blade_id) = self.db.volume_get_shelf_and_blade(context, |
2765 | - volume_id) |
2766 | - cmd = ('vblade-persist', 'ls', '--no-header') |
2767 | - out, _err = self._execute(*cmd, run_as_root=True) |
2768 | - exported = False |
2769 | - for line in out.split('\n'): |
2770 | - param = line.split(' ') |
2771 | - if len(param) == 6 and param[0] == str(shelf_id) \ |
2772 | - and param[1] == str(blade_id) and param[-1] == "run": |
2773 | - exported = True |
2774 | - break |
2775 | - if not exported: |
2776 | - # Instance will be terminated in this case. |
2777 | - desc = _("Cannot confirm exported volume id:%(volume_id)s. " |
2778 | - "vblade process for e%(shelf_id)s.%(blade_id)s " |
2779 | - "isn't running.") % locals() |
2780 | - raise exception.ProcessExecutionError(out, _err, cmd=cmd, |
2781 | - description=desc) |
2782 | - |
2783 | - |
2784 | -class FakeAOEDriver(AOEDriver): |
2785 | - """Logs calls instead of executing.""" |
2786 | - |
2787 | - def __init__(self, *args, **kwargs): |
2788 | - super(FakeAOEDriver, self).__init__(execute=self.fake_execute, |
2789 | - sync_exec=self.fake_execute, |
2790 | - *args, **kwargs) |
2791 | - |
2792 | - def check_for_setup_error(self): |
2793 | - """No setup necessary in fake mode.""" |
2794 | - pass |
2795 | - |
2796 | - @staticmethod |
2797 | - def fake_execute(cmd, *_args, **_kwargs): |
2798 | - """Execute that simply logs the command.""" |
2799 | - LOG.debug(_("FAKE AOE: %s"), cmd) |
2800 | - return (None, None) |
2801 | - |
2802 | - |
2803 | class ISCSIDriver(VolumeDriver): |
2804 | """Executes commands relating to ISCSI volumes. |
2805 | |
2806 | @@ -445,7 +315,7 @@ |
2807 | '-t', 'sendtargets', '-p', volume['host'], |
2808 | run_as_root=True) |
2809 | for target in out.splitlines(): |
2810 | - if FLAGS.iscsi_ip_prefix in target and volume_name in target: |
2811 | + if FLAGS.iscsi_ip_address in target and volume_name in target: |
2812 | return target |
2813 | return None |
2814 | |
2815 | @@ -462,6 +332,8 @@ |
2816 | |
2817 | :target_portal: the portal of the iSCSI target |
2818 | |
2819 | + :volume_id: the id of the volume (currently used by xen) |
2820 | + |
2821 | :auth_method:, :auth_username:, :auth_password: |
2822 | |
2823 | the authentication details. Right now, either auth_method is not |
2824 | @@ -491,6 +363,7 @@ |
2825 | |
2826 | iscsi_portal = iscsi_target.split(",")[0] |
2827 | |
2828 | + properties['volume_id'] = volume['id'] |
2829 | properties['target_iqn'] = iscsi_name |
2830 | properties['target_portal'] = iscsi_portal |
2831 | |
2832 | @@ -519,64 +392,17 @@ |
2833 | '-v', property_value) |
2834 | return self._run_iscsiadm(iscsi_properties, iscsi_command) |
2835 | |
2836 | - def discover_volume(self, context, volume): |
2837 | - """Discover volume on a remote host.""" |
2838 | - iscsi_properties = self._get_iscsi_properties(volume) |
2839 | - |
2840 | - if not iscsi_properties['target_discovered']: |
2841 | - self._run_iscsiadm(iscsi_properties, ('--op', 'new')) |
2842 | - |
2843 | - if iscsi_properties.get('auth_method'): |
2844 | - self._iscsiadm_update(iscsi_properties, |
2845 | - "node.session.auth.authmethod", |
2846 | - iscsi_properties['auth_method']) |
2847 | - self._iscsiadm_update(iscsi_properties, |
2848 | - "node.session.auth.username", |
2849 | - iscsi_properties['auth_username']) |
2850 | - self._iscsiadm_update(iscsi_properties, |
2851 | - "node.session.auth.password", |
2852 | - iscsi_properties['auth_password']) |
2853 | - |
2854 | - self._run_iscsiadm(iscsi_properties, ("--login", )) |
2855 | - |
2856 | - self._iscsiadm_update(iscsi_properties, "node.startup", "automatic") |
2857 | - |
2858 | - mount_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" % |
2859 | - (iscsi_properties['target_portal'], |
2860 | - iscsi_properties['target_iqn'])) |
2861 | - |
2862 | - # The /dev/disk/by-path/... node is not always present immediately |
2863 | - # TODO(justinsb): This retry-with-delay is a pattern, move to utils? |
2864 | - tries = 0 |
2865 | - while not os.path.exists(mount_device): |
2866 | - if tries >= FLAGS.num_iscsi_scan_tries: |
2867 | - raise exception.Error(_("iSCSI device not found at %s") % |
2868 | - (mount_device)) |
2869 | - |
2870 | - LOG.warn(_("ISCSI volume not yet found at: %(mount_device)s. " |
2871 | - "Will rescan & retry. Try number: %(tries)s") % |
2872 | - locals()) |
2873 | - |
2874 | - # The rescan isn't documented as being necessary(?), but it helps |
2875 | - self._run_iscsiadm(iscsi_properties, ("--rescan", )) |
2876 | - |
2877 | - tries = tries + 1 |
2878 | - if not os.path.exists(mount_device): |
2879 | - time.sleep(tries ** 2) |
2880 | - |
2881 | - if tries != 0: |
2882 | - LOG.debug(_("Found iSCSI node %(mount_device)s " |
2883 | - "(after %(tries)s rescans)") % |
2884 | - locals()) |
2885 | - |
2886 | - return mount_device |
2887 | - |
2888 | - def undiscover_volume(self, volume): |
2889 | - """Undiscover volume on a remote host.""" |
2890 | - iscsi_properties = self._get_iscsi_properties(volume) |
2891 | - self._iscsiadm_update(iscsi_properties, "node.startup", "manual") |
2892 | - self._run_iscsiadm(iscsi_properties, ("--logout", )) |
2893 | - self._run_iscsiadm(iscsi_properties, ('--op', 'delete')) |
2894 | + def initialize_connection(self, volume, address): |
2895 | + iscsi_properties = self._get_iscsi_properties(volume) |
2896 | + return { |
2897 | + 'driver_volume_type': 'iscsi', |
2898 | + 'data': iscsi_properties, |
2899 | + } |
2900 | + |
2903 | + def terminate_connection(self, volume, address): |
2904 | + pass |
2905 | |
2906 | def check_for_export(self, context, volume_id): |
2907 | """Make sure volume is exported.""" |
2908 | @@ -605,12 +431,13 @@ |
2909 | """No setup necessary in fake mode.""" |
2910 | pass |
2911 | |
2912 | - def discover_volume(self, context, volume): |
2913 | - """Discover volume on a remote host.""" |
2914 | - return "/dev/disk/by-path/volume-id-%d" % volume['id'] |
2915 | + def initialize_connection(self, volume, address): |
2916 | + return { |
2917 | + 'driver_volume_type': 'iscsi', |
2918 | + 'data': {}, |
2919 | + } |
2920 | |
2921 | - def undiscover_volume(self, volume): |
2922 | - """Undiscover volume on a remote host.""" |
2923 | + def terminate_connection(self, volume, address): |
2924 | pass |
2925 | |
2926 | @staticmethod |
2927 | @@ -675,12 +502,16 @@ |
2928 | """Removes an export for a logical volume""" |
2929 | pass |
2930 | |
2931 | - def discover_volume(self, context, volume): |
2932 | - """Discover volume on a remote host""" |
2933 | - return "rbd:%s/%s" % (FLAGS.rbd_pool, volume['name']) |
2934 | - |
2935 | - def undiscover_volume(self, volume): |
2936 | - """Undiscover volume on a remote host""" |
2937 | + def initialize_connection(self, volume, address): |
2938 | + return { |
2939 | + 'driver_volume_type': 'rbd', |
2940 | + 'data': { |
2941 | + 'name': '%s/%s' % (FLAGS.rbd_pool, volume['name']) |
2942 | + } |
2943 | + } |
2944 | + |
2946 | + def terminate_connection(self, volume, address): |
2947 | pass |
2948 | |
2949 | |
2950 | @@ -738,12 +569,15 @@ |
2951 | """Removes an export for a logical volume""" |
2952 | pass |
2953 | |
2954 | - def discover_volume(self, context, volume): |
2955 | - """Discover volume on a remote host""" |
2956 | - return "sheepdog:%s" % volume['name'] |
2957 | + def initialize_connection(self, volume, address): |
2958 | + return { |
2959 | + 'driver_volume_type': 'sheepdog', |
2960 | + 'data': { |
2961 | + 'name': volume['name'] |
2962 | + } |
2963 | + } |
2964 | |
2965 | - def undiscover_volume(self, volume): |
2966 | - """Undiscover volume on a remote host""" |
2967 | + def terminate_connection(self, volume, address): |
2968 | pass |
2969 | |
2970 | |
2971 | @@ -772,11 +606,11 @@ |
2972 | def remove_export(self, context, volume): |
2973 | self.log_action('remove_export', volume) |
2974 | |
2975 | - def discover_volume(self, context, volume): |
2976 | - self.log_action('discover_volume', volume) |
2977 | + def initialize_connection(self, volume, address): |
2978 | + self.log_action('initialize_connection', volume) |
2979 | |
2980 | - def undiscover_volume(self, volume): |
2981 | - self.log_action('undiscover_volume', volume) |
2982 | + def terminate_connection(self, volume, address): |
2983 | + self.log_action('terminate_connection', volume) |
2984 | |
2985 | def check_for_export(self, context, volume_id): |
2986 | self.log_action('check_for_export', volume_id) |
2987 | @@ -906,6 +740,58 @@ |
2988 | |
2989 | LOG.debug(_("VSA BE delete_volume for %s succeeded"), volume['name']) |
2990 | |
2991 | + def _discover_volume(self, context, volume): |
2992 | + """Discover volume on a remote host.""" |
2993 | + iscsi_properties = self._get_iscsi_properties(volume) |
2994 | + |
2995 | + if not iscsi_properties['target_discovered']: |
2996 | + self._run_iscsiadm(iscsi_properties, ('--op', 'new')) |
2997 | + |
2998 | + if iscsi_properties.get('auth_method'): |
2999 | + self._iscsiadm_update(iscsi_properties, |
3000 | + "node.session.auth.authmethod", |
3001 | + iscsi_properties['auth_method']) |
3002 | + self._iscsiadm_update(iscsi_properties, |
3003 | + "node.session.auth.username", |
3004 | + iscsi_properties['auth_username']) |
3005 | + self._iscsiadm_update(iscsi_properties, |
3006 | + "node.session.auth.password", |
3007 | + iscsi_properties['auth_password']) |
3008 | + |
3009 | + self._run_iscsiadm(iscsi_properties, ("--login", )) |
3010 | + |
3011 | + self._iscsiadm_update(iscsi_properties, "node.startup", "automatic") |
3012 | + |
3013 | + mount_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" % |
3014 | + (iscsi_properties['target_portal'], |
3015 | + iscsi_properties['target_iqn'])) |
3016 | + |
3017 | + # The /dev/disk/by-path/... node is not always present immediately |
3018 | + # TODO(justinsb): This retry-with-delay is a pattern, move to utils? |
3019 | + tries = 0 |
3020 | + while not os.path.exists(mount_device): |
3021 | + if tries >= FLAGS.num_iscsi_scan_tries: |
3022 | + raise exception.Error(_("iSCSI device not found at %s") % |
3023 | + (mount_device)) |
3024 | + |
3025 | + LOG.warn(_("iSCSI volume not yet found at: %(mount_device)s. " |
3026 | + "Will rescan & retry. Try number: %(tries)s") % |
3027 | + locals()) |
3028 | + |
3029 | + # The rescan isn't documented as being necessary(?), but it helps |
3030 | + self._run_iscsiadm(iscsi_properties, ("--rescan", )) |
3031 | + |
3032 | + tries = tries + 1 |
3033 | + if not os.path.exists(mount_device): |
3034 | + time.sleep(tries ** 2) |
3035 | + |
3036 | + if tries != 0: |
3037 | + LOG.debug(_("Found iSCSI node %(mount_device)s " |
3038 | + "(after %(tries)s rescans)") % |
3039 | + locals()) |
3040 | + |
3041 | + return mount_device |
3042 | + |
3043 | def local_path(self, volume): |
3044 | if self._not_vsa_volume_or_drive(volume): |
3045 | return super(ZadaraBEDriver, self).local_path(volume) |
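Editor's note: the TODO(justinsb) comment above flags the rescan-and-sleep loop as a reusable pattern. A hedged sketch of what a shared helper in utils might look like — the name `retry_until` and its signature are hypothetical, not existing nova code:

```python
import time

def retry_until(predicate, action, max_tries,
                delay_fn=lambda n: n ** 2, sleep=time.sleep):
    """Run action() until predicate() holds, sleeping delay_fn(tries)
    between attempts, mirroring the iSCSI rescan loop. Returns the
    number of tries used; raises if max_tries is exhausted."""
    tries = 0
    while not predicate():
        if tries >= max_tries:
            raise RuntimeError("condition not met after %d tries" % tries)
        action()
        tries += 1
        # Re-check before sleeping so a successful rescan returns fast
        if not predicate():
            sleep(delay_fn(tries))
    return tries
```

The iSCSI loop would pass something like `predicate=lambda: os.path.exists(mount_device)` and `action=lambda: self._run_iscsiadm(iscsi_properties, ("--rescan",))`.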
3046 | @@ -913,7 +799,10 @@ |
3047 | if self._is_vsa_volume(volume): |
3048 | LOG.debug(_("\tFE VSA Volume %s local path call - call discover"), |
3049 | volume['name']) |
3050 | - return super(ZadaraBEDriver, self).discover_volume(None, volume) |
3051 | + # NOTE(vish): Copied discover from iscsi_driver since it is used |
3052 | + # but this should probably be refactored into a common |
3053 | + # area because it is used in libvirt driver. |
3054 | + return self._discover_volume(None, volume) |
3055 | |
3056 | raise exception.Error(_("local_path not supported")) |
3057 | |
3058 | |
3059 | === modified file 'nova/volume/manager.py' |
3060 | --- nova/volume/manager.py 2011-08-26 20:55:43 +0000 |
3061 | +++ nova/volume/manager.py 2011-09-20 16:57:39 +0000 |
3062 | @@ -28,20 +28,17 @@ |
3063 | :volume_topic: What :mod:`rpc` topic to listen to (default: `volume`). |
3064 | :volume_manager: The module name of a class derived from |
3065 | :class:`manager.Manager` (default: |
3066 | - :class:`nova.volume.manager.AOEManager`). |
3067 | + :class:`nova.volume.manager.Manager`). |
3068 | :storage_availability_zone: Defaults to `nova`. |
3069 | -:volume_driver: Used by :class:`AOEManager`. Defaults to |
3070 | - :class:`nova.volume.driver.AOEDriver`. |
3071 | -:num_shelves: Number of shelves for AoE (default: 100). |
3072 | -:num_blades: Number of vblades per shelf to allocate AoE storage from |
3073 | - (default: 16). |
3074 | +:volume_driver: Used by :class:`Manager`. Defaults to |
3075 | + :class:`nova.volume.driver.ISCSIDriver`. |
3076 | :volume_group: Name of the group that will contain exported volumes (default: |
3077 | `nova-volumes`) |
3078 | -:aoe_eth_dev: Device name the volumes will be exported on (default: `eth0`). |
3079 | -:num_shell_tries: Number of times to attempt to run AoE commands (default: 3) |
3080 | +:num_shell_tries: Number of times to attempt to run commands (default: 3) |
3081 | |
3082 | """ |
3083 | |
3084 | +import sys |
3085 | |
3086 | from nova import context |
3087 | from nova import exception |
3088 | @@ -126,10 +123,11 @@ |
3089 | if model_update: |
3090 | self.db.volume_update(context, volume_ref['id'], model_update) |
3091 | except Exception: |
3092 | + exc_info = sys.exc_info() |
3093 | self.db.volume_update(context, |
3094 | volume_ref['id'], {'status': 'error'}) |
3095 | self._notify_vsa(context, volume_ref, 'error') |
3096 | - raise |
3097 | + raise exc_info[0], exc_info[1], exc_info[2] |
3098 | |
3099 | now = utils.utcnow() |
3100 | self.db.volume_update(context, |
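Editor's note: the `sys.exc_info()` capture above exists because the intervening `volume_update` call may run its own exception handling, after which a bare `raise` can lose or replace the original traceback; on Python 2 (which this branch targets) the tuple must be re-raised with the three-argument form `raise exc_info[0], exc_info[1], exc_info[2]`. A Python 3 sketch of the same intent, with hypothetical helper names:

```python
import sys

def guarded_update(update_status, work):
    """Run work(); on failure mark status 'error' and re-raise the
    original exception with its traceback intact, even if the status
    update itself catches and handles exceptions internally."""
    try:
        return work()
    except Exception:
        _exc_type, exc_value, exc_tb = sys.exc_info()
        update_status('error')  # may clobber the current exception state
        raise exc_value.with_traceback(exc_tb)
```

A plain `raise exc_info` (the tuple itself) does not restore the saved value and traceback, which is why the explicit form matters here.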
3101 | @@ -181,10 +179,11 @@ |
3102 | {'status': 'available'}) |
3103 | return True |
3104 | except Exception: |
3105 | + exc_info = sys.exc_info() |
3106 | self.db.volume_update(context, |
3107 | volume_ref['id'], |
3108 | {'status': 'error_deleting'}) |
3109 | - raise |
3110 | + raise exc_info[0], exc_info[1], exc_info[2] |
3111 | |
3112 | self.db.volume_destroy(context, volume_id) |
3113 | LOG.debug(_("volume %s: deleted successfully"), volume_ref['name']) |
3114 | @@ -233,26 +232,26 @@ |
3115 | LOG.debug(_("snapshot %s: deleted successfully"), snapshot_ref['name']) |
3116 | return True |
3117 | |
3118 | - def setup_compute_volume(self, context, volume_id): |
3119 | - """Setup remote volume on compute host. |
3120 | - |
3121 | - Returns path to device.""" |
3122 | - context = context.elevated() |
3123 | - volume_ref = self.db.volume_get(context, volume_id) |
3124 | - if volume_ref['host'] == self.host and FLAGS.use_local_volumes: |
3125 | - path = self.driver.local_path(volume_ref) |
3126 | - else: |
3127 | - path = self.driver.discover_volume(context, volume_ref) |
3128 | - return path |
3129 | - |
3130 | - def remove_compute_volume(self, context, volume_id): |
3131 | - """Remove remote volume on compute host.""" |
3132 | - context = context.elevated() |
3133 | - volume_ref = self.db.volume_get(context, volume_id) |
3134 | - if volume_ref['host'] == self.host and FLAGS.use_local_volumes: |
3135 | - return True |
3136 | - else: |
3137 | - self.driver.undiscover_volume(volume_ref) |
3138 | + def attach_volume(self, context, volume_id, instance_id, mountpoint): |
3139 | + """Updates db to show volume is attached""" |
3140 | + # TODO(vish): refactor this into a more general "reserve" |
3141 | + self.db.volume_attached(context, |
3142 | + volume_id, |
3143 | + instance_id, |
3144 | + mountpoint) |
3145 | + |
3146 | + def detach_volume(self, context, volume_id): |
3147 | + """Updates db to show volume is detached""" |
3148 | + # TODO(vish): refactor this into a more general "unreserve" |
3149 | + self.db.volume_detached(context, volume_id) |
3150 | + |
3151 | + def initialize_connection(self, context, volume_id, address): |
3152 | + volume_ref = self.db.volume_get(context, volume_id) |
3153 | + return self.driver.initialize_connection(volume_ref, address) |
3154 | + |
3155 | + def terminate_connection(self, context, volume_id, address): |
3156 | + volume_ref = self.db.volume_get(context, volume_id) |
3157 | + self.driver.terminate_connection(volume_ref, address) |
3158 | |
3159 | def check_for_export(self, context, instance_id): |
3160 | """Make sure the volume is exported.""" |
3161 | |
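Editor's note: with `setup_compute_volume`/`remove_compute_volume` gone, attachment becomes a handshake: compute asks the volume manager for connection info, performs the transport-specific work locally, then the manager records the attachment in the db. A rough stand-in sketch of that call order — the classes below are illustrative fakes, not the real `VolumeManager`:

```python
# Hypothetical sketch of the attach handshake implied by the new
# manager API; FakeDriver/VolumeManager here are stand-ins.

class FakeDriver:
    def initialize_connection(self, volume, address):
        return {'driver_volume_type': 'iscsi',
                'data': {'volume': volume['id']}}

    def terminate_connection(self, volume, address):
        pass

class VolumeManager:
    def __init__(self, driver, db):
        self.driver, self.db = driver, db

    def initialize_connection(self, context, volume_id, address):
        # Delegate export description to the driver
        return self.driver.initialize_connection(self.db[volume_id], address)

    def attach_volume(self, context, volume_id, instance_id, mountpoint):
        # Only bookkeeping remains on the volume host
        self.db[volume_id].update(instance_id=instance_id,
                                  mountpoint=mountpoint,
                                  status='in-use')

db = {1: {'id': 1}}
mgr = VolumeManager(FakeDriver(), db)
info = mgr.initialize_connection(None, 1, '10.0.0.2')
# ... compute host connects using info, then records the attachment:
mgr.attach_volume(None, 1, instance_id=42, mountpoint='/dev/vdb')
```

Note the ordering: `initialize_connection` before `attach_volume`, and symmetrically `detach_volume` before `terminate_connection` on the way out.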
3162 | === modified file 'nova/volume/san.py' |
3163 | --- nova/volume/san.py 2011-08-26 01:38:35 +0000 |
3164 | +++ nova/volume/san.py 2011-09-20 16:57:39 +0000 |
3165 | @@ -61,9 +61,6 @@ |
3166 | def _build_iscsi_target_name(self, volume): |
3167 | return "%s%s" % (FLAGS.iscsi_target_prefix, volume['name']) |
3168 | |
3169 | - # discover_volume is still OK |
3170 | - # undiscover_volume is still OK |
3171 | - |
3172 | def _connect_to_ssh(self): |
3173 | ssh = paramiko.SSHClient() |
3174 | #TODO(justinsb): We need a better SSH key policy |
3175 | |
3176 | === modified file 'tools/euca-get-ajax-console' (properties changed: +x to -x) |
This should wait for Essex, based on the meeting we had on 2011-08-23