Merge lp:~yamahata/nova/boot-from-volume-1 into lp:~hudson-openstack/nova/trunk

Proposed by Vish Ishaya
Status: Merged
Approved by: Vish Ishaya
Approved revision: 1076
Merged at revision: 1283
Proposed branch: lp:~yamahata/nova/boot-from-volume-1
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 2188 lines (+1545/-98)
19 files modified
nova/api/ec2/__init__.py (+5/-2)
nova/api/ec2/cloud.py (+295/-34)
nova/api/ec2/ec2utils.py (+40/-0)
nova/compute/api.py (+71/-25)
nova/compute/manager.py (+11/-9)
nova/db/api.py (+7/-1)
nova/db/sqlalchemy/api.py (+17/-0)
nova/db/sqlalchemy/migrate_repo/versions/032_add_root_device_name.py (+47/-0)
nova/db/sqlalchemy/models.py (+2/-0)
nova/image/fake.py (+10/-1)
nova/image/s3.py (+41/-12)
nova/test.py (+16/-0)
nova/tests/image/test_s3.py (+122/-0)
nova/tests/test_api.py (+70/-0)
nova/tests/test_bdm.py (+233/-0)
nova/tests/test_cloud.py (+405/-12)
nova/tests/test_compute.py (+111/-0)
nova/tests/test_volume.py (+31/-0)
nova/volume/api.py (+11/-2)
To merge this branch: bzr merge lp:~yamahata/nova/boot-from-volume-1
Reviewer Review Type Date Requested Status
Vish Ishaya (community) Approve
Brian Waldon (community) Abstain
Sandy Walsh (community) Approve
Review via email: mp+65850@code.launchpad.net

This proposal supersedes a proposal from 2011-06-16.

Description of the change

This change adds basic boot-from-volume support to the image service.

Specifically, the following APIs now support --block-device-mapping with a volume/snapshot and a root device name:

- register image

- describe image

- create image (newly supported)

At the moment, swap and ephemeral devices aren't supported yet; they will be supported in the next step.
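
For illustration, here is a minimal, self-contained sketch of how an EC2-style
BlockDeviceMapping.<N> request item with an Ebs sub-structure is flattened into
the internal dict shape, in the spirit of the _parse_block_device_mapping()
helper added by this branch (the helper names and sample values below are
illustrative, not the merged code itself):

def ec2_id_to_id(ec2_id):
    """Convert e.g. 'snap-0000000c' or 'vol-0000000c' to the integer id 12."""
    return int(ec2_id.split('-')[-1], 16)


def parse_block_device_mapping(bdm):
    """Flatten {'device_name': ..., 'ebs': {...}} into a single-level dict."""
    ebs = bdm.pop('ebs', None)
    if ebs:
        ec2_id = ebs.pop('snapshot_id', None)
        if ec2_id:
            internal_id = ec2_id_to_id(ec2_id)
            # The snap-/vol- prefix decides whether the id names a snapshot
            # or an existing volume (SnapshotId is reused for volume ids).
            if ec2_id.startswith('snap-'):
                bdm['snapshot_id'] = internal_id
            elif ec2_id.startswith('vol-'):
                bdm['volume_id'] = internal_id
            # EBS-backed devices default to delete-on-termination.
            ebs.setdefault('delete_on_termination', True)
        bdm.update(ebs)
    return bdm


# Shape produced by the EC2 API layer for one --block-device-mapping argument:
request_item = {'device_name': '/dev/vdb',
                'ebs': {'snapshot_id': 'snap-0000000c', 'volume_size': 10}}
print(parse_block_device_mapping(request_item))
# {'device_name': '/dev/vdb', 'snapshot_id': 12,
#  'volume_size': 10, 'delete_on_termination': True}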

Next steps

- describe instance attribute with euca command

- get metadata for bundle volume

- swap/ephemeral device support

Revision history for this message
Isaku Yamahata (yamahata) wrote : Posted in a previous version of this proposal

Now here are the unit tests (with some fixes).
So it's ready for merge now, I think.

The next step is to support the following.
- describe instance attribute
- get metadata for bundle volume
- swap/ephemeral device support.

On Wed, Jun 22, 2011 at 05:01:54AM -0000, Isaku Yamahata wrote:
> Isaku Yamahata has proposed merging lp:~yamahata/nova/boot-from-volume-1 into lp:nova with lp:~yamahata/nova/boot-from-volume-0 as a prerequisite.
>
> Requested reviews:
> Nova Core (nova-core)
>
> For more details, see:
> https://code.launchpad.net/~yamahata/nova/boot-from-volume-1/+merge/64825
>
> This is early review request before going further.
> If this direction is okay, I'll add unit tests and then move on to the next step.
>
> This change adds the basic boot-from-volume support to the image service.
> Specifically following API will supports --block-device-mapping with volume/snapshot and root device name
> - register image
> - describe image
> - create image(newly support)
>
> At the moment swap and ephemeral aren't supported. Are these wanted?
>
> NOTE
> - bundle volume is broken
>
> TODO
> - unit tests
>
> Next step
> - describe instance attribute with euca command
> - get metadata for bundle volume
> - swap/ephemeral device support(Is this wanted? or unnecessary?)
> --
> https://code.launchpad.net/~yamahata/nova/boot-from-volume-1/+merge/64825
> You are the owner of lp:~yamahata/nova/boot-from-volume-1.

> === modified file 'nova/api/ec2/__init__.py'
> --- nova/api/ec2/__init__.py 2011-06-15 16:46:24 +0000
> +++ nova/api/ec2/__init__.py 2011-06-22 04:55:48 +0000
> @@ -262,6 +262,8 @@
> 'TerminateInstances': ['projectmanager', 'sysadmin'],
> 'RebootInstances': ['projectmanager', 'sysadmin'],
> 'UpdateInstance': ['projectmanager', 'sysadmin'],
> + 'StartInstances': ['projectmanager', 'sysadmin'],
> + 'StopInstances': ['projectmanager', 'sysadmin'],
> 'DeleteVolume': ['projectmanager', 'sysadmin'],
> 'DescribeImages': ['all'],
> 'DeregisterImage': ['projectmanager', 'sysadmin'],
> @@ -269,6 +271,7 @@
> 'DescribeImageAttribute': ['all'],
> 'ModifyImageAttribute': ['projectmanager', 'sysadmin'],
> 'UpdateImage': ['projectmanager', 'sysadmin'],
> + 'CreateImage': ['projectmanager', 'sysadmin'],
> },
> 'AdminController': {
> # All actions have the same permission: ['none'] (the default)
> @@ -325,13 +328,13 @@
> except exception.VolumeNotFound as ex:
> LOG.info(_('VolumeNotFound raised: %s'), unicode(ex),
> context=context)
> - ec2_id = ec2utils.id_to_ec2_id(ex.volume_id, 'vol-%08x')
> + ec2_id = ec2utils.id_to_ec2_vol_id(ex.volume_id)
> message = _('Volume %s not found') % ec2_id
> return self._error(req, context, type(ex).__name__, message)
> except exception.SnapshotNotFound as ex:
> LOG.info(_('SnapshotNotFound raised: %s'), unicode(ex),
> context=context)
> - ...

Revision history for this message
Sandy Walsh (sandy-walsh) wrote : Posted in a previous version of this proposal

Impressive branch. I don't have a setup for testing it in depth, so I can't verify correctness.

I would like to see mocked-out unit tests for each new method/function. Many of the underscore-prefixed internal methods have no tests at all.

Minor things:
+379/380 ... commented out?
+405 ... potential black hole?

review: Needs Fixing
Revision history for this message
Isaku Yamahata (yamahata) wrote : Posted in a previous version of this proposal

Thank you for review.

On Wed, Jun 22, 2011 at 06:04:33PM -0000, Sandy Walsh wrote:

> I would like to see mocked out unit tests for each new method/function. Many of the _ internal methods have no tests at all.

Now I've added more unit tests for those methods/functions.
I think they cover what you meant:
  - nova/api/ec2/cloud.py
  _parse_block_device_mapping(), _format_block_device_mapping(),
  _format_mappings(), _format_instance_bdm()

  - nova/compute/api.py
  _update_image_block_device_mapping(), _update_block_device_mapping()

  - nova/volume/api.py
  create_snapshot(), create_snapshot_force()

> Minor things:
> +379/380 ... commented out?

Removed them

> +405 ... potential black hole?

Implemented a timeout; I adopted one hour. Although I'm not sure how long it
should be, the exact length shouldn't matter much because the timeout is just
a safety net.
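
As a minimal standalone sketch of that approach (the get_instance callable
below is illustrative and stands in for compute_api.get(); the merged code
lives in create_image() in nova/api/ec2/cloud.py):

import time

def wait_until_stopped(get_instance, instance_id, timeout=60 * 60):
    """Poll until the instance reports 'stopped', or give up after
    `timeout` seconds (one hour here, purely as a safety net)."""
    deadline = time.time() + timeout
    while True:
        instance = get_instance(instance_id)
        if instance['state_description'] == 'stopped':
            return instance
        if time.time() > deadline:
            raise RuntimeError("Couldn't stop instance within %d sec"
                               % timeout)
        time.sleep(1)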

thanks,
--
yamahata

Revision history for this message
Sandy Walsh (sandy-walsh) wrote : Posted in a previous version of this proposal

Awesome ... I have no other immediate feedback. I'll leave it to others closer to the domain.

Nice work Yamahata!

review: Approve
Revision history for this message
Vish Ishaya (vishvananda) wrote : Posted in a previous version of this proposal

excited to get this in!

review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Posted in a previous version of this proposal

No proposals found for merge of lp:~yamahata/nova/boot-from-volume-0 into lp:nova.

Revision history for this message
Vish Ishaya (vishvananda) wrote : Posted in a previous version of this proposal

We seem to have lost the ability to merge branches that have an already-merged prerequisite, so I'm re-requesting without the prereq.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~yamahata/nova/boot-from-volume-1 into lp:nova failed. Below is the output from the failed tests.

ERROR

======================================================================
ERROR: <nose.suite.ContextSuite context=nova.tests>
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.6/nose/suite.py", line 183, in run
    self.setUp()
  File "/usr/lib/pymodules/python2.6/nose/suite.py", line 264, in setUp
    self.setupContext(ancestor)
  File "/usr/lib/pymodules/python2.6/nose/suite.py", line 287, in setupContext
    try_run(context, names)
  File "/usr/lib/pymodules/python2.6/nose/util.py", line 487, in try_run
    return func()
  File "/tmp/tmp7ronKQ/nova/tests/__init__.py", line 54, in setup
    migration.db_sync()
  File "/tmp/tmp7ronKQ/nova/db/migration.py", line 35, in db_sync
    return IMPL.db_sync(version=version)
  File "/tmp/tmp7ronKQ/nova/db/sqlalchemy/migration.py", line 41, in db_sync
    db_version()
  File "/tmp/tmp7ronKQ/nova/db/sqlalchemy/migration.py", line 49, in db_version
    return versioning_api.db_version(FLAGS.sql_connection, repo_path)
  File "<string>", line 2, in db_version
  File "/usr/lib/pymodules/python2.6/migrate/versioning/util/__init__.py", line 160, in with_engine
    return f(*a, **kw)
  File "/usr/lib/pymodules/python2.6/migrate/versioning/api.py", line 147, in db_version
    schema = ControlledSchema(engine, repository)
  File "/usr/lib/pymodules/python2.6/migrate/versioning/schema.py", line 26, in __init__
    repository = Repository(repository)
  File "/usr/lib/pymodules/python2.6/migrate/versioning/repository.py", line 80, in __init__
    self._versions))
  File "/usr/lib/pymodules/python2.6/migrate/versioning/version.py", line 83, in __init__
    self.versions[VerNum(num)] = Version(num, path, files)
  File "/usr/lib/pymodules/python2.6/migrate/versioning/version.py", line 153, in __init__
    self.add_script(os.path.join(path, script))
  File "/usr/lib/pymodules/python2.6/migrate/versioning/version.py", line 174, in add_script
    self._add_script_py(path)
  File "/usr/lib/pymodules/python2.6/migrate/versioning/version.py", line 197, in _add_script_py
    'per version, but you have: %s and %s' % (self.python, path))
ScriptError: You can only have one Python script per version, but you have: /tmp/tmp7ronKQ/nova/db/sqlalchemy/migrate_repo/versions/027_add_root_device_name.py and /tmp/tmp7ronKQ/nova/db/sqlalchemy/migrate_repo/versions/027_add_provider_firewall_rules.py
-------------------- >> begin captured logging << --------------------
2011-06-25 02:25:11,546 WARNING nova.virt.libvirt.firewall [-] Libvirt module could not be loaded. NWFilterFirewall will not work correctly.
2011-06-25 02:25:12,239 DEBUG nova.utils [-] backend <module 'nova.db.sqlalchemy.migration' from '/tmp/tmp7ronKQ/nova/db/sqlalchemy/migration.py'> from (pid=22373) __get_backend /tmp/tmp7ronKQ/nova/utils.py:406
2011-06-25 02:25:12,240 DEBUG migrate.versioning.util [-] Constructing engine from (pid=22373) construct_engine /usr/lib/pymodules/python2.6/migrate/versioning/util/__init__.py:138
2011-06-25 02:25:12,244 DEBUG mi...

Revision history for this message
Vish Ishaya (vishvananda) wrote :

looks like the migration number needs to be bumped by a few...

review: Needs Fixing
Revision history for this message
Isaku Yamahata (yamahata) wrote :

Thank you for review. I fixed it and confirmed that unit tests passed.

On Sat, Jun 25, 2011 at 02:38:23AM -0000, Vish Ishaya wrote:
> Review: Needs Fixing
> looks like the migration number needs to be bumped by a few...
> --
> https://code.launchpad.net/~yamahata/nova/boot-from-volume-1/+merge/65850
> You are the owner of lp:~yamahata/nova/boot-from-volume-1.
>

--
yamahata

Revision history for this message
Brian Waldon (bcwaldon) wrote :

The migration needs to be updated yet again. We're moving fast!

Revision history for this message
Mark Washenberger (markwash) wrote :

Setting to WIP until the versions are updated. Sorry for the difficulties here. I'll check back in this weekend to see if we can move this merge along while things are a bit more quiet.

Revision history for this message
Isaku Yamahata (yamahata) wrote :

I resolved the conflict by merging nova trunk.

On Fri, Jul 01, 2011 at 04:01:26PM -0000, Mark Washenberger wrote:
> Setting to WIP until the versions are updated. Sorry for the difficulties here. I'll check back in this weekend to see if we can move this merge along while things are a bit more quiet.
> --
> https://code.launchpad.net/~yamahata/nova/boot-from-volume-1/+merge/65850
> You are the owner of lp:~yamahata/nova/boot-from-volume-1.
>

--
yamahata

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

works for me

review: Approve
Revision history for this message
Brian Waldon (bcwaldon) wrote :

Seeing some smoketest failures, specifically test_004_can_access_metadata_over_public_ip (smoketests.test_netadmin.SecurityGroupTests)

review: Needs Fixing
Revision history for this message
Brian Waldon (bcwaldon) wrote :

> Seeing some smoketest failures, specifically
> test_004_can_access_metadata_over_public_ip
> (smoketests.test_netadmin.SecurityGroupTests)

Hmm, tests passing now. I think it was a problem on my end. Ignore me!

review: Abstain
Revision history for this message
Vish Ishaya (vishvananda) wrote :

looks good now

review: Approve

Preview Diff

=== modified file 'nova/api/ec2/__init__.py'
--- nova/api/ec2/__init__.py 2011-06-24 12:01:51 +0000
+++ nova/api/ec2/__init__.py 2011-07-08 09:39:31 +0000
@@ -262,6 +262,8 @@
262 'TerminateInstances': ['projectmanager', 'sysadmin'],262 'TerminateInstances': ['projectmanager', 'sysadmin'],
263 'RebootInstances': ['projectmanager', 'sysadmin'],263 'RebootInstances': ['projectmanager', 'sysadmin'],
264 'UpdateInstance': ['projectmanager', 'sysadmin'],264 'UpdateInstance': ['projectmanager', 'sysadmin'],
265 'StartInstances': ['projectmanager', 'sysadmin'],
266 'StopInstances': ['projectmanager', 'sysadmin'],
265 'DeleteVolume': ['projectmanager', 'sysadmin'],267 'DeleteVolume': ['projectmanager', 'sysadmin'],
266 'DescribeImages': ['all'],268 'DescribeImages': ['all'],
267 'DeregisterImage': ['projectmanager', 'sysadmin'],269 'DeregisterImage': ['projectmanager', 'sysadmin'],
@@ -269,6 +271,7 @@
269 'DescribeImageAttribute': ['all'],271 'DescribeImageAttribute': ['all'],
270 'ModifyImageAttribute': ['projectmanager', 'sysadmin'],272 'ModifyImageAttribute': ['projectmanager', 'sysadmin'],
271 'UpdateImage': ['projectmanager', 'sysadmin'],273 'UpdateImage': ['projectmanager', 'sysadmin'],
274 'CreateImage': ['projectmanager', 'sysadmin'],
272 },275 },
273 'AdminController': {276 'AdminController': {
274 # All actions have the same permission: ['none'] (the default)277 # All actions have the same permission: ['none'] (the default)
@@ -325,13 +328,13 @@
325 except exception.VolumeNotFound as ex:328 except exception.VolumeNotFound as ex:
326 LOG.info(_('VolumeNotFound raised: %s'), unicode(ex),329 LOG.info(_('VolumeNotFound raised: %s'), unicode(ex),
327 context=context)330 context=context)
328 ec2_id = ec2utils.id_to_ec2_id(ex.volume_id, 'vol-%08x')331 ec2_id = ec2utils.id_to_ec2_vol_id(ex.volume_id)
329 message = _('Volume %s not found') % ec2_id332 message = _('Volume %s not found') % ec2_id
330 return self._error(req, context, type(ex).__name__, message)333 return self._error(req, context, type(ex).__name__, message)
331 except exception.SnapshotNotFound as ex:334 except exception.SnapshotNotFound as ex:
332 LOG.info(_('SnapshotNotFound raised: %s'), unicode(ex),335 LOG.info(_('SnapshotNotFound raised: %s'), unicode(ex),
333 context=context)336 context=context)
334 ec2_id = ec2utils.id_to_ec2_id(ex.snapshot_id, 'snap-%08x')337 ec2_id = ec2utils.id_to_ec2_snap_id(ex.snapshot_id)
335 message = _('Snapshot %s not found') % ec2_id338 message = _('Snapshot %s not found') % ec2_id
336 return self._error(req, context, type(ex).__name__, message)339 return self._error(req, context, type(ex).__name__, message)
337 except exception.NotFound as ex:340 except exception.NotFound as ex:
338341
=== modified file 'nova/api/ec2/cloud.py'
--- nova/api/ec2/cloud.py 2011-07-01 15:47:33 +0000
+++ nova/api/ec2/cloud.py 2011-07-08 09:39:31 +0000
@@ -27,6 +27,7 @@
27import os27import os
28import urllib28import urllib
29import tempfile29import tempfile
30import time
30import shutil31import shutil
3132
32from nova import compute33from nova import compute
@@ -75,6 +76,95 @@
75 return {'private_key': private_key, 'fingerprint': fingerprint}76 return {'private_key': private_key, 'fingerprint': fingerprint}
7677
7778
79# TODO(yamahata): hypervisor dependent default device name
80_DEFAULT_ROOT_DEVICE_NAME = '/dev/sda1'
81
82
83def _parse_block_device_mapping(bdm):
84 """Parse BlockDeviceMappingItemType into flat hash
85 BlockDevicedMapping.<N>.DeviceName
86 BlockDevicedMapping.<N>.Ebs.SnapshotId
87 BlockDevicedMapping.<N>.Ebs.VolumeSize
88 BlockDevicedMapping.<N>.Ebs.DeleteOnTermination
89 BlockDevicedMapping.<N>.Ebs.NoDevice
90 BlockDevicedMapping.<N>.VirtualName
91 => remove .Ebs and allow volume id in SnapshotId
92 """
93 ebs = bdm.pop('ebs', None)
94 if ebs:
95 ec2_id = ebs.pop('snapshot_id', None)
96 if ec2_id:
97 id = ec2utils.ec2_id_to_id(ec2_id)
98 if ec2_id.startswith('snap-'):
99 bdm['snapshot_id'] = id
100 elif ec2_id.startswith('vol-'):
101 bdm['volume_id'] = id
102 ebs.setdefault('delete_on_termination', True)
103 bdm.update(ebs)
104 return bdm
105
106
107def _properties_get_mappings(properties):
108 return ec2utils.mappings_prepend_dev(properties.get('mappings', []))
109
110
111def _format_block_device_mapping(bdm):
112 """Contruct BlockDeviceMappingItemType
113 {'device_name': '...', 'snapshot_id': , ...}
114 => BlockDeviceMappingItemType
115 """
116 keys = (('deviceName', 'device_name'),
117 ('virtualName', 'virtual_name'))
118 item = {}
119 for name, k in keys:
120 if k in bdm:
121 item[name] = bdm[k]
122 if bdm.get('no_device'):
123 item['noDevice'] = True
124 if ('snapshot_id' in bdm) or ('volume_id' in bdm):
125 ebs_keys = (('snapshotId', 'snapshot_id'),
126 ('snapshotId', 'volume_id'), # snapshotId is abused
127 ('volumeSize', 'volume_size'),
128 ('deleteOnTermination', 'delete_on_termination'))
129 ebs = {}
130 for name, k in ebs_keys:
131 if k in bdm:
132 if k == 'snapshot_id':
133 ebs[name] = ec2utils.id_to_ec2_snap_id(bdm[k])
134 elif k == 'volume_id':
135 ebs[name] = ec2utils.id_to_ec2_vol_id(bdm[k])
136 else:
137 ebs[name] = bdm[k]
138 assert 'snapshotId' in ebs
139 item['ebs'] = ebs
140 return item
141
142
143def _format_mappings(properties, result):
144 """Format multiple BlockDeviceMappingItemType"""
145 mappings = [{'virtualName': m['virtual'], 'deviceName': m['device']}
146 for m in _properties_get_mappings(properties)
147 if (m['virtual'] == 'swap' or
148 m['virtual'].startswith('ephemeral'))]
149
150 block_device_mapping = [_format_block_device_mapping(bdm) for bdm in
151 properties.get('block_device_mapping', [])]
152
153 # NOTE(yamahata): overwrite mappings with block_device_mapping
154 for bdm in block_device_mapping:
155 for i in range(len(mappings)):
156 if bdm['deviceName'] == mappings[i]['deviceName']:
157 del mappings[i]
158 break
159 mappings.append(bdm)
160
161 # NOTE(yamahata): trim ebs.no_device == true. Is this necessary?
162 mappings = [bdm for bdm in mappings if not (bdm.get('noDevice', False))]
163
164 if mappings:
165 result['blockDeviceMapping'] = mappings
166
167
78class CloudController(object):168class CloudController(object):
79 """ CloudController provides the critical dispatch between169 """ CloudController provides the critical dispatch between
80 inbound API calls through the endpoint and messages170 inbound API calls through the endpoint and messages
@@ -176,7 +266,7 @@
176 # TODO(vish): replace with real data266 # TODO(vish): replace with real data
177 'ami': 'sda1',267 'ami': 'sda1',
178 'ephemeral0': 'sda2',268 'ephemeral0': 'sda2',
179 'root': '/dev/sda1',269 'root': _DEFAULT_ROOT_DEVICE_NAME,
180 'swap': 'sda3'},270 'swap': 'sda3'},
181 'hostname': hostname,271 'hostname': hostname,
182 'instance-action': 'none',272 'instance-action': 'none',
@@ -304,9 +394,8 @@
304394
305 def _format_snapshot(self, context, snapshot):395 def _format_snapshot(self, context, snapshot):
306 s = {}396 s = {}
307 s['snapshotId'] = ec2utils.id_to_ec2_id(snapshot['id'], 'snap-%08x')397 s['snapshotId'] = ec2utils.id_to_ec2_snap_id(snapshot['id'])
308 s['volumeId'] = ec2utils.id_to_ec2_id(snapshot['volume_id'],398 s['volumeId'] = ec2utils.id_to_ec2_vol_id(snapshot['volume_id'])
309 'vol-%08x')
310 s['status'] = snapshot['status']399 s['status'] = snapshot['status']
311 s['startTime'] = snapshot['created_at']400 s['startTime'] = snapshot['created_at']
312 s['progress'] = snapshot['progress']401 s['progress'] = snapshot['progress']
@@ -683,7 +772,7 @@
683 instance_data = '%s[%s]' % (instance_ec2_id,772 instance_data = '%s[%s]' % (instance_ec2_id,
684 volume['instance']['host'])773 volume['instance']['host'])
685 v = {}774 v = {}
686 v['volumeId'] = ec2utils.id_to_ec2_id(volume['id'], 'vol-%08x')775 v['volumeId'] = ec2utils.id_to_ec2_vol_id(volume['id'])
687 v['status'] = volume['status']776 v['status'] = volume['status']
688 v['size'] = volume['size']777 v['size'] = volume['size']
689 v['availabilityZone'] = volume['availability_zone']778 v['availabilityZone'] = volume['availability_zone']
@@ -705,8 +794,7 @@
705 else:794 else:
706 v['attachmentSet'] = [{}]795 v['attachmentSet'] = [{}]
707 if volume.get('snapshot_id') != None:796 if volume.get('snapshot_id') != None:
708 v['snapshotId'] = ec2utils.id_to_ec2_id(volume['snapshot_id'],797 v['snapshotId'] = ec2utils.id_to_ec2_snap_id(volume['snapshot_id'])
709 'snap-%08x')
710 else:798 else:
711 v['snapshotId'] = None799 v['snapshotId'] = None
712800
@@ -769,7 +857,7 @@
769 'instanceId': ec2utils.id_to_ec2_id(instance_id),857 'instanceId': ec2utils.id_to_ec2_id(instance_id),
770 'requestId': context.request_id,858 'requestId': context.request_id,
771 'status': volume['attach_status'],859 'status': volume['attach_status'],
772 'volumeId': ec2utils.id_to_ec2_id(volume_id, 'vol-%08x')}860 'volumeId': ec2utils.id_to_ec2_vol_id(volume_id)}
773861
774 def detach_volume(self, context, volume_id, **kwargs):862 def detach_volume(self, context, volume_id, **kwargs):
775 volume_id = ec2utils.ec2_id_to_id(volume_id)863 volume_id = ec2utils.ec2_id_to_id(volume_id)
@@ -781,7 +869,7 @@
781 'instanceId': ec2utils.id_to_ec2_id(instance['id']),869 'instanceId': ec2utils.id_to_ec2_id(instance['id']),
782 'requestId': context.request_id,870 'requestId': context.request_id,
783 'status': volume['attach_status'],871 'status': volume['attach_status'],
784 'volumeId': ec2utils.id_to_ec2_id(volume_id, 'vol-%08x')}872 'volumeId': ec2utils.id_to_ec2_vol_id(volume_id)}
785873
786 def _convert_to_set(self, lst, label):874 def _convert_to_set(self, lst, label):
787 if lst is None or lst == []:875 if lst is None or lst == []:
@@ -805,6 +893,37 @@
805 assert len(i) == 1893 assert len(i) == 1
806 return i[0]894 return i[0]
807895
896 def _format_instance_bdm(self, context, instance_id, root_device_name,
897 result):
898 """Format InstanceBlockDeviceMappingResponseItemType"""
899 root_device_type = 'instance-store'
900 mapping = []
901 for bdm in db.block_device_mapping_get_all_by_instance(context,
902 instance_id):
903 volume_id = bdm['volume_id']
904 if (volume_id is None or bdm['no_device']):
905 continue
906
907 if (bdm['device_name'] == root_device_name and
908 (bdm['snapshot_id'] or bdm['volume_id'])):
909 assert not bdm['virtual_name']
910 root_device_type = 'ebs'
911
912 vol = self.volume_api.get(context, volume_id=volume_id)
913 LOG.debug(_("vol = %s\n"), vol)
914 # TODO(yamahata): volume attach time
915 ebs = {'volumeId': volume_id,
916 'deleteOnTermination': bdm['delete_on_termination'],
917 'attachTime': vol['attach_time'] or '-',
918 'status': vol['status'], }
919 res = {'deviceName': bdm['device_name'],
920 'ebs': ebs, }
921 mapping.append(res)
922
923 if mapping:
924 result['blockDeviceMapping'] = mapping
925 result['rootDeviceType'] = root_device_type
926
808 def _format_instances(self, context, instance_id=None, **kwargs):927 def _format_instances(self, context, instance_id=None, **kwargs):
809 # TODO(termie): this method is poorly named as its name does not imply928 # TODO(termie): this method is poorly named as its name does not imply
810 # that it will be making a variety of database calls929 # that it will be making a variety of database calls
@@ -866,6 +985,10 @@
866 i['amiLaunchIndex'] = instance['launch_index']985 i['amiLaunchIndex'] = instance['launch_index']
867 i['displayName'] = instance['display_name']986 i['displayName'] = instance['display_name']
868 i['displayDescription'] = instance['display_description']987 i['displayDescription'] = instance['display_description']
988 i['rootDeviceName'] = (instance.get('root_device_name') or
989 _DEFAULT_ROOT_DEVICE_NAME)
990 self._format_instance_bdm(context, instance_id,
991 i['rootDeviceName'], i)
869 host = instance['host']992 host = instance['host']
870 zone = self._get_availability_zone_by_host(context, host)993 zone = self._get_availability_zone_by_host(context, host)
871 i['placement'] = {'availabilityZone': zone}994 i['placement'] = {'availabilityZone': zone}
@@ -953,23 +1076,7 @@
953 ramdisk = self._get_image(context, kwargs['ramdisk_id'])1076 ramdisk = self._get_image(context, kwargs['ramdisk_id'])
954 kwargs['ramdisk_id'] = ramdisk['id']1077 kwargs['ramdisk_id'] = ramdisk['id']
955 for bdm in kwargs.get('block_device_mapping', []):1078 for bdm in kwargs.get('block_device_mapping', []):
956 # NOTE(yamahata)1079 _parse_block_device_mapping(bdm)
957 # BlockDevicedMapping.<N>.DeviceName
958 # BlockDevicedMapping.<N>.Ebs.SnapshotId
959 # BlockDevicedMapping.<N>.Ebs.VolumeSize
960 # BlockDevicedMapping.<N>.Ebs.DeleteOnTermination
961 # BlockDevicedMapping.<N>.VirtualName
962 # => remove .Ebs and allow volume id in SnapshotId
963 ebs = bdm.pop('ebs', None)
964 if ebs:
965 ec2_id = ebs.pop('snapshot_id')
966 id = ec2utils.ec2_id_to_id(ec2_id)
967 if ec2_id.startswith('snap-'):
968 bdm['snapshot_id'] = id
969 elif ec2_id.startswith('vol-'):
970 bdm['volume_id'] = id
971 ebs.setdefault('delete_on_termination', True)
972 bdm.update(ebs)
9731080
974 image = self._get_image(context, kwargs['image_id'])1081 image = self._get_image(context, kwargs['image_id'])
9751082
@@ -1124,6 +1231,20 @@
1124 i['imageType'] = display_mapping.get(image_type)1231 i['imageType'] = display_mapping.get(image_type)
1125 i['isPublic'] = image.get('is_public') == True1232 i['isPublic'] = image.get('is_public') == True
1126 i['architecture'] = image['properties'].get('architecture')1233 i['architecture'] = image['properties'].get('architecture')
1234
1235 properties = image['properties']
1236 root_device_name = ec2utils.properties_root_device_name(properties)
1237 root_device_type = 'instance-store'
1238 for bdm in properties.get('block_device_mapping', []):
1239 if (bdm.get('device_name') == root_device_name and
1240 ('snapshot_id' in bdm or 'volume_id' in bdm) and
1241 not bdm.get('no_device')):
1242 root_device_type = 'ebs'
1243 i['rootDeviceName'] = (root_device_name or _DEFAULT_ROOT_DEVICE_NAME)
1244 i['rootDeviceType'] = root_device_type
1245
1246 _format_mappings(properties, i)
1247
1127 return i1248 return i
11281249
1129 def describe_images(self, context, image_id=None, **kwargs):1250 def describe_images(self, context, image_id=None, **kwargs):
@@ -1148,30 +1269,64 @@
1148 self.image_service.delete(context, internal_id)1269 self.image_service.delete(context, internal_id)
1149 return {'imageId': image_id}1270 return {'imageId': image_id}
11501271
1272 def _register_image(self, context, metadata):
1273 image = self.image_service.create(context, metadata)
1274 image_type = self._image_type(image.get('container_format'))
1275 image_id = self.image_ec2_id(image['id'], image_type)
1276 return image_id
1277
1151 def register_image(self, context, image_location=None, **kwargs):1278 def register_image(self, context, image_location=None, **kwargs):
1152 if image_location is None and 'name' in kwargs:1279 if image_location is None and 'name' in kwargs:
1153 image_location = kwargs['name']1280 image_location = kwargs['name']
1154 metadata = {'properties': {'image_location': image_location}}1281 metadata = {'properties': {'image_location': image_location}}
1155 image = self.image_service.create(context, metadata)1282
1156 image_type = self._image_type(image.get('container_format'))1283 if 'root_device_name' in kwargs:
1157 image_id = self.image_ec2_id(image['id'],1284 metadata['properties']['root_device_name'] = \
1158 image_type)1285 kwargs.get('root_device_name')
1286
1287 mappings = [_parse_block_device_mapping(bdm) for bdm in
1288 kwargs.get('block_device_mapping', [])]
1289 if mappings:
1290 metadata['properties']['block_device_mapping'] = mappings
1291
1292 image_id = self._register_image(context, metadata)
1159 msg = _("Registered image %(image_location)s with"1293 msg = _("Registered image %(image_location)s with"
1160 " id %(image_id)s") % locals()1294 " id %(image_id)s") % locals()
1161 LOG.audit(msg, context=context)1295 LOG.audit(msg, context=context)
1162 return {'imageId': image_id}1296 return {'imageId': image_id}
11631297
1164 def describe_image_attribute(self, context, image_id, attribute, **kwargs):1298 def describe_image_attribute(self, context, image_id, attribute, **kwargs):
1165 if attribute != 'launchPermission':1299 def _block_device_mapping_attribute(image, result):
1300 _format_mappings(image['properties'], result)
1301
1302 def _launch_permission_attribute(image, result):
1303 result['launchPermission'] = []
1304 if image['is_public']:
1305 result['launchPermission'].append({'group': 'all'})
1306
1307 def _root_device_name_attribute(image, result):
1308 result['rootDeviceName'] = \
1309 ec2utils.properties_root_device_name(image['properties'])
1310 if result['rootDeviceName'] is None:
1311 result['rootDeviceName'] = _DEFAULT_ROOT_DEVICE_NAME
1312
1313 supported_attributes = {
1314 'blockDeviceMapping': _block_device_mapping_attribute,
1315 'launchPermission': _launch_permission_attribute,
1316 'rootDeviceName': _root_device_name_attribute,
1317 }
1318
1319 fn = supported_attributes.get(attribute)
1320 if fn is None:
1166 raise exception.ApiError(_('attribute not supported: %s')1321 raise exception.ApiError(_('attribute not supported: %s')
1167 % attribute)1322 % attribute)
1168 try:1323 try:
1169 image = self._get_image(context, image_id)1324 image = self._get_image(context, image_id)
1170 except exception.NotFound:1325 except exception.NotFound:
1171 raise exception.ImageNotFound(image_id=image_id)1326 raise exception.ImageNotFound(image_id=image_id)
1172 result = {'imageId': image_id, 'launchPermission': []}1327
1173 if image['is_public']:1328 result = {'imageId': image_id}
1174 result['launchPermission'].append({'group': 'all'})1329 fn(image, result)
1175 return result1330 return result
11761331
1177 def modify_image_attribute(self, context, image_id, attribute,1332 def modify_image_attribute(self, context, image_id, attribute,
@@ -1202,3 +1357,109 @@
1202 internal_id = ec2utils.ec2_id_to_id(image_id)1357 internal_id = ec2utils.ec2_id_to_id(image_id)
1203 result = self.image_service.update(context, internal_id, dict(kwargs))1358 result = self.image_service.update(context, internal_id, dict(kwargs))
1204 return result1359 return result
1360
1361 # TODO(yamahata): race condition
1362 # At the moment there is no way to prevent others from
1363 # manipulating instances/volumes/snapshots.
1364 # As other code doesn't take it into consideration, here we don't
1365 # care of it for now. Ostrich algorithm
1366 def create_image(self, context, instance_id, **kwargs):
1367 # NOTE(yamahata): name/description are ignored by register_image(),
1368 # do so here
1369 no_reboot = kwargs.get('no_reboot', False)
1370
1371 ec2_instance_id = instance_id
1372 instance_id = ec2utils.ec2_id_to_id(ec2_instance_id)
1373 instance = self.compute_api.get(context, instance_id)
1374
1375 # stop the instance if necessary
1376 restart_instance = False
1377 if not no_reboot:
1378 state_description = instance['state_description']
1379
1380 # if the instance is in subtle state, refuse to proceed.
1381 if state_description not in ('running', 'stopping', 'stopped'):
1382 raise exception.InstanceNotRunning(instance_id=ec2_instance_id)
1383
1384 if state_description == 'running':
1385 restart_instance = True
1386 self.compute_api.stop(context, instance_id=instance_id)
1387
1388 # wait instance for really stopped
1389 start_time = time.time()
1390 while state_description != 'stopped':
1391 time.sleep(1)
1392 instance = self.compute_api.get(context, instance_id)
1393 state_description = instance['state_description']
1394 # NOTE(yamahata): timeout and error. 1 hour for now for safety.
1395 # Is it too short/long?
1396 # Or is there any better way?
1397 timeout = 1 * 60 * 60 * 60
1398 if time.time() > start_time + timeout:
1399 raise exception.ApiError(
1400 _('Couldn\'t stop instance with in %d sec') % timeout)
1401
1402 src_image = self._get_image(context, instance['image_ref'])
1403 properties = src_image['properties']
1404 if instance['root_device_name']:
1405 properties['root_device_name'] = instance['root_device_name']
1406
1407 mapping = []
1408 bdms = db.block_device_mapping_get_all_by_instance(context,
1409 instance_id)
1410 for bdm in bdms:
1411 if bdm.no_device:
1412 continue
1413 m = {}
1414 for attr in ('device_name', 'snapshot_id', 'volume_id',
1415 'volume_size', 'delete_on_termination', 'no_device',
1416 'virtual_name'):
1417 val = getattr(bdm, attr)
1418 if val is not None:
1419 m[attr] = val
1420
1421 volume_id = m.get('volume_id')
1422 if m.get('snapshot_id') and volume_id:
1423 # create snapshot based on volume_id
1424 vol = self.volume_api.get(context, volume_id=volume_id)
1425 # NOTE(yamahata): Should we wait for snapshot creation?
1426 # Linux LVM snapshot creation completes in
1427 # short time, it doesn't matter for now.
1428 snapshot = self.volume_api.create_snapshot_force(
1429 context, volume_id=volume_id, name=vol['display_name'],
1430 description=vol['display_description'])
1431 m['snapshot_id'] = snapshot['id']
1432 del m['volume_id']
1433
1434 if m:
1435 mapping.append(m)
1436
1437 for m in _properties_get_mappings(properties):
1438 virtual_name = m['virtual']
1439 if virtual_name in ('ami', 'root'):
1440 continue
1441
1442 assert (virtual_name == 'swap' or
1443 virtual_name.startswith('ephemeral'))
1444 device_name = m['device']
1445 if device_name in [b['device_name'] for b in mapping
1446 if not b.get('no_device', False)]:
1447 continue
1448
1449 # NOTE(yamahata): swap and ephemeral devices are specified in
1450 # AMI, but disabled for this instance by user.
1451 # So disable those device by no_device.
1452 mapping.append({'device_name': device_name, 'no_device': True})
1453
1454 if mapping:
1455 properties['block_device_mapping'] = mapping
1456
1457 for attr in ('status', 'location', 'id'):
1458 src_image.pop(attr, None)
1459
1460 image_id = self._register_image(context, src_image)
1461
1462 if restart_instance:
1463 self.compute_api.start(context, instance_id=instance_id)
1464
1465 return {'imageId': image_id}
12051466
=== modified file 'nova/api/ec2/ec2utils.py'
--- nova/api/ec2/ec2utils.py 2011-06-24 12:01:51 +0000
+++ nova/api/ec2/ec2utils.py 2011-07-08 09:39:31 +0000
@@ -34,6 +34,17 @@
34 return template % instance_id34 return template % instance_id
3535
3636
37def id_to_ec2_snap_id(instance_id):
38 """Convert an snapshot ID (int) to an ec2 snapshot ID
39 (snap-[base 16 number])"""
40 return id_to_ec2_id(instance_id, 'snap-%08x')
41
42
43def id_to_ec2_vol_id(instance_id):
44 """Convert an volume ID (int) to an ec2 volume ID (vol-[base 16 number])"""
45 return id_to_ec2_id(instance_id, 'vol-%08x')
46
47
37_c2u = re.compile('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))')48_c2u = re.compile('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))')
3849
3950
@@ -124,3 +135,32 @@
124 args[key] = value135 args[key] = value
125136
126 return args137 return args
138
139
140def properties_root_device_name(properties):
141 """get root device name from image meta data.
142 If it isn't specified, return None.
143 """
144 root_device_name = None
145
146 # NOTE(yamahata): see image_service.s3.s3create()
147 for bdm in properties.get('mappings', []):
148 if bdm['virtual'] == 'root':
149 root_device_name = bdm['device']
150
151 # NOTE(yamahata): register_image's command line can override
152 # <machine>.manifest.xml
153 if 'root_device_name' in properties:
154 root_device_name = properties['root_device_name']
155
156 return root_device_name
157
158
159def mappings_prepend_dev(mappings):
160 """Prepend '/dev/' to 'device' entry of swap/ephemeral virtual type"""
161 for m in mappings:
162 virtual = m['virtual']
163 if ((virtual == 'swap' or virtual.startswith('ephemeral')) and
164 (not m['device'].startswith('/'))):
165 m['device'] = '/dev/' + m['device']
166 return mappings
127167
=== modified file 'nova/compute/api.py'
--- nova/compute/api.py 2011-07-01 19:44:10 +0000
+++ nova/compute/api.py 2011-07-08 09:39:31 +0000
@@ -32,6 +32,7 @@
32from nova import rpc32from nova import rpc
33from nova import utils33from nova import utils
34from nova import volume34from nova import volume
35from nova.api.ec2 import ec2utils
35from nova.compute import instance_types36from nova.compute import instance_types
36from nova.compute import power_state37from nova.compute import power_state
37from nova.compute.utils import terminate_volumes38from nova.compute.utils import terminate_volumes
@@ -217,6 +218,9 @@
217 if reservation_id is None:218 if reservation_id is None:
218 reservation_id = utils.generate_uid('r')219 reservation_id = utils.generate_uid('r')
219220
221 root_device_name = ec2utils.properties_root_device_name(
222 image['properties'])
223
220 base_options = {224 base_options = {
221 'reservation_id': reservation_id,225 'reservation_id': reservation_id,
222 'image_ref': image_href,226 'image_ref': image_href,
@@ -241,11 +245,61 @@
241 'availability_zone': availability_zone,245 'availability_zone': availability_zone,
242 'os_type': os_type,246 'os_type': os_type,
243 'architecture': architecture,247 'architecture': architecture,
244 'vm_mode': vm_mode}248 'vm_mode': vm_mode,
245249 'root_device_name': root_device_name}
246 return (num_instances, base_options)250
247251 return (num_instances, base_options, image)
248 def create_db_entry_for_new_instance(self, context, base_options,252
253 def _update_image_block_device_mapping(self, elevated_context, instance_id,
254 mappings):
255 """tell vm driver to create ephemeral/swap device at boot time by
256 updating BlockDeviceMapping
257 """
258 for bdm in ec2utils.mappings_prepend_dev(mappings):
259 LOG.debug(_("bdm %s"), bdm)
260
261 virtual_name = bdm['virtual']
262 if virtual_name == 'ami' or virtual_name == 'root':
263 continue
264
265 assert (virtual_name == 'swap' or
266 virtual_name.startswith('ephemeral'))
267 values = {
268 'instance_id': instance_id,
269 'device_name': bdm['device'],
270 'virtual_name': virtual_name, }
271 self.db.block_device_mapping_update_or_create(elevated_context,
272 values)
273
274 def _update_block_device_mapping(self, elevated_context, instance_id,
275 block_device_mapping):
276 """tell vm driver to attach volume at boot time by updating
277 BlockDeviceMapping
278 """
279 for bdm in block_device_mapping:
280 LOG.debug(_('bdm %s'), bdm)
281 assert 'device_name' in bdm
282
283 values = {'instance_id': instance_id}
284 for key in ('device_name', 'delete_on_termination', 'virtual_name',
285 'snapshot_id', 'volume_id', 'volume_size',
286 'no_device'):
287 values[key] = bdm.get(key)
288
289 # NOTE(yamahata): NoDevice eliminates devices defined in image
290 # files by command line option.
291 # (--block-device-mapping)
292 if bdm.get('virtual_name') == 'NoDevice':
293 values['no_device'] = True
294 for k in ('delete_on_termination', 'volume_id',
295 'snapshot_id', 'volume_id', 'volume_size',
296 'virtual_name'):
297 values[k] = None
298
299 self.db.block_device_mapping_update_or_create(elevated_context,
300 values)
301
302 def create_db_entry_for_new_instance(self, context, image, base_options,
249 security_group, block_device_mapping, num=1):303 security_group, block_device_mapping, num=1):
250 """Create an entry in the DB for this new instance,304 """Create an entry in the DB for this new instance,
251 including any related table updates (such as security group,305 including any related table updates (such as security group,
@@ -278,23 +332,14 @@
278 instance_id,332 instance_id,
279 security_group_id)333 security_group_id)
280334
281 block_device_mapping = block_device_mapping or []335 # BlockDeviceMapping table
282 # NOTE(yamahata)336 self._update_image_block_device_mapping(elevated, instance_id,
283 # tell vm driver to attach volume at boot time by updating337 image['properties'].get('mappings', []))
284 # BlockDeviceMapping338 self._update_block_device_mapping(elevated, instance_id,
285 for bdm in block_device_mapping:339 image['properties'].get('block_device_mapping', []))
286 LOG.debug(_('bdm %s'), bdm)340 # override via command line option
287 assert 'device_name' in bdm341 self._update_block_device_mapping(elevated, instance_id,
288 values = {342 block_device_mapping)
289 'instance_id': instance_id,
290 'device_name': bdm['device_name'],
291 'delete_on_termination': bdm.get('delete_on_termination'),
292 'virtual_name': bdm.get('virtual_name'),
293 'snapshot_id': bdm.get('snapshot_id'),
294 'volume_id': bdm.get('volume_id'),
295 'volume_size': bdm.get('volume_size'),
296 'no_device': bdm.get('no_device')}
297 self.db.block_device_mapping_create(elevated, values)
298343
299 # Set sane defaults if not specified344 # Set sane defaults if not specified
300 updates = {}345 updates = {}
@@ -356,7 +401,7 @@
356 """Provision the instances by passing the whole request to401 """Provision the instances by passing the whole request to
357 the Scheduler for execution. Returns a Reservation ID402 the Scheduler for execution. Returns a Reservation ID
358 related to the creation of all of these instances."""403 related to the creation of all of these instances."""
359 num_instances, base_options = self._check_create_parameters(404 num_instances, base_options, image = self._check_create_parameters(
360 context, instance_type,405 context, instance_type,
361 image_href, kernel_id, ramdisk_id,406 image_href, kernel_id, ramdisk_id,
362 min_count, max_count,407 min_count, max_count,
@@ -394,7 +439,7 @@
394 Returns a list of instance dicts.439 Returns a list of instance dicts.
395 """440 """
396441
397 num_instances, base_options = self._check_create_parameters(442 num_instances, base_options, image = self._check_create_parameters(
398 context, instance_type,443 context, instance_type,
399 image_href, kernel_id, ramdisk_id,444 image_href, kernel_id, ramdisk_id,
400 min_count, max_count,445 min_count, max_count,
@@ -404,10 +449,11 @@
404 injected_files, admin_password, zone_blob,449 injected_files, admin_password, zone_blob,
405 reservation_id)450 reservation_id)
406451
452 block_device_mapping = block_device_mapping or []
407 instances = []453 instances = []
408 LOG.debug(_("Going to run %s instances..."), num_instances)454 LOG.debug(_("Going to run %s instances..."), num_instances)
409 for num in range(num_instances):455 for num in range(num_instances):
410 instance = self.create_db_entry_for_new_instance(context,456 instance = self.create_db_entry_for_new_instance(context, image,
411 base_options, security_group,457 base_options, security_group,
412 block_device_mapping, num=num)458 block_device_mapping, num=num)
413 instances.append(instance)459 instances.append(instance)
414460
=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py 2011-07-01 14:26:05 +0000
+++ nova/compute/manager.py 2011-07-08 09:39:31 +0000
@@ -220,6 +220,17 @@
220 for bdm in self.db.block_device_mapping_get_all_by_instance(220 for bdm in self.db.block_device_mapping_get_all_by_instance(
221 context, instance_id):221 context, instance_id):
222 LOG.debug(_("setting up bdm %s"), bdm)222 LOG.debug(_("setting up bdm %s"), bdm)
223
224 if bdm['no_device']:
225 continue
226 if bdm['virtual_name']:
227 # TODO(yamahata):
228 # block devices for swap and ephemeralN will be
229 # created by virt driver locally in compute node.
230 assert (bdm['virtual_name'] == 'swap' or
231 bdm['virtual_name'].startswith('ephemeral'))
232 continue
233
223 if ((bdm['snapshot_id'] is not None) and234 if ((bdm['snapshot_id'] is not None) and
224 (bdm['volume_id'] is None)):235 (bdm['volume_id'] is None)):
225 # TODO(yamahata): default name and description236 # TODO(yamahata): default name and description
@@ -252,15 +263,6 @@
252 block_device_mapping.append({'device_path': dev_path,263 block_device_mapping.append({'device_path': dev_path,
253 'mount_device':264 'mount_device':
254 bdm['device_name']})265 bdm['device_name']})
255 elif bdm['virtual_name'] is not None:
256 # TODO(yamahata): ephemeral/swap device support
257 LOG.debug(_('block_device_mapping: '
258 'ephemeral device is not supported yet'))
259 else:
260 # TODO(yamahata): NoDevice support
261 assert bdm['no_device']
262 LOG.debug(_('block_device_mapping: '
263 'no device is not supported yet'))
264266
265 return block_device_mapping267 return block_device_mapping
266268
267269
=== modified file 'nova/db/api.py'
--- nova/db/api.py 2011-06-30 19:20:59 +0000
+++ nova/db/api.py 2011-07-08 09:39:31 +0000
@@ -989,10 +989,16 @@
989989
990990
991def block_device_mapping_update(context, bdm_id, values):991def block_device_mapping_update(context, bdm_id, values):
992 """Create an entry of block device mapping"""992 """Update an entry of block device mapping"""
993 return IMPL.block_device_mapping_update(context, bdm_id, values)993 return IMPL.block_device_mapping_update(context, bdm_id, values)
994994
995995
996def block_device_mapping_update_or_create(context, values):
997 """Update an entry of block device mapping.
998 If not existed, create a new entry"""
999 return IMPL.block_device_mapping_update_or_create(context, values)
1000
1001
996def block_device_mapping_get_all_by_instance(context, instance_id):1002def block_device_mapping_get_all_by_instance(context, instance_id):
997 """Get all block device mapping belonging to a instance"""1003 """Get all block device mapping belonging to a instance"""
998 return IMPL.block_device_mapping_get_all_by_instance(context, instance_id)1004 return IMPL.block_device_mapping_get_all_by_instance(context, instance_id)
9991005
=== modified file 'nova/db/sqlalchemy/api.py'
--- nova/db/sqlalchemy/api.py 2011-07-01 15:07:08 +0000
+++ nova/db/sqlalchemy/api.py 2011-07-08 09:39:31 +0000
@@ -2208,6 +2208,23 @@
22082208
22092209
2210@require_context2210@require_context
2211def block_device_mapping_update_or_create(context, values):
2212 session = get_session()
2213 with session.begin():
2214 result = session.query(models.BlockDeviceMapping).\
2215 filter_by(instance_id=values['instance_id']).\
2216 filter_by(device_name=values['device_name']).\
2217 filter_by(deleted=False).\
2218 first()
2219 if not result:
2220 bdm_ref = models.BlockDeviceMapping()
2221 bdm_ref.update(values)
2222 bdm_ref.save(session=session)
2223 else:
2224 result.update(values)
2225
2226
2227@require_context
2211def block_device_mapping_get_all_by_instance(context, instance_id):2228def block_device_mapping_get_all_by_instance(context, instance_id):
2212 session = get_session()2229 session = get_session()
2213 result = session.query(models.BlockDeviceMapping).\2230 result = session.query(models.BlockDeviceMapping).\
22142231
=== added file 'nova/db/sqlalchemy/migrate_repo/versions/032_add_root_device_name.py'
--- nova/db/sqlalchemy/migrate_repo/versions/032_add_root_device_name.py 1970-01-01 00:00:00 +0000
+++ nova/db/sqlalchemy/migrate_repo/versions/032_add_root_device_name.py 2011-07-08 09:39:31 +0000
@@ -0,0 +1,47 @@
1# Copyright 2011 OpenStack LLC.
2# Copyright 2011 Isaku Yamahata
3#
4# Licensed under the Apache License, Version 2.0 (the "License"); you may
5# not use this file except in compliance with the License. You may obtain
6# a copy of the License at
7#
8# http://www.apache.org/licenses/LICENSE-2.0
9#
10# Unless required by applicable law or agreed to in writing, software
11# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
12# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
13# License for the specific language governing permissions and limitations
14# under the License.
15
16from sqlalchemy import Column, Integer, MetaData, Table, String
17
18meta = MetaData()
19
20
21# Just for the ForeignKey and column creation to succeed, these are not the
22# actual definitions of instances or services.
23instances = Table('instances', meta,
24 Column('id', Integer(), primary_key=True, nullable=False),
25 )
26
27#
28# New Column
29#
30root_device_name = Column(
31 'root_device_name',
32 String(length=255, convert_unicode=False, assert_unicode=None,
33 unicode_error=None, _warn_on_bytestring=False),
34 nullable=True)
35
36
37def upgrade(migrate_engine):
38 # Upgrade operations go here. Don't create your own engine;
39 # bind migrate_engine to your metadata
40 meta.bind = migrate_engine
41 instances.create_column(root_device_name)
42
43
44def downgrade(migrate_engine):
45 # Operations to reverse the above upgrade go here.
46 meta.bind = migrate_engine
47 instances.drop_column('root_device_name')
048
=== modified file 'nova/db/sqlalchemy/models.py'
--- nova/db/sqlalchemy/models.py 2011-06-30 19:20:59 +0000
+++ nova/db/sqlalchemy/models.py 2011-07-08 09:39:31 +0000
@@ -236,6 +236,8 @@
236 vm_mode = Column(String(255))236 vm_mode = Column(String(255))
237 uuid = Column(String(36))237 uuid = Column(String(36))
238238
239 root_device_name = Column(String(255))
240
239 # TODO(vish): see Ewan's email about state improvements, probably241 # TODO(vish): see Ewan's email about state improvements, probably
240 # should be in a driver base class or some such242 # should be in a driver base class or some such
241 # vmstate_state = running, halted, suspended, paused243 # vmstate_state = running, halted, suspended, paused
242244
=== modified file 'nova/image/fake.py'
--- nova/image/fake.py 2011-06-24 12:01:51 +0000
+++ nova/image/fake.py 2011-07-08 09:39:31 +0000
@@ -137,7 +137,11 @@
137 try:137 try:
138 image_id = metadata['id']138 image_id = metadata['id']
139 except KeyError:139 except KeyError:
140 image_id = random.randint(0, 2 ** 31 - 1)140 while True:
141 image_id = random.randint(0, 2 ** 31 - 1)
142 if not self.images.get(str(image_id)):
143 break
144
141 image_id = str(image_id)145 image_id = str(image_id)
142146
143 if self.images.get(image_id):147 if self.images.get(image_id):
@@ -176,3 +180,8 @@
176180
177def FakeImageService():181def FakeImageService():
178 return _fakeImageService182 return _fakeImageService
183
184
185def FakeImageService_reset():
186 global _fakeImageService
187 _fakeImageService = _FakeImageService()
179188
=== modified file 'nova/image/s3.py'
--- nova/image/s3.py 2011-06-01 03:16:22 +0000
+++ nova/image/s3.py 2011-07-08 09:39:31 +0000
@@ -102,18 +102,7 @@
102 key.get_contents_to_filename(local_filename)102 key.get_contents_to_filename(local_filename)
103 return local_filename103 return local_filename
104104
105 def _s3_create(self, context, metadata):105 def _s3_parse_manifest(self, context, metadata, manifest):
106 """Gets a manifext from s3 and makes an image."""
107
108 image_path = tempfile.mkdtemp(dir=FLAGS.image_decryption_dir)
109
110 image_location = metadata['properties']['image_location']
111 bucket_name = image_location.split('/')[0]
112 manifest_path = image_location[len(bucket_name) + 1:]
113 bucket = self._conn(context).get_bucket(bucket_name)
114 key = bucket.get_key(manifest_path)
115 manifest = key.get_contents_as_string()
116
117 manifest = ElementTree.fromstring(manifest)106 manifest = ElementTree.fromstring(manifest)
118 image_format = 'ami'107 image_format = 'ami'
119 image_type = 'machine'108 image_type = 'machine'
@@ -141,6 +130,28 @@
141 except Exception:130 except Exception:
142 arch = 'x86_64'131 arch = 'x86_64'
143132
133 # NOTE(yamahata):
134 # EC2 ec2-budlne-image --block-device-mapping accepts
135 # <virtual name>=<device name> where
136 # virtual name = {ami, root, swap, ephemeral<N>}
137 # where N is no negative integer
138 # device name = the device name seen by guest kernel.
139 # They are converted into
140 # block_device_mapping/mapping/{virtual, device}
141 #
142 # Do NOT confuse this with ec2-register's block device mapping
143 # argument.
144 mappings = []
145 try:
146 block_device_mapping = manifest.findall('machine_configuration/'
147 'block_device_mapping/'
148 'mapping')
149 for bdm in block_device_mapping:
150 mappings.append({'virtual': bdm.find('virtual').text,
151 'device': bdm.find('device').text})
152 except Exception:
153 mappings = []
154
144 properties = metadata['properties']155 properties = metadata['properties']
145 properties['project_id'] = context.project_id156 properties['project_id'] = context.project_id
146 properties['architecture'] = arch157 properties['architecture'] = arch
@@ -151,6 +162,9 @@
151 if ramdisk_id:162 if ramdisk_id:
152 properties['ramdisk_id'] = ec2utils.ec2_id_to_id(ramdisk_id)163 properties['ramdisk_id'] = ec2utils.ec2_id_to_id(ramdisk_id)
153164
165 if mappings:
166 properties['mappings'] = mappings
167
154 metadata.update({'disk_format': image_format,168 metadata.update({'disk_format': image_format,
155 'container_format': image_format,169 'container_format': image_format,
156 'status': 'queued',170 'status': 'queued',
@@ -158,6 +172,21 @@
158 'properties': properties})172 'properties': properties})
159 metadata['properties']['image_state'] = 'pending'173 metadata['properties']['image_state'] = 'pending'
160 image = self.service.create(context, metadata)174 image = self.service.create(context, metadata)
175 return manifest, image
176
177 def _s3_create(self, context, metadata):
178 """Gets a manifext from s3 and makes an image."""
179
180 image_path = tempfile.mkdtemp(dir=FLAGS.image_decryption_dir)
181
182 image_location = metadata['properties']['image_location']
183 bucket_name = image_location.split('/')[0]
184 manifest_path = image_location[len(bucket_name) + 1:]
185 bucket = self._conn(context).get_bucket(bucket_name)
186 key = bucket.get_key(manifest_path)
187 manifest = key.get_contents_as_string()
188
189 manifest, image = self._s3_parse_manifest(context, metadata, manifest)
161 image_id = image['id']190 image_id = image['id']
162191
163 def delayed_create():192 def delayed_create():
164193
=== modified file 'nova/test.py'
--- nova/test.py 2011-06-29 17:58:10 +0000
+++ nova/test.py 2011-07-08 09:39:31 +0000
@@ -31,6 +31,7 @@
3131
32import mox32import mox
33import nose.plugins.skip33import nose.plugins.skip
34import nova.image.fake
34import shutil35import shutil
35import stubout36import stubout
36from eventlet import greenthread37from eventlet import greenthread
@@ -119,6 +120,9 @@
119 if hasattr(fake.FakeConnection, '_instance'):120 if hasattr(fake.FakeConnection, '_instance'):
120 del fake.FakeConnection._instance121 del fake.FakeConnection._instance
121122
123 if FLAGS.image_service == 'nova.image.fake.FakeImageService':
124 nova.image.fake.FakeImageService_reset()
125
122 # Reset any overriden flags126 # Reset any overriden flags
123 self.reset_flags()127 self.reset_flags()
124128
@@ -248,3 +252,15 @@
248 for d1, d2 in zip(L1, L2):252 for d1, d2 in zip(L1, L2):
249 self.assertDictMatch(d1, d2, approx_equal=approx_equal,253 self.assertDictMatch(d1, d2, approx_equal=approx_equal,
250 tolerance=tolerance)254 tolerance=tolerance)
255
256 def assertSubDictMatch(self, sub_dict, super_dict):
257 """Assert a sub_dict is subset of super_dict."""
258 self.assertTrue(set(sub_dict.keys()).issubset(set(super_dict.keys())))
259 for k, sub_value in sub_dict.items():
260 super_value = super_dict[k]
261 if isinstance(sub_value, dict):
262 self.assertSubDictMatch(sub_value, super_value)
263 elif 'DONTCARE' in (sub_value, super_value):
264 continue
265 else:
266 self.assertEqual(sub_value, super_value)
251267
=== added file 'nova/tests/image/test_s3.py'
--- nova/tests/image/test_s3.py 1970-01-01 00:00:00 +0000
+++ nova/tests/image/test_s3.py 2011-07-08 09:39:31 +0000
@@ -0,0 +1,122 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2011 Isaku Yamahata
4# All Rights Reserved.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18from nova import context
19from nova import flags
20from nova import test
21from nova.image import s3
22
23FLAGS = flags.FLAGS
24
25
26ami_manifest_xml = """<?xml version="1.0" ?>
27<manifest>
28 <version>2011-06-17</version>
29 <bundler>
30 <name>test-s3</name>
31 <version>0</version>
32 <release>0</release>
33 </bundler>
34 <machine_configuration>
35 <architecture>x86_64</architecture>
36 <block_device_mapping>
37 <mapping>
38 <virtual>ami</virtual>
39 <device>sda1</device>
40 </mapping>
41 <mapping>
42 <virtual>root</virtual>
43 <device>/dev/sda1</device>
44 </mapping>
45 <mapping>
46 <virtual>ephemeral0</virtual>
47 <device>sda2</device>
48 </mapping>
49 <mapping>
50 <virtual>swap</virtual>
51 <device>sda3</device>
52 </mapping>
53 </block_device_mapping>
54 </machine_configuration>
55</manifest>
56"""
57
58
59class TestS3ImageService(test.TestCase):
60 def setUp(self):
61 super(TestS3ImageService, self).setUp()
62 self.orig_image_service = FLAGS.image_service
63 FLAGS.image_service = 'nova.image.fake.FakeImageService'
64 self.image_service = s3.S3ImageService()
65 self.context = context.RequestContext(None, None)
66
67 def tearDown(self):
68 super(TestS3ImageService, self).tearDown()
69 FLAGS.image_service = self.orig_image_service
70
71 def _assertEqualList(self, list0, list1, keys):
72 self.assertEqual(len(list0), len(list1))
73 key = keys[0]
74 for x in list0:
75 self.assertEqual(len(x), len(keys))
76 self.assertTrue(key in x)
77 for y in list1:
78 self.assertTrue(key in y)
79 if x[key] == y[key]:
80 for k in keys:
81 self.assertEqual(x[k], y[k])
82
83 def test_s3_create(self):
84 metadata = {'properties': {
85 'root_device_name': '/dev/sda1',
86 'block_device_mapping': [
87 {'device_name': '/dev/sda1',
88 'snapshot_id': 'snap-12345678',
89 'delete_on_termination': True},
90 {'device_name': '/dev/sda2',
91 'virutal_name': 'ephemeral0'},
92 {'device_name': '/dev/sdb0',
93 'no_device': True}]}}
94 _manifest, image = self.image_service._s3_parse_manifest(
95 self.context, metadata, ami_manifest_xml)
96 image_id = image['id']
97
98 ret_image = self.image_service.show(self.context, image_id)
99 self.assertTrue('properties' in ret_image)
100 properties = ret_image['properties']
101
102 self.assertTrue('mappings' in properties)
103 mappings = properties['mappings']
104 expected_mappings = [
105 {"device": "sda1", "virtual": "ami"},
106 {"device": "/dev/sda1", "virtual": "root"},
107 {"device": "sda2", "virtual": "ephemeral0"},
108 {"device": "sda3", "virtual": "swap"}]
109 self._assertEqualList(mappings, expected_mappings,
110 ['device', 'virtual'])
111
112 self.assertTrue('block_device_mapping', properties)
113 block_device_mapping = properties['block_device_mapping']
114 expected_bdm = [
115 {'device_name': '/dev/sda1',
116 'snapshot_id': 'snap-12345678',
117 'delete_on_termination': True},
118 {'device_name': '/dev/sda2',
119 'virutal_name': 'ephemeral0'},
120 {'device_name': '/dev/sdb0',
121 'no_device': True}]
122 self.assertEqual(block_device_mapping, expected_bdm)
0123
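The test pins down the registration contract: block_device_mapping supplied at register time is stored verbatim on the image, while mappings is derived from the bundle manifest. The registered image's properties therefore end up roughly like this (values taken from the fixture above, abbreviated):

    properties = {
        'root_device_name': '/dev/sda1',
        'mappings': [{'device': 'sda1', 'virtual': 'ami'},
                     {'device': '/dev/sda1', 'virtual': 'root'},
                     {'device': 'sda2', 'virtual': 'ephemeral0'},
                     {'device': 'sda3', 'virtual': 'swap'}],
        'block_device_mapping': [
            {'device_name': '/dev/sda1', 'snapshot_id': 'snap-12345678',
             'delete_on_termination': True},
            # ... remaining entries exactly as registered
        ]}
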
=== modified file 'nova/tests/test_api.py'
--- nova/tests/test_api.py 2011-06-24 12:01:51 +0000
+++ nova/tests/test_api.py 2011-07-08 09:39:31 +0000
@@ -92,7 +92,9 @@
         conv = ec2utils._try_convert
         self.assertEqual(conv('None'), None)
         self.assertEqual(conv('True'), True)
+        self.assertEqual(conv('true'), True)
         self.assertEqual(conv('False'), False)
+        self.assertEqual(conv('false'), False)
         self.assertEqual(conv('0'), 0)
         self.assertEqual(conv('42'), 42)
         self.assertEqual(conv('3.14'), 3.14)
@@ -107,6 +109,8 @@
     def test_ec2_id_to_id(self):
         self.assertEqual(ec2utils.ec2_id_to_id('i-0000001e'), 30)
         self.assertEqual(ec2utils.ec2_id_to_id('ami-1d'), 29)
+        self.assertEqual(ec2utils.ec2_id_to_id('snap-0000001c'), 28)
+        self.assertEqual(ec2utils.ec2_id_to_id('vol-0000001b'), 27)

     def test_bad_ec2_id(self):
         self.assertRaises(exception.InvalidEc2Id,
@@ -116,6 +120,72 @@
     def test_id_to_ec2_id(self):
         self.assertEqual(ec2utils.id_to_ec2_id(30), 'i-0000001e')
         self.assertEqual(ec2utils.id_to_ec2_id(29, 'ami-%08x'), 'ami-0000001d')
+        self.assertEqual(ec2utils.id_to_ec2_snap_id(28), 'snap-0000001c')
+        self.assertEqual(ec2utils.id_to_ec2_vol_id(27), 'vol-0000001b')
+
126 def test_dict_from_dotted_str(self):
127 in_str = [('BlockDeviceMapping.1.DeviceName', '/dev/sda1'),
128 ('BlockDeviceMapping.1.Ebs.SnapshotId', 'snap-0000001c'),
129 ('BlockDeviceMapping.1.Ebs.VolumeSize', '80'),
130 ('BlockDeviceMapping.1.Ebs.DeleteOnTermination', 'false'),
131 ('BlockDeviceMapping.2.DeviceName', '/dev/sdc'),
132 ('BlockDeviceMapping.2.VirtualName', 'ephemeral0')]
133 expected_dict = {
134 'block_device_mapping': {
135 '1': {'device_name': '/dev/sda1',
136 'ebs': {'snapshot_id': 'snap-0000001c',
137 'volume_size': 80,
138 'delete_on_termination': False}},
139 '2': {'device_name': '/dev/sdc',
140 'virtual_name': 'ephemeral0'}}}
141 out_dict = ec2utils.dict_from_dotted_str(in_str)
142
143 self.assertDictMatch(out_dict, expected_dict)
144
145 def test_properties_root_defice_name(self):
146 mappings = [{"device": "/dev/sda1", "virtual": "root"}]
147 properties0 = {'mappings': mappings}
148 properties1 = {'root_device_name': '/dev/sdb', 'mappings': mappings}
149
150 root_device_name = ec2utils.properties_root_device_name(properties0)
151 self.assertEqual(root_device_name, '/dev/sda1')
152
153 root_device_name = ec2utils.properties_root_device_name(properties1)
154 self.assertEqual(root_device_name, '/dev/sdb')
155
156 def test_mapping_prepend_dev(self):
157 mappings = [
158 {'virtual': 'ami',
159 'device': 'sda1'},
160 {'virtual': 'root',
161 'device': '/dev/sda1'},
162
163 {'virtual': 'swap',
164 'device': 'sdb1'},
165 {'virtual': 'swap',
166 'device': '/dev/sdb2'},
167
168 {'virtual': 'ephemeral0',
169 'device': 'sdc1'},
170 {'virtual': 'ephemeral1',
171 'device': '/dev/sdc1'}]
172 expected_result = [
173 {'virtual': 'ami',
174 'device': 'sda1'},
175 {'virtual': 'root',
176 'device': '/dev/sda1'},
177
178 {'virtual': 'swap',
179 'device': '/dev/sdb1'},
180 {'virtual': 'swap',
181 'device': '/dev/sdb2'},
182
183 {'virtual': 'ephemeral0',
184 'device': '/dev/sdc1'},
185 {'virtual': 'ephemeral1',
186 'device': '/dev/sdc1'}]
187 self.assertDictListMatch(ec2utils.mappings_prepend_dev(mappings),
188 expected_result)


 class ApiEc2TestCase(test.TestCase):

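Judging by the call sites rewritten in test_cloud.py below, the new id helpers are thin wrappers over the existing formatter, roughly:

    def id_to_ec2_snap_id(snapshot_id):
        # assumed shape, matching the 'snap-%08x' call sites replaced below
        return id_to_ec2_id(snapshot_id, 'snap-%08x')

    def id_to_ec2_vol_id(volume_id):
        # assumed shape, matching the 'vol-%08x' call sites replaced below
        return id_to_ec2_id(volume_id, 'vol-%08x')

dict_from_dotted_str is the piece that turns the EC2 query parameters (BlockDeviceMapping.1.Ebs.SnapshotId and friends) into the nested dict shown in test_dict_from_dotted_str, including the string-to-bool/int conversion exercised by _try_convert.
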
=== added file 'nova/tests/test_bdm.py'
--- nova/tests/test_bdm.py 1970-01-01 00:00:00 +0000
+++ nova/tests/test_bdm.py 2011-07-08 09:39:31 +0000
@@ -0,0 +1,233 @@
1# vim: tabstop=4 shiftwidth=4 softtabstop=4
2
3# Copyright 2011 Isaku Yamahata
4# All Rights Reserved.
5#
6# Licensed under the Apache License, Version 2.0 (the "License"); you may
7# not use this file except in compliance with the License. You may obtain
8# a copy of the License at
9#
10# http://www.apache.org/licenses/LICENSE-2.0
11#
12# Unless required by applicable law or agreed to in writing, software
13# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
14# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
15# License for the specific language governing permissions and limitations
16# under the License.
17
18"""
19Tests for Block Device Mapping Code.
20"""
21
22from nova.api.ec2 import cloud
23from nova import test
24
25
26class BlockDeviceMappingEc2CloudTestCase(test.TestCase):
27 """Test Case for Block Device Mapping"""
28
29 def setUp(self):
30 super(BlockDeviceMappingEc2CloudTestCase, self).setUp()
31
32 def tearDown(self):
33 super(BlockDeviceMappingEc2CloudTestCase, self).tearDown()
34
35 def _assertApply(self, action, bdm_list):
36 for bdm, expected_result in bdm_list:
37 self.assertDictMatch(action(bdm), expected_result)
38
39 def test_parse_block_device_mapping(self):
40 bdm_list = [
41 ({'device_name': '/dev/fake0',
42 'ebs': {'snapshot_id': 'snap-12345678',
43 'volume_size': 1}},
44 {'device_name': '/dev/fake0',
45 'snapshot_id': 0x12345678,
46 'volume_size': 1,
47 'delete_on_termination': True}),
48
49 ({'device_name': '/dev/fake1',
50 'ebs': {'snapshot_id': 'snap-23456789',
51 'delete_on_termination': False}},
52 {'device_name': '/dev/fake1',
53 'snapshot_id': 0x23456789,
54 'delete_on_termination': False}),
55
56 ({'device_name': '/dev/fake2',
57 'ebs': {'snapshot_id': 'vol-87654321',
58 'volume_size': 2}},
59 {'device_name': '/dev/fake2',
60 'volume_id': 0x87654321,
61 'volume_size': 2,
62 'delete_on_termination': True}),
63
64 ({'device_name': '/dev/fake3',
65 'ebs': {'snapshot_id': 'vol-98765432',
66 'delete_on_termination': False}},
67 {'device_name': '/dev/fake3',
68 'volume_id': 0x98765432,
69 'delete_on_termination': False}),
70
71 ({'device_name': '/dev/fake4',
72 'ebs': {'no_device': True}},
73 {'device_name': '/dev/fake4',
74 'no_device': True}),
75
76 ({'device_name': '/dev/fake5',
77 'virtual_name': 'ephemeral0'},
78 {'device_name': '/dev/fake5',
79 'virtual_name': 'ephemeral0'}),
80
81 ({'device_name': '/dev/fake6',
82 'virtual_name': 'swap'},
83 {'device_name': '/dev/fake6',
84 'virtual_name': 'swap'}),
85 ]
86 self._assertApply(cloud._parse_block_device_mapping, bdm_list)
87
88 def test_format_block_device_mapping(self):
89 bdm_list = [
90 ({'device_name': '/dev/fake0',
91 'snapshot_id': 0x12345678,
92 'volume_size': 1,
93 'delete_on_termination': True},
94 {'deviceName': '/dev/fake0',
95 'ebs': {'snapshotId': 'snap-12345678',
96 'volumeSize': 1,
97 'deleteOnTermination': True}}),
98
99 ({'device_name': '/dev/fake1',
100 'snapshot_id': 0x23456789},
101 {'deviceName': '/dev/fake1',
102 'ebs': {'snapshotId': 'snap-23456789'}}),
103
104 ({'device_name': '/dev/fake2',
105 'snapshot_id': 0x23456789,
106 'delete_on_termination': False},
107 {'deviceName': '/dev/fake2',
108 'ebs': {'snapshotId': 'snap-23456789',
109 'deleteOnTermination': False}}),
110
111 ({'device_name': '/dev/fake3',
112 'volume_id': 0x12345678,
113 'volume_size': 1,
114 'delete_on_termination': True},
115 {'deviceName': '/dev/fake3',
116 'ebs': {'snapshotId': 'vol-12345678',
117 'volumeSize': 1,
118 'deleteOnTermination': True}}),
119
120 ({'device_name': '/dev/fake4',
121 'volume_id': 0x23456789},
122 {'deviceName': '/dev/fake4',
123 'ebs': {'snapshotId': 'vol-23456789'}}),
124
125 ({'device_name': '/dev/fake5',
126 'volume_id': 0x23456789,
127 'delete_on_termination': False},
128 {'deviceName': '/dev/fake5',
129 'ebs': {'snapshotId': 'vol-23456789',
130 'deleteOnTermination': False}}),
131 ]
132 self._assertApply(cloud._format_block_device_mapping, bdm_list)
133
134 def test_format_mapping(self):
135 properties = {
136 'mappings': [
137 {'virtual': 'ami',
138 'device': 'sda1'},
139 {'virtual': 'root',
140 'device': '/dev/sda1'},
141
142 {'virtual': 'swap',
143 'device': 'sdb1'},
144 {'virtual': 'swap',
145 'device': 'sdb2'},
146 {'virtual': 'swap',
147 'device': 'sdb3'},
148 {'virtual': 'swap',
149 'device': 'sdb4'},
150
151 {'virtual': 'ephemeral0',
152 'device': 'sdc1'},
153 {'virtual': 'ephemeral1',
154 'device': 'sdc2'},
155 {'virtual': 'ephemeral2',
156 'device': 'sdc3'},
157 ],
158
159 'block_device_mapping': [
160 # root
161 {'device_name': '/dev/sda1',
162 'snapshot_id': 0x12345678,
163 'delete_on_termination': False},
164
165
166 # overwrite swap
167 {'device_name': '/dev/sdb2',
168 'snapshot_id': 0x23456789,
169 'delete_on_termination': False},
170 {'device_name': '/dev/sdb3',
171 'snapshot_id': 0x3456789A},
172 {'device_name': '/dev/sdb4',
173 'no_device': True},
174
175 # overwrite ephemeral
176 {'device_name': '/dev/sdc2',
177 'snapshot_id': 0x3456789A,
178 'delete_on_termination': False},
179 {'device_name': '/dev/sdc3',
180 'snapshot_id': 0x456789AB},
181 {'device_name': '/dev/sdc4',
182 'no_device': True},
183
184 # volume
185 {'device_name': '/dev/sdd1',
186 'snapshot_id': 0x87654321,
187 'delete_on_termination': False},
188 {'device_name': '/dev/sdd2',
189 'snapshot_id': 0x98765432},
190 {'device_name': '/dev/sdd3',
191 'snapshot_id': 0xA9875463},
192 {'device_name': '/dev/sdd4',
193 'no_device': True}]}
194
195 expected_result = {
196 'blockDeviceMapping': [
197 # root
198 {'deviceName': '/dev/sda1',
199 'ebs': {'snapshotId': 'snap-12345678',
200 'deleteOnTermination': False}},
201
202 # swap
203 {'deviceName': '/dev/sdb1',
204 'virtualName': 'swap'},
205 {'deviceName': '/dev/sdb2',
206 'ebs': {'snapshotId': 'snap-23456789',
207 'deleteOnTermination': False}},
208 {'deviceName': '/dev/sdb3',
209 'ebs': {'snapshotId': 'snap-3456789a'}},
210
211 # ephemeral
212 {'deviceName': '/dev/sdc1',
213 'virtualName': 'ephemeral0'},
214 {'deviceName': '/dev/sdc2',
215 'ebs': {'snapshotId': 'snap-3456789a',
216 'deleteOnTermination': False}},
217 {'deviceName': '/dev/sdc3',
218 'ebs': {'snapshotId': 'snap-456789ab'}},
219
220 # volume
221 {'deviceName': '/dev/sdd1',
222 'ebs': {'snapshotId': 'snap-87654321',
223 'deleteOnTermination': False}},
224 {'deviceName': '/dev/sdd2',
225 'ebs': {'snapshotId': 'snap-98765432'}},
226 {'deviceName': '/dev/sdd3',
227 'ebs': {'snapshotId': 'snap-a9875463'}}]}
228
229 result = {}
230 cloud._format_mappings(properties, result)
231 print result
232 self.assertEqual(result['blockDeviceMapping'].sort(),
233 expected_result['blockDeviceMapping'].sort())
0234
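test_parse_block_device_mapping fixes the EC2-to-internal translation: the ebs sub-dict is flattened into the mapping, snap-/vol- identifiers become integer snapshot_id/volume_id values, and delete_on_termination defaults to True for EBS-backed entries. A condensed sketch of that behaviour (illustrative only; the real helper is cloud._parse_block_device_mapping):

    from nova.api.ec2 import ec2utils

    def parse_bdm_sketch(bdm):
        # Illustrative re-statement of the behaviour the test above expects.
        bdm = dict(bdm)
        ebs = bdm.pop('ebs', None)
        if ebs:
            ec2_id = ebs.pop('snapshot_id', None)
            if ec2_id:
                internal_id = ec2utils.ec2_id_to_id(ec2_id)
                if ec2_id.startswith('snap-'):
                    bdm['snapshot_id'] = internal_id
                elif ec2_id.startswith('vol-'):
                    bdm['volume_id'] = internal_id
                ebs.setdefault('delete_on_termination', True)
            bdm.update(ebs)
        return bdm
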
=== modified file 'nova/tests/test_cloud.py'
--- nova/tests/test_cloud.py 2011-07-01 15:47:33 +0000
+++ nova/tests/test_cloud.py 2011-07-08 09:39:31 +0000
@@ -45,7 +45,8 @@
 class CloudTestCase(test.TestCase):
     def setUp(self):
         super(CloudTestCase, self).setUp()
-        self.flags(connection_type='fake')
+        self.flags(connection_type='fake',
+                   stub_network=True)

         self.conn = rpc.Connection.instance()

@@ -289,7 +290,7 @@
         vol2 = db.volume_create(self.context, {})
         result = self.cloud.describe_volumes(self.context)
         self.assertEqual(len(result['volumeSet']), 2)
-        volume_id = ec2utils.id_to_ec2_id(vol2['id'], 'vol-%08x')
+        volume_id = ec2utils.id_to_ec2_vol_id(vol2['id'])
         result = self.cloud.describe_volumes(self.context,
                                              volume_id=[volume_id])
         self.assertEqual(len(result['volumeSet']), 1)
@@ -305,7 +306,7 @@
         snap = db.snapshot_create(self.context, {'volume_id': vol['id'],
                                                  'volume_size': vol['size'],
                                                  'status': "available"})
-        snapshot_id = ec2utils.id_to_ec2_id(snap['id'], 'snap-%08x')
+        snapshot_id = ec2utils.id_to_ec2_snap_id(snap['id'])

         result = self.cloud.create_volume(self.context,
                                           snapshot_id=snapshot_id)
@@ -344,7 +345,7 @@
         snap2 = db.snapshot_create(self.context, {'volume_id': vol['id']})
         result = self.cloud.describe_snapshots(self.context)
         self.assertEqual(len(result['snapshotSet']), 2)
-        snapshot_id = ec2utils.id_to_ec2_id(snap2['id'], 'snap-%08x')
+        snapshot_id = ec2utils.id_to_ec2_snap_id(snap2['id'])
         result = self.cloud.describe_snapshots(self.context,
                                                snapshot_id=[snapshot_id])
         self.assertEqual(len(result['snapshotSet']), 1)
@@ -358,7 +359,7 @@
     def test_create_snapshot(self):
         """Makes sure create_snapshot works."""
         vol = db.volume_create(self.context, {'status': "available"})
-        volume_id = ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x')
+        volume_id = ec2utils.id_to_ec2_vol_id(vol['id'])

         result = self.cloud.create_snapshot(self.context,
                                             volume_id=volume_id)
@@ -375,7 +376,7 @@
         vol = db.volume_create(self.context, {'status': "available"})
         snap = db.snapshot_create(self.context, {'volume_id': vol['id'],
                                                  'status': "available"})
-        snapshot_id = ec2utils.id_to_ec2_id(snap['id'], 'snap-%08x')
+        snapshot_id = ec2utils.id_to_ec2_snap_id(snap['id'])

         result = self.cloud.delete_snapshot(self.context,
                                             snapshot_id=snapshot_id)
@@ -414,6 +415,185 @@
         db.service_destroy(self.context, comp1['id'])
         db.service_destroy(self.context, comp2['id'])

418 def _block_device_mapping_create(self, instance_id, mappings):
419 volumes = []
420 for bdm in mappings:
421 db.block_device_mapping_create(self.context, bdm)
422 if 'volume_id' in bdm:
423 values = {'id': bdm['volume_id']}
424 for bdm_key, vol_key in [('snapshot_id', 'snapshot_id'),
425 ('snapshot_size', 'volume_size'),
426 ('delete_on_termination',
427 'delete_on_termination')]:
428 if bdm_key in bdm:
429 values[vol_key] = bdm[bdm_key]
430 vol = db.volume_create(self.context, values)
431 db.volume_attached(self.context, vol['id'],
432 instance_id, bdm['device_name'])
433 volumes.append(vol)
434 return volumes
435
436 def _setUpBlockDeviceMapping(self):
437 inst1 = db.instance_create(self.context,
438 {'image_ref': 1,
439 'root_device_name': '/dev/sdb1'})
440 inst2 = db.instance_create(self.context,
441 {'image_ref': 2,
442 'root_device_name': '/dev/sdc1'})
443
444 instance_id = inst1['id']
445 mappings0 = [
446 {'instance_id': instance_id,
447 'device_name': '/dev/sdb1',
448 'snapshot_id': '1',
449 'volume_id': '2'},
450 {'instance_id': instance_id,
451 'device_name': '/dev/sdb2',
452 'volume_id': '3',
453 'volume_size': 1},
454 {'instance_id': instance_id,
455 'device_name': '/dev/sdb3',
456 'delete_on_termination': True,
457 'snapshot_id': '4',
458 'volume_id': '5'},
459 {'instance_id': instance_id,
460 'device_name': '/dev/sdb4',
461 'delete_on_termination': False,
462 'snapshot_id': '6',
463 'volume_id': '7'},
464 {'instance_id': instance_id,
465 'device_name': '/dev/sdb5',
466 'snapshot_id': '8',
467 'volume_id': '9',
468 'volume_size': 0},
469 {'instance_id': instance_id,
470 'device_name': '/dev/sdb6',
471 'snapshot_id': '10',
472 'volume_id': '11',
473 'volume_size': 1},
474 {'instance_id': instance_id,
475 'device_name': '/dev/sdb7',
476 'no_device': True},
477 {'instance_id': instance_id,
478 'device_name': '/dev/sdb8',
479 'virtual_name': 'swap'},
480 {'instance_id': instance_id,
481 'device_name': '/dev/sdb9',
482 'virtual_name': 'ephemeral3'}]
483
484 volumes = self._block_device_mapping_create(instance_id, mappings0)
485 return (inst1, inst2, volumes)
486
487 def _tearDownBlockDeviceMapping(self, inst1, inst2, volumes):
488 for vol in volumes:
489 db.volume_destroy(self.context, vol['id'])
490 for id in (inst1['id'], inst2['id']):
491 for bdm in db.block_device_mapping_get_all_by_instance(
492 self.context, id):
493 db.block_device_mapping_destroy(self.context, bdm['id'])
494 db.instance_destroy(self.context, inst2['id'])
495 db.instance_destroy(self.context, inst1['id'])
496
497 _expected_instance_bdm1 = {
498 'instanceId': 'i-00000001',
499 'rootDeviceName': '/dev/sdb1',
500 'rootDeviceType': 'ebs'}
501
502 _expected_block_device_mapping0 = [
503 {'deviceName': '/dev/sdb1',
504 'ebs': {'status': 'in-use',
505 'deleteOnTermination': False,
506 'volumeId': 2,
507 }},
508 {'deviceName': '/dev/sdb2',
509 'ebs': {'status': 'in-use',
510 'deleteOnTermination': False,
511 'volumeId': 3,
512 }},
513 {'deviceName': '/dev/sdb3',
514 'ebs': {'status': 'in-use',
515 'deleteOnTermination': True,
516 'volumeId': 5,
517 }},
518 {'deviceName': '/dev/sdb4',
519 'ebs': {'status': 'in-use',
520 'deleteOnTermination': False,
521 'volumeId': 7,
522 }},
523 {'deviceName': '/dev/sdb5',
524 'ebs': {'status': 'in-use',
525 'deleteOnTermination': False,
526 'volumeId': 9,
527 }},
528 {'deviceName': '/dev/sdb6',
529 'ebs': {'status': 'in-use',
530 'deleteOnTermination': False,
531 'volumeId': 11, }}]
532 # NOTE(yamahata): swap/ephemeral device case isn't supported yet.
533
534 _expected_instance_bdm2 = {
535 'instanceId': 'i-00000002',
536 'rootDeviceName': '/dev/sdc1',
537 'rootDeviceType': 'instance-store'}
538
539 def test_format_instance_bdm(self):
540 (inst1, inst2, volumes) = self._setUpBlockDeviceMapping()
541
542 result = {}
543 self.cloud._format_instance_bdm(self.context, inst1['id'], '/dev/sdb1',
544 result)
545 self.assertSubDictMatch(
546 {'rootDeviceType': self._expected_instance_bdm1['rootDeviceType']},
547 result)
548 self._assertEqualBlockDeviceMapping(
549 self._expected_block_device_mapping0, result['blockDeviceMapping'])
550
551 result = {}
552 self.cloud._format_instance_bdm(self.context, inst2['id'], '/dev/sdc1',
553 result)
554 self.assertSubDictMatch(
555 {'rootDeviceType': self._expected_instance_bdm2['rootDeviceType']},
556 result)
557
558 self._tearDownBlockDeviceMapping(inst1, inst2, volumes)
559
560 def _assertInstance(self, instance_id):
561 ec2_instance_id = ec2utils.id_to_ec2_id(instance_id)
562 result = self.cloud.describe_instances(self.context,
563 instance_id=[ec2_instance_id])
564 result = result['reservationSet'][0]
565 self.assertEqual(len(result['instancesSet']), 1)
566 result = result['instancesSet'][0]
567 self.assertEqual(result['instanceId'], ec2_instance_id)
568 return result
569
570 def _assertEqualBlockDeviceMapping(self, expected, result):
571 self.assertEqual(len(expected), len(result))
572 for x in expected:
573 found = False
574 for y in result:
575 if x['deviceName'] == y['deviceName']:
576 self.assertSubDictMatch(x, y)
577 found = True
578 break
579 self.assertTrue(found)
580
581 def test_describe_instances_bdm(self):
582 """Make sure describe_instances works with root_device_name and
583 block device mappings
584 """
585 (inst1, inst2, volumes) = self._setUpBlockDeviceMapping()
586
587 result = self._assertInstance(inst1['id'])
588 self.assertSubDictMatch(self._expected_instance_bdm1, result)
589 self._assertEqualBlockDeviceMapping(
590 self._expected_block_device_mapping0, result['blockDeviceMapping'])
591
592 result = self._assertInstance(inst2['id'])
593 self.assertSubDictMatch(self._expected_instance_bdm2, result)
594
595 self._tearDownBlockDeviceMapping(inst1, inst2, volumes)
596
     def test_describe_images(self):
         describe_images = self.cloud.describe_images

@@ -443,6 +623,161 @@
         self.assertRaises(exception.ImageNotFound, describe_images,
                           self.context, ['ami-fake'])

626 def assertDictListUnorderedMatch(self, L1, L2, key):
627 self.assertEqual(len(L1), len(L2))
628 for d1 in L1:
629 self.assertTrue(key in d1)
630 for d2 in L2:
631 self.assertTrue(key in d2)
632 if d1[key] == d2[key]:
633 self.assertDictMatch(d1, d2)
634
635 def _setUpImageSet(self, create_volumes_and_snapshots=False):
636 mappings1 = [
637 {'device': '/dev/sda1', 'virtual': 'root'},
638
639 {'device': 'sdb0', 'virtual': 'ephemeral0'},
640 {'device': 'sdb1', 'virtual': 'ephemeral1'},
641 {'device': 'sdb2', 'virtual': 'ephemeral2'},
642 {'device': 'sdb3', 'virtual': 'ephemeral3'},
643 {'device': 'sdb4', 'virtual': 'ephemeral4'},
644
645 {'device': 'sdc0', 'virtual': 'swap'},
646 {'device': 'sdc1', 'virtual': 'swap'},
647 {'device': 'sdc2', 'virtual': 'swap'},
648 {'device': 'sdc3', 'virtual': 'swap'},
649 {'device': 'sdc4', 'virtual': 'swap'}]
650 block_device_mapping1 = [
651 {'device_name': '/dev/sdb1', 'snapshot_id': 01234567},
652 {'device_name': '/dev/sdb2', 'volume_id': 01234567},
653 {'device_name': '/dev/sdb3', 'virtual_name': 'ephemeral5'},
654 {'device_name': '/dev/sdb4', 'no_device': True},
655
656 {'device_name': '/dev/sdc1', 'snapshot_id': 12345678},
657 {'device_name': '/dev/sdc2', 'volume_id': 12345678},
658 {'device_name': '/dev/sdc3', 'virtual_name': 'ephemeral6'},
659 {'device_name': '/dev/sdc4', 'no_device': True}]
660 image1 = {
661 'id': 1,
662 'properties': {
663 'kernel_id': 1,
664 'type': 'machine',
665 'image_state': 'available',
666 'mappings': mappings1,
667 'block_device_mapping': block_device_mapping1,
668 }
669 }
670
671 mappings2 = [{'device': '/dev/sda1', 'virtual': 'root'}]
672 block_device_mapping2 = [{'device_name': '/dev/sdb1',
673 'snapshot_id': 01234567}]
674 image2 = {
675 'id': 2,
676 'properties': {
677 'kernel_id': 2,
678 'type': 'machine',
679 'root_device_name': '/dev/sdb1',
680 'mappings': mappings2,
681 'block_device_mapping': block_device_mapping2}}
682
683 def fake_show(meh, context, image_id):
684 for i in [image1, image2]:
685 if i['id'] == image_id:
686 return i
687 raise exception.ImageNotFound(image_id=image_id)
688
689 def fake_detail(meh, context):
690 return [image1, image2]
691
692 self.stubs.Set(fake._FakeImageService, 'show', fake_show)
693 self.stubs.Set(fake._FakeImageService, 'detail', fake_detail)
694
695 volumes = []
696 snapshots = []
697 if create_volumes_and_snapshots:
698 for bdm in block_device_mapping1:
699 if 'volume_id' in bdm:
700 vol = self._volume_create(bdm['volume_id'])
701 volumes.append(vol['id'])
702 if 'snapshot_id' in bdm:
703 snap = db.snapshot_create(self.context,
704 {'id': bdm['snapshot_id'],
705 'volume_id': 76543210,
706 'status': "available",
707 'volume_size': 1})
708 snapshots.append(snap['id'])
709 return (volumes, snapshots)
710
711 def _assertImageSet(self, result, root_device_type, root_device_name):
712 self.assertEqual(1, len(result['imagesSet']))
713 result = result['imagesSet'][0]
714 self.assertTrue('rootDeviceType' in result)
715 self.assertEqual(result['rootDeviceType'], root_device_type)
716 self.assertTrue('rootDeviceName' in result)
717 self.assertEqual(result['rootDeviceName'], root_device_name)
718 self.assertTrue('blockDeviceMapping' in result)
719
720 return result
721
722 _expected_root_device_name1 = '/dev/sda1'
723 # NOTE(yamahata): noDevice doesn't make sense when returning mapping
724 # It makes sense only when user overriding existing
725 # mapping.
726 _expected_bdms1 = [
727 {'deviceName': '/dev/sdb0', 'virtualName': 'ephemeral0'},
728 {'deviceName': '/dev/sdb1', 'ebs': {'snapshotId':
729 'snap-00053977'}},
730 {'deviceName': '/dev/sdb2', 'ebs': {'snapshotId':
731 'vol-00053977'}},
732 {'deviceName': '/dev/sdb3', 'virtualName': 'ephemeral5'},
733 # {'deviceName': '/dev/sdb4', 'noDevice': True},
734
735 {'deviceName': '/dev/sdc0', 'virtualName': 'swap'},
736 {'deviceName': '/dev/sdc1', 'ebs': {'snapshotId':
737 'snap-00bc614e'}},
738 {'deviceName': '/dev/sdc2', 'ebs': {'snapshotId':
739 'vol-00bc614e'}},
740 {'deviceName': '/dev/sdc3', 'virtualName': 'ephemeral6'},
741 # {'deviceName': '/dev/sdc4', 'noDevice': True}
742 ]
743
744 _expected_root_device_name2 = '/dev/sdb1'
745 _expected_bdms2 = [{'deviceName': '/dev/sdb1',
746 'ebs': {'snapshotId': 'snap-00053977'}}]
747
748 # NOTE(yamahata):
749 # InstanceBlockDeviceMappingItemType
750 # rootDeviceType
751 # rootDeviceName
752 # blockDeviceMapping
753 # deviceName
754 # virtualName
755 # ebs
756 # snapshotId
757 # volumeSize
758 # deleteOnTermination
759 # noDevice
760 def test_describe_image_mapping(self):
761 """test for rootDeviceName and blockDeiceMapping"""
762 describe_images = self.cloud.describe_images
763 self._setUpImageSet()
764
765 result = describe_images(self.context, ['ami-00000001'])
766 result = self._assertImageSet(result, 'instance-store',
767 self._expected_root_device_name1)
768
769 self.assertDictListUnorderedMatch(result['blockDeviceMapping'],
770 self._expected_bdms1, 'deviceName')
771
772 result = describe_images(self.context, ['ami-00000002'])
773 result = self._assertImageSet(result, 'ebs',
774 self._expected_root_device_name2)
775
776 self.assertDictListUnorderedMatch(result['blockDeviceMapping'],
777 self._expected_bdms2, 'deviceName')
778
779 self.stubs.UnsetAll()
780
     def test_describe_image_attribute(self):
         describe_image_attribute = self.cloud.describe_image_attribute

@@ -456,6 +791,32 @@
                                           'launchPermission')
         self.assertEqual([{'group': 'all'}], result['launchPermission'])

794 def test_describe_image_attribute_root_device_name(self):
795 describe_image_attribute = self.cloud.describe_image_attribute
796 self._setUpImageSet()
797
798 result = describe_image_attribute(self.context, 'ami-00000001',
799 'rootDeviceName')
800 self.assertEqual(result['rootDeviceName'],
801 self._expected_root_device_name1)
802 result = describe_image_attribute(self.context, 'ami-00000002',
803 'rootDeviceName')
804 self.assertEqual(result['rootDeviceName'],
805 self._expected_root_device_name2)
806
807 def test_describe_image_attribute_block_device_mapping(self):
808 describe_image_attribute = self.cloud.describe_image_attribute
809 self._setUpImageSet()
810
811 result = describe_image_attribute(self.context, 'ami-00000001',
812 'blockDeviceMapping')
813 self.assertDictListUnorderedMatch(result['blockDeviceMapping'],
814 self._expected_bdms1, 'deviceName')
815 result = describe_image_attribute(self.context, 'ami-00000002',
816 'blockDeviceMapping')
817 self.assertDictListUnorderedMatch(result['blockDeviceMapping'],
818 self._expected_bdms2, 'deviceName')
819
     def test_modify_image_attribute(self):
         modify_image_attribute = self.cloud.modify_image_attribute

@@ -683,7 +1044,7 @@
     def test_update_of_volume_display_fields(self):
         vol = db.volume_create(self.context, {})
         self.cloud.update_volume(self.context,
-                                 ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x'),
+                                 ec2utils.id_to_ec2_vol_id(vol['id']),
                                  display_name='c00l v0lum3')
         vol = db.volume_get(self.context, vol['id'])
         self.assertEqual('c00l v0lum3', vol['display_name'])
@@ -692,7 +1053,7 @@
     def test_update_of_volume_wont_update_private_fields(self):
         vol = db.volume_create(self.context, {})
         self.cloud.update_volume(self.context,
-                                 ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x'),
+                                 ec2utils.id_to_ec2_vol_id(vol['id']),
                                  mountpoint='/not/here')
         vol = db.volume_get(self.context, vol['id'])
         self.assertEqual(None, vol['mountpoint'])
@@ -770,11 +1131,13 @@

         self._restart_compute_service()

-    def _volume_create(self):
+    def _volume_create(self, volume_id=None):
         kwargs = {'status': 'available',
                   'host': self.volume.host,
                   'size': 1,
                   'attach_status': 'detached', }
+        if volume_id:
+            kwargs['id'] = volume_id
         return db.volume_create(self.context, kwargs)

     def _assert_volume_attached(self, vol, instance_id, mountpoint):
@@ -803,10 +1166,10 @@
                   'max_count': 1,
                   'block_device_mapping': [{'device_name': '/dev/vdb',
                                             'volume_id': vol1['id'],
-                                            'delete_on_termination': False, },
+                                            'delete_on_termination': False},
                                            {'device_name': '/dev/vdc',
                                             'volume_id': vol2['id'],
-                                            'delete_on_termination': True, },
+                                            'delete_on_termination': True},
                                            ]}
         ec2_instance_id = self._run_instance_wait(**kwargs)
         instance_id = ec2utils.ec2_id_to_id(ec2_instance_id)
@@ -938,7 +1301,7 @@
     def test_run_with_snapshot(self):
         """Makes sure run/stop/start instance with snapshot works."""
         vol = self._volume_create()
-        ec2_volume_id = ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x')
+        ec2_volume_id = ec2utils.id_to_ec2_vol_id(vol['id'])

         ec2_snapshot1_id = self._create_snapshot(ec2_volume_id)
         snapshot1_id = ec2utils.ec2_id_to_id(ec2_snapshot1_id)
@@ -997,3 +1360,33 @@
         self.cloud.delete_snapshot(self.context, snapshot_id)
         greenthread.sleep(0.3)
         db.volume_destroy(self.context, vol['id'])
1363
1364 def test_create_image(self):
1365 """Make sure that CreateImage works"""
1366 # enforce periodic tasks run in short time to avoid wait for 60s.
1367 self._restart_compute_service(periodic_interval=0.3)
1368
1369 (volumes, snapshots) = self._setUpImageSet(
1370 create_volumes_and_snapshots=True)
1371
1372 kwargs = {'image_id': 'ami-1',
1373 'instance_type': FLAGS.default_instance_type,
1374 'max_count': 1}
1375 ec2_instance_id = self._run_instance_wait(**kwargs)
1376
1377 # TODO(yamahata): s3._s3_create() can't be tested easily by unit test
1378 # as there is no unit test for s3.create()
1379 ## result = self.cloud.create_image(self.context, ec2_instance_id,
1380 ## no_reboot=True)
1381 ## ec2_image_id = result['imageId']
1382 ## created_image = self.cloud.describe_images(self.context,
1383 ## [ec2_image_id])
1384
1385 self.cloud.terminate_instances(self.context, [ec2_instance_id])
1386 for vol in volumes:
1387 db.volume_destroy(self.context, vol)
1388 for snap in snapshots:
1389 db.snapshot_destroy(self.context, snap)
1390 # TODO(yamahata): clean up snapshot created by CreateImage.
1391
1392 self._restart_compute_service()
10001393
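For context, the run/stop/start cases above all funnel through the same RunInstances call shape; booting from a volume is a matter of handing a block_device_mapping to the cloud controller, e.g. (vol1 being an existing volume created via _volume_create, image id illustrative):

    kwargs = {'image_id': 'ami-1',
              'instance_type': FLAGS.default_instance_type,
              'max_count': 1,
              'block_device_mapping': [{'device_name': '/dev/vdb',
                                        'volume_id': vol1['id'],
                                        'delete_on_termination': False}]}
    self.cloud.run_instances(self.context, **kwargs)
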
=== modified file 'nova/tests/test_compute.py'
--- nova/tests/test_compute.py 2011-06-30 19:20:59 +0000
+++ nova/tests/test_compute.py 2011-07-08 09:39:31 +0000
@@ -810,3 +810,114 @@
         LOG.info(_("After force-killing instances: %s"), instances)
         self.assertEqual(len(instances), 1)
         self.assertEqual(power_state.SHUTOFF, instances[0]['state'])
813
814 @staticmethod
815 def _parse_db_block_device_mapping(bdm_ref):
816 attr_list = ('delete_on_termination', 'device_name', 'no_device',
817 'virtual_name', 'volume_id', 'volume_size', 'snapshot_id')
818 bdm = {}
819 for attr in attr_list:
820 val = bdm_ref.get(attr, None)
821 if val:
822 bdm[attr] = val
823
824 return bdm
825
826 def test_update_block_device_mapping(self):
827 instance_id = self._create_instance()
828 mappings = [
829 {'virtual': 'ami', 'device': 'sda1'},
830 {'virtual': 'root', 'device': '/dev/sda1'},
831
832 {'virtual': 'swap', 'device': 'sdb1'},
833 {'virtual': 'swap', 'device': 'sdb2'},
834 {'virtual': 'swap', 'device': 'sdb3'},
835 {'virtual': 'swap', 'device': 'sdb4'},
836
837 {'virtual': 'ephemeral0', 'device': 'sdc1'},
838 {'virtual': 'ephemeral1', 'device': 'sdc2'},
839 {'virtual': 'ephemeral2', 'device': 'sdc3'}]
840 block_device_mapping = [
841 # root
842 {'device_name': '/dev/sda1',
843 'snapshot_id': 0x12345678,
844 'delete_on_termination': False},
845
846
847 # overwrite swap
848 {'device_name': '/dev/sdb2',
849 'snapshot_id': 0x23456789,
850 'delete_on_termination': False},
851 {'device_name': '/dev/sdb3',
852 'snapshot_id': 0x3456789A},
853 {'device_name': '/dev/sdb4',
854 'no_device': True},
855
856 # overwrite ephemeral
857 {'device_name': '/dev/sdc2',
858 'snapshot_id': 0x456789AB,
859 'delete_on_termination': False},
860 {'device_name': '/dev/sdc3',
861 'snapshot_id': 0x56789ABC},
862 {'device_name': '/dev/sdc4',
863 'no_device': True},
864
865 # volume
866 {'device_name': '/dev/sdd1',
867 'snapshot_id': 0x87654321,
868 'delete_on_termination': False},
869 {'device_name': '/dev/sdd2',
870 'snapshot_id': 0x98765432},
871 {'device_name': '/dev/sdd3',
872 'snapshot_id': 0xA9875463},
873 {'device_name': '/dev/sdd4',
874 'no_device': True}]
875
876 self.compute_api._update_image_block_device_mapping(
877 self.context, instance_id, mappings)
878
879 bdms = [self._parse_db_block_device_mapping(bdm_ref)
880 for bdm_ref in db.block_device_mapping_get_all_by_instance(
881 self.context, instance_id)]
882 expected_result = [
883 {'virtual_name': 'swap', 'device_name': '/dev/sdb1'},
884 {'virtual_name': 'swap', 'device_name': '/dev/sdb2'},
885 {'virtual_name': 'swap', 'device_name': '/dev/sdb3'},
886 {'virtual_name': 'swap', 'device_name': '/dev/sdb4'},
887 {'virtual_name': 'ephemeral0', 'device_name': '/dev/sdc1'},
888 {'virtual_name': 'ephemeral1', 'device_name': '/dev/sdc2'},
889 {'virtual_name': 'ephemeral2', 'device_name': '/dev/sdc3'}]
890 bdms.sort()
891 expected_result.sort()
892 self.assertDictListMatch(bdms, expected_result)
893
894 self.compute_api._update_block_device_mapping(
895 self.context, instance_id, block_device_mapping)
896 bdms = [self._parse_db_block_device_mapping(bdm_ref)
897 for bdm_ref in db.block_device_mapping_get_all_by_instance(
898 self.context, instance_id)]
899 expected_result = [
900 {'snapshot_id': 0x12345678, 'device_name': '/dev/sda1'},
901
902 {'virtual_name': 'swap', 'device_name': '/dev/sdb1'},
903 {'snapshot_id': 0x23456789, 'device_name': '/dev/sdb2'},
904 {'snapshot_id': 0x3456789A, 'device_name': '/dev/sdb3'},
905 {'no_device': True, 'device_name': '/dev/sdb4'},
906
907 {'virtual_name': 'ephemeral0', 'device_name': '/dev/sdc1'},
908 {'snapshot_id': 0x456789AB, 'device_name': '/dev/sdc2'},
909 {'snapshot_id': 0x56789ABC, 'device_name': '/dev/sdc3'},
910 {'no_device': True, 'device_name': '/dev/sdc4'},
911
912 {'snapshot_id': 0x87654321, 'device_name': '/dev/sdd1'},
913 {'snapshot_id': 0x98765432, 'device_name': '/dev/sdd2'},
914 {'snapshot_id': 0xA9875463, 'device_name': '/dev/sdd3'},
915 {'no_device': True, 'device_name': '/dev/sdd4'}]
916 bdms.sort()
917 expected_result.sort()
918 self.assertDictListMatch(bdms, expected_result)
919
920 for bdm in db.block_device_mapping_get_all_by_instance(
921 self.context, instance_id):
922 db.block_device_mapping_destroy(self.context, bdm['id'])
923 self.compute.terminate_instance(self.context, instance_id)
813924
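The ordering in this test is the point: _update_image_block_device_mapping first seeds swap/ephemeral rows from the image's mappings, then _update_block_device_mapping overrides any entry that reuses a device_name (with no_device suppressing that device). Conceptually the override is an upsert keyed on device_name; a rough sketch of the idea, not the DB-backed implementation:

    def apply_overrides(image_bdms, run_bdms):
        # Later entries win per device_name, mirroring the expected_result above.
        merged = dict((bdm['device_name'], dict(bdm)) for bdm in image_bdms)
        for bdm in run_bdms:
            merged[bdm['device_name']] = dict(bdm)  # replace, don't merge fields
        return list(merged.values())
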
=== modified file 'nova/tests/test_volume.py'
--- nova/tests/test_volume.py 2011-06-06 17:20:08 +0000
+++ nova/tests/test_volume.py 2011-07-08 09:39:31 +0000
@@ -27,8 +27,10 @@
 from nova import db
 from nova import flags
 from nova import log as logging
+from nova import rpc
 from nova import test
 from nova import utils
+from nova import volume

 FLAGS = flags.FLAGS
 LOG = logging.getLogger('nova.tests.volume')
@@ -43,6 +45,11 @@
         self.flags(connection_type='fake')
         self.volume = utils.import_object(FLAGS.volume_manager)
         self.context = context.get_admin_context()
+        self.instance_id = db.instance_create(self.context, {})['id']
+
+    def tearDown(self):
+        db.instance_destroy(self.context, self.instance_id)
+        super(VolumeTestCase, self).tearDown()

     @staticmethod
     def _create_volume(size='0', snapshot_id=None):
@@ -223,6 +230,30 @@
                                     snapshot_id)
         self.volume.delete_volume(self.context, volume_id)

233 def test_create_snapshot_force(self):
234 """Test snapshot in use can be created forcibly."""
235
236 def fake_cast(ctxt, topic, msg):
237 pass
238 self.stubs.Set(rpc, 'cast', fake_cast)
239
240 volume_id = self._create_volume()
241 self.volume.create_volume(self.context, volume_id)
242 db.volume_attached(self.context, volume_id, self.instance_id,
243 '/dev/sda1')
244
245 volume_api = volume.api.API()
246 self.assertRaises(exception.ApiError,
247 volume_api.create_snapshot,
248 self.context, volume_id,
249 'fake_name', 'fake_description')
250 snapshot_ref = volume_api.create_snapshot_force(self.context,
251 volume_id,
252 'fake_name',
253 'fake_description')
254 db.snapshot_destroy(self.context, snapshot_ref['id'])
255 db.volume_destroy(self.context, volume_id)
256

 class DriverTestCase(test.TestCase):
     """Base Test class for Drivers."""

=== modified file 'nova/volume/api.py'
--- nova/volume/api.py 2011-06-24 12:01:51 +0000
+++ nova/volume/api.py 2011-07-08 09:39:31 +0000
@@ -140,9 +140,10 @@
                  {"method": "remove_volume",
                   "args": {'volume_id': volume_id}})

-    def create_snapshot(self, context, volume_id, name, description):
+    def _create_snapshot(self, context, volume_id, name, description,
+                         force=False):
         volume = self.get(context, volume_id)
-        if volume['status'] != "available":
+        if ((not force) and (volume['status'] != "available")):
             raise exception.ApiError(_("Volume status must be available"))

         options = {
@@ -164,6 +165,14 @@
                            "snapshot_id": snapshot['id']}})
         return snapshot

+    def create_snapshot(self, context, volume_id, name, description):
+        return self._create_snapshot(context, volume_id, name, description,
+                                     False)
+
+    def create_snapshot_force(self, context, volume_id, name, description):
+        return self._create_snapshot(context, volume_id, name, description,
+                                     True)
+
     def delete_snapshot(self, context, snapshot_id):
         snapshot = self.get_snapshot(context, snapshot_id)
         if snapshot['status'] != "available":
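create_snapshot_force bypasses only the 'available' status check, presumably so the new CreateImage path can snapshot volumes that are still attached to the instance being imaged; everything else is shared with create_snapshot. Usage mirrors test_create_snapshot_force in test_volume.py (name/description strings illustrative):

    volume_api = volume.api.API()
    # create_snapshot() raises ApiError while the volume is attached ('in-use')
    snapshot = volume_api.create_snapshot_force(context, volume_id,
                                                'fake_name', 'fake_description')
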