Merge lp:~yamahata/nova/boot-from-volume-1 into lp:~hudson-openstack/nova/trunk

Proposed by Vish Ishaya
Status: Merged
Approved by: Vish Ishaya
Approved revision: 1076
Merged at revision: 1283
Proposed branch: lp:~yamahata/nova/boot-from-volume-1
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 2188 lines (+1545/-98)
19 files modified
nova/api/ec2/__init__.py (+5/-2)
nova/api/ec2/cloud.py (+295/-34)
nova/api/ec2/ec2utils.py (+40/-0)
nova/compute/api.py (+71/-25)
nova/compute/manager.py (+11/-9)
nova/db/api.py (+7/-1)
nova/db/sqlalchemy/api.py (+17/-0)
nova/db/sqlalchemy/migrate_repo/versions/032_add_root_device_name.py (+47/-0)
nova/db/sqlalchemy/models.py (+2/-0)
nova/image/fake.py (+10/-1)
nova/image/s3.py (+41/-12)
nova/test.py (+16/-0)
nova/tests/image/test_s3.py (+122/-0)
nova/tests/test_api.py (+70/-0)
nova/tests/test_bdm.py (+233/-0)
nova/tests/test_cloud.py (+405/-12)
nova/tests/test_compute.py (+111/-0)
nova/tests/test_volume.py (+31/-0)
nova/volume/api.py (+11/-2)
To merge this branch: bzr merge lp:~yamahata/nova/boot-from-volume-1
Reviewer Review Type Date Requested Status
Vish Ishaya (community) Approve
Brian Waldon (community) Abstain
Sandy Walsh (community) Approve
Review via email: mp+65850@code.launchpad.net

This proposal supersedes a proposal from 2011-06-16.

Description of the change

This change adds the basic boot-from-volume support to the image service.

Specifically, the following APIs now support --block-device-mapping with a volume/snapshot and a root device name:

- register image

- describe image

- create image (newly supported)

Swap and ephemeral devices aren't supported yet; they will be added in the next step.
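The --block-device-mapping handling boils down to flattening each BlockDeviceMapping.&lt;N&gt;.Ebs.* parameter into one dict. A standalone sketch mirroring _parse_block_device_mapping from the diff below; ec2_id_to_id is a simplified stand-in for the real helper in nova/api/ec2/ec2utils.py:

```python
def ec2_id_to_id(ec2_id):
    """Convert 'snap-0000000c' / 'vol-0000000c' to the integer DB id."""
    return int(ec2_id.split('-', 1)[1], 16)


def parse_block_device_mapping(bdm):
    """Remove the nested 'ebs' dict and allow a volume id in SnapshotId."""
    ebs = bdm.pop('ebs', None)
    if ebs:
        ec2_id = ebs.pop('snapshot_id', None)
        if ec2_id:
            internal_id = ec2_id_to_id(ec2_id)
            # SnapshotId is "abused": it may carry either kind of id.
            if ec2_id.startswith('snap-'):
                bdm['snapshot_id'] = internal_id
            elif ec2_id.startswith('vol-'):
                bdm['volume_id'] = internal_id
        ebs.setdefault('delete_on_termination', True)
        bdm.update(ebs)
    return bdm


bdm = {'device_name': '/dev/sdb1',
       'ebs': {'snapshot_id': 'snap-0000000c', 'volume_size': 10}}
print(parse_block_device_mapping(bdm))
# {'device_name': '/dev/sdb1', 'snapshot_id': 12, 'volume_size': 10,
#  'delete_on_termination': True}
```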

Next step

- describe instance attribute with euca command

- get metadata for bundle volume

- swap/ephemeral device support

Revision history for this message
Isaku Yamahata (yamahata) wrote : Posted in a previous version of this proposal

The unit tests are now included (with some fixes), so I think it's ready for merge.

The next step is to support the following.
- describe instance attribute
- get metadata for bundle volume
- swap/ephemeral device support.

On Wed, Jun 22, 2011 at 05:01:54AM -0000, Isaku Yamahata wrote:
> Isaku Yamahata has proposed merging lp:~yamahata/nova/boot-from-volume-1 into lp:nova with lp:~yamahata/nova/boot-from-volume-0 as a prerequisite.
>
> Requested reviews:
> Nova Core (nova-core)
>
> For more details, see:
> https://code.launchpad.net/~yamahata/nova/boot-from-volume-1/+merge/64825
>
> This is early review request before going further.
> If this direction is okay, I'll add unit tests and then move on to the next step.
>
> This change adds the basic boot-from-volume support to the image service.
> Specifically following API will supports --block-device-mapping with volume/snapshot and root device name
> - register image
> - describe image
> - create image(newly support)
>
> At the moment swap and ephemeral aren't supported. Are these wanted?
>
> NOTE
> - bundle volume is broken
>
> TODO
> - unit tests
>
> Next step
> - describe instance attribute with euca command
> - get metadata for bundle volume
> - swap/ephemeral device support(Is this wanted? or unnecessary?)
> --
> https://code.launchpad.net/~yamahata/nova/boot-from-volume-1/+merge/64825
> You are the owner of lp:~yamahata/nova/boot-from-volume-1.

> === modified file 'nova/api/ec2/__init__.py'
> --- nova/api/ec2/__init__.py 2011-06-15 16:46:24 +0000
> +++ nova/api/ec2/__init__.py 2011-06-22 04:55:48 +0000
> @@ -262,6 +262,8 @@
> 'TerminateInstances': ['projectmanager', 'sysadmin'],
> 'RebootInstances': ['projectmanager', 'sysadmin'],
> 'UpdateInstance': ['projectmanager', 'sysadmin'],
> + 'StartInstances': ['projectmanager', 'sysadmin'],
> + 'StopInstances': ['projectmanager', 'sysadmin'],
> 'DeleteVolume': ['projectmanager', 'sysadmin'],
> 'DescribeImages': ['all'],
> 'DeregisterImage': ['projectmanager', 'sysadmin'],
> @@ -269,6 +271,7 @@
> 'DescribeImageAttribute': ['all'],
> 'ModifyImageAttribute': ['projectmanager', 'sysadmin'],
> 'UpdateImage': ['projectmanager', 'sysadmin'],
> + 'CreateImage': ['projectmanager', 'sysadmin'],
> },
> 'AdminController': {
> # All actions have the same permission: ['none'] (the default)
> @@ -325,13 +328,13 @@
> except exception.VolumeNotFound as ex:
> LOG.info(_('VolumeNotFound raised: %s'), unicode(ex),
> context=context)
> - ec2_id = ec2utils.id_to_ec2_id(ex.volume_id, 'vol-%08x')
> + ec2_id = ec2utils.id_to_ec2_vol_id(ex.volume_id)
> message = _('Volume %s not found') % ec2_id
> return self._error(req, context, type(ex).__name__, message)
> except exception.SnapshotNotFound as ex:
> LOG.info(_('SnapshotNotFound raised: %s'), unicode(ex),
> context=context)
> - ...

Revision history for this message
Sandy Walsh (sandy-walsh) wrote : Posted in a previous version of this proposal

Impressive branch. I don't have a setup for testing it in depth, so I can't verify correctness.

I would like to see mocked-out unit tests for each new method/function. Many of the underscore-prefixed internal methods have no tests at all.

Minor things:
+379/380 ... commented out?
+405 ... potential black hole?

review: Needs Fixing
Revision history for this message
Isaku Yamahata (yamahata) wrote : Posted in a previous version of this proposal

Thank you for the review.

On Wed, Jun 22, 2011 at 06:04:33PM -0000, Sandy Walsh wrote:

> I would like to see mocked out unit tests for each new method/function. Many of the _ internal methods have no tests at all.

I've now added more unit tests for those methods/functions;
I think they cover what you meant:
  - nova/api/ec2/cloud.py
  _parse_block_device_mapping(), _format_block_device_mapping(),
  _format_mappings(), _format_instance_bdm()

  - nova/compute/api.py
  _update_image_block_device_mapping(), _update_block_device_mapping()

  - nova/volume/api.py
  create_snapshot(), create_snapshot_force()
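Several of these helpers lean on ec2utils.mappings_prepend_dev, also added in this branch. A standalone sketch of its behavior, assuming the mappings are plain dicts as in the image metadata:

```python
def mappings_prepend_dev(mappings):
    """Prepend '/dev/' to the 'device' of swap/ephemeral entries
    that aren't already absolute paths."""
    for m in mappings:
        virtual = m['virtual']
        if ((virtual == 'swap' or virtual.startswith('ephemeral')) and
                not m['device'].startswith('/')):
            m['device'] = '/dev/' + m['device']
    return mappings


mappings = [{'virtual': 'ami', 'device': 'sda1'},
            {'virtual': 'swap', 'device': 'sda3'},
            {'virtual': 'ephemeral0', 'device': '/dev/sdb1'}]
mappings_prepend_dev(mappings)
# 'ami' is left alone, 'swap' becomes '/dev/sda3', and the
# already-absolute ephemeral0 entry stays '/dev/sdb1'.
```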

> Minor things:
> +379/380 ... commented out?

Removed them

> +405 ... potential black hole?

I implemented a timeout and chose one hour. I'm not sure how long it
should be, but the exact length shouldn't matter much since the timeout
is only a safety net.
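The safety net is a simple poll-until-stopped loop, like the one added to create_image() in the diff below. A generic sketch, with get_state standing in for compute_api.get(...)['state_description']. (Note that the constant in the diff, 1 * 60 * 60 * 60 seconds, is actually 60 hours; one hour is 60 * 60 seconds.)

```python
import time


def wait_until_stopped(get_state, timeout=60 * 60, poll_interval=1,
                       clock=time.time, sleep=time.sleep):
    """Poll get_state() until it returns 'stopped' or timeout elapses.

    clock/sleep are injectable so the loop is testable without waiting.
    """
    start_time = clock()
    while get_state() != 'stopped':
        if clock() > start_time + timeout:
            raise RuntimeError(
                "Couldn't stop instance within %d sec" % timeout)
        sleep(poll_interval)


# Simulated instance that reaches 'stopped' after three polls:
states = iter(['running', 'stopping', 'stopped'])
wait_until_stopped(lambda: next(states, 'stopped'), sleep=lambda _: None)
```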

thanks,
--
yamahata

Revision history for this message
Sandy Walsh (sandy-walsh) wrote : Posted in a previous version of this proposal

Awesome ... I have no other immediate feedback. I'll leave it to others closer to the domain.

Nice work Yamahata!

review: Approve
Revision history for this message
Vish Ishaya (vishvananda) wrote : Posted in a previous version of this proposal

excited to get this in!

review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Posted in a previous version of this proposal

No proposals found for merge of lp:~yamahata/nova/boot-from-volume-0 into lp:nova.

Revision history for this message
Vish Ishaya (vishvananda) wrote : Posted in a previous version of this proposal

We seem to have lost the ability to merge branches whose prerequisite has already been merged, so I'm re-requesting without the prereq.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~yamahata/nova/boot-from-volume-1 into lp:nova failed. Below is the output from the failed tests.

ERROR

======================================================================
ERROR: <nose.suite.ContextSuite context=nova.tests>
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.6/nose/suite.py", line 183, in run
    self.setUp()
  File "/usr/lib/pymodules/python2.6/nose/suite.py", line 264, in setUp
    self.setupContext(ancestor)
  File "/usr/lib/pymodules/python2.6/nose/suite.py", line 287, in setupContext
    try_run(context, names)
  File "/usr/lib/pymodules/python2.6/nose/util.py", line 487, in try_run
    return func()
  File "/tmp/tmp7ronKQ/nova/tests/__init__.py", line 54, in setup
    migration.db_sync()
  File "/tmp/tmp7ronKQ/nova/db/migration.py", line 35, in db_sync
    return IMPL.db_sync(version=version)
  File "/tmp/tmp7ronKQ/nova/db/sqlalchemy/migration.py", line 41, in db_sync
    db_version()
  File "/tmp/tmp7ronKQ/nova/db/sqlalchemy/migration.py", line 49, in db_version
    return versioning_api.db_version(FLAGS.sql_connection, repo_path)
  File "<string>", line 2, in db_version
  File "/usr/lib/pymodules/python2.6/migrate/versioning/util/__init__.py", line 160, in with_engine
    return f(*a, **kw)
  File "/usr/lib/pymodules/python2.6/migrate/versioning/api.py", line 147, in db_version
    schema = ControlledSchema(engine, repository)
  File "/usr/lib/pymodules/python2.6/migrate/versioning/schema.py", line 26, in __init__
    repository = Repository(repository)
  File "/usr/lib/pymodules/python2.6/migrate/versioning/repository.py", line 80, in __init__
    self._versions))
  File "/usr/lib/pymodules/python2.6/migrate/versioning/version.py", line 83, in __init__
    self.versions[VerNum(num)] = Version(num, path, files)
  File "/usr/lib/pymodules/python2.6/migrate/versioning/version.py", line 153, in __init__
    self.add_script(os.path.join(path, script))
  File "/usr/lib/pymodules/python2.6/migrate/versioning/version.py", line 174, in add_script
    self._add_script_py(path)
  File "/usr/lib/pymodules/python2.6/migrate/versioning/version.py", line 197, in _add_script_py
    'per version, but you have: %s and %s' % (self.python, path))
ScriptError: You can only have one Python script per version, but you have: /tmp/tmp7ronKQ/nova/db/sqlalchemy/migrate_repo/versions/027_add_root_device_name.py and /tmp/tmp7ronKQ/nova/db/sqlalchemy/migrate_repo/versions/027_add_provider_firewall_rules.py
-------------------- >> begin captured logging << --------------------
2011-06-25 02:25:11,546 WARNING nova.virt.libvirt.firewall [-] Libvirt module could not be loaded. NWFilterFirewall will not work correctly.
2011-06-25 02:25:12,239 DEBUG nova.utils [-] backend <module 'nova.db.sqlalchemy.migration' from '/tmp/tmp7ronKQ/nova/db/sqlalchemy/migration.py'> from (pid=22373) __get_backend /tmp/tmp7ronKQ/nova/utils.py:406
2011-06-25 02:25:12,240 DEBUG migrate.versioning.util [-] Constructing engine from (pid=22373) construct_engine /usr/lib/pymodules/python2.6/migrate/versioning/util/__init__.py:138
2011-06-25 02:25:12,244 DEBUG mi...

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Looks like the migration number needs to be bumped by a few...

review: Needs Fixing
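The fix is mechanical: the ScriptError above means two migration scripts claimed version 027, so the new script must be renumbered past the highest version in trunk (it ultimately landed as 032_add_root_device_name.py, per the file list at the top). A hypothetical helper for picking the next free number in a sqlalchemy-migrate versions directory:

```python
import os
import re


def next_migration_number(versions_dir):
    """Return 1 + the highest NNN_*.py version number in versions_dir.

    sqlalchemy-migrate allows only one Python script per version, so a
    renamed migration must take a number no existing script uses.
    """
    pattern = re.compile(r'^(\d+)_.*\.py$')
    numbers = [int(m.group(1))
               for name in os.listdir(versions_dir)
               if (m := pattern.match(name))]
    return max(numbers, default=0) + 1
```

For example, with 027_add_provider_firewall_rules.py through 031_*.py already present, this returns 32.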
Revision history for this message
Isaku Yamahata (yamahata) wrote :

Thank you for the review. I fixed it and confirmed that the unit tests pass.

On Sat, Jun 25, 2011 at 02:38:23AM -0000, Vish Ishaya wrote:
> Review: Needs Fixing
> looks like the migration number needs to be bumped by a few...
> --
> https://code.launchpad.net/~yamahata/nova/boot-from-volume-1/+merge/65850
> You are the owner of lp:~yamahata/nova/boot-from-volume-1.
>

--
yamahata

Revision history for this message
Brian Waldon (bcwaldon) wrote :

Migration needs to be updated yet again. We're moving fast!

Revision history for this message
Mark Washenberger (markwash) wrote :

Setting to WIP until the versions are updated. Sorry for the difficulties here. I'll check back in this weekend to see if we can move this merge along while things are a bit more quiet.

Revision history for this message
Isaku Yamahata (yamahata) wrote :

I resolved the conflict by merging nova trunk.

On Fri, Jul 01, 2011 at 04:01:26PM -0000, Mark Washenberger wrote:
> Setting to WIP until the versions are updated. Sorry for the difficulties here. I'll check back in this weekend to see if we can move this merge along while things are a bit more quiet.
> --
> https://code.launchpad.net/~yamahata/nova/boot-from-volume-1/+merge/65850
> You are the owner of lp:~yamahata/nova/boot-from-volume-1.
>

--
yamahata

Revision history for this message
Sandy Walsh (sandy-walsh) wrote :

works for me

review: Approve
Revision history for this message
Brian Waldon (bcwaldon) wrote :

Seeing some smoketest failures, specifically test_004_can_access_metadata_over_public_ip (smoketests.test_netadmin.SecurityGroupTests)

review: Needs Fixing
Revision history for this message
Brian Waldon (bcwaldon) wrote :

> Seeing some smoketest failures, specifically
> test_004_can_access_metadata_over_public_ip
> (smoketests.test_netadmin.SecurityGroupTests)

Hmm, tests passing now. I think it was a problem on my end. Ignore me!

review: Abstain
Revision history for this message
Vish Ishaya (vishvananda) wrote :

looks good now

review: Approve

Preview Diff

1=== modified file 'nova/api/ec2/__init__.py'
2--- nova/api/ec2/__init__.py 2011-06-24 12:01:51 +0000
3+++ nova/api/ec2/__init__.py 2011-07-08 09:39:31 +0000
4@@ -262,6 +262,8 @@
5 'TerminateInstances': ['projectmanager', 'sysadmin'],
6 'RebootInstances': ['projectmanager', 'sysadmin'],
7 'UpdateInstance': ['projectmanager', 'sysadmin'],
8+ 'StartInstances': ['projectmanager', 'sysadmin'],
9+ 'StopInstances': ['projectmanager', 'sysadmin'],
10 'DeleteVolume': ['projectmanager', 'sysadmin'],
11 'DescribeImages': ['all'],
12 'DeregisterImage': ['projectmanager', 'sysadmin'],
13@@ -269,6 +271,7 @@
14 'DescribeImageAttribute': ['all'],
15 'ModifyImageAttribute': ['projectmanager', 'sysadmin'],
16 'UpdateImage': ['projectmanager', 'sysadmin'],
17+ 'CreateImage': ['projectmanager', 'sysadmin'],
18 },
19 'AdminController': {
20 # All actions have the same permission: ['none'] (the default)
21@@ -325,13 +328,13 @@
22 except exception.VolumeNotFound as ex:
23 LOG.info(_('VolumeNotFound raised: %s'), unicode(ex),
24 context=context)
25- ec2_id = ec2utils.id_to_ec2_id(ex.volume_id, 'vol-%08x')
26+ ec2_id = ec2utils.id_to_ec2_vol_id(ex.volume_id)
27 message = _('Volume %s not found') % ec2_id
28 return self._error(req, context, type(ex).__name__, message)
29 except exception.SnapshotNotFound as ex:
30 LOG.info(_('SnapshotNotFound raised: %s'), unicode(ex),
31 context=context)
32- ec2_id = ec2utils.id_to_ec2_id(ex.snapshot_id, 'snap-%08x')
33+ ec2_id = ec2utils.id_to_ec2_snap_id(ex.snapshot_id)
34 message = _('Snapshot %s not found') % ec2_id
35 return self._error(req, context, type(ex).__name__, message)
36 except exception.NotFound as ex:
37
38=== modified file 'nova/api/ec2/cloud.py'
39--- nova/api/ec2/cloud.py 2011-07-01 15:47:33 +0000
40+++ nova/api/ec2/cloud.py 2011-07-08 09:39:31 +0000
41@@ -27,6 +27,7 @@
42 import os
43 import urllib
44 import tempfile
45+import time
46 import shutil
47
48 from nova import compute
49@@ -75,6 +76,95 @@
50 return {'private_key': private_key, 'fingerprint': fingerprint}
51
52
53+# TODO(yamahata): hypervisor dependent default device name
54+_DEFAULT_ROOT_DEVICE_NAME = '/dev/sda1'
55+
56+
57+def _parse_block_device_mapping(bdm):
58+ """Parse BlockDeviceMappingItemType into flat hash
59+ BlockDevicedMapping.<N>.DeviceName
60+ BlockDevicedMapping.<N>.Ebs.SnapshotId
61+ BlockDevicedMapping.<N>.Ebs.VolumeSize
62+ BlockDevicedMapping.<N>.Ebs.DeleteOnTermination
63+ BlockDevicedMapping.<N>.Ebs.NoDevice
64+ BlockDevicedMapping.<N>.VirtualName
65+ => remove .Ebs and allow volume id in SnapshotId
66+ """
67+ ebs = bdm.pop('ebs', None)
68+ if ebs:
69+ ec2_id = ebs.pop('snapshot_id', None)
70+ if ec2_id:
71+ id = ec2utils.ec2_id_to_id(ec2_id)
72+ if ec2_id.startswith('snap-'):
73+ bdm['snapshot_id'] = id
74+ elif ec2_id.startswith('vol-'):
75+ bdm['volume_id'] = id
76+ ebs.setdefault('delete_on_termination', True)
77+ bdm.update(ebs)
78+ return bdm
79+
80+
81+def _properties_get_mappings(properties):
82+ return ec2utils.mappings_prepend_dev(properties.get('mappings', []))
83+
84+
85+def _format_block_device_mapping(bdm):
86+ """Contruct BlockDeviceMappingItemType
87+ {'device_name': '...', 'snapshot_id': , ...}
88+ => BlockDeviceMappingItemType
89+ """
90+ keys = (('deviceName', 'device_name'),
91+ ('virtualName', 'virtual_name'))
92+ item = {}
93+ for name, k in keys:
94+ if k in bdm:
95+ item[name] = bdm[k]
96+ if bdm.get('no_device'):
97+ item['noDevice'] = True
98+ if ('snapshot_id' in bdm) or ('volume_id' in bdm):
99+ ebs_keys = (('snapshotId', 'snapshot_id'),
100+ ('snapshotId', 'volume_id'), # snapshotId is abused
101+ ('volumeSize', 'volume_size'),
102+ ('deleteOnTermination', 'delete_on_termination'))
103+ ebs = {}
104+ for name, k in ebs_keys:
105+ if k in bdm:
106+ if k == 'snapshot_id':
107+ ebs[name] = ec2utils.id_to_ec2_snap_id(bdm[k])
108+ elif k == 'volume_id':
109+ ebs[name] = ec2utils.id_to_ec2_vol_id(bdm[k])
110+ else:
111+ ebs[name] = bdm[k]
112+ assert 'snapshotId' in ebs
113+ item['ebs'] = ebs
114+ return item
115+
116+
117+def _format_mappings(properties, result):
118+ """Format multiple BlockDeviceMappingItemType"""
119+ mappings = [{'virtualName': m['virtual'], 'deviceName': m['device']}
120+ for m in _properties_get_mappings(properties)
121+ if (m['virtual'] == 'swap' or
122+ m['virtual'].startswith('ephemeral'))]
123+
124+ block_device_mapping = [_format_block_device_mapping(bdm) for bdm in
125+ properties.get('block_device_mapping', [])]
126+
127+ # NOTE(yamahata): overwrite mappings with block_device_mapping
128+ for bdm in block_device_mapping:
129+ for i in range(len(mappings)):
130+ if bdm['deviceName'] == mappings[i]['deviceName']:
131+ del mappings[i]
132+ break
133+ mappings.append(bdm)
134+
135+ # NOTE(yamahata): trim ebs.no_device == true. Is this necessary?
136+ mappings = [bdm for bdm in mappings if not (bdm.get('noDevice', False))]
137+
138+ if mappings:
139+ result['blockDeviceMapping'] = mappings
140+
141+
142 class CloudController(object):
143 """ CloudController provides the critical dispatch between
144 inbound API calls through the endpoint and messages
145@@ -176,7 +266,7 @@
146 # TODO(vish): replace with real data
147 'ami': 'sda1',
148 'ephemeral0': 'sda2',
149- 'root': '/dev/sda1',
150+ 'root': _DEFAULT_ROOT_DEVICE_NAME,
151 'swap': 'sda3'},
152 'hostname': hostname,
153 'instance-action': 'none',
154@@ -304,9 +394,8 @@
155
156 def _format_snapshot(self, context, snapshot):
157 s = {}
158- s['snapshotId'] = ec2utils.id_to_ec2_id(snapshot['id'], 'snap-%08x')
159- s['volumeId'] = ec2utils.id_to_ec2_id(snapshot['volume_id'],
160- 'vol-%08x')
161+ s['snapshotId'] = ec2utils.id_to_ec2_snap_id(snapshot['id'])
162+ s['volumeId'] = ec2utils.id_to_ec2_vol_id(snapshot['volume_id'])
163 s['status'] = snapshot['status']
164 s['startTime'] = snapshot['created_at']
165 s['progress'] = snapshot['progress']
166@@ -683,7 +772,7 @@
167 instance_data = '%s[%s]' % (instance_ec2_id,
168 volume['instance']['host'])
169 v = {}
170- v['volumeId'] = ec2utils.id_to_ec2_id(volume['id'], 'vol-%08x')
171+ v['volumeId'] = ec2utils.id_to_ec2_vol_id(volume['id'])
172 v['status'] = volume['status']
173 v['size'] = volume['size']
174 v['availabilityZone'] = volume['availability_zone']
175@@ -705,8 +794,7 @@
176 else:
177 v['attachmentSet'] = [{}]
178 if volume.get('snapshot_id') != None:
179- v['snapshotId'] = ec2utils.id_to_ec2_id(volume['snapshot_id'],
180- 'snap-%08x')
181+ v['snapshotId'] = ec2utils.id_to_ec2_snap_id(volume['snapshot_id'])
182 else:
183 v['snapshotId'] = None
184
185@@ -769,7 +857,7 @@
186 'instanceId': ec2utils.id_to_ec2_id(instance_id),
187 'requestId': context.request_id,
188 'status': volume['attach_status'],
189- 'volumeId': ec2utils.id_to_ec2_id(volume_id, 'vol-%08x')}
190+ 'volumeId': ec2utils.id_to_ec2_vol_id(volume_id)}
191
192 def detach_volume(self, context, volume_id, **kwargs):
193 volume_id = ec2utils.ec2_id_to_id(volume_id)
194@@ -781,7 +869,7 @@
195 'instanceId': ec2utils.id_to_ec2_id(instance['id']),
196 'requestId': context.request_id,
197 'status': volume['attach_status'],
198- 'volumeId': ec2utils.id_to_ec2_id(volume_id, 'vol-%08x')}
199+ 'volumeId': ec2utils.id_to_ec2_vol_id(volume_id)}
200
201 def _convert_to_set(self, lst, label):
202 if lst is None or lst == []:
203@@ -805,6 +893,37 @@
204 assert len(i) == 1
205 return i[0]
206
207+ def _format_instance_bdm(self, context, instance_id, root_device_name,
208+ result):
209+ """Format InstanceBlockDeviceMappingResponseItemType"""
210+ root_device_type = 'instance-store'
211+ mapping = []
212+ for bdm in db.block_device_mapping_get_all_by_instance(context,
213+ instance_id):
214+ volume_id = bdm['volume_id']
215+ if (volume_id is None or bdm['no_device']):
216+ continue
217+
218+ if (bdm['device_name'] == root_device_name and
219+ (bdm['snapshot_id'] or bdm['volume_id'])):
220+ assert not bdm['virtual_name']
221+ root_device_type = 'ebs'
222+
223+ vol = self.volume_api.get(context, volume_id=volume_id)
224+ LOG.debug(_("vol = %s\n"), vol)
225+ # TODO(yamahata): volume attach time
226+ ebs = {'volumeId': volume_id,
227+ 'deleteOnTermination': bdm['delete_on_termination'],
228+ 'attachTime': vol['attach_time'] or '-',
229+ 'status': vol['status'], }
230+ res = {'deviceName': bdm['device_name'],
231+ 'ebs': ebs, }
232+ mapping.append(res)
233+
234+ if mapping:
235+ result['blockDeviceMapping'] = mapping
236+ result['rootDeviceType'] = root_device_type
237+
238 def _format_instances(self, context, instance_id=None, **kwargs):
239 # TODO(termie): this method is poorly named as its name does not imply
240 # that it will be making a variety of database calls
241@@ -866,6 +985,10 @@
242 i['amiLaunchIndex'] = instance['launch_index']
243 i['displayName'] = instance['display_name']
244 i['displayDescription'] = instance['display_description']
245+ i['rootDeviceName'] = (instance.get('root_device_name') or
246+ _DEFAULT_ROOT_DEVICE_NAME)
247+ self._format_instance_bdm(context, instance_id,
248+ i['rootDeviceName'], i)
249 host = instance['host']
250 zone = self._get_availability_zone_by_host(context, host)
251 i['placement'] = {'availabilityZone': zone}
252@@ -953,23 +1076,7 @@
253 ramdisk = self._get_image(context, kwargs['ramdisk_id'])
254 kwargs['ramdisk_id'] = ramdisk['id']
255 for bdm in kwargs.get('block_device_mapping', []):
256- # NOTE(yamahata)
257- # BlockDevicedMapping.<N>.DeviceName
258- # BlockDevicedMapping.<N>.Ebs.SnapshotId
259- # BlockDevicedMapping.<N>.Ebs.VolumeSize
260- # BlockDevicedMapping.<N>.Ebs.DeleteOnTermination
261- # BlockDevicedMapping.<N>.VirtualName
262- # => remove .Ebs and allow volume id in SnapshotId
263- ebs = bdm.pop('ebs', None)
264- if ebs:
265- ec2_id = ebs.pop('snapshot_id')
266- id = ec2utils.ec2_id_to_id(ec2_id)
267- if ec2_id.startswith('snap-'):
268- bdm['snapshot_id'] = id
269- elif ec2_id.startswith('vol-'):
270- bdm['volume_id'] = id
271- ebs.setdefault('delete_on_termination', True)
272- bdm.update(ebs)
273+ _parse_block_device_mapping(bdm)
274
275 image = self._get_image(context, kwargs['image_id'])
276
277@@ -1124,6 +1231,20 @@
278 i['imageType'] = display_mapping.get(image_type)
279 i['isPublic'] = image.get('is_public') == True
280 i['architecture'] = image['properties'].get('architecture')
281+
282+ properties = image['properties']
283+ root_device_name = ec2utils.properties_root_device_name(properties)
284+ root_device_type = 'instance-store'
285+ for bdm in properties.get('block_device_mapping', []):
286+ if (bdm.get('device_name') == root_device_name and
287+ ('snapshot_id' in bdm or 'volume_id' in bdm) and
288+ not bdm.get('no_device')):
289+ root_device_type = 'ebs'
290+ i['rootDeviceName'] = (root_device_name or _DEFAULT_ROOT_DEVICE_NAME)
291+ i['rootDeviceType'] = root_device_type
292+
293+ _format_mappings(properties, i)
294+
295 return i
296
297 def describe_images(self, context, image_id=None, **kwargs):
298@@ -1148,30 +1269,64 @@
299 self.image_service.delete(context, internal_id)
300 return {'imageId': image_id}
301
302+ def _register_image(self, context, metadata):
303+ image = self.image_service.create(context, metadata)
304+ image_type = self._image_type(image.get('container_format'))
305+ image_id = self.image_ec2_id(image['id'], image_type)
306+ return image_id
307+
308 def register_image(self, context, image_location=None, **kwargs):
309 if image_location is None and 'name' in kwargs:
310 image_location = kwargs['name']
311 metadata = {'properties': {'image_location': image_location}}
312- image = self.image_service.create(context, metadata)
313- image_type = self._image_type(image.get('container_format'))
314- image_id = self.image_ec2_id(image['id'],
315- image_type)
316+
317+ if 'root_device_name' in kwargs:
318+ metadata['properties']['root_device_name'] = \
319+ kwargs.get('root_device_name')
320+
321+ mappings = [_parse_block_device_mapping(bdm) for bdm in
322+ kwargs.get('block_device_mapping', [])]
323+ if mappings:
324+ metadata['properties']['block_device_mapping'] = mappings
325+
326+ image_id = self._register_image(context, metadata)
327 msg = _("Registered image %(image_location)s with"
328 " id %(image_id)s") % locals()
329 LOG.audit(msg, context=context)
330 return {'imageId': image_id}
331
332 def describe_image_attribute(self, context, image_id, attribute, **kwargs):
333- if attribute != 'launchPermission':
334+ def _block_device_mapping_attribute(image, result):
335+ _format_mappings(image['properties'], result)
336+
337+ def _launch_permission_attribute(image, result):
338+ result['launchPermission'] = []
339+ if image['is_public']:
340+ result['launchPermission'].append({'group': 'all'})
341+
342+ def _root_device_name_attribute(image, result):
343+ result['rootDeviceName'] = \
344+ ec2utils.properties_root_device_name(image['properties'])
345+ if result['rootDeviceName'] is None:
346+ result['rootDeviceName'] = _DEFAULT_ROOT_DEVICE_NAME
347+
348+ supported_attributes = {
349+ 'blockDeviceMapping': _block_device_mapping_attribute,
350+ 'launchPermission': _launch_permission_attribute,
351+ 'rootDeviceName': _root_device_name_attribute,
352+ }
353+
354+ fn = supported_attributes.get(attribute)
355+ if fn is None:
356 raise exception.ApiError(_('attribute not supported: %s')
357 % attribute)
358 try:
359 image = self._get_image(context, image_id)
360 except exception.NotFound:
361 raise exception.ImageNotFound(image_id=image_id)
362- result = {'imageId': image_id, 'launchPermission': []}
363- if image['is_public']:
364- result['launchPermission'].append({'group': 'all'})
365+
366+ result = {'imageId': image_id}
367+ fn(image, result)
368 return result
369
370 def modify_image_attribute(self, context, image_id, attribute,
371@@ -1202,3 +1357,109 @@
372 internal_id = ec2utils.ec2_id_to_id(image_id)
373 result = self.image_service.update(context, internal_id, dict(kwargs))
374 return result
375+
376+ # TODO(yamahata): race condition
377+ # At the moment there is no way to prevent others from
378+ # manipulating instances/volumes/snapshots.
379+ # As other code doesn't take it into consideration, here we don't
380+ # care of it for now. Ostrich algorithm
381+ def create_image(self, context, instance_id, **kwargs):
382+ # NOTE(yamahata): name/description are ignored by register_image(),
383+ # do so here
384+ no_reboot = kwargs.get('no_reboot', False)
385+
386+ ec2_instance_id = instance_id
387+ instance_id = ec2utils.ec2_id_to_id(ec2_instance_id)
388+ instance = self.compute_api.get(context, instance_id)
389+
390+ # stop the instance if necessary
391+ restart_instance = False
392+ if not no_reboot:
393+ state_description = instance['state_description']
394+
395+ # if the instance is in subtle state, refuse to proceed.
396+ if state_description not in ('running', 'stopping', 'stopped'):
397+ raise exception.InstanceNotRunning(instance_id=ec2_instance_id)
398+
399+ if state_description == 'running':
400+ restart_instance = True
401+ self.compute_api.stop(context, instance_id=instance_id)
402+
403+ # wait instance for really stopped
404+ start_time = time.time()
405+ while state_description != 'stopped':
406+ time.sleep(1)
407+ instance = self.compute_api.get(context, instance_id)
408+ state_description = instance['state_description']
409+ # NOTE(yamahata): timeout and error. 1 hour for now for safety.
410+ # Is it too short/long?
411+ # Or is there any better way?
412+ timeout = 1 * 60 * 60 * 60
413+ if time.time() > start_time + timeout:
414+ raise exception.ApiError(
415+ _('Couldn\'t stop instance with in %d sec') % timeout)
416+
417+ src_image = self._get_image(context, instance['image_ref'])
418+ properties = src_image['properties']
419+ if instance['root_device_name']:
420+ properties['root_device_name'] = instance['root_device_name']
421+
422+ mapping = []
423+ bdms = db.block_device_mapping_get_all_by_instance(context,
424+ instance_id)
425+ for bdm in bdms:
426+ if bdm.no_device:
427+ continue
428+ m = {}
429+ for attr in ('device_name', 'snapshot_id', 'volume_id',
430+ 'volume_size', 'delete_on_termination', 'no_device',
431+ 'virtual_name'):
432+ val = getattr(bdm, attr)
433+ if val is not None:
434+ m[attr] = val
435+
436+ volume_id = m.get('volume_id')
437+ if m.get('snapshot_id') and volume_id:
438+ # create snapshot based on volume_id
439+ vol = self.volume_api.get(context, volume_id=volume_id)
440+ # NOTE(yamahata): Should we wait for snapshot creation?
441+ # Linux LVM snapshot creation completes in
442+ # short time, it doesn't matter for now.
443+ snapshot = self.volume_api.create_snapshot_force(
444+ context, volume_id=volume_id, name=vol['display_name'],
445+ description=vol['display_description'])
446+ m['snapshot_id'] = snapshot['id']
447+ del m['volume_id']
448+
449+ if m:
450+ mapping.append(m)
451+
452+ for m in _properties_get_mappings(properties):
453+ virtual_name = m['virtual']
454+ if virtual_name in ('ami', 'root'):
455+ continue
456+
457+ assert (virtual_name == 'swap' or
458+ virtual_name.startswith('ephemeral'))
459+ device_name = m['device']
460+ if device_name in [b['device_name'] for b in mapping
461+ if not b.get('no_device', False)]:
462+ continue
463+
464+ # NOTE(yamahata): swap and ephemeral devices are specified in
465+ # AMI, but disabled for this instance by user.
466+ # So disable those device by no_device.
467+ mapping.append({'device_name': device_name, 'no_device': True})
468+
469+ if mapping:
470+ properties['block_device_mapping'] = mapping
471+
472+ for attr in ('status', 'location', 'id'):
473+ src_image.pop(attr, None)
474+
475+ image_id = self._register_image(context, src_image)
476+
477+ if restart_instance:
478+ self.compute_api.start(context, instance_id=instance_id)
479+
480+ return {'imageId': image_id}
481
482=== modified file 'nova/api/ec2/ec2utils.py'
483--- nova/api/ec2/ec2utils.py 2011-06-24 12:01:51 +0000
484+++ nova/api/ec2/ec2utils.py 2011-07-08 09:39:31 +0000
485@@ -34,6 +34,17 @@
486 return template % instance_id
487
488
489+def id_to_ec2_snap_id(instance_id):
490+ """Convert an snapshot ID (int) to an ec2 snapshot ID
491+ (snap-[base 16 number])"""
492+ return id_to_ec2_id(instance_id, 'snap-%08x')
493+
494+
495+def id_to_ec2_vol_id(instance_id):
496+ """Convert an volume ID (int) to an ec2 volume ID (vol-[base 16 number])"""
497+ return id_to_ec2_id(instance_id, 'vol-%08x')
498+
499+
500 _c2u = re.compile('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))')
501
502
503@@ -124,3 +135,32 @@
504 args[key] = value
505
506 return args
507+
508+
509+def properties_root_device_name(properties):
510+ """get root device name from image meta data.
511+ If it isn't specified, return None.
512+ """
513+ root_device_name = None
514+
515+ # NOTE(yamahata): see image_service.s3.s3create()
516+ for bdm in properties.get('mappings', []):
517+ if bdm['virtual'] == 'root':
518+ root_device_name = bdm['device']
519+
520+ # NOTE(yamahata): register_image's command line can override
521+ # <machine>.manifest.xml
522+ if 'root_device_name' in properties:
523+ root_device_name = properties['root_device_name']
524+
525+ return root_device_name
526+
527+
528+def mappings_prepend_dev(mappings):
529+ """Prepend '/dev/' to 'device' entry of swap/ephemeral virtual type"""
530+ for m in mappings:
531+ virtual = m['virtual']
532+ if ((virtual == 'swap' or virtual.startswith('ephemeral')) and
533+ (not m['device'].startswith('/'))):
534+ m['device'] = '/dev/' + m['device']
535+ return mappings
536
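The two ec2utils helpers added above encode a small but important rule: the manifest's 'root' mapping supplies the root device name, but an explicit 'root_device_name' image property overrides it, and bare swap/ephemeral device names are normalized with a '/dev/' prefix. A standalone sketch of both (illustrative only, not the nova module itself):

```python
def properties_root_device_name(properties):
    """Return the root device name from image metadata, or None."""
    root_device_name = None
    # the manifest-derived 'mappings' list may name a 'root' device
    for bdm in properties.get('mappings', []):
        if bdm['virtual'] == 'root':
            root_device_name = bdm['device']
    # an explicit property (e.g. from register_image) wins over the manifest
    if 'root_device_name' in properties:
        root_device_name = properties['root_device_name']
    return root_device_name


def mappings_prepend_dev(mappings):
    """Prepend '/dev/' to bare swap/ephemeral device names."""
    for m in mappings:
        virtual = m['virtual']
        if ((virtual == 'swap' or virtual.startswith('ephemeral')) and
                not m['device'].startswith('/')):
            m['device'] = '/dev/' + m['device']
    return mappings


print(properties_root_device_name(
    {'root_device_name': '/dev/sdb',
     'mappings': [{'virtual': 'root', 'device': '/dev/sda1'}]}))
```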
537=== modified file 'nova/compute/api.py'
538--- nova/compute/api.py 2011-07-01 19:44:10 +0000
539+++ nova/compute/api.py 2011-07-08 09:39:31 +0000
540@@ -32,6 +32,7 @@
541 from nova import rpc
542 from nova import utils
543 from nova import volume
544+from nova.api.ec2 import ec2utils
545 from nova.compute import instance_types
546 from nova.compute import power_state
547 from nova.compute.utils import terminate_volumes
548@@ -217,6 +218,9 @@
549 if reservation_id is None:
550 reservation_id = utils.generate_uid('r')
551
552+ root_device_name = ec2utils.properties_root_device_name(
553+ image['properties'])
554+
555 base_options = {
556 'reservation_id': reservation_id,
557 'image_ref': image_href,
558@@ -241,11 +245,61 @@
559 'availability_zone': availability_zone,
560 'os_type': os_type,
561 'architecture': architecture,
562- 'vm_mode': vm_mode}
563-
564- return (num_instances, base_options)
565-
566- def create_db_entry_for_new_instance(self, context, base_options,
567+ 'vm_mode': vm_mode,
568+ 'root_device_name': root_device_name}
569+
570+ return (num_instances, base_options, image)
571+
572+ def _update_image_block_device_mapping(self, elevated_context, instance_id,
573+ mappings):
574+ """Tell the vm driver to create ephemeral/swap devices at boot time by
575+ updating BlockDeviceMapping.
576+ """
577+ for bdm in ec2utils.mappings_prepend_dev(mappings):
578+ LOG.debug(_("bdm %s"), bdm)
579+
580+ virtual_name = bdm['virtual']
581+ if virtual_name == 'ami' or virtual_name == 'root':
582+ continue
583+
584+ assert (virtual_name == 'swap' or
585+ virtual_name.startswith('ephemeral'))
586+ values = {
587+ 'instance_id': instance_id,
588+ 'device_name': bdm['device'],
589+ 'virtual_name': virtual_name, }
590+ self.db.block_device_mapping_update_or_create(elevated_context,
591+ values)
592+
593+ def _update_block_device_mapping(self, elevated_context, instance_id,
594+ block_device_mapping):
595+ """Tell the vm driver to attach volumes at boot time by updating
596+ BlockDeviceMapping.
597+ """
598+ for bdm in block_device_mapping:
599+ LOG.debug(_('bdm %s'), bdm)
600+ assert 'device_name' in bdm
601+
602+ values = {'instance_id': instance_id}
603+ for key in ('device_name', 'delete_on_termination', 'virtual_name',
604+ 'snapshot_id', 'volume_id', 'volume_size',
605+ 'no_device'):
606+ values[key] = bdm.get(key)
607+
608+ # NOTE(yamahata): NoDevice eliminates devices defined in image
609+ # files by command line option.
610+ # (--block-device-mapping)
611+ if bdm.get('virtual_name') == 'NoDevice':
612+ values['no_device'] = True
613+ for k in ('delete_on_termination', 'volume_id',
614+ 'snapshot_id', 'volume_id', 'volume_size',
615+ 'virtual_name'):
616+ values[k] = None
617+
618+ self.db.block_device_mapping_update_or_create(elevated_context,
619+ values)
620+
621+ def create_db_entry_for_new_instance(self, context, image, base_options,
622 security_group, block_device_mapping, num=1):
623 """Create an entry in the DB for this new instance,
624 including any related table updates (such as security group,
625@@ -278,23 +332,14 @@
626 instance_id,
627 security_group_id)
628
629- block_device_mapping = block_device_mapping or []
630- # NOTE(yamahata)
631- # tell vm driver to attach volume at boot time by updating
632- # BlockDeviceMapping
633- for bdm in block_device_mapping:
634- LOG.debug(_('bdm %s'), bdm)
635- assert 'device_name' in bdm
636- values = {
637- 'instance_id': instance_id,
638- 'device_name': bdm['device_name'],
639- 'delete_on_termination': bdm.get('delete_on_termination'),
640- 'virtual_name': bdm.get('virtual_name'),
641- 'snapshot_id': bdm.get('snapshot_id'),
642- 'volume_id': bdm.get('volume_id'),
643- 'volume_size': bdm.get('volume_size'),
644- 'no_device': bdm.get('no_device')}
645- self.db.block_device_mapping_create(elevated, values)
646+ # BlockDeviceMapping table
647+ self._update_image_block_device_mapping(elevated, instance_id,
648+ image['properties'].get('mappings', []))
649+ self._update_block_device_mapping(elevated, instance_id,
650+ image['properties'].get('block_device_mapping', []))
651+ # override via command line option
652+ self._update_block_device_mapping(elevated, instance_id,
653+ block_device_mapping)
654
655 # Set sane defaults if not specified
656 updates = {}
657@@ -356,7 +401,7 @@
658 """Provision the instances by passing the whole request to
659 the Scheduler for execution. Returns a Reservation ID
660 related to the creation of all of these instances."""
661- num_instances, base_options = self._check_create_parameters(
662+ num_instances, base_options, image = self._check_create_parameters(
663 context, instance_type,
664 image_href, kernel_id, ramdisk_id,
665 min_count, max_count,
666@@ -394,7 +439,7 @@
667 Returns a list of instance dicts.
668 """
669
670- num_instances, base_options = self._check_create_parameters(
671+ num_instances, base_options, image = self._check_create_parameters(
672 context, instance_type,
673 image_href, kernel_id, ramdisk_id,
674 min_count, max_count,
675@@ -404,10 +449,11 @@
676 injected_files, admin_password, zone_blob,
677 reservation_id)
678
679+ block_device_mapping = block_device_mapping or []
680 instances = []
681 LOG.debug(_("Going to run %s instances..."), num_instances)
682 for num in range(num_instances):
683- instance = self.create_db_entry_for_new_instance(context,
684+ instance = self.create_db_entry_for_new_instance(context, image,
685 base_options, security_group,
686 block_device_mapping, num=num)
687 instances.append(instance)
688
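The three `_update_*_block_device_mapping` calls above layer the mappings in order: image 'mappings', then the image's 'block_device_mapping', then the caller's command-line list, with later layers overriding earlier ones per device name (because `block_device_mapping_update_or_create` keys on instance_id + device_name). A hypothetical sketch of that precedence, with names invented for illustration:

```python
def merge_bdm_layers(*layers):
    """Merge mapping layers; later layers override earlier ones
    per device_name, mirroring update_or_create semantics."""
    merged = {}
    for layer in layers:
        for bdm in layer:
            # update-or-create: merge into any existing row for this device
            merged.setdefault(bdm['device_name'], {}).update(bdm)
    return sorted(merged.values(), key=lambda b: b['device_name'])
```

For example, an image-defined swap device can be suppressed by a command-line `no_device` entry for the same device name.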
689=== modified file 'nova/compute/manager.py'
690--- nova/compute/manager.py 2011-07-01 14:26:05 +0000
691+++ nova/compute/manager.py 2011-07-08 09:39:31 +0000
692@@ -220,6 +220,17 @@
693 for bdm in self.db.block_device_mapping_get_all_by_instance(
694 context, instance_id):
695 LOG.debug(_("setting up bdm %s"), bdm)
696+
697+ if bdm['no_device']:
698+ continue
699+ if bdm['virtual_name']:
700+ # TODO(yamahata):
701+ # block devices for swap and ephemeralN will be
702+ # created by virt driver locally in compute node.
703+ assert (bdm['virtual_name'] == 'swap' or
704+ bdm['virtual_name'].startswith('ephemeral'))
705+ continue
706+
707 if ((bdm['snapshot_id'] is not None) and
708 (bdm['volume_id'] is None)):
709 # TODO(yamahata): default name and description
710@@ -252,15 +263,6 @@
711 block_device_mapping.append({'device_path': dev_path,
712 'mount_device':
713 bdm['device_name']})
714- elif bdm['virtual_name'] is not None:
715- # TODO(yamahata): ephemeral/swap device support
716- LOG.debug(_('block_device_mapping: '
717- 'ephemeral device is not supported yet'))
718- else:
719- # TODO(yamahata): NoDevice support
720- assert bdm['no_device']
721- LOG.debug(_('block_device_mapping: '
722- 'no device is not supported yet'))
723
724 return block_device_mapping
725
726
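The manager change above reverses the old logic: instead of logging "not supported yet" for virtual and no-device entries, it now skips them up front (swap/ephemeralN are left to the virt driver, `no_device` rows are suppressed) and only volume-backed mappings proceed to volume creation/attachment. A minimal sketch of that selection step (hypothetical standalone version):

```python
def select_volume_bdms(bdms):
    """Keep only mappings that need a volume attached at boot."""
    selected = []
    for bdm in bdms:
        if bdm.get('no_device'):
            continue  # device explicitly suppressed
        if bdm.get('virtual_name'):
            # swap/ephemeralN block devices are created locally
            # by the virt driver on the compute node
            continue
        selected.append(bdm)
    return selected
```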
727=== modified file 'nova/db/api.py'
728--- nova/db/api.py 2011-06-30 19:20:59 +0000
729+++ nova/db/api.py 2011-07-08 09:39:31 +0000
730@@ -989,10 +989,16 @@
731
732
733 def block_device_mapping_update(context, bdm_id, values):
734- """Create an entry of block device mapping"""
735+ """Update an entry of block device mapping"""
736 return IMPL.block_device_mapping_update(context, bdm_id, values)
737
738
739+def block_device_mapping_update_or_create(context, values):
740+ """Update an entry of block device mapping.
741+ If it doesn't exist, create a new entry"""
742+ return IMPL.block_device_mapping_update_or_create(context, values)
743+
744+
745 def block_device_mapping_get_all_by_instance(context, instance_id):
746 """Get all block device mapping belonging to a instance"""
747 return IMPL.block_device_mapping_get_all_by_instance(context, instance_id)
748
749=== modified file 'nova/db/sqlalchemy/api.py'
750--- nova/db/sqlalchemy/api.py 2011-07-01 15:07:08 +0000
751+++ nova/db/sqlalchemy/api.py 2011-07-08 09:39:31 +0000
752@@ -2208,6 +2208,23 @@
753
754
755 @require_context
756+def block_device_mapping_update_or_create(context, values):
757+ session = get_session()
758+ with session.begin():
759+ result = session.query(models.BlockDeviceMapping).\
760+ filter_by(instance_id=values['instance_id']).\
761+ filter_by(device_name=values['device_name']).\
762+ filter_by(deleted=False).\
763+ first()
764+ if not result:
765+ bdm_ref = models.BlockDeviceMapping()
766+ bdm_ref.update(values)
767+ bdm_ref.save(session=session)
768+ else:
769+ result.update(values)
770+
771+
772+@require_context
773 def block_device_mapping_get_all_by_instance(context, instance_id):
774 session = get_session()
775 result = session.query(models.BlockDeviceMapping).\
776
777=== added file 'nova/db/sqlalchemy/migrate_repo/versions/032_add_root_device_name.py'
778--- nova/db/sqlalchemy/migrate_repo/versions/032_add_root_device_name.py 1970-01-01 00:00:00 +0000
779+++ nova/db/sqlalchemy/migrate_repo/versions/032_add_root_device_name.py 2011-07-08 09:39:31 +0000
780@@ -0,0 +1,47 @@
781+# Copyright 2011 OpenStack LLC.
782+# Copyright 2011 Isaku Yamahata
783+#
784+# Licensed under the Apache License, Version 2.0 (the "License"); you may
785+# not use this file except in compliance with the License. You may obtain
786+# a copy of the License at
787+#
788+# http://www.apache.org/licenses/LICENSE-2.0
789+#
790+# Unless required by applicable law or agreed to in writing, software
791+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
792+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
793+# License for the specific language governing permissions and limitations
794+# under the License.
795+
796+from sqlalchemy import Column, Integer, MetaData, Table, String
797+
798+meta = MetaData()
799+
800+
801+# Just for the ForeignKey and column creation to succeed, these are not the
802+# actual definitions of instances or services.
803+instances = Table('instances', meta,
804+ Column('id', Integer(), primary_key=True, nullable=False),
805+ )
806+
807+#
808+# New Column
809+#
810+root_device_name = Column(
811+ 'root_device_name',
812+ String(length=255, convert_unicode=False, assert_unicode=None,
813+ unicode_error=None, _warn_on_bytestring=False),
814+ nullable=True)
815+
816+
817+def upgrade(migrate_engine):
818+ # Upgrade operations go here. Don't create your own engine;
819+ # bind migrate_engine to your metadata
820+ meta.bind = migrate_engine
821+ instances.create_column(root_device_name)
822+
823+
824+def downgrade(migrate_engine):
825+ # Operations to reverse the above upgrade go here.
826+ meta.bind = migrate_engine
827+ instances.drop_column('root_device_name')
828
829=== modified file 'nova/db/sqlalchemy/models.py'
830--- nova/db/sqlalchemy/models.py 2011-06-30 19:20:59 +0000
831+++ nova/db/sqlalchemy/models.py 2011-07-08 09:39:31 +0000
832@@ -236,6 +236,8 @@
833 vm_mode = Column(String(255))
834 uuid = Column(String(36))
835
836+ root_device_name = Column(String(255))
837+
838 # TODO(vish): see Ewan's email about state improvements, probably
839 # should be in a driver base class or some such
840 # vmstate_state = running, halted, suspended, paused
841
842=== modified file 'nova/image/fake.py'
843--- nova/image/fake.py 2011-06-24 12:01:51 +0000
844+++ nova/image/fake.py 2011-07-08 09:39:31 +0000
845@@ -137,7 +137,11 @@
846 try:
847 image_id = metadata['id']
848 except KeyError:
849- image_id = random.randint(0, 2 ** 31 - 1)
850+ while True:
851+ image_id = random.randint(0, 2 ** 31 - 1)
852+ if not self.images.get(str(image_id)):
853+ break
854+
855 image_id = str(image_id)
856
857 if self.images.get(image_id):
858@@ -176,3 +180,8 @@
859
860 def FakeImageService():
861 return _fakeImageService
862+
863+
864+def FakeImageService_reset():
865+ global _fakeImageService
866+ _fakeImageService = _FakeImageService()
867
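The fake.py change replaces a single `random.randint` call with a retry loop so a freshly generated image ID can never collide with an existing one. The loop extracted as a standalone helper (name is illustrative):

```python
import random


def new_image_id(images):
    """Draw random IDs until one is unused in the given image dict."""
    while True:
        image_id = str(random.randint(0, 2 ** 31 - 1))
        if image_id not in images:
            return image_id
```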
868=== modified file 'nova/image/s3.py'
869--- nova/image/s3.py 2011-06-01 03:16:22 +0000
870+++ nova/image/s3.py 2011-07-08 09:39:31 +0000
871@@ -102,18 +102,7 @@
872 key.get_contents_to_filename(local_filename)
873 return local_filename
874
875- def _s3_create(self, context, metadata):
876- """Gets a manifext from s3 and makes an image."""
877-
878- image_path = tempfile.mkdtemp(dir=FLAGS.image_decryption_dir)
879-
880- image_location = metadata['properties']['image_location']
881- bucket_name = image_location.split('/')[0]
882- manifest_path = image_location[len(bucket_name) + 1:]
883- bucket = self._conn(context).get_bucket(bucket_name)
884- key = bucket.get_key(manifest_path)
885- manifest = key.get_contents_as_string()
886-
887+ def _s3_parse_manifest(self, context, metadata, manifest):
888 manifest = ElementTree.fromstring(manifest)
889 image_format = 'ami'
890 image_type = 'machine'
891@@ -141,6 +130,28 @@
892 except Exception:
893 arch = 'x86_64'
894
895+ # NOTE(yamahata):
896+ # EC2 ec2-bundle-image --block-device-mapping accepts
897+ # <virtual name>=<device name> where
898+ # virtual name = {ami, root, swap, ephemeral<N>}
899+ # where N is a non-negative integer
900+ # device name = the device name seen by guest kernel.
901+ # They are converted into
902+ # block_device_mapping/mapping/{virtual, device}
903+ #
904+ # Do NOT confuse this with ec2-register's block device mapping
905+ # argument.
906+ mappings = []
907+ try:
908+ block_device_mapping = manifest.findall('machine_configuration/'
909+ 'block_device_mapping/'
910+ 'mapping')
911+ for bdm in block_device_mapping:
912+ mappings.append({'virtual': bdm.find('virtual').text,
913+ 'device': bdm.find('device').text})
914+ except Exception:
915+ mappings = []
916+
917 properties = metadata['properties']
918 properties['project_id'] = context.project_id
919 properties['architecture'] = arch
920@@ -151,6 +162,9 @@
921 if ramdisk_id:
922 properties['ramdisk_id'] = ec2utils.ec2_id_to_id(ramdisk_id)
923
924+ if mappings:
925+ properties['mappings'] = mappings
926+
927 metadata.update({'disk_format': image_format,
928 'container_format': image_format,
929 'status': 'queued',
930@@ -158,6 +172,21 @@
931 'properties': properties})
932 metadata['properties']['image_state'] = 'pending'
933 image = self.service.create(context, metadata)
934+ return manifest, image
935+
936+ def _s3_create(self, context, metadata):
937+ """Gets a manifest from s3 and makes an image."""
938+
939+ image_path = tempfile.mkdtemp(dir=FLAGS.image_decryption_dir)
940+
941+ image_location = metadata['properties']['image_location']
942+ bucket_name = image_location.split('/')[0]
943+ manifest_path = image_location[len(bucket_name) + 1:]
944+ bucket = self._conn(context).get_bucket(bucket_name)
945+ key = bucket.get_key(manifest_path)
946+ manifest = key.get_contents_as_string()
947+
948+ manifest, image = self._s3_parse_manifest(context, metadata, manifest)
949 image_id = image['id']
950
951 def delayed_create():
952
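The refactored `_s3_parse_manifest` pulls the `<virtual>`/`<device>` pairs out of the manifest's `machine_configuration/block_device_mapping` section with ElementTree. A self-contained sketch of just that extraction, against a cut-down manifest:

```python
from xml.etree import ElementTree

# minimal manifest fragment, for illustration only
manifest = """<manifest><machine_configuration><block_device_mapping>
  <mapping><virtual>root</virtual><device>/dev/sda1</device></mapping>
  <mapping><virtual>swap</virtual><device>sda3</device></mapping>
</block_device_mapping></machine_configuration></manifest>"""

root = ElementTree.fromstring(manifest)
mappings = [{'virtual': m.find('virtual').text,
             'device': m.find('device').text}
            for m in root.findall('machine_configuration/'
                                  'block_device_mapping/mapping')]
print(mappings)
```

The real method wraps this in a try/except and falls back to an empty list, since older manifests may lack the section entirely.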
953=== modified file 'nova/test.py'
954--- nova/test.py 2011-06-29 17:58:10 +0000
955+++ nova/test.py 2011-07-08 09:39:31 +0000
956@@ -31,6 +31,7 @@
957
958 import mox
959 import nose.plugins.skip
960+import nova.image.fake
961 import shutil
962 import stubout
963 from eventlet import greenthread
964@@ -119,6 +120,9 @@
965 if hasattr(fake.FakeConnection, '_instance'):
966 del fake.FakeConnection._instance
967
968+ if FLAGS.image_service == 'nova.image.fake.FakeImageService':
969+ nova.image.fake.FakeImageService_reset()
970+
971 # Reset any overriden flags
972 self.reset_flags()
973
974@@ -248,3 +252,15 @@
975 for d1, d2 in zip(L1, L2):
976 self.assertDictMatch(d1, d2, approx_equal=approx_equal,
977 tolerance=tolerance)
978+
979+ def assertSubDictMatch(self, sub_dict, super_dict):
980+ """Assert a sub_dict is subset of super_dict."""
981+ self.assertTrue(set(sub_dict.keys()).issubset(set(super_dict.keys())))
982+ for k, sub_value in sub_dict.items():
983+ super_value = super_dict[k]
984+ if isinstance(sub_value, dict):
985+ self.assertSubDictMatch(sub_value, super_value)
986+ elif 'DONTCARE' in (sub_value, super_value):
987+ continue
988+ else:
989+ self.assertEqual(sub_value, super_value)
990
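`assertSubDictMatch` recursively checks that one dict is a subset of another, with 'DONTCARE' acting as a wildcard on either side. The same check as a plain predicate (a sketch with an invented name, returning bool instead of raising):

```python
def sub_dict_match(sub, sup):
    """Return True iff sub is a (recursive) subset of sup."""
    if not set(sub).issubset(sup):
        return False
    for key, sub_value in sub.items():
        super_value = sup[key]
        if isinstance(sub_value, dict):
            if not sub_dict_match(sub_value, super_value):
                return False
        elif 'DONTCARE' in (sub_value, super_value):
            continue  # wildcard: either side opts out of comparison
        elif sub_value != super_value:
            return False
    return True
```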
991=== added file 'nova/tests/image/test_s3.py'
992--- nova/tests/image/test_s3.py 1970-01-01 00:00:00 +0000
993+++ nova/tests/image/test_s3.py 2011-07-08 09:39:31 +0000
994@@ -0,0 +1,122 @@
995+# vim: tabstop=4 shiftwidth=4 softtabstop=4
996+
997+# Copyright 2011 Isaku Yamahata
998+# All Rights Reserved.
999+#
1000+# Licensed under the Apache License, Version 2.0 (the "License"); you may
1001+# not use this file except in compliance with the License. You may obtain
1002+# a copy of the License at
1003+#
1004+# http://www.apache.org/licenses/LICENSE-2.0
1005+#
1006+# Unless required by applicable law or agreed to in writing, software
1007+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
1008+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1009+# License for the specific language governing permissions and limitations
1010+# under the License.
1011+
1012+from nova import context
1013+from nova import flags
1014+from nova import test
1015+from nova.image import s3
1016+
1017+FLAGS = flags.FLAGS
1018+
1019+
1020+ami_manifest_xml = """<?xml version="1.0" ?>
1021+<manifest>
1022+ <version>2011-06-17</version>
1023+ <bundler>
1024+ <name>test-s3</name>
1025+ <version>0</version>
1026+ <release>0</release>
1027+ </bundler>
1028+ <machine_configuration>
1029+ <architecture>x86_64</architecture>
1030+ <block_device_mapping>
1031+ <mapping>
1032+ <virtual>ami</virtual>
1033+ <device>sda1</device>
1034+ </mapping>
1035+ <mapping>
1036+ <virtual>root</virtual>
1037+ <device>/dev/sda1</device>
1038+ </mapping>
1039+ <mapping>
1040+ <virtual>ephemeral0</virtual>
1041+ <device>sda2</device>
1042+ </mapping>
1043+ <mapping>
1044+ <virtual>swap</virtual>
1045+ <device>sda3</device>
1046+ </mapping>
1047+ </block_device_mapping>
1048+ </machine_configuration>
1049+</manifest>
1050+"""
1051+
1052+
1053+class TestS3ImageService(test.TestCase):
1054+ def setUp(self):
1055+ super(TestS3ImageService, self).setUp()
1056+ self.orig_image_service = FLAGS.image_service
1057+ FLAGS.image_service = 'nova.image.fake.FakeImageService'
1058+ self.image_service = s3.S3ImageService()
1059+ self.context = context.RequestContext(None, None)
1060+
1061+ def tearDown(self):
1062+ super(TestS3ImageService, self).tearDown()
1063+ FLAGS.image_service = self.orig_image_service
1064+
1065+ def _assertEqualList(self, list0, list1, keys):
1066+ self.assertEqual(len(list0), len(list1))
1067+ key = keys[0]
1068+ for x in list0:
1069+ self.assertEqual(len(x), len(keys))
1070+ self.assertTrue(key in x)
1071+ for y in list1:
1072+ self.assertTrue(key in y)
1073+ if x[key] == y[key]:
1074+ for k in keys:
1075+ self.assertEqual(x[k], y[k])
1076+
1077+ def test_s3_create(self):
1078+ metadata = {'properties': {
1079+ 'root_device_name': '/dev/sda1',
1080+ 'block_device_mapping': [
1081+ {'device_name': '/dev/sda1',
1082+ 'snapshot_id': 'snap-12345678',
1083+ 'delete_on_termination': True},
1084+ {'device_name': '/dev/sda2',
1085+ 'virtual_name': 'ephemeral0'},
1086+ {'device_name': '/dev/sdb0',
1087+ 'no_device': True}]}}
1088+ _manifest, image = self.image_service._s3_parse_manifest(
1089+ self.context, metadata, ami_manifest_xml)
1090+ image_id = image['id']
1091+
1092+ ret_image = self.image_service.show(self.context, image_id)
1093+ self.assertTrue('properties' in ret_image)
1094+ properties = ret_image['properties']
1095+
1096+ self.assertTrue('mappings' in properties)
1097+ mappings = properties['mappings']
1098+ expected_mappings = [
1099+ {"device": "sda1", "virtual": "ami"},
1100+ {"device": "/dev/sda1", "virtual": "root"},
1101+ {"device": "sda2", "virtual": "ephemeral0"},
1102+ {"device": "sda3", "virtual": "swap"}]
1103+ self._assertEqualList(mappings, expected_mappings,
1104+ ['device', 'virtual'])
1105+
1106+ self.assertTrue('block_device_mapping' in properties)
1107+ block_device_mapping = properties['block_device_mapping']
1108+ expected_bdm = [
1109+ {'device_name': '/dev/sda1',
1110+ 'snapshot_id': 'snap-12345678',
1111+ 'delete_on_termination': True},
1112+ {'device_name': '/dev/sda2',
1113+ 'virtual_name': 'ephemeral0'},
1114+ {'device_name': '/dev/sdb0',
1115+ 'no_device': True}]
1116+ self.assertEqual(block_device_mapping, expected_bdm)
1117
1118=== modified file 'nova/tests/test_api.py'
1119--- nova/tests/test_api.py 2011-06-24 12:01:51 +0000
1120+++ nova/tests/test_api.py 2011-07-08 09:39:31 +0000
1121@@ -92,7 +92,9 @@
1122 conv = ec2utils._try_convert
1123 self.assertEqual(conv('None'), None)
1124 self.assertEqual(conv('True'), True)
1125+ self.assertEqual(conv('true'), True)
1126 self.assertEqual(conv('False'), False)
1127+ self.assertEqual(conv('false'), False)
1128 self.assertEqual(conv('0'), 0)
1129 self.assertEqual(conv('42'), 42)
1130 self.assertEqual(conv('3.14'), 3.14)
1131@@ -107,6 +109,8 @@
1132 def test_ec2_id_to_id(self):
1133 self.assertEqual(ec2utils.ec2_id_to_id('i-0000001e'), 30)
1134 self.assertEqual(ec2utils.ec2_id_to_id('ami-1d'), 29)
1135+ self.assertEqual(ec2utils.ec2_id_to_id('snap-0000001c'), 28)
1136+ self.assertEqual(ec2utils.ec2_id_to_id('vol-0000001b'), 27)
1137
1138 def test_bad_ec2_id(self):
1139 self.assertRaises(exception.InvalidEc2Id,
1140@@ -116,6 +120,72 @@
1141 def test_id_to_ec2_id(self):
1142 self.assertEqual(ec2utils.id_to_ec2_id(30), 'i-0000001e')
1143 self.assertEqual(ec2utils.id_to_ec2_id(29, 'ami-%08x'), 'ami-0000001d')
1144+ self.assertEqual(ec2utils.id_to_ec2_snap_id(28), 'snap-0000001c')
1145+ self.assertEqual(ec2utils.id_to_ec2_vol_id(27), 'vol-0000001b')
1146+
1147+ def test_dict_from_dotted_str(self):
1148+ in_str = [('BlockDeviceMapping.1.DeviceName', '/dev/sda1'),
1149+ ('BlockDeviceMapping.1.Ebs.SnapshotId', 'snap-0000001c'),
1150+ ('BlockDeviceMapping.1.Ebs.VolumeSize', '80'),
1151+ ('BlockDeviceMapping.1.Ebs.DeleteOnTermination', 'false'),
1152+ ('BlockDeviceMapping.2.DeviceName', '/dev/sdc'),
1153+ ('BlockDeviceMapping.2.VirtualName', 'ephemeral0')]
1154+ expected_dict = {
1155+ 'block_device_mapping': {
1156+ '1': {'device_name': '/dev/sda1',
1157+ 'ebs': {'snapshot_id': 'snap-0000001c',
1158+ 'volume_size': 80,
1159+ 'delete_on_termination': False}},
1160+ '2': {'device_name': '/dev/sdc',
1161+ 'virtual_name': 'ephemeral0'}}}
1162+ out_dict = ec2utils.dict_from_dotted_str(in_str)
1163+
1164+ self.assertDictMatch(out_dict, expected_dict)
1165+
1166+ def test_properties_root_device_name(self):
1167+ mappings = [{"device": "/dev/sda1", "virtual": "root"}]
1168+ properties0 = {'mappings': mappings}
1169+ properties1 = {'root_device_name': '/dev/sdb', 'mappings': mappings}
1170+
1171+ root_device_name = ec2utils.properties_root_device_name(properties0)
1172+ self.assertEqual(root_device_name, '/dev/sda1')
1173+
1174+ root_device_name = ec2utils.properties_root_device_name(properties1)
1175+ self.assertEqual(root_device_name, '/dev/sdb')
1176+
1177+ def test_mapping_prepend_dev(self):
1178+ mappings = [
1179+ {'virtual': 'ami',
1180+ 'device': 'sda1'},
1181+ {'virtual': 'root',
1182+ 'device': '/dev/sda1'},
1183+
1184+ {'virtual': 'swap',
1185+ 'device': 'sdb1'},
1186+ {'virtual': 'swap',
1187+ 'device': '/dev/sdb2'},
1188+
1189+ {'virtual': 'ephemeral0',
1190+ 'device': 'sdc1'},
1191+ {'virtual': 'ephemeral1',
1192+ 'device': '/dev/sdc1'}]
1193+ expected_result = [
1194+ {'virtual': 'ami',
1195+ 'device': 'sda1'},
1196+ {'virtual': 'root',
1197+ 'device': '/dev/sda1'},
1198+
1199+ {'virtual': 'swap',
1200+ 'device': '/dev/sdb1'},
1201+ {'virtual': 'swap',
1202+ 'device': '/dev/sdb2'},
1203+
1204+ {'virtual': 'ephemeral0',
1205+ 'device': '/dev/sdc1'},
1206+ {'virtual': 'ephemeral1',
1207+ 'device': '/dev/sdc1'}]
1208+ self.assertDictListMatch(ec2utils.mappings_prepend_dev(mappings),
1209+ expected_result)
1210
1211
1212 class ApiEc2TestCase(test.TestCase):
1213
1214=== added file 'nova/tests/test_bdm.py'
1215--- nova/tests/test_bdm.py 1970-01-01 00:00:00 +0000
1216+++ nova/tests/test_bdm.py 2011-07-08 09:39:31 +0000
1217@@ -0,0 +1,233 @@
1218+# vim: tabstop=4 shiftwidth=4 softtabstop=4
1219+
1220+# Copyright 2011 Isaku Yamahata
1221+# All Rights Reserved.
1222+#
1223+# Licensed under the Apache License, Version 2.0 (the "License"); you may
1224+# not use this file except in compliance with the License. You may obtain
1225+# a copy of the License at
1226+#
1227+# http://www.apache.org/licenses/LICENSE-2.0
1228+#
1229+# Unless required by applicable law or agreed to in writing, software
1230+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
1231+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1232+# License for the specific language governing permissions and limitations
1233+# under the License.
1234+
1235+"""
1236+Tests for Block Device Mapping Code.
1237+"""
1238+
1239+from nova.api.ec2 import cloud
1240+from nova import test
1241+
1242+
1243+class BlockDeviceMappingEc2CloudTestCase(test.TestCase):
1244+ """Test Case for Block Device Mapping"""
1245+
1246+ def setUp(self):
1247+ super(BlockDeviceMappingEc2CloudTestCase, self).setUp()
1248+
1249+ def tearDown(self):
1250+ super(BlockDeviceMappingEc2CloudTestCase, self).tearDown()
1251+
1252+ def _assertApply(self, action, bdm_list):
1253+ for bdm, expected_result in bdm_list:
1254+ self.assertDictMatch(action(bdm), expected_result)
1255+
1256+ def test_parse_block_device_mapping(self):
1257+ bdm_list = [
1258+ ({'device_name': '/dev/fake0',
1259+ 'ebs': {'snapshot_id': 'snap-12345678',
1260+ 'volume_size': 1}},
1261+ {'device_name': '/dev/fake0',
1262+ 'snapshot_id': 0x12345678,
1263+ 'volume_size': 1,
1264+ 'delete_on_termination': True}),
1265+
1266+ ({'device_name': '/dev/fake1',
1267+ 'ebs': {'snapshot_id': 'snap-23456789',
1268+ 'delete_on_termination': False}},
1269+ {'device_name': '/dev/fake1',
1270+ 'snapshot_id': 0x23456789,
1271+ 'delete_on_termination': False}),
1272+
1273+ ({'device_name': '/dev/fake2',
1274+ 'ebs': {'snapshot_id': 'vol-87654321',
1275+ 'volume_size': 2}},
1276+ {'device_name': '/dev/fake2',
1277+ 'volume_id': 0x87654321,
1278+ 'volume_size': 2,
1279+ 'delete_on_termination': True}),
1280+
1281+ ({'device_name': '/dev/fake3',
1282+ 'ebs': {'snapshot_id': 'vol-98765432',
1283+ 'delete_on_termination': False}},
1284+ {'device_name': '/dev/fake3',
1285+ 'volume_id': 0x98765432,
1286+ 'delete_on_termination': False}),
1287+
1288+ ({'device_name': '/dev/fake4',
1289+ 'ebs': {'no_device': True}},
1290+ {'device_name': '/dev/fake4',
1291+ 'no_device': True}),
1292+
1293+ ({'device_name': '/dev/fake5',
1294+ 'virtual_name': 'ephemeral0'},
1295+ {'device_name': '/dev/fake5',
1296+ 'virtual_name': 'ephemeral0'}),
1297+
1298+ ({'device_name': '/dev/fake6',
1299+ 'virtual_name': 'swap'},
1300+ {'device_name': '/dev/fake6',
1301+ 'virtual_name': 'swap'}),
1302+ ]
1303+ self._assertApply(cloud._parse_block_device_mapping, bdm_list)
1304+
1305+ def test_format_block_device_mapping(self):
1306+ bdm_list = [
1307+ ({'device_name': '/dev/fake0',
1308+ 'snapshot_id': 0x12345678,
1309+ 'volume_size': 1,
1310+ 'delete_on_termination': True},
1311+ {'deviceName': '/dev/fake0',
1312+ 'ebs': {'snapshotId': 'snap-12345678',
1313+ 'volumeSize': 1,
1314+ 'deleteOnTermination': True}}),
1315+
1316+ ({'device_name': '/dev/fake1',
1317+ 'snapshot_id': 0x23456789},
1318+ {'deviceName': '/dev/fake1',
1319+ 'ebs': {'snapshotId': 'snap-23456789'}}),
1320+
1321+ ({'device_name': '/dev/fake2',
1322+ 'snapshot_id': 0x23456789,
1323+ 'delete_on_termination': False},
1324+ {'deviceName': '/dev/fake2',
1325+ 'ebs': {'snapshotId': 'snap-23456789',
1326+ 'deleteOnTermination': False}}),
1327+
1328+ ({'device_name': '/dev/fake3',
1329+ 'volume_id': 0x12345678,
1330+ 'volume_size': 1,
1331+ 'delete_on_termination': True},
1332+ {'deviceName': '/dev/fake3',
1333+ 'ebs': {'snapshotId': 'vol-12345678',
1334+ 'volumeSize': 1,
1335+ 'deleteOnTermination': True}}),
1336+
1337+ ({'device_name': '/dev/fake4',
1338+ 'volume_id': 0x23456789},
1339+ {'deviceName': '/dev/fake4',
1340+ 'ebs': {'snapshotId': 'vol-23456789'}}),
1341+
1342+ ({'device_name': '/dev/fake5',
1343+ 'volume_id': 0x23456789,
1344+ 'delete_on_termination': False},
1345+ {'deviceName': '/dev/fake5',
1346+ 'ebs': {'snapshotId': 'vol-23456789',
1347+ 'deleteOnTermination': False}}),
1348+ ]
1349+ self._assertApply(cloud._format_block_device_mapping, bdm_list)
1350+
1351+ def test_format_mapping(self):
1352+ properties = {
1353+ 'mappings': [
1354+ {'virtual': 'ami',
1355+ 'device': 'sda1'},
1356+ {'virtual': 'root',
1357+ 'device': '/dev/sda1'},
1358+
1359+ {'virtual': 'swap',
1360+ 'device': 'sdb1'},
1361+ {'virtual': 'swap',
1362+ 'device': 'sdb2'},
1363+ {'virtual': 'swap',
1364+ 'device': 'sdb3'},
1365+ {'virtual': 'swap',
1366+ 'device': 'sdb4'},
1367+
1368+ {'virtual': 'ephemeral0',
1369+ 'device': 'sdc1'},
1370+ {'virtual': 'ephemeral1',
1371+ 'device': 'sdc2'},
1372+ {'virtual': 'ephemeral2',
1373+ 'device': 'sdc3'},
1374+ ],
1375+
1376+ 'block_device_mapping': [
1377+ # root
1378+ {'device_name': '/dev/sda1',
1379+ 'snapshot_id': 0x12345678,
1380+ 'delete_on_termination': False},
1381+
1382+
1383+ # overwrite swap
1384+ {'device_name': '/dev/sdb2',
1385+ 'snapshot_id': 0x23456789,
1386+ 'delete_on_termination': False},
1387+ {'device_name': '/dev/sdb3',
1388+ 'snapshot_id': 0x3456789A},
1389+ {'device_name': '/dev/sdb4',
1390+ 'no_device': True},
1391+
1392+ # overwrite ephemeral
1393+ {'device_name': '/dev/sdc2',
1394+ 'snapshot_id': 0x3456789A,
1395+ 'delete_on_termination': False},
1396+ {'device_name': '/dev/sdc3',
1397+ 'snapshot_id': 0x456789AB},
1398+ {'device_name': '/dev/sdc4',
1399+ 'no_device': True},
1400+
1401+ # volume
1402+ {'device_name': '/dev/sdd1',
1403+ 'snapshot_id': 0x87654321,
1404+ 'delete_on_termination': False},
1405+ {'device_name': '/dev/sdd2',
1406+ 'snapshot_id': 0x98765432},
1407+ {'device_name': '/dev/sdd3',
1408+ 'snapshot_id': 0xA9875463},
1409+ {'device_name': '/dev/sdd4',
1410+ 'no_device': True}]}
1411+
1412+ expected_result = {
1413+ 'blockDeviceMapping': [
1414+ # root
1415+ {'deviceName': '/dev/sda1',
1416+ 'ebs': {'snapshotId': 'snap-12345678',
1417+ 'deleteOnTermination': False}},
1418+
1419+ # swap
1420+ {'deviceName': '/dev/sdb1',
1421+ 'virtualName': 'swap'},
1422+ {'deviceName': '/dev/sdb2',
1423+ 'ebs': {'snapshotId': 'snap-23456789',
1424+ 'deleteOnTermination': False}},
1425+ {'deviceName': '/dev/sdb3',
1426+ 'ebs': {'snapshotId': 'snap-3456789a'}},
1427+
1428+ # ephemeral
1429+ {'deviceName': '/dev/sdc1',
1430+ 'virtualName': 'ephemeral0'},
1431+ {'deviceName': '/dev/sdc2',
1432+ 'ebs': {'snapshotId': 'snap-3456789a',
1433+ 'deleteOnTermination': False}},
1434+ {'deviceName': '/dev/sdc3',
1435+ 'ebs': {'snapshotId': 'snap-456789ab'}},
1436+
1437+ # volume
1438+ {'deviceName': '/dev/sdd1',
1439+ 'ebs': {'snapshotId': 'snap-87654321',
1440+ 'deleteOnTermination': False}},
1441+ {'deviceName': '/dev/sdd2',
1442+ 'ebs': {'snapshotId': 'snap-98765432'}},
1443+ {'deviceName': '/dev/sdd3',
1444+ 'ebs': {'snapshotId': 'snap-a9875463'}}]}
1445+
1446+ result = {}
1447+ cloud._format_mappings(properties, result)
1448+ self.assertEqual(sorted(result['blockDeviceMapping']),
1449+ sorted(expected_result['blockDeviceMapping']))
1451
1452=== modified file 'nova/tests/test_cloud.py'
1453--- nova/tests/test_cloud.py 2011-07-01 15:47:33 +0000
1454+++ nova/tests/test_cloud.py 2011-07-08 09:39:31 +0000
1455@@ -45,7 +45,8 @@
1456 class CloudTestCase(test.TestCase):
1457 def setUp(self):
1458 super(CloudTestCase, self).setUp()
1459- self.flags(connection_type='fake')
1460+ self.flags(connection_type='fake',
1461+ stub_network=True)
1462
1463 self.conn = rpc.Connection.instance()
1464
1465@@ -289,7 +290,7 @@
1466 vol2 = db.volume_create(self.context, {})
1467 result = self.cloud.describe_volumes(self.context)
1468 self.assertEqual(len(result['volumeSet']), 2)
1469- volume_id = ec2utils.id_to_ec2_id(vol2['id'], 'vol-%08x')
1470+ volume_id = ec2utils.id_to_ec2_vol_id(vol2['id'])
1471 result = self.cloud.describe_volumes(self.context,
1472 volume_id=[volume_id])
1473 self.assertEqual(len(result['volumeSet']), 1)
1474@@ -305,7 +306,7 @@
1475 snap = db.snapshot_create(self.context, {'volume_id': vol['id'],
1476 'volume_size': vol['size'],
1477 'status': "available"})
1478- snapshot_id = ec2utils.id_to_ec2_id(snap['id'], 'snap-%08x')
1479+ snapshot_id = ec2utils.id_to_ec2_snap_id(snap['id'])
1480
1481 result = self.cloud.create_volume(self.context,
1482 snapshot_id=snapshot_id)
1483@@ -344,7 +345,7 @@
1484 snap2 = db.snapshot_create(self.context, {'volume_id': vol['id']})
1485 result = self.cloud.describe_snapshots(self.context)
1486 self.assertEqual(len(result['snapshotSet']), 2)
1487- snapshot_id = ec2utils.id_to_ec2_id(snap2['id'], 'snap-%08x')
1488+ snapshot_id = ec2utils.id_to_ec2_snap_id(snap2['id'])
1489 result = self.cloud.describe_snapshots(self.context,
1490 snapshot_id=[snapshot_id])
1491 self.assertEqual(len(result['snapshotSet']), 1)
1492@@ -358,7 +359,7 @@
1493 def test_create_snapshot(self):
1494 """Makes sure create_snapshot works."""
1495 vol = db.volume_create(self.context, {'status': "available"})
1496- volume_id = ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x')
1497+ volume_id = ec2utils.id_to_ec2_vol_id(vol['id'])
1498
1499 result = self.cloud.create_snapshot(self.context,
1500 volume_id=volume_id)
1501@@ -375,7 +376,7 @@
1502 vol = db.volume_create(self.context, {'status': "available"})
1503 snap = db.snapshot_create(self.context, {'volume_id': vol['id'],
1504 'status': "available"})
1505- snapshot_id = ec2utils.id_to_ec2_id(snap['id'], 'snap-%08x')
1506+ snapshot_id = ec2utils.id_to_ec2_snap_id(snap['id'])
1507
1508 result = self.cloud.delete_snapshot(self.context,
1509 snapshot_id=snapshot_id)
1510@@ -414,6 +415,185 @@
1511 db.service_destroy(self.context, comp1['id'])
1512 db.service_destroy(self.context, comp2['id'])
1513
1514+ def _block_device_mapping_create(self, instance_id, mappings):
1515+ volumes = []
1516+ for bdm in mappings:
1517+ db.block_device_mapping_create(self.context, bdm)
1518+ if 'volume_id' in bdm:
1519+ values = {'id': bdm['volume_id']}
1520+ for bdm_key, vol_key in [('snapshot_id', 'snapshot_id'),
1521+ ('snapshot_size', 'volume_size'),
1522+ ('delete_on_termination',
1523+ 'delete_on_termination')]:
1524+ if bdm_key in bdm:
1525+ values[vol_key] = bdm[bdm_key]
1526+ vol = db.volume_create(self.context, values)
1527+ db.volume_attached(self.context, vol['id'],
1528+ instance_id, bdm['device_name'])
1529+ volumes.append(vol)
1530+ return volumes
1531+
1532+ def _setUpBlockDeviceMapping(self):
1533+ inst1 = db.instance_create(self.context,
1534+ {'image_ref': 1,
1535+ 'root_device_name': '/dev/sdb1'})
1536+ inst2 = db.instance_create(self.context,
1537+ {'image_ref': 2,
1538+ 'root_device_name': '/dev/sdc1'})
1539+
1540+ instance_id = inst1['id']
1541+ mappings0 = [
1542+ {'instance_id': instance_id,
1543+ 'device_name': '/dev/sdb1',
1544+ 'snapshot_id': '1',
1545+ 'volume_id': '2'},
1546+ {'instance_id': instance_id,
1547+ 'device_name': '/dev/sdb2',
1548+ 'volume_id': '3',
1549+ 'volume_size': 1},
1550+ {'instance_id': instance_id,
1551+ 'device_name': '/dev/sdb3',
1552+ 'delete_on_termination': True,
1553+ 'snapshot_id': '4',
1554+ 'volume_id': '5'},
1555+ {'instance_id': instance_id,
1556+ 'device_name': '/dev/sdb4',
1557+ 'delete_on_termination': False,
1558+ 'snapshot_id': '6',
1559+ 'volume_id': '7'},
1560+ {'instance_id': instance_id,
1561+ 'device_name': '/dev/sdb5',
1562+ 'snapshot_id': '8',
1563+ 'volume_id': '9',
1564+ 'volume_size': 0},
1565+ {'instance_id': instance_id,
1566+ 'device_name': '/dev/sdb6',
1567+ 'snapshot_id': '10',
1568+ 'volume_id': '11',
1569+ 'volume_size': 1},
1570+ {'instance_id': instance_id,
1571+ 'device_name': '/dev/sdb7',
1572+ 'no_device': True},
1573+ {'instance_id': instance_id,
1574+ 'device_name': '/dev/sdb8',
1575+ 'virtual_name': 'swap'},
1576+ {'instance_id': instance_id,
1577+ 'device_name': '/dev/sdb9',
1578+ 'virtual_name': 'ephemeral3'}]
1579+
1580+ volumes = self._block_device_mapping_create(instance_id, mappings0)
1581+ return (inst1, inst2, volumes)
1582+
1583+ def _tearDownBlockDeviceMapping(self, inst1, inst2, volumes):
1584+ for vol in volumes:
1585+ db.volume_destroy(self.context, vol['id'])
1586+ for id in (inst1['id'], inst2['id']):
1587+ for bdm in db.block_device_mapping_get_all_by_instance(
1588+ self.context, id):
1589+ db.block_device_mapping_destroy(self.context, bdm['id'])
1590+ db.instance_destroy(self.context, inst2['id'])
1591+ db.instance_destroy(self.context, inst1['id'])
1592+
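
The `_format_instance_bdm` method exercised by these fixtures turns DB block-device rows into EC2 `DescribeInstances` fields. A simplified sketch of that translation, consistent with the expected dictionaries in this test (the real method in `nova/api/ec2/cloud.py` handles more cases; this version is an assumption-laden stand-in):

```python
def format_instance_bdm(bdm_rows, root_device_name):
    """Build EC2-style rootDeviceType/blockDeviceMapping from DB rows."""
    mapping = []
    root_type = 'instance-store'
    for bdm in bdm_rows:
        # masked devices and swap/ephemeral entries aren't reported yet
        if bdm.get('no_device') or bdm.get('virtual_name'):
            continue
        if bdm.get('volume_id') is None:
            continue
        if bdm['device_name'] == root_device_name:
            root_type = 'ebs'  # root backed by a volume => EBS-style root
        mapping.append({
            'deviceName': bdm['device_name'],
            'ebs': {'volumeId': bdm['volume_id'],
                    'status': 'in-use',
                    'deleteOnTermination':
                        bdm.get('delete_on_termination') or False}})
    return {'rootDeviceName': root_device_name,
            'rootDeviceType': root_type,
            'blockDeviceMapping': mapping}
```

This mirrors why `inst1` (root `/dev/sdb1` backed by a volume) reports `rootDeviceType: 'ebs'` while `inst2` stays `'instance-store'`.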
1593+ _expected_instance_bdm1 = {
1594+ 'instanceId': 'i-00000001',
1595+ 'rootDeviceName': '/dev/sdb1',
1596+ 'rootDeviceType': 'ebs'}
1597+
1598+ _expected_block_device_mapping0 = [
1599+ {'deviceName': '/dev/sdb1',
1600+ 'ebs': {'status': 'in-use',
1601+ 'deleteOnTermination': False,
1602+ 'volumeId': 2,
1603+ }},
1604+ {'deviceName': '/dev/sdb2',
1605+ 'ebs': {'status': 'in-use',
1606+ 'deleteOnTermination': False,
1607+ 'volumeId': 3,
1608+ }},
1609+ {'deviceName': '/dev/sdb3',
1610+ 'ebs': {'status': 'in-use',
1611+ 'deleteOnTermination': True,
1612+ 'volumeId': 5,
1613+ }},
1614+ {'deviceName': '/dev/sdb4',
1615+ 'ebs': {'status': 'in-use',
1616+ 'deleteOnTermination': False,
1617+ 'volumeId': 7,
1618+ }},
1619+ {'deviceName': '/dev/sdb5',
1620+ 'ebs': {'status': 'in-use',
1621+ 'deleteOnTermination': False,
1622+ 'volumeId': 9,
1623+ }},
1624+ {'deviceName': '/dev/sdb6',
1625+ 'ebs': {'status': 'in-use',
1626+ 'deleteOnTermination': False,
1627+ 'volumeId': 11, }}]
1628+ # NOTE(yamahata): swap/ephemeral device case isn't supported yet.
1629+
1630+ _expected_instance_bdm2 = {
1631+ 'instanceId': 'i-00000002',
1632+ 'rootDeviceName': '/dev/sdc1',
1633+ 'rootDeviceType': 'instance-store'}
1634+
1635+ def test_format_instance_bdm(self):
1636+ (inst1, inst2, volumes) = self._setUpBlockDeviceMapping()
1637+
1638+ result = {}
1639+ self.cloud._format_instance_bdm(self.context, inst1['id'], '/dev/sdb1',
1640+ result)
1641+ self.assertSubDictMatch(
1642+ {'rootDeviceType': self._expected_instance_bdm1['rootDeviceType']},
1643+ result)
1644+ self._assertEqualBlockDeviceMapping(
1645+ self._expected_block_device_mapping0, result['blockDeviceMapping'])
1646+
1647+ result = {}
1648+ self.cloud._format_instance_bdm(self.context, inst2['id'], '/dev/sdc1',
1649+ result)
1650+ self.assertSubDictMatch(
1651+ {'rootDeviceType': self._expected_instance_bdm2['rootDeviceType']},
1652+ result)
1653+
1654+ self._tearDownBlockDeviceMapping(inst1, inst2, volumes)
1655+
1656+ def _assertInstance(self, instance_id):
1657+ ec2_instance_id = ec2utils.id_to_ec2_id(instance_id)
1658+ result = self.cloud.describe_instances(self.context,
1659+ instance_id=[ec2_instance_id])
1660+ result = result['reservationSet'][0]
1661+ self.assertEqual(len(result['instancesSet']), 1)
1662+ result = result['instancesSet'][0]
1663+ self.assertEqual(result['instanceId'], ec2_instance_id)
1664+ return result
1665+
1666+ def _assertEqualBlockDeviceMapping(self, expected, result):
1667+ self.assertEqual(len(expected), len(result))
1668+ for x in expected:
1669+ found = False
1670+ for y in result:
1671+ if x['deviceName'] == y['deviceName']:
1672+ self.assertSubDictMatch(x, y)
1673+ found = True
1674+ break
1675+ self.assertTrue(found)
1676+
1677+ def test_describe_instances_bdm(self):
1678+        """Make sure describe_instances works with root_device_name and
1679+        block device mappings.
1680+        """
1681+ (inst1, inst2, volumes) = self._setUpBlockDeviceMapping()
1682+
1683+ result = self._assertInstance(inst1['id'])
1684+ self.assertSubDictMatch(self._expected_instance_bdm1, result)
1685+ self._assertEqualBlockDeviceMapping(
1686+ self._expected_block_device_mapping0, result['blockDeviceMapping'])
1687+
1688+ result = self._assertInstance(inst2['id'])
1689+ self.assertSubDictMatch(self._expected_instance_bdm2, result)
1690+
1691+ self._tearDownBlockDeviceMapping(inst1, inst2, volumes)
1692+
1693 def test_describe_images(self):
1694 describe_images = self.cloud.describe_images
1695
1696@@ -443,6 +623,161 @@
1697 self.assertRaises(exception.ImageNotFound, describe_images,
1698 self.context, ['ami-fake'])
1699
1700+ def assertDictListUnorderedMatch(self, L1, L2, key):
1701+ self.assertEqual(len(L1), len(L2))
1702+ for d1 in L1:
1703+ self.assertTrue(key in d1)
1704+ for d2 in L2:
1705+ self.assertTrue(key in d2)
1706+ if d1[key] == d2[key]:
1707+ self.assertDictMatch(d1, d2)
1708+
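
`assertDictListUnorderedMatch` above pairs dicts by a shared key and compares them regardless of list order. The same check as a plain predicate (a hypothetical standalone version; unlike the helper above, it also fails when an expected entry has no counterpart at all):

```python
def unordered_match(expected, result, key='deviceName'):
    """True iff each expected dict has a result dict with the same `key`
    value whose items include all of the expected dict's entries."""
    if len(expected) != len(result):
        return False
    by_key = {d[key]: d for d in result}
    for exp in expected:
        got = by_key.get(exp[key])
        if got is None:
            return False  # no counterpart with the same key value
        if any(got.get(k) != v for k, v in exp.items()):
            return False  # counterpart found but an item disagrees
    return True
```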
1709+ def _setUpImageSet(self, create_volumes_and_snapshots=False):
1710+ mappings1 = [
1711+ {'device': '/dev/sda1', 'virtual': 'root'},
1712+
1713+ {'device': 'sdb0', 'virtual': 'ephemeral0'},
1714+ {'device': 'sdb1', 'virtual': 'ephemeral1'},
1715+ {'device': 'sdb2', 'virtual': 'ephemeral2'},
1716+ {'device': 'sdb3', 'virtual': 'ephemeral3'},
1717+ {'device': 'sdb4', 'virtual': 'ephemeral4'},
1718+
1719+ {'device': 'sdc0', 'virtual': 'swap'},
1720+ {'device': 'sdc1', 'virtual': 'swap'},
1721+ {'device': 'sdc2', 'virtual': 'swap'},
1722+ {'device': 'sdc3', 'virtual': 'swap'},
1723+ {'device': 'sdc4', 'virtual': 'swap'}]
1724+ block_device_mapping1 = [
1725+ {'device_name': '/dev/sdb1', 'snapshot_id': 01234567},
1726+ {'device_name': '/dev/sdb2', 'volume_id': 01234567},
1727+ {'device_name': '/dev/sdb3', 'virtual_name': 'ephemeral5'},
1728+ {'device_name': '/dev/sdb4', 'no_device': True},
1729+
1730+ {'device_name': '/dev/sdc1', 'snapshot_id': 12345678},
1731+ {'device_name': '/dev/sdc2', 'volume_id': 12345678},
1732+ {'device_name': '/dev/sdc3', 'virtual_name': 'ephemeral6'},
1733+ {'device_name': '/dev/sdc4', 'no_device': True}]
1734+ image1 = {
1735+ 'id': 1,
1736+ 'properties': {
1737+ 'kernel_id': 1,
1738+ 'type': 'machine',
1739+ 'image_state': 'available',
1740+ 'mappings': mappings1,
1741+ 'block_device_mapping': block_device_mapping1,
1742+ }
1743+ }
1744+
1745+ mappings2 = [{'device': '/dev/sda1', 'virtual': 'root'}]
1746+ block_device_mapping2 = [{'device_name': '/dev/sdb1',
1747+ 'snapshot_id': 01234567}]
1748+ image2 = {
1749+ 'id': 2,
1750+ 'properties': {
1751+ 'kernel_id': 2,
1752+ 'type': 'machine',
1753+ 'root_device_name': '/dev/sdb1',
1754+ 'mappings': mappings2,
1755+ 'block_device_mapping': block_device_mapping2}}
1756+
1757+ def fake_show(meh, context, image_id):
1758+ for i in [image1, image2]:
1759+ if i['id'] == image_id:
1760+ return i
1761+ raise exception.ImageNotFound(image_id=image_id)
1762+
1763+ def fake_detail(meh, context):
1764+ return [image1, image2]
1765+
1766+ self.stubs.Set(fake._FakeImageService, 'show', fake_show)
1767+ self.stubs.Set(fake._FakeImageService, 'detail', fake_detail)
1768+
1769+ volumes = []
1770+ snapshots = []
1771+ if create_volumes_and_snapshots:
1772+ for bdm in block_device_mapping1:
1773+ if 'volume_id' in bdm:
1774+ vol = self._volume_create(bdm['volume_id'])
1775+ volumes.append(vol['id'])
1776+ if 'snapshot_id' in bdm:
1777+ snap = db.snapshot_create(self.context,
1778+ {'id': bdm['snapshot_id'],
1779+ 'volume_id': 76543210,
1780+ 'status': "available",
1781+ 'volume_size': 1})
1782+ snapshots.append(snap['id'])
1783+ return (volumes, snapshots)
1784+
1785+ def _assertImageSet(self, result, root_device_type, root_device_name):
1786+ self.assertEqual(1, len(result['imagesSet']))
1787+ result = result['imagesSet'][0]
1788+ self.assertTrue('rootDeviceType' in result)
1789+ self.assertEqual(result['rootDeviceType'], root_device_type)
1790+ self.assertTrue('rootDeviceName' in result)
1791+ self.assertEqual(result['rootDeviceName'], root_device_name)
1792+ self.assertTrue('blockDeviceMapping' in result)
1793+
1794+ return result
1795+
1796+ _expected_root_device_name1 = '/dev/sda1'
1797+    # NOTE(yamahata): noDevice doesn't make sense when returning a mapping.
1798+    #                 It only makes sense when the user overrides an
1799+    #                 existing mapping.
1800+ _expected_bdms1 = [
1801+ {'deviceName': '/dev/sdb0', 'virtualName': 'ephemeral0'},
1802+ {'deviceName': '/dev/sdb1', 'ebs': {'snapshotId':
1803+ 'snap-00053977'}},
1804+ {'deviceName': '/dev/sdb2', 'ebs': {'snapshotId':
1805+ 'vol-00053977'}},
1806+ {'deviceName': '/dev/sdb3', 'virtualName': 'ephemeral5'},
1807+ # {'deviceName': '/dev/sdb4', 'noDevice': True},
1808+
1809+ {'deviceName': '/dev/sdc0', 'virtualName': 'swap'},
1810+ {'deviceName': '/dev/sdc1', 'ebs': {'snapshotId':
1811+ 'snap-00bc614e'}},
1812+ {'deviceName': '/dev/sdc2', 'ebs': {'snapshotId':
1813+ 'vol-00bc614e'}},
1814+ {'deviceName': '/dev/sdc3', 'virtualName': 'ephemeral6'},
1815+ # {'deviceName': '/dev/sdc4', 'noDevice': True}
1816+ ]
1817+
1818+ _expected_root_device_name2 = '/dev/sdb1'
1819+ _expected_bdms2 = [{'deviceName': '/dev/sdb1',
1820+ 'ebs': {'snapshotId': 'snap-00053977'}}]
1821+
1822+ # NOTE(yamahata):
1823+ # InstanceBlockDeviceMappingItemType
1824+ # rootDeviceType
1825+ # rootDeviceName
1826+ # blockDeviceMapping
1827+ # deviceName
1828+ # virtualName
1829+ # ebs
1830+ # snapshotId
1831+ # volumeSize
1832+ # deleteOnTermination
1833+ # noDevice
1834+ def test_describe_image_mapping(self):
1835+        """Test rootDeviceName and blockDeviceMapping."""
1836+ describe_images = self.cloud.describe_images
1837+ self._setUpImageSet()
1838+
1839+ result = describe_images(self.context, ['ami-00000001'])
1840+ result = self._assertImageSet(result, 'instance-store',
1841+ self._expected_root_device_name1)
1842+
1843+ self.assertDictListUnorderedMatch(result['blockDeviceMapping'],
1844+ self._expected_bdms1, 'deviceName')
1845+
1846+ result = describe_images(self.context, ['ami-00000002'])
1847+ result = self._assertImageSet(result, 'ebs',
1848+ self._expected_root_device_name2)
1849+
1850+ self.assertDictListUnorderedMatch(result['blockDeviceMapping'],
1851+ self._expected_bdms2, 'deviceName')
1852+
1853+ self.stubs.UnsetAll()
1854+
1855 def test_describe_image_attribute(self):
1856 describe_image_attribute = self.cloud.describe_image_attribute
1857
1858@@ -456,6 +791,32 @@
1859 'launchPermission')
1860 self.assertEqual([{'group': 'all'}], result['launchPermission'])
1861
1862+ def test_describe_image_attribute_root_device_name(self):
1863+ describe_image_attribute = self.cloud.describe_image_attribute
1864+ self._setUpImageSet()
1865+
1866+ result = describe_image_attribute(self.context, 'ami-00000001',
1867+ 'rootDeviceName')
1868+ self.assertEqual(result['rootDeviceName'],
1869+ self._expected_root_device_name1)
1870+ result = describe_image_attribute(self.context, 'ami-00000002',
1871+ 'rootDeviceName')
1872+ self.assertEqual(result['rootDeviceName'],
1873+ self._expected_root_device_name2)
1874+
1875+ def test_describe_image_attribute_block_device_mapping(self):
1876+ describe_image_attribute = self.cloud.describe_image_attribute
1877+ self._setUpImageSet()
1878+
1879+ result = describe_image_attribute(self.context, 'ami-00000001',
1880+ 'blockDeviceMapping')
1881+ self.assertDictListUnorderedMatch(result['blockDeviceMapping'],
1882+ self._expected_bdms1, 'deviceName')
1883+ result = describe_image_attribute(self.context, 'ami-00000002',
1884+ 'blockDeviceMapping')
1885+ self.assertDictListUnorderedMatch(result['blockDeviceMapping'],
1886+ self._expected_bdms2, 'deviceName')
1887+
1888 def test_modify_image_attribute(self):
1889 modify_image_attribute = self.cloud.modify_image_attribute
1890
1891@@ -683,7 +1044,7 @@
1892 def test_update_of_volume_display_fields(self):
1893 vol = db.volume_create(self.context, {})
1894 self.cloud.update_volume(self.context,
1895- ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x'),
1896+ ec2utils.id_to_ec2_vol_id(vol['id']),
1897 display_name='c00l v0lum3')
1898 vol = db.volume_get(self.context, vol['id'])
1899 self.assertEqual('c00l v0lum3', vol['display_name'])
1900@@ -692,7 +1053,7 @@
1901 def test_update_of_volume_wont_update_private_fields(self):
1902 vol = db.volume_create(self.context, {})
1903 self.cloud.update_volume(self.context,
1904- ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x'),
1905+ ec2utils.id_to_ec2_vol_id(vol['id']),
1906 mountpoint='/not/here')
1907 vol = db.volume_get(self.context, vol['id'])
1908 self.assertEqual(None, vol['mountpoint'])
1909@@ -770,11 +1131,13 @@
1910
1911 self._restart_compute_service()
1912
1913- def _volume_create(self):
1914+ def _volume_create(self, volume_id=None):
1915 kwargs = {'status': 'available',
1916 'host': self.volume.host,
1917 'size': 1,
1918 'attach_status': 'detached', }
1919+ if volume_id:
1920+ kwargs['id'] = volume_id
1921 return db.volume_create(self.context, kwargs)
1922
1923 def _assert_volume_attached(self, vol, instance_id, mountpoint):
1924@@ -803,10 +1166,10 @@
1925 'max_count': 1,
1926 'block_device_mapping': [{'device_name': '/dev/vdb',
1927 'volume_id': vol1['id'],
1928- 'delete_on_termination': False, },
1929+ 'delete_on_termination': False},
1930 {'device_name': '/dev/vdc',
1931 'volume_id': vol2['id'],
1932- 'delete_on_termination': True, },
1933+ 'delete_on_termination': True},
1934 ]}
1935 ec2_instance_id = self._run_instance_wait(**kwargs)
1936 instance_id = ec2utils.ec2_id_to_id(ec2_instance_id)
1937@@ -938,7 +1301,7 @@
1938 def test_run_with_snapshot(self):
1939 """Makes sure run/stop/start instance with snapshot works."""
1940 vol = self._volume_create()
1941- ec2_volume_id = ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x')
1942+ ec2_volume_id = ec2utils.id_to_ec2_vol_id(vol['id'])
1943
1944 ec2_snapshot1_id = self._create_snapshot(ec2_volume_id)
1945 snapshot1_id = ec2utils.ec2_id_to_id(ec2_snapshot1_id)
1946@@ -997,3 +1360,33 @@
1947 self.cloud.delete_snapshot(self.context, snapshot_id)
1948 greenthread.sleep(0.3)
1949 db.volume_destroy(self.context, vol['id'])
1950+
1951+ def test_create_image(self):
1952+ """Make sure that CreateImage works"""
1953+        # run periodic tasks at a short interval to avoid waiting 60s.
1954+ self._restart_compute_service(periodic_interval=0.3)
1955+
1956+ (volumes, snapshots) = self._setUpImageSet(
1957+ create_volumes_and_snapshots=True)
1958+
1959+ kwargs = {'image_id': 'ami-1',
1960+ 'instance_type': FLAGS.default_instance_type,
1961+ 'max_count': 1}
1962+ ec2_instance_id = self._run_instance_wait(**kwargs)
1963+
1964+        # TODO(yamahata): s3._s3_create() can't easily be unit tested,
1965+        # as there is no unit test for s3.create() either.
1966+ ## result = self.cloud.create_image(self.context, ec2_instance_id,
1967+ ## no_reboot=True)
1968+ ## ec2_image_id = result['imageId']
1969+ ## created_image = self.cloud.describe_images(self.context,
1970+ ## [ec2_image_id])
1971+
1972+ self.cloud.terminate_instances(self.context, [ec2_instance_id])
1973+ for vol in volumes:
1974+ db.volume_destroy(self.context, vol)
1975+ for snap in snapshots:
1976+ db.snapshot_destroy(self.context, snap)
1977+ # TODO(yamahata): clean up snapshot created by CreateImage.
1978+
1979+ self._restart_compute_service()
1980
1981=== modified file 'nova/tests/test_compute.py'
1982--- nova/tests/test_compute.py 2011-06-30 19:20:59 +0000
1983+++ nova/tests/test_compute.py 2011-07-08 09:39:31 +0000
1984@@ -810,3 +810,114 @@
1985 LOG.info(_("After force-killing instances: %s"), instances)
1986 self.assertEqual(len(instances), 1)
1987 self.assertEqual(power_state.SHUTOFF, instances[0]['state'])
1988+
1989+ @staticmethod
1990+ def _parse_db_block_device_mapping(bdm_ref):
1991+ attr_list = ('delete_on_termination', 'device_name', 'no_device',
1992+ 'virtual_name', 'volume_id', 'volume_size', 'snapshot_id')
1993+ bdm = {}
1994+ for attr in attr_list:
1995+ val = bdm_ref.get(attr, None)
1996+ if val:
1997+ bdm[attr] = val
1998+
1999+ return bdm
2000+
2001+ def test_update_block_device_mapping(self):
2002+ instance_id = self._create_instance()
2003+ mappings = [
2004+ {'virtual': 'ami', 'device': 'sda1'},
2005+ {'virtual': 'root', 'device': '/dev/sda1'},
2006+
2007+ {'virtual': 'swap', 'device': 'sdb1'},
2008+ {'virtual': 'swap', 'device': 'sdb2'},
2009+ {'virtual': 'swap', 'device': 'sdb3'},
2010+ {'virtual': 'swap', 'device': 'sdb4'},
2011+
2012+ {'virtual': 'ephemeral0', 'device': 'sdc1'},
2013+ {'virtual': 'ephemeral1', 'device': 'sdc2'},
2014+ {'virtual': 'ephemeral2', 'device': 'sdc3'}]
2015+ block_device_mapping = [
2016+ # root
2017+ {'device_name': '/dev/sda1',
2018+ 'snapshot_id': 0x12345678,
2019+ 'delete_on_termination': False},
2020+
2021+
2022+ # overwrite swap
2023+ {'device_name': '/dev/sdb2',
2024+ 'snapshot_id': 0x23456789,
2025+ 'delete_on_termination': False},
2026+ {'device_name': '/dev/sdb3',
2027+ 'snapshot_id': 0x3456789A},
2028+ {'device_name': '/dev/sdb4',
2029+ 'no_device': True},
2030+
2031+ # overwrite ephemeral
2032+ {'device_name': '/dev/sdc2',
2033+ 'snapshot_id': 0x456789AB,
2034+ 'delete_on_termination': False},
2035+ {'device_name': '/dev/sdc3',
2036+ 'snapshot_id': 0x56789ABC},
2037+ {'device_name': '/dev/sdc4',
2038+ 'no_device': True},
2039+
2040+ # volume
2041+ {'device_name': '/dev/sdd1',
2042+ 'snapshot_id': 0x87654321,
2043+ 'delete_on_termination': False},
2044+ {'device_name': '/dev/sdd2',
2045+ 'snapshot_id': 0x98765432},
2046+ {'device_name': '/dev/sdd3',
2047+ 'snapshot_id': 0xA9875463},
2048+ {'device_name': '/dev/sdd4',
2049+ 'no_device': True}]
2050+
2051+ self.compute_api._update_image_block_device_mapping(
2052+ self.context, instance_id, mappings)
2053+
2054+ bdms = [self._parse_db_block_device_mapping(bdm_ref)
2055+ for bdm_ref in db.block_device_mapping_get_all_by_instance(
2056+ self.context, instance_id)]
2057+ expected_result = [
2058+ {'virtual_name': 'swap', 'device_name': '/dev/sdb1'},
2059+ {'virtual_name': 'swap', 'device_name': '/dev/sdb2'},
2060+ {'virtual_name': 'swap', 'device_name': '/dev/sdb3'},
2061+ {'virtual_name': 'swap', 'device_name': '/dev/sdb4'},
2062+ {'virtual_name': 'ephemeral0', 'device_name': '/dev/sdc1'},
2063+ {'virtual_name': 'ephemeral1', 'device_name': '/dev/sdc2'},
2064+ {'virtual_name': 'ephemeral2', 'device_name': '/dev/sdc3'}]
2065+ bdms.sort()
2066+ expected_result.sort()
2067+ self.assertDictListMatch(bdms, expected_result)
2068+
2069+ self.compute_api._update_block_device_mapping(
2070+ self.context, instance_id, block_device_mapping)
2071+ bdms = [self._parse_db_block_device_mapping(bdm_ref)
2072+ for bdm_ref in db.block_device_mapping_get_all_by_instance(
2073+ self.context, instance_id)]
2074+ expected_result = [
2075+ {'snapshot_id': 0x12345678, 'device_name': '/dev/sda1'},
2076+
2077+ {'virtual_name': 'swap', 'device_name': '/dev/sdb1'},
2078+ {'snapshot_id': 0x23456789, 'device_name': '/dev/sdb2'},
2079+ {'snapshot_id': 0x3456789A, 'device_name': '/dev/sdb3'},
2080+ {'no_device': True, 'device_name': '/dev/sdb4'},
2081+
2082+ {'virtual_name': 'ephemeral0', 'device_name': '/dev/sdc1'},
2083+ {'snapshot_id': 0x456789AB, 'device_name': '/dev/sdc2'},
2084+ {'snapshot_id': 0x56789ABC, 'device_name': '/dev/sdc3'},
2085+ {'no_device': True, 'device_name': '/dev/sdc4'},
2086+
2087+ {'snapshot_id': 0x87654321, 'device_name': '/dev/sdd1'},
2088+ {'snapshot_id': 0x98765432, 'device_name': '/dev/sdd2'},
2089+ {'snapshot_id': 0xA9875463, 'device_name': '/dev/sdd3'},
2090+ {'no_device': True, 'device_name': '/dev/sdd4'}]
2091+ bdms.sort()
2092+ expected_result.sort()
2093+ self.assertDictListMatch(bdms, expected_result)
2094+
2095+ for bdm in db.block_device_mapping_get_all_by_instance(
2096+ self.context, instance_id):
2097+ db.block_device_mapping_destroy(self.context, bdm['id'])
2098+ self.compute.terminate_instance(self.context, instance_id)
2099
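
`test_update_block_device_mapping` above pins down the precedence rule: image-supplied `mappings` (virtual devices such as swap/ephemeral) are written first, then user `block_device_mapping` entries override any row with the same `device_name`, with `no_device: True` masking a device entirely. A simplified sketch of that merge over plain dicts (the real compute API persists DB rows; names here are illustrative):

```python
def merge_bdm(image_mappings, user_bdms):
    """Merge image virtual-device mappings with user overrides."""
    result = {}
    for m in image_mappings:
        # 'ami'/'root' entries name the root device, not a BDM row
        if m['virtual'] in ('ami', 'root'):
            continue
        device = m['device']
        if not device.startswith('/dev/'):
            device = '/dev/' + device
        result[device] = {'device_name': device,
                          'virtual_name': m['virtual']}
    for bdm in user_bdms:
        # last writer wins: user entries replace image-supplied ones
        result[bdm['device_name']] = dict(bdm)
    return sorted(result.values(), key=lambda d: d['device_name'])
```

This is why `/dev/sdb2`-`/dev/sdb4` in the expected result carry snapshot/no_device data while `/dev/sdb1` keeps its image-supplied `swap` entry.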
2100=== modified file 'nova/tests/test_volume.py'
2101--- nova/tests/test_volume.py 2011-06-06 17:20:08 +0000
2102+++ nova/tests/test_volume.py 2011-07-08 09:39:31 +0000
2103@@ -27,8 +27,10 @@
2104 from nova import db
2105 from nova import flags
2106 from nova import log as logging
2107+from nova import rpc
2108 from nova import test
2109 from nova import utils
2110+from nova import volume
2111
2112 FLAGS = flags.FLAGS
2113 LOG = logging.getLogger('nova.tests.volume')
2114@@ -43,6 +45,11 @@
2115 self.flags(connection_type='fake')
2116 self.volume = utils.import_object(FLAGS.volume_manager)
2117 self.context = context.get_admin_context()
2118+ self.instance_id = db.instance_create(self.context, {})['id']
2119+
2120+ def tearDown(self):
2121+ db.instance_destroy(self.context, self.instance_id)
2122+ super(VolumeTestCase, self).tearDown()
2123
2124 @staticmethod
2125 def _create_volume(size='0', snapshot_id=None):
2126@@ -223,6 +230,30 @@
2127 snapshot_id)
2128 self.volume.delete_volume(self.context, volume_id)
2129
2130+ def test_create_snapshot_force(self):
2131+ """Test snapshot in use can be created forcibly."""
2132+
2133+ def fake_cast(ctxt, topic, msg):
2134+ pass
2135+ self.stubs.Set(rpc, 'cast', fake_cast)
2136+
2137+ volume_id = self._create_volume()
2138+ self.volume.create_volume(self.context, volume_id)
2139+ db.volume_attached(self.context, volume_id, self.instance_id,
2140+ '/dev/sda1')
2141+
2142+ volume_api = volume.api.API()
2143+ self.assertRaises(exception.ApiError,
2144+ volume_api.create_snapshot,
2145+ self.context, volume_id,
2146+ 'fake_name', 'fake_description')
2147+ snapshot_ref = volume_api.create_snapshot_force(self.context,
2148+ volume_id,
2149+ 'fake_name',
2150+ 'fake_description')
2151+ db.snapshot_destroy(self.context, snapshot_ref['id'])
2152+ db.volume_destroy(self.context, volume_id)
2153+
2154
2155 class DriverTestCase(test.TestCase):
2156 """Base Test class for Drivers."""
2157
2158=== modified file 'nova/volume/api.py'
2159--- nova/volume/api.py 2011-06-24 12:01:51 +0000
2160+++ nova/volume/api.py 2011-07-08 09:39:31 +0000
2161@@ -140,9 +140,10 @@
2162 {"method": "remove_volume",
2163 "args": {'volume_id': volume_id}})
2164
2165- def create_snapshot(self, context, volume_id, name, description):
2166+ def _create_snapshot(self, context, volume_id, name, description,
2167+ force=False):
2168 volume = self.get(context, volume_id)
2169- if volume['status'] != "available":
2170+ if ((not force) and (volume['status'] != "available")):
2171 raise exception.ApiError(_("Volume status must be available"))
2172
2173 options = {
2174@@ -164,6 +165,14 @@
2175 "snapshot_id": snapshot['id']}})
2176 return snapshot
2177
2178+ def create_snapshot(self, context, volume_id, name, description):
2179+ return self._create_snapshot(context, volume_id, name, description,
2180+ False)
2181+
2182+ def create_snapshot_force(self, context, volume_id, name, description):
2183+ return self._create_snapshot(context, volume_id, name, description,
2184+ True)
2185+
2186 def delete_snapshot(self, context, snapshot_id):
2187 snapshot = self.get_snapshot(context, snapshot_id)
2188 if snapshot['status'] != "available":
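
The `volume/api.py` change above is a common refactor shape: hoist the status check into a private `_create_snapshot` taking a `force` flag, then expose two thin public entry points so callers never pass the boolean directly. A stripped-down sketch (storage and RPC stubbed out; only the control flow is taken from the diff, the rest is assumed scaffolding):

```python
class SnapshotError(Exception):
    """Stand-in for exception.ApiError."""


class VolumeAPI:
    def __init__(self, statuses):
        # volume_id -> status; stand-in for the DB layer
        self.statuses = statuses

    def _create_snapshot(self, volume_id, force=False):
        if not force and self.statuses[volume_id] != 'available':
            raise SnapshotError('Volume status must be available')
        # the real code would insert a DB row and cast to the volume host
        return {'volume_id': volume_id, 'status': 'creating'}

    def create_snapshot(self, volume_id):
        return self._create_snapshot(volume_id, force=False)

    def create_snapshot_force(self, volume_id):
        return self._create_snapshot(volume_id, force=True)
```

Keeping `force` out of the public signature leaves existing `create_snapshot` callers untouched, while CreateImage can snapshot in-use (attached) volumes via `create_snapshot_force`.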