Merge lp:~vishvananda/nova/kill-objectstore into lp:~hudson-openstack/nova/trunk
Status: Merged
Approved by: Rick Harris
Approved revision: 774
Merged at revision: 781
Proposed branch: lp:~vishvananda/nova/kill-objectstore
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 1965 lines (+813/-272), 24 files modified:
  bin/nova-manage (+155/-2)
  doc/source/man/novamanage.rst (+27/-3)
  doc/source/runnova/nova.manage.rst (+23/-0)
  nova/api/ec2/cloud.py (+115/-69)
  nova/api/ec2/ec2utils.py (+32/-0)
  nova/compute/api.py (+2/-2)
  nova/flags.py (+1/-1)
  nova/image/glance.py (+50/-14)
  nova/image/local.py (+83/-37)
  nova/image/s3.py (+218/-75)
  nova/image/service.py (+17/-7)
  nova/objectstore/image.py (+1/-2)
  nova/tests/api/openstack/fakes.py (+8/-6)
  nova/tests/api/openstack/test_images.py (+9/-6)
  nova/tests/fake_flags.py (+1/-0)
  nova/tests/test_cloud.py (+19/-11)
  nova/tests/test_compute.py (+7/-2)
  nova/tests/test_console.py (+1/-1)
  nova/tests/test_direct.py (+1/-2)
  nova/tests/test_quota.py (+19/-13)
  nova/tests/test_scheduler.py (+1/-3)
  nova/tests/test_volume.py (+1/-1)
  nova/virt/images.py (+17/-12)
  nova/virt/libvirt_conn.py (+5/-3)
To merge this branch: bzr merge lp:~vishvananda/nova/kill-objectstore
Related bugs: (none)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Rick Harris (community) | Approve | |
Devin Carlen (community) | Approve | |
Jay Pipes (community) | Approve | |

Review via email: mp+52163@code.launchpad.net
Commit message
Description of the change
Modifies S3ImageService to wrap LocalImageService or GlanceImageService. It now pulls the parts out of s3, decrypts them locally, and sends them to the underlying service. It includes various fixes for image/glance.py, image/local.py and the tests.
I also uncovered a bug in glance so for the glance backend to work properly, it requires the patch to glance here lp:~vishvananda/glance/fix-update or Glance's Cactus trunk r80.
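The delegation this branch introduces can be sketched in miniature. This is an illustrative stand-in only, not Nova's actual interfaces: the class and method names below are hypothetical, and the decrypt step is faked.

```python
# Simplified sketch: S3ImageService does the S3-specific work (fetching
# manifest parts, decrypting locally) and then hands the assembled image
# to whichever underlying image service is configured.

class LocalImageService:
    """Stand-in for a backing store that keeps images in a dict."""
    def __init__(self):
        self._images = {}
        self._next_id = 0

    def create(self, metadata, data):
        self._next_id += 1
        image = dict(metadata, id=self._next_id, data=data)
        self._images[self._next_id] = image
        return image

    def show(self, image_id):
        return self._images[image_id]


class S3ImageService:
    """Wraps an underlying service; only S3-specific logic lives here."""
    def __init__(self, service=None):
        self.service = service or LocalImageService()

    def _download_and_decrypt(self, manifest_location):
        # The real service pulls the encrypted parts out of S3 and
        # decrypts them locally; here we fake the resulting payload.
        return b"decrypted-image-bytes-for-" + manifest_location.encode()

    def create(self, metadata, manifest_location):
        data = self._download_and_decrypt(manifest_location)
        return self.service.create(metadata, data)


svc = S3ImageService()
image = svc.create({"name": "test-image"}, "bucket/manifest.xml")
print(image["id"], image["name"])  # 1 test-image
```

The key property is that the wrapper never stores anything itself, so swapping `LocalImageService` for a Glance-backed service changes nothing in the S3 path.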
Vish Ishaya (vishvananda) wrote:
No, it hasn't been required for a while. If it is None, the instance will be booted as a whole disk. Keep in mind that get('kernel_id') is equivalent to get('kernel_id', None).
On Mar 3, 2011, at 11:29 PM, Devin Carlen wrote:
> Review: Needs Information
> 226 + kernel_
>
> So we seem to go back and forth on this a lot. :) Is kernel_id required or not these days?
> --
> https:/
> You are the owner of lp:~vishvananda/nova/kill-objectstore.
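Vish's point about `get` can be checked directly: `dict.get` already defaults to `None`, so the one-argument and explicit-`None` forms behave identically.

```python
# dict.get() returns None by default when the key is missing, so the
# one-argument and explicit-None forms are interchangeable.
kwargs = {'image_id': 'ami-00000001'}

assert kwargs.get('kernel_id') is None
assert kwargs.get('kernel_id') == kwargs.get('kernel_id', None)

# A different default only matters when the key is absent:
assert kwargs.get('kernel_id', 'aki-default') == 'aki-default'
assert kwargs.get('image_id', 'ami-default') == 'ami-00000001'
```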
Jay Pipes (jaypipes) wrote:
Hi Vish, nice work.
A question:
300 + for ec2_id in image_id:
301 + try:
302 + image = self.image_
Should line 302 be this, instead?
image = self.image_
Also, these lines in CloudController
322 + image = {"image_location": image_location}
323 + image_id = self.image_
There is a "location" field already for images in Glance. Simply setting image['location'] = image_location would trigger Glance to store the location of the image along with the other image information. It would also mean there would be no need to set the Content-type header to 'application/
Not sure how the local image service would handle 'location', but probably the same as it handles 'image_location'...
Other than that, looks very good.
-jay
Vish Ishaya (vishvananda) wrote:
On Mar 4, 2011, at 9:02 AM, Jay Pipes wrote:
> Review: Needs Fixing
> Hi Vish, nice work.
>
> A question:
>
> 300 + for ec2_id in image_id:
> 301 + try:
> 302 + image = self.image_
>
> Should line 302 be this, instead?
>
> image = self.image_
I had wrapped all of the ids in the S3 service instead of cloud because initially I thought that it would need that info for metadata and such. It turns out that was unnecessary, so following your (sort of) suggestion, I went ahead and moved the mapping back into cloud and made the S3 service deal only with internal ids. So I suppose a non-ec2 cloud could theoretically use the S3 image service.
>
> Also, these lines in CloudController
>
> 322 + image = {"image_location": image_location}
> 323 + image_id = self.image_
>
> There is a "location" field already for images in Glance. Simply setting image['location'] = image_location would trigger Glance to store the location of the image along with the other image information. It would also mean there would be no need to set the Content-type header to 'application/
Changing location is not helpful. When I talked to Rick about it, he mentioned that location is a field used by the glance backend. I'm using image_location as an (EC2-specific) field to represent which bucket the manifest came from.
The application/
(Side note: I'm attempting to construct a test case for the glance issue)
>
> Not sure how the local image service would handle 'location', but probably the same as it handles 'image_location'...
>
> Other than that, looks very good.
>
> -jay
> --
> https:/
> You are the owner of lp:~vishvananda/nova/kill-objectstore.
Jay Pipes (jaypipes) wrote:
On Fri, Mar 4, 2011 at 3:11 PM, Vish Ishaya <email address hidden> wrote:
> I had wrapped all of the ids in the S3 service instead of cloud because initially I thought that it would need that info for metadata and such. It turns out that was unnecessary, so following your (sort of) suggestion, I went ahead and moved the mapping back into cloud and made the S3 service deal only with internal ids. So I suppose a non-ec2 cloud could theoretically use the S3 image service.
Ah, ok.
>> Also, these lines in CloudController
>>
>> 322 + image = {"image_location": image_location}
>> 323 + image_id = self.image_
>>
>> There is a "location" field already for images in Glance. Simply setting image['location'] = image_location would trigger Glance to store the location of the image along with the other image information. It would also mean there would be no need to set the Content-type header to 'application/
>
> Changing location is not helpful. When I talked to Rick about it, he mentioned that location is a field used by the glance backend. I'm using image_location as an (EC2-specific) field to represent which bucket the manifest came from.
Ah, I see, OK, no that's not what that field is for in Glance...
> The application/
> (Side note: I'm attempting to construct a test case for the glance issue)
Thanks, much appreciated, Vish. Don't mean to shut you down on those
merge props, just that some of that has been addressed in other
branches currently in review...
-jay
Vish Ishaya (vishvananda) wrote:
It occurs to me that we may need a utility to convert old images into new images, or at least a command-line utility to put images into the local image service.
Vish Ishaya (vishvananda) wrote:
Added a bit more to the patch, including some nice nova-manage commands for uploading and converting images.
Devin Carlen (devcamcar) wrote:
A few nits:
66 + def register_all(self, image, kernel, ramdisk, owner, name=None,
78 + def image_register(
93 + def ramdisk_
This feels a little inconsistent. Should be xxx_register or register_xxx instead of a mix.
66 + def register_all(self, image, kernel, ramdisk, owner, name=None,
67 + is_public='T', architecture=
68 + """Uploads an image, kernel, and ramdisk into the image_service
69 + arguments: image kernel ramdisk owner [name] [is_public='T']
70 + [architecture=
71 + kernel_id = self._register(
72 + is_public, architecture)
73 + ramdisk_id = self._register(
74 + is_public, architecture)
75 + self._register(
76 + architecture, kernel_id, ramdisk_id)
77 +
Instead of the calls to self._register which results in some duplicated code, you could call the higher level method for better code reuse:
kernel_id = kernel_
ramdisk_id = ramdisk_
etc.
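The reuse Devin is suggesting can be sketched as follows. The bodies here are stubbed placeholders, not the real registration logic; only the delegation shape (matching the final preview diff further down) is the point.

```python
# Sketch of the suggested reuse: all_register delegates to the public
# per-type register methods instead of repeating calls to _register.

class ImageCommands:
    def _register(self, image_type, disk_format, container_format, path):
        # Stand-in: the real method builds metadata and uploads the file,
        # returning the new image id.
        return "%s:%s" % (image_type, path)

    def kernel_register(self, path):
        return self._register('kernel', 'aki', 'aki', path)

    def ramdisk_register(self, path):
        return self._register('ramdisk', 'ari', 'ari', path)

    def image_register(self, path, kernel_id=None, ramdisk_id=None):
        return self._register('machine', 'ami', 'ami', path)

    def all_register(self, image, kernel, ramdisk):
        # Reuse the higher-level methods rather than duplicating the
        # per-type arguments to _register three times.
        kernel_id = self.kernel_register(kernel)
        ramdisk_id = self.ramdisk_register(ramdisk)
        return self.image_register(image, kernel_id, ramdisk_id)


cmds = ImageCommands()
print(cmds.all_register('img', 'krn', 'rd'))  # machine:img
```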
451 +<<<<<<< TREE
452 instance_id = ec2_id_
453 +=======
454 + instance_id = ec2utils.
455 +>>>>>>> MERGE-SOURCE
Merge issue here.
Jay Pipes (jaypipes) wrote:
lgtm besides the merge issue. nice work on ImageCommands, btw.
767. By Vish Ishaya: rework register commands based on review
768. By Vish Ishaya: pep8
769. By Vish Ishaya: move the images_dir out of the way when converting
Anne Gentle (annegentle) wrote:
Vish, could you document the new nova-manage commands in the man file? You may have already done so, but I poked around the commits and didn't see a change to doc/source/
770. By Vish Ishaya: modify nova manage doc
771. By Vish Ishaya: update code to work with new container and disk formats from glance
772. By Vish Ishaya: update manpage
Devin Carlen (devcamcar) wrote:
Cool, looks good!
Rick Harris (rconradharris) wrote:
Good Job. Just a couple of femto-nits:
> 829 + if image == None:
Better would be `if image is None:`
> 830 + raise exception.NotFound
> 993 + raise exception.NotFound
Should we include a helpful string here? Something like:
raise exception.
> 1253 + except:
Can we qualify this exception handler?
> 1340 + cloud_private_key, decrypted_
One space -----> thattaway :)
> 1168 + @staticmethod
> 1169 + def _filter(context, images):
Since this references the class, seems like it would make more sense as a @classmethod.
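Rick's `is None` nit reflects the standard Python idiom. A small illustration of why equality comparison with `None` can mislead, using a deliberately pathological class:

```python
# `== None` invokes __eq__, which a class may override to return
# surprising results; `is None` is an identity check and cannot be fooled.

class AlwaysEqual:
    def __eq__(self, other):
        return True  # pathological, but legal

obj = AlwaysEqual()
assert (obj == None) is True    # __eq__ lies: obj compares equal to None
assert (obj is None) is False   # identity check is reliable

image = None
assert image is None            # the recommended form
```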
773. By Vish Ishaya: merged trunk
Vish Ishaya (vishvananda) wrote:
> Good Job. Just a couple of femto-nits:
>
> > 829 + if image == None:
>
> Better would be `if image is None:`
Done
>
> > 830 + raise exception.NotFound
> > 993 + raise exception.NotFound
>
> Should we include a helpful string here? Something like:
>
> raise exception.
> locals())
The rest of the image services raise without a message. It is trapped and wrapped in a message with the proper ec2_id in cloud so it isn't a huge issue for that use case. I hesitate to make this patch bigger by changing all of the NotFounds in the image services.
>
> > 1253 + except:
>
> Can we qualify this exception handler?
Possibly. This is copied from objectstore code, and I'm not sure which exceptions can manifest.
>
> > 1340 + cloud_private_key, decrypted_
>
> One space -----> thattaway :)
>
> > 1168 + @staticmethod
> > 1169 + def _filter(context, images):
>
> Since this references the class, seems like it would make more sense as a
> @classmethod.
I suppose so. I was using staticmethod, but I had to use the class name to call the other helper method that I defined. I suppose that is a fine time to switch to classmethod.
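The staticmethod/classmethod trade-off Vish describes, in a toy example. The names are illustrative, loosely modeled on the `_filter` helper under review, not the actual Nova code:

```python
# With @staticmethod, calling a sibling helper requires hard-coding the
# class name; @classmethod receives cls, so the reference follows the
# class automatically, including under subclassing.

class ImageService:
    @staticmethod
    def _is_visible(context, image):
        # Visible if public, or if the caller owns it.
        return image.get('is_public', False) or context == image.get('owner')

    @classmethod
    def _filter(cls, context, images):
        # cls._is_visible resolves correctly even if a subclass
        # overrides _is_visible.
        return [i for i in images if cls._is_visible(context, i)]


images = [{'owner': 'alice'}, {'is_public': True, 'owner': 'bob'}]
visible = ImageService._filter('alice', images)
print(len(visible))  # 2
```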
774. By Vish Ishaya: minor fixes from review
Preview Diff
1 | === modified file 'bin/nova-manage' |
2 | --- bin/nova-manage 2011-03-09 18:16:26 +0000 |
3 | +++ bin/nova-manage 2011-03-10 05:15:12 +0000 |
4 | @@ -55,6 +55,8 @@ |
5 | |
6 | import datetime |
7 | import gettext |
8 | +import glob |
9 | +import json |
10 | import os |
11 | import re |
12 | import sys |
13 | @@ -81,7 +83,7 @@ |
14 | from nova import quota |
15 | from nova import rpc |
16 | from nova import utils |
17 | -from nova.api.ec2.cloud import ec2_id_to_id |
18 | +from nova.api.ec2 import ec2utils |
19 | from nova.auth import manager |
20 | from nova.cloudpipe import pipelib |
21 | from nova.compute import instance_types |
22 | @@ -94,6 +96,7 @@ |
23 | flags.DECLARE('vlan_start', 'nova.network.manager') |
24 | flags.DECLARE('vpn_start', 'nova.network.manager') |
25 | flags.DECLARE('fixed_range_v6', 'nova.network.manager') |
26 | +flags.DECLARE('images_path', 'nova.image.local') |
27 | flags.DEFINE_flag(flags.HelpFlag()) |
28 | flags.DEFINE_flag(flags.HelpshortFlag()) |
29 | flags.DEFINE_flag(flags.HelpXMLFlag()) |
30 | @@ -104,7 +107,7 @@ |
31 | args: [object_id], e.g. 'vol-0000000a' or 'volume-0000000a' or '10' |
32 | """ |
33 | if '-' in object_id: |
34 | - return ec2_id_to_id(object_id) |
35 | + return ec2utils.ec2_id_to_id(object_id) |
36 | else: |
37 | return int(object_id) |
38 | |
39 | @@ -744,6 +747,155 @@ |
40 | self._print_instance_types(name, inst_types) |
41 | |
42 | |
43 | +class ImageCommands(object): |
44 | + """Methods for dealing with a cloud in an odd state""" |
45 | + |
46 | + def __init__(self, *args, **kwargs): |
47 | + self.image_service = utils.import_object(FLAGS.image_service) |
48 | + |
49 | + def _register(self, image_type, disk_format, container_format, |
50 | + path, owner, name=None, is_public='T', |
51 | + architecture='x86_64', kernel_id=None, ramdisk_id=None): |
52 | + meta = {'is_public': True, |
53 | + 'name': name, |
54 | + 'disk_format': disk_format, |
55 | + 'container_format': container_format, |
56 | + 'properties': {'image_state': 'available', |
57 | + 'owner': owner, |
58 | + 'type': image_type, |
59 | + 'architecture': architecture, |
60 | + 'image_location': 'local', |
61 | + 'is_public': (is_public == 'T')}} |
62 | + print image_type, meta |
63 | + if kernel_id: |
64 | + meta['properties']['kernel_id'] = int(kernel_id) |
65 | + if ramdisk_id: |
66 | + meta['properties']['ramdisk_id'] = int(ramdisk_id) |
67 | + elevated = context.get_admin_context() |
68 | + try: |
69 | + with open(path) as ifile: |
70 | + image = self.image_service.create(elevated, meta, ifile) |
71 | + new = image['id'] |
72 | + print _("Image registered to %(new)s (%(new)08x).") % locals() |
73 | + return new |
74 | + except Exception as exc: |
75 | + print _("Failed to register %(path)s: %(exc)s") % locals() |
76 | + |
77 | + def all_register(self, image, kernel, ramdisk, owner, name=None, |
78 | + is_public='T', architecture='x86_64'): |
79 | + """Uploads an image, kernel, and ramdisk into the image_service |
80 | + arguments: image kernel ramdisk owner [name] [is_public='T'] |
81 | + [architecture='x86_64']""" |
82 | + kernel_id = self.kernel_register(kernel, owner, None, |
83 | + is_public, architecture) |
84 | + ramdisk_id = self.ramdisk_register(ramdisk, owner, None, |
85 | + is_public, architecture) |
86 | + self.image_register(image, owner, name, is_public, |
87 | + architecture, kernel_id, ramdisk_id) |
88 | + |
89 | + def image_register(self, path, owner, name=None, is_public='T', |
90 | + architecture='x86_64', kernel_id=None, ramdisk_id=None, |
91 | + disk_format='ami', container_format='ami'): |
92 | + """Uploads an image into the image_service |
93 | + arguments: path owner [name] [is_public='T'] [architecture='x86_64'] |
94 | + [kernel_id=None] [ramdisk_id=None] |
95 | + [disk_format='ami'] [container_format='ami']""" |
96 | + return self._register('machine', disk_format, container_format, path, |
97 | + owner, name, is_public, architecture, |
98 | + kernel_id, ramdisk_id) |
99 | + |
100 | + def kernel_register(self, path, owner, name=None, is_public='T', |
101 | + architecture='x86_64'): |
102 | + """Uploads a kernel into the image_service |
103 | + arguments: path owner [name] [is_public='T'] [architecture='x86_64'] |
104 | + """ |
105 | + return self._register('kernel', 'aki', 'aki', path, owner, name, |
106 | + is_public, architecture) |
107 | + |
108 | + def ramdisk_register(self, path, owner, name=None, is_public='T', |
109 | + architecture='x86_64'): |
110 | + """Uploads a ramdisk into the image_service |
111 | + arguments: path owner [name] [is_public='T'] [architecture='x86_64'] |
112 | + """ |
113 | + return self._register('ramdisk', 'ari', 'ari', path, owner, name, |
114 | + is_public, architecture) |
115 | + |
116 | + def _lookup(self, old_image_id): |
117 | + try: |
118 | + internal_id = ec2utils.ec2_id_to_id(old_image_id) |
119 | + image = self.image_service.show(context, internal_id) |
120 | + except exception.NotFound: |
121 | + image = self.image_service.show_by_name(context, old_image_id) |
122 | + return image['id'] |
123 | + |
124 | + def _old_to_new(self, old): |
125 | + mapping = {'machine': 'ami', |
126 | + 'kernel': 'aki', |
127 | + 'ramdisk': 'ari'} |
128 | + container_format = mapping[old['type']] |
129 | + disk_format = container_format |
130 | + new = {'disk_format': disk_format, |
131 | + 'container_format': container_format, |
132 | + 'is_public': True, |
133 | + 'name': old['imageId'], |
134 | + 'properties': {'image_state': old['imageState'], |
135 | + 'owner': old['imageOwnerId'], |
136 | + 'architecture': old['architecture'], |
137 | + 'type': old['type'], |
138 | + 'image_location': old['imageLocation'], |
139 | + 'is_public': old['isPublic']}} |
140 | + if old.get('kernelId'): |
141 | + new['properties']['kernel_id'] = self._lookup(old['kernelId']) |
142 | + if old.get('ramdiskId'): |
143 | + new['properties']['ramdisk_id'] = self._lookup(old['ramdiskId']) |
144 | + return new |
145 | + |
146 | + def _convert_images(self, images): |
147 | + elevated = context.get_admin_context() |
148 | + for image_path, image_metadata in images.iteritems(): |
149 | + meta = self._old_to_new(image_metadata) |
150 | + old = meta['name'] |
151 | + try: |
152 | + with open(image_path) as ifile: |
153 | + image = self.image_service.create(elevated, meta, ifile) |
154 | + new = image['id'] |
155 | + print _("Image %(old)s converted to " \ |
156 | + "%(new)s (%(new)08x).") % locals() |
157 | + except Exception as exc: |
158 | + print _("Failed to convert %(old)s: %(exc)s") % locals() |
159 | + |
160 | + def convert(self, directory): |
161 | + """Uploads old objectstore images in directory to new service |
162 | + arguments: directory""" |
163 | + machine_images = {} |
164 | + other_images = {} |
165 | + directory = os.path.abspath(directory) |
166 | + # NOTE(vish): If we're importing from the images path dir, attempt |
167 | + # to move the files out of the way before importing |
168 | + # so we aren't writing to the same directory. This |
169 | + # may fail if the dir was a mointpoint. |
170 | + if (FLAGS.image_service == 'nova.image.local.LocalImageService' |
171 | + and directory == os.path.abspath(FLAGS.images_path)): |
172 | + new_dir = "%s_bak" % directory |
173 | + os.move(directory, new_dir) |
174 | + os.mkdir(directory) |
175 | + directory = new_dir |
176 | + for fn in glob.glob("%s/*/info.json" % directory): |
177 | + try: |
178 | + image_path = os.path.join(fn.rpartition('/')[0], 'image') |
179 | + with open(fn) as metadata_file: |
180 | + image_metadata = json.load(metadata_file) |
181 | + if image_metadata['type'] == 'machine': |
182 | + machine_images[image_path] = image_metadata |
183 | + else: |
184 | + other_images[image_path] = image_metadata |
185 | + except Exception as exc: |
186 | + print _("Failed to load %(fn)s.") % locals() |
187 | + # NOTE(vish): do kernels and ramdisks first so images |
188 | + self._convert_images(other_images) |
189 | + self._convert_images(machine_images) |
190 | + |
191 | + |
192 | CATEGORIES = [ |
193 | ('user', UserCommands), |
194 | ('project', ProjectCommands), |
195 | @@ -758,6 +910,7 @@ |
196 | ('db', DbCommands), |
197 | ('volume', VolumeCommands), |
198 | ('instance_type', InstanceTypeCommands), |
199 | + ('image', ImageCommands), |
200 | ('flavor', InstanceTypeCommands)] |
201 | |
202 | |
203 | |
204 | === modified file 'doc/source/man/novamanage.rst' |
205 | --- doc/source/man/novamanage.rst 2011-03-01 02:32:55 +0000 |
206 | +++ doc/source/man/novamanage.rst 2011-03-10 05:15:12 +0000 |
207 | @@ -173,7 +173,10 @@ |
208 | ``nova-manage floating create <host> <ip_range>`` |
209 | |
210 | Creates floating IP addresses for the named host by the given range. |
211 | - floating delete <ip_range> Deletes floating IP addresses in the range given. |
212 | + |
213 | +``nova-manage floating delete <ip_range>`` |
214 | + |
215 | + Deletes floating IP addresses in the range given. |
216 | |
217 | ``nova-manage floating list`` |
218 | |
219 | @@ -193,7 +196,7 @@ |
220 | ``nova-manage flavor create <name> <memory> <vCPU> <local_storage> <flavorID> <(optional) swap> <(optional) RXTX Quota> <(optional) RXTX Cap>`` |
221 | |
222 | creates a flavor with the following positional arguments: |
223 | - * memory (expressed in megabytes) |
224 | + * memory (expressed in megabytes) |
225 | * vcpu(s) (integer) |
226 | * local storage (expressed in gigabytes) |
227 | * flavorid (unique integer) |
228 | @@ -209,12 +212,33 @@ |
229 | |
230 | Purges the flavor with the name <name>. This removes this flavor from the database. |
231 | |
232 | - |
233 | Nova Instance_type |
234 | ~~~~~~~~~~~~~~~~~~ |
235 | |
236 | The instance_type command is provided as an alias for the flavor command. All the same subcommands and arguments from nova-manage flavor can be used. |
237 | |
238 | +Nova Images |
239 | +~~~~~~~~~~~ |
240 | + |
241 | +``nova-manage image image_register <path> <owner>`` |
242 | + |
243 | + Registers an image with the image service. |
244 | + |
245 | +``nova-manage image kernel_register <path> <owner>`` |
246 | + |
247 | + Registers a kernel with the image service. |
248 | + |
249 | +``nova-manage image ramdisk_register <path> <owner>`` |
250 | + |
251 | + Registers a ramdisk with the image service. |
252 | + |
253 | +``nova-manage image all_register <image_path> <kernel_path> <ramdisk_path> <owner>`` |
254 | + |
255 | + Registers an image kernel and ramdisk with the image service. |
256 | + |
257 | +``nova-manage image convert <directory>`` |
258 | + |
259 | + Converts all images in directory from the old (Bexar) format to the new format. |
260 | |
261 | FILES |
262 | ======== |
263 | |
264 | === modified file 'doc/source/runnova/nova.manage.rst' |
265 | --- doc/source/runnova/nova.manage.rst 2011-02-21 20:30:20 +0000 |
266 | +++ doc/source/runnova/nova.manage.rst 2011-03-10 05:15:12 +0000 |
267 | @@ -182,6 +182,29 @@ |
268 | |
269 | Displays a list of all floating IP addresses. |
270 | |
271 | +Nova Images |
272 | +~~~~~~~~~~~ |
273 | + |
274 | +``nova-manage image image_register <path> <owner>`` |
275 | + |
276 | + Registers an image with the image service. |
277 | + |
278 | +``nova-manage image kernel_register <path> <owner>`` |
279 | + |
280 | + Registers a kernel with the image service. |
281 | + |
282 | +``nova-manage image ramdisk_register <path> <owner>`` |
283 | + |
284 | + Registers a ramdisk with the image service. |
285 | + |
286 | +``nova-manage image all_register <image_path> <kernel_path> <ramdisk_path> <owner>`` |
287 | + |
288 | + Registers an image kernel and ramdisk with the image service. |
289 | + |
290 | +``nova-manage image convert <directory>`` |
291 | + |
292 | + Converts all images in directory from the old (Bexar) format to the new format. |
293 | + |
294 | Concept: Flags |
295 | -------------- |
296 | |
297 | |
298 | === modified file 'nova/api/ec2/cloud.py' |
299 | --- nova/api/ec2/cloud.py 2011-03-09 05:30:05 +0000 |
300 | +++ nova/api/ec2/cloud.py 2011-03-10 05:15:12 +0000 |
301 | @@ -39,7 +39,9 @@ |
302 | from nova import network |
303 | from nova import utils |
304 | from nova import volume |
305 | +from nova.api.ec2 import ec2utils |
306 | from nova.compute import instance_types |
307 | +from nova.image import s3 |
308 | |
309 | |
310 | FLAGS = flags.FLAGS |
311 | @@ -73,30 +75,19 @@ |
312 | return {'private_key': private_key, 'fingerprint': fingerprint} |
313 | |
314 | |
315 | -def ec2_id_to_id(ec2_id): |
316 | - """Convert an ec2 ID (i-[base 16 number]) to an instance id (int)""" |
317 | - return int(ec2_id.split('-')[-1], 16) |
318 | - |
319 | - |
320 | -def id_to_ec2_id(instance_id, template='i-%08x'): |
321 | - """Convert an instance ID (int) to an ec2 ID (i-[base 16 number])""" |
322 | - return template % instance_id |
323 | - |
324 | - |
325 | class CloudController(object): |
326 | """ CloudController provides the critical dispatch between |
327 | inbound API calls through the endpoint and messages |
328 | sent to the other nodes. |
329 | """ |
330 | def __init__(self): |
331 | - self.image_service = utils.import_object(FLAGS.image_service) |
332 | + self.image_service = s3.S3ImageService() |
333 | self.network_api = network.API() |
334 | self.volume_api = volume.API() |
335 | self.compute_api = compute.API( |
336 | network_api=self.network_api, |
337 | - image_service=self.image_service, |
338 | volume_api=self.volume_api, |
339 | - hostname_factory=id_to_ec2_id) |
340 | + hostname_factory=ec2utils.id_to_ec2_id) |
341 | self.setup() |
342 | |
343 | def __str__(self): |
344 | @@ -154,11 +145,14 @@ |
345 | availability_zone = self._get_availability_zone_by_host(ctxt, host) |
346 | floating_ip = db.instance_get_floating_address(ctxt, |
347 | instance_ref['id']) |
348 | - ec2_id = id_to_ec2_id(instance_ref['id']) |
349 | + ec2_id = ec2utils.id_to_ec2_id(instance_ref['id']) |
350 | + image_ec2_id = self._image_ec2_id(instance_ref['image_id'], 'machine') |
351 | + k_ec2_id = self._image_ec2_id(instance_ref['kernel_id'], 'kernel') |
352 | + r_ec2_id = self._image_ec2_id(instance_ref['ramdisk_id'], 'ramdisk') |
353 | data = { |
354 | 'user-data': base64.b64decode(instance_ref['user_data']), |
355 | 'meta-data': { |
356 | - 'ami-id': instance_ref['image_id'], |
357 | + 'ami-id': image_ec2_id, |
358 | 'ami-launch-index': instance_ref['launch_index'], |
359 | 'ami-manifest-path': 'FIXME', |
360 | 'block-device-mapping': { |
361 | @@ -173,12 +167,12 @@ |
362 | 'instance-type': instance_ref['instance_type'], |
363 | 'local-hostname': hostname, |
364 | 'local-ipv4': address, |
365 | - 'kernel-id': instance_ref['kernel_id'], |
366 | + 'kernel-id': k_ec2_id, |
367 | + 'ramdisk-id': r_ec2_id, |
368 | 'placement': {'availability-zone': availability_zone}, |
369 | 'public-hostname': hostname, |
370 | 'public-ipv4': floating_ip or '', |
371 | 'public-keys': keys, |
372 | - 'ramdisk-id': instance_ref['ramdisk_id'], |
373 | 'reservation-id': instance_ref['reservation_id'], |
374 | 'security-groups': '', |
375 | 'mpi': mpi}} |
376 | @@ -525,7 +519,7 @@ |
377 | ec2_id = instance_id[0] |
378 | else: |
379 | ec2_id = instance_id |
380 | - instance_id = ec2_id_to_id(ec2_id) |
381 | + instance_id = ec2utils.ec2_id_to_id(ec2_id) |
382 | output = self.compute_api.get_console_output( |
383 | context, instance_id=instance_id) |
384 | now = datetime.datetime.utcnow() |
385 | @@ -535,7 +529,7 @@ |
386 | |
387 | def get_ajax_console(self, context, instance_id, **kwargs): |
388 | ec2_id = instance_id[0] |
389 | - instance_id = ec2_id_to_id(ec2_id) |
390 | + instance_id = ec2utils.ec2_id_to_id(ec2_id) |
391 | return self.compute_api.get_ajax_console(context, |
392 | instance_id=instance_id) |
393 | |
394 | @@ -543,7 +537,7 @@ |
395 | if volume_id: |
396 | volumes = [] |
397 | for ec2_id in volume_id: |
398 | - internal_id = ec2_id_to_id(ec2_id) |
399 | + internal_id = ec2utils.ec2_id_to_id(ec2_id) |
400 | volume = self.volume_api.get(context, internal_id) |
401 | volumes.append(volume) |
402 | else: |
403 | @@ -556,11 +550,11 @@ |
404 | instance_data = None |
405 | if volume.get('instance', None): |
406 | instance_id = volume['instance']['id'] |
407 | - instance_ec2_id = id_to_ec2_id(instance_id) |
408 | + instance_ec2_id = ec2utils.id_to_ec2_id(instance_id) |
409 | instance_data = '%s[%s]' % (instance_ec2_id, |
410 | volume['instance']['host']) |
411 | v = {} |
412 | - v['volumeId'] = id_to_ec2_id(volume['id'], 'vol-%08x') |
413 | + v['volumeId'] = ec2utils.id_to_ec2_id(volume['id'], 'vol-%08x') |
414 | v['status'] = volume['status'] |
415 | v['size'] = volume['size'] |
416 | v['availabilityZone'] = volume['availability_zone'] |
417 | @@ -578,8 +572,7 @@ |
418 | 'device': volume['mountpoint'], |
419 | 'instanceId': instance_ec2_id, |
420 | 'status': 'attached', |
421 | - 'volumeId': id_to_ec2_id(volume['id'], |
422 | - 'vol-%08x')}] |
423 | + 'volumeId': v['volumeId']}] |
424 | else: |
425 | v['attachmentSet'] = [{}] |
426 | |
427 | @@ -598,12 +591,12 @@ |
428 | return {'volumeSet': [self._format_volume(context, dict(volume))]} |
429 | |
430 | def delete_volume(self, context, volume_id, **kwargs): |
431 | - volume_id = ec2_id_to_id(volume_id) |
432 | + volume_id = ec2utils.ec2_id_to_id(volume_id) |
433 | self.volume_api.delete(context, volume_id=volume_id) |
434 | return True |
435 | |
436 | def update_volume(self, context, volume_id, **kwargs): |
437 | - volume_id = ec2_id_to_id(volume_id) |
438 | + volume_id = ec2utils.ec2_id_to_id(volume_id) |
439 | updatable_fields = ['display_name', 'display_description'] |
440 | changes = {} |
441 | for field in updatable_fields: |
442 | @@ -614,8 +607,8 @@ |
443 | return True |
444 | |
445 | def attach_volume(self, context, volume_id, instance_id, device, **kwargs): |
446 | - volume_id = ec2_id_to_id(volume_id) |
447 | - instance_id = ec2_id_to_id(instance_id) |
448 | + volume_id = ec2utils.ec2_id_to_id(volume_id) |
449 | + instance_id = ec2utils.ec2_id_to_id(instance_id) |
450 | msg = _("Attach volume %(volume_id)s to instance %(instance_id)s" |
451 | " at %(device)s") % locals() |
452 | LOG.audit(msg, context=context) |
453 | @@ -626,22 +619,22 @@ |
454 | volume = self.volume_api.get(context, volume_id) |
455 | return {'attachTime': volume['attach_time'], |
456 | 'device': volume['mountpoint'], |
457 | - 'instanceId': id_to_ec2_id(instance_id), |
458 | + 'instanceId': ec2utils.id_to_ec2_id(instance_id), |
459 | 'requestId': context.request_id, |
460 | 'status': volume['attach_status'], |
461 | - 'volumeId': id_to_ec2_id(volume_id, 'vol-%08x')} |
462 | + 'volumeId': ec2utils.id_to_ec2_id(volume_id, 'vol-%08x')} |
463 | |
464 | def detach_volume(self, context, volume_id, **kwargs): |
465 | - volume_id = ec2_id_to_id(volume_id) |
466 | + volume_id = ec2utils.ec2_id_to_id(volume_id) |
467 | LOG.audit(_("Detach volume %s"), volume_id, context=context) |
468 | volume = self.volume_api.get(context, volume_id) |
469 | instance = self.compute_api.detach_volume(context, volume_id=volume_id) |
470 | return {'attachTime': volume['attach_time'], |
471 | 'device': volume['mountpoint'], |
472 | - 'instanceId': id_to_ec2_id(instance['id']), |
473 | + 'instanceId': ec2utils.id_to_ec2_id(instance['id']), |
474 | 'requestId': context.request_id, |
475 | 'status': volume['attach_status'], |
476 | - 'volumeId': id_to_ec2_id(volume_id, 'vol-%08x')} |
477 | + 'volumeId': ec2utils.id_to_ec2_id(volume_id, 'vol-%08x')} |
478 | |
479 | def _convert_to_set(self, lst, label): |
480 | if lst == None or lst == []: |
481 | @@ -675,7 +668,7 @@ |
482 | if instance_id: |
483 | instances = [] |
484 | for ec2_id in instance_id: |
485 | - internal_id = ec2_id_to_id(ec2_id) |
486 | + internal_id = ec2utils.ec2_id_to_id(ec2_id) |
487 | instance = self.compute_api.get(context, |
488 | instance_id=internal_id) |
489 | instances.append(instance) |
490 | @@ -687,9 +680,9 @@ |
491 | continue |
492 | i = {} |
493 | instance_id = instance['id'] |
494 | - ec2_id = id_to_ec2_id(instance_id) |
495 | + ec2_id = ec2utils.id_to_ec2_id(instance_id) |
496 | i['instanceId'] = ec2_id |
497 | - i['imageId'] = instance['image_id'] |
498 | + i['imageId'] = self._image_ec2_id(instance['image_id']) |
499 | i['instanceState'] = { |
500 | 'code': instance['state'], |
501 | 'name': instance['state_description']} |
502 | @@ -755,7 +748,7 @@ |
503 | if (floating_ip_ref['fixed_ip'] |
504 | and floating_ip_ref['fixed_ip']['instance']): |
505 | instance_id = floating_ip_ref['fixed_ip']['instance']['id'] |
506 | - ec2_id = id_to_ec2_id(instance_id) |
507 | + ec2_id = ec2utils.id_to_ec2_id(instance_id) |
508 | address_rv = {'public_ip': address, |
509 | 'instance_id': ec2_id} |
510 | if context.is_admin: |
511 | @@ -778,7 +771,7 @@ |
512 | def associate_address(self, context, instance_id, public_ip, **kwargs): |
513 | LOG.audit(_("Associate address %(public_ip)s to" |
514 | " instance %(instance_id)s") % locals(), context=context) |
515 | - instance_id = ec2_id_to_id(instance_id) |
516 | + instance_id = ec2utils.ec2_id_to_id(instance_id) |
517 | self.compute_api.associate_floating_ip(context, |
518 | instance_id=instance_id, |
519 | address=public_ip) |
520 | @@ -791,13 +784,19 @@ |
521 | |
522 | def run_instances(self, context, **kwargs): |
523 | max_count = int(kwargs.get('max_count', 1)) |
524 | + if kwargs.get('kernel_id'): |
525 | + kernel = self._get_image(context, kwargs['kernel_id']) |
526 | + kwargs['kernel_id'] = kernel['id'] |
527 | + if kwargs.get('ramdisk_id'): |
528 | + ramdisk = self._get_image(context, kwargs['ramdisk_id']) |
529 | + kwargs['ramdisk_id'] = ramdisk['id'] |
530 | instances = self.compute_api.create(context, |
531 | instance_type=instance_types.get_by_type( |
532 | kwargs.get('instance_type', None)), |
533 | - image_id=kwargs['image_id'], |
534 | + image_id=self._get_image(context, kwargs['image_id'])['id'], |
535 | min_count=int(kwargs.get('min_count', max_count)), |
536 | max_count=max_count, |
537 | - kernel_id=kwargs.get('kernel_id', None), |
538 | + kernel_id=kwargs.get('kernel_id'), |
539 | ramdisk_id=kwargs.get('ramdisk_id'), |
540 | display_name=kwargs.get('display_name'), |
541 | display_description=kwargs.get('display_description'), |
542 | @@ -814,7 +813,7 @@ |
543 | instance_id is a kwarg so its name cannot be modified.""" |
544 | LOG.debug(_("Going to start terminating instances")) |
545 | for ec2_id in instance_id: |
546 | - instance_id = ec2_id_to_id(ec2_id) |
547 | + instance_id = ec2utils.ec2_id_to_id(ec2_id) |
548 | self.compute_api.delete(context, instance_id=instance_id) |
549 | return True |
550 | |
551 | @@ -822,19 +821,19 @@ |
552 | """instance_id is a list of instance ids""" |
553 | LOG.audit(_("Reboot instance %r"), instance_id, context=context) |
554 | for ec2_id in instance_id: |
555 | - instance_id = ec2_id_to_id(ec2_id) |
556 | + instance_id = ec2utils.ec2_id_to_id(ec2_id) |
557 | self.compute_api.reboot(context, instance_id=instance_id) |
558 | return True |
559 | |
560 | def rescue_instance(self, context, instance_id, **kwargs): |
561 | """This is an extension to the normal ec2_api""" |
562 | - instance_id = ec2_id_to_id(instance_id) |
563 | + instance_id = ec2utils.ec2_id_to_id(instance_id) |
564 | self.compute_api.rescue(context, instance_id=instance_id) |
565 | return True |
566 | |
567 | def unrescue_instance(self, context, instance_id, **kwargs): |
568 | """This is an extension to the normal ec2_api""" |
569 | - instance_id = ec2_id_to_id(instance_id) |
570 | + instance_id = ec2utils.ec2_id_to_id(instance_id) |
571 | self.compute_api.unrescue(context, instance_id=instance_id) |
572 | return True |
573 | |
574 | @@ -845,41 +844,80 @@ |
575 | if field in kwargs: |
576 | changes[field] = kwargs[field] |
577 | if changes: |
578 | - instance_id = ec2_id_to_id(instance_id) |
579 | + instance_id = ec2utils.ec2_id_to_id(instance_id) |
580 | self.compute_api.update(context, instance_id=instance_id, **kwargs) |
581 | return True |
582 | |
583 | - def _format_image(self, context, image): |
584 | + _type_prefix_map = {'machine': 'ami', |
585 | + 'kernel': 'aki', |
586 | + 'ramdisk': 'ari'} |
587 | + |
588 | + def _image_ec2_id(self, image_id, image_type='machine'): |
589 | + prefix = self._type_prefix_map[image_type] |
590 | + template = prefix + '-%08x' |
591 | + return ec2utils.id_to_ec2_id(int(image_id), template=template) |
592 | + |
593 | + def _get_image(self, context, ec2_id): |
594 | + try: |
595 | + internal_id = ec2utils.ec2_id_to_id(ec2_id) |
596 | + return self.image_service.show(context, internal_id) |
597 | + except exception.NotFound: |
598 | + return self.image_service.show_by_name(context, ec2_id) |
599 | + |
600 | + def _format_image(self, image): |
601 | """Convert from format defined by BaseImageService to S3 format.""" |
602 | i = {} |
603 | - i['imageId'] = image.get('id') |
604 | - i['kernelId'] = image.get('kernel_id') |
605 | - i['ramdiskId'] = image.get('ramdisk_id') |
606 | - i['imageOwnerId'] = image.get('owner_id') |
607 | - i['imageLocation'] = image.get('location') |
608 | - i['imageState'] = image.get('status') |
609 | - i['type'] = image.get('type') |
610 | - i['isPublic'] = image.get('is_public') |
611 | - i['architecture'] = image.get('architecture') |
612 | + image_type = image['properties'].get('type') |
613 | + ec2_id = self._image_ec2_id(image.get('id'), image_type) |
614 | + name = image.get('name') |
615 | + if name: |
616 | + i['imageId'] = "%s (%s)" % (ec2_id, name) |
617 | + else: |
618 | + i['imageId'] = ec2_id |
619 | + kernel_id = image['properties'].get('kernel_id') |
620 | + if kernel_id: |
621 | + i['kernelId'] = self._image_ec2_id(kernel_id, 'kernel') |
622 | + ramdisk_id = image['properties'].get('ramdisk_id') |
623 | + if ramdisk_id: |
624 | + i['ramdiskId'] = self._image_ec2_id(ramdisk_id, 'ramdisk') |
625 | + i['imageOwnerId'] = image['properties'].get('owner_id') |
626 | + i['imageLocation'] = image['properties'].get('image_location') |
627 | + i['imageState'] = image['properties'].get('image_state') |
628 | + i['type'] = image_type |
629 | + i['isPublic'] = str(image['properties'].get('is_public', '')) == 'True' |
630 | + i['architecture'] = image['properties'].get('architecture') |
631 | return i |
632 | |
633 | def describe_images(self, context, image_id=None, **kwargs): |
634 | # NOTE: image_id is a list! |
635 | - images = self.image_service.index(context) |
636 | if image_id: |
637 | - images = filter(lambda x: x['id'] in image_id, images) |
638 | - images = [self._format_image(context, i) for i in images] |
639 | + images = [] |
640 | + for ec2_id in image_id: |
641 | + try: |
642 | + image = self._get_image(context, ec2_id) |
643 | + except exception.NotFound: |
644 | + raise exception.NotFound(_('Image %s not found') % |
645 | + ec2_id) |
646 | + images.append(image) |
647 | + else: |
648 | + images = self.image_service.detail(context) |
649 | + images = [self._format_image(i) for i in images] |
650 | return {'imagesSet': images} |
651 | |
652 | def deregister_image(self, context, image_id, **kwargs): |
653 | LOG.audit(_("De-registering image %s"), image_id, context=context) |
654 | - self.image_service.deregister(context, image_id) |
655 | + image = self._get_image(context, image_id) |
656 | + internal_id = image['id'] |
657 | + self.image_service.delete(context, internal_id) |
658 | return {'imageId': image_id} |
659 | |
660 | def register_image(self, context, image_location=None, **kwargs): |
661 | if image_location is None and 'name' in kwargs: |
662 | image_location = kwargs['name'] |
663 | - image_id = self.image_service.register(context, image_location) |
664 | + metadata = {'properties': {'image_location': image_location}} |
665 | + image = self.image_service.create(context, metadata) |
666 | + image_id = self._image_ec2_id(image['id'], |
667 | + image['properties']['type']) |
668 | msg = _("Registered image %(image_location)s with" |
669 | " id %(image_id)s") % locals() |
670 | LOG.audit(msg, context=context) |
671 | @@ -890,13 +928,11 @@ |
672 | raise exception.ApiError(_('attribute not supported: %s') |
673 | % attribute) |
674 | try: |
675 | - image = self._format_image(context, |
676 | - self.image_service.show(context, |
677 | - image_id)) |
678 | - except IndexError: |
679 | - raise exception.ApiError(_('invalid id: %s') % image_id) |
680 | - result = {'image_id': image_id, 'launchPermission': []} |
681 | - if image['isPublic']: |
682 | + image = self._get_image(context, image_id) |
683 | + except exception.NotFound: |
684 | + raise exception.NotFound(_('Image %s not found') % image_id) |
685 | + result = {'imageId': image_id, 'launchPermission': []} |
686 | + if image['properties']['is_public']: |
687 | result['launchPermission'].append({'group': 'all'}) |
688 | return result |
689 | |
690 | @@ -913,8 +949,18 @@ |
691 | if not operation_type in ['add', 'remove']: |
692 | raise exception.ApiError(_('operation_type must be add or remove')) |
693 | LOG.audit(_("Updating image %s publicity"), image_id, context=context) |
694 | - return self.image_service.modify(context, image_id, operation_type) |
695 | + |
696 | + try: |
697 | + image = self._get_image(context, image_id) |
698 | + except exception.NotFound: |
699 | + raise exception.NotFound(_('Image %s not found') % image_id) |
700 | + internal_id = image['id'] |
701 | + del(image['id']) |
703 | + image['properties']['is_public'] = (operation_type == 'add') |
704 | + return self.image_service.update(context, internal_id, image) |
705 | |
706 | def update_image(self, context, image_id, **kwargs): |
707 | - result = self.image_service.update(context, image_id, dict(kwargs)) |
708 | + internal_id = ec2utils.ec2_id_to_id(image_id) |
709 | + result = self.image_service.update(context, internal_id, dict(kwargs)) |
710 | return result |
711 | |
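The `_image_ec2_id` helper added above builds EC2-style identifiers from internal integer ids via `_type_prefix_map`. A standalone sketch of the same formatting (function name and demo values are illustrative):

```python
# Mirrors cloud.py's _type_prefix_map from the diff above.
TYPE_PREFIX_MAP = {'machine': 'ami', 'kernel': 'aki', 'ramdisk': 'ari'}

def image_ec2_id(image_id, image_type='machine'):
    """Format an internal integer id as an EC2-style id, e.g. ami-0000002a."""
    prefix = TYPE_PREFIX_MAP[image_type]
    return '%s-%08x' % (prefix, int(image_id))

print(image_ec2_id(42))            # ami-0000002a
print(image_ec2_id(42, 'kernel'))  # aki-0000002a
```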
712 | === added file 'nova/api/ec2/ec2utils.py' |
713 | --- nova/api/ec2/ec2utils.py 1970-01-01 00:00:00 +0000 |
714 | +++ nova/api/ec2/ec2utils.py 2011-03-10 05:15:12 +0000 |
715 | @@ -0,0 +1,32 @@ |
716 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
717 | + |
718 | +# Copyright 2010 United States Government as represented by the |
719 | +# Administrator of the National Aeronautics and Space Administration. |
720 | +# All Rights Reserved. |
721 | +# |
722 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
723 | +# not use this file except in compliance with the License. You may obtain |
724 | +# a copy of the License at |
725 | +# |
726 | +# http://www.apache.org/licenses/LICENSE-2.0 |
727 | +# |
728 | +# Unless required by applicable law or agreed to in writing, software |
729 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
730 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
731 | +# License for the specific language governing permissions and limitations |
732 | +# under the License. |
733 | + |
734 | +from nova import exception |
735 | + |
736 | + |
737 | +def ec2_id_to_id(ec2_id): |
738 | + """Convert an ec2 ID (i-[base 16 number]) to an instance id (int)""" |
739 | + try: |
740 | + return int(ec2_id.split('-')[-1], 16) |
741 | + except ValueError: |
742 | + raise exception.NotFound(_("Id %s Not Found") % ec2_id) |
743 | + |
744 | + |
745 | +def id_to_ec2_id(instance_id, template='i-%08x'): |
746 | + """Convert an instance ID (int) to an ec2 ID (i-[base 16 number])""" |
747 | + return template % instance_id |
748 | |
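The new `ec2utils` helpers round-trip cleanly between the two id forms; a minimal sketch of the conversion (without the `NotFound` translation the module adds):

```python
def ec2_id_to_id(ec2_id):
    """Convert an EC2 id (e.g. 'i-0000002a') to an integer, parsed base 16."""
    return int(ec2_id.split('-')[-1], 16)

def id_to_ec2_id(instance_id, template='i-%08x'):
    """Convert an integer id to an EC2-style id."""
    return template % instance_id

print(id_to_ec2_id(42))                # i-0000002a
print(ec2_id_to_id(id_to_ec2_id(42)))  # 42
```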
749 | === modified file 'nova/compute/api.py' |
750 | --- nova/compute/api.py 2011-03-09 19:53:44 +0000 |
751 | +++ nova/compute/api.py 2011-03-10 05:15:12 +0000 |
752 | @@ -126,9 +126,9 @@ |
753 | |
754 | image = self.image_service.show(context, image_id) |
755 | if kernel_id is None: |
756 | - kernel_id = image.get('kernel_id', None) |
757 | + kernel_id = image['properties'].get('kernel_id', None) |
758 | if ramdisk_id is None: |
759 | - ramdisk_id = image.get('ramdisk_id', None) |
760 | + ramdisk_id = image['properties'].get('ramdisk_id', None) |
761 | # FIXME(sirp): is there a way we can remove null_kernel? |
762 | # No kernel and ramdisk for raw images |
763 | if kernel_id == str(FLAGS.null_kernel): |
764 | |
765 | === modified file 'nova/flags.py' |
766 | --- nova/flags.py 2011-02-28 22:31:09 +0000 |
767 | +++ nova/flags.py 2011-03-10 05:15:12 +0000 |
768 | @@ -348,7 +348,7 @@ |
769 | 'Manager for scheduler') |
770 | |
771 | # The service to use for image search and retrieval |
772 | -DEFINE_string('image_service', 'nova.image.s3.S3ImageService', |
773 | +DEFINE_string('image_service', 'nova.image.local.LocalImageService', |
774 | 'The service to use for retrieving and searching for images.') |
775 | |
776 | DEFINE_string('host', socket.gethostname(), |
777 | |
778 | === modified file 'nova/image/glance.py' |
779 | --- nova/image/glance.py 2011-01-17 19:32:34 +0000 |
780 | +++ nova/image/glance.py 2011-03-10 05:15:12 +0000 |
781 | @@ -17,9 +17,8 @@ |
782 | """Implementation of an image service that uses Glance as the backend""" |
783 | |
784 | from __future__ import absolute_import |
785 | -import httplib |
786 | -import json |
787 | -import urlparse |
788 | + |
789 | +from glance.common import exception as glance_exception |
790 | |
791 | from nova import exception |
792 | from nova import flags |
793 | @@ -53,31 +52,64 @@ |
794 | """ |
795 | return self.client.get_images_detailed() |
796 | |
797 | - def show(self, context, id): |
798 | + def show(self, context, image_id): |
799 | """ |
800 | Returns a dict containing image data for the given opaque image id. |
801 | """ |
802 | - image = self.client.get_image_meta(id) |
803 | - if image: |
804 | - return image |
805 | - raise exception.NotFound |
806 | - |
807 | - def create(self, context, data): |
808 | + try: |
809 | + image = self.client.get_image_meta(image_id) |
810 | + except glance_exception.NotFound: |
811 | + raise exception.NotFound |
812 | + return image |
813 | + |
814 | + def show_by_name(self, context, name): |
815 | + """ |
816 | + Returns a dict containing image data for the given name. |
817 | + """ |
818 | + # TODO(vish): replace this with more efficient call when glance |
819 | + # supports it. |
820 | + images = self.detail(context) |
821 | + image = None |
822 | + for candidate in images: |
823 | + if name == candidate.get('name'): |
824 | + image = candidate |
825 | + break |
826 | + if image is None: |
827 | + raise exception.NotFound |
828 | + return image |
829 | + |
830 | + def get(self, context, image_id, data): |
831 | + """ |
832 | + Calls out to Glance for metadata and data and writes data. |
833 | + """ |
834 | + try: |
835 | + metadata, image_chunks = self.client.get_image(image_id) |
836 | + except glance_exception.NotFound: |
837 | + raise exception.NotFound |
838 | + for chunk in image_chunks: |
839 | + data.write(chunk) |
840 | + return metadata |
841 | + |
842 | + def create(self, context, metadata, data=None): |
843 | """ |
844 | Store the image data and return the new image id. |
845 | |
846 | :raises AlreadyExists if the image already exist. |
847 | |
848 | """ |
849 | - return self.client.add_image(image_meta=data) |
850 | + return self.client.add_image(metadata, data) |
851 | |
852 | - def update(self, context, image_id, data): |
853 | + def update(self, context, image_id, metadata, data=None): |
854 | """Replace the contents of the given image with the new data. |
855 | |
856 | :raises NotFound if the image does not exist. |
857 | |
858 | """ |
859 | - return self.client.update_image(image_id, data) |
860 | + try: |
861 | + result = self.client.update_image(image_id, metadata, data) |
862 | + except glance_exception.NotFound: |
863 | + raise exception.NotFound |
864 | + return result |
865 | |
866 | def delete(self, context, image_id): |
867 | """ |
868 | @@ -86,7 +118,11 @@ |
869 | :raises NotFound if the image does not exist. |
870 | |
871 | """ |
872 | - return self.client.delete_image(image_id) |
873 | + try: |
874 | + result = self.client.delete_image(image_id) |
875 | + except glance_exception.NotFound: |
876 | + raise exception.NotFound |
877 | + return result |
878 | |
879 | def delete_all(self): |
880 | """ |
881 | |
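The glance.py changes above repeatedly wrap client calls so that `glance_exception.NotFound` surfaces as nova's `exception.NotFound`. A sketch of the pattern with stand-in classes (all names here are illustrative, not the real glance or nova types):

```python
class BackendNotFound(Exception):
    """Stand-in for glance_exception.NotFound (illustrative)."""

class ServiceNotFound(Exception):
    """Stand-in for nova's exception.NotFound (illustrative)."""

class FakeBackend:
    """Hypothetical client with a glance-like get_image_meta()."""
    def get_image_meta(self, image_id):
        if image_id != 1:
            raise BackendNotFound(image_id)
        return {'id': 1, 'name': 'test'}

def show(backend, image_id):
    # Translate the backend's NotFound into the service-level NotFound,
    # mirroring the try/except blocks added throughout glance.py.
    try:
        return backend.get_image_meta(image_id)
    except BackendNotFound:
        raise ServiceNotFound(image_id)
```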
882 | === modified file 'nova/image/local.py' |
883 | --- nova/image/local.py 2011-01-26 02:16:25 +0000 |
884 | +++ nova/image/local.py 2011-03-10 05:15:12 +0000 |
885 | @@ -15,57 +15,110 @@ |
886 | # License for the specific language governing permissions and limitations |
887 | # under the License. |
888 | |
889 | -import cPickle as pickle |
890 | +import json |
891 | import os.path |
892 | import random |
893 | -import tempfile |
894 | +import shutil |
895 | |
896 | +from nova import flags |
897 | from nova import exception |
898 | from nova.image import service |
899 | |
900 | |
901 | +FLAGS = flags.FLAGS |
902 | +flags.DEFINE_string('images_path', '$state_path/images', |
903 | + 'path to decrypted images') |
904 | + |
905 | + |
906 | class LocalImageService(service.BaseImageService): |
907 | - |
908 | """Image service storing images to local disk. |
909 | + |
910 | It assumes that image_ids are integers. |
911 | |
912 | """ |
913 | |
914 | def __init__(self): |
915 | - self._path = tempfile.mkdtemp() |
916 | + self._path = FLAGS.images_path |
917 | |
918 | - def _path_to(self, image_id): |
919 | - return os.path.join(self._path, str(image_id)) |
920 | + def _path_to(self, image_id, fname='info.json'): |
921 | + if fname: |
922 | + return os.path.join(self._path, '%08x' % int(image_id), fname) |
923 | + return os.path.join(self._path, '%08x' % int(image_id)) |
924 | |
925 | def _ids(self): |
926 | """The list of all image ids.""" |
927 | - return [int(i) for i in os.listdir(self._path)] |
928 | + return [int(i, 16) for i in os.listdir(self._path)] |
929 | |
930 | def index(self, context): |
931 | - return [dict(id=i['id'], name=i['name']) for i in self.detail(context)] |
932 | + return [dict(image_id=i['id'], name=i.get('name')) |
933 | + for i in self.detail(context)] |
934 | |
935 | def detail(self, context): |
936 | - return [self.show(context, id) for id in self._ids()] |
937 | - |
938 | - def show(self, context, id): |
939 | - try: |
940 | - return pickle.load(open(self._path_to(id))) |
941 | - except IOError: |
942 | - raise exception.NotFound |
943 | - |
944 | - def create(self, context, data): |
945 | - """Store the image data and return the new image id.""" |
946 | - id = random.randint(0, 2 ** 31 - 1) |
947 | - data['id'] = id |
948 | - self.update(context, id, data) |
949 | - return id |
950 | - |
951 | - def update(self, context, image_id, data): |
952 | + images = [] |
953 | + for image_id in self._ids(): |
954 | + try: |
955 | + image = self.show(context, image_id) |
956 | + images.append(image) |
957 | + except exception.NotFound: |
958 | + continue |
959 | + return images |
960 | + |
961 | + def show(self, context, image_id): |
962 | + try: |
963 | + with open(self._path_to(image_id)) as metadata_file: |
964 | + return json.load(metadata_file) |
965 | + except (IOError, ValueError): |
966 | + raise exception.NotFound |
967 | + |
968 | + def show_by_name(self, context, name): |
969 | + """Returns a dict containing image data for the given name.""" |
970 | + # NOTE(vish): Not very efficient, but the local image service |
971 | + # is for testing so it should be fine. |
972 | + images = self.detail(context) |
973 | + image = None |
974 | + for candidate in images: |
975 | + if name == candidate.get('name'): |
976 | + image = candidate |
977 | + break |
978 | + if image is None: |
979 | + raise exception.NotFound |
980 | + return image |
981 | + |
982 | + def get(self, context, image_id, data): |
983 | + """Get image and metadata.""" |
984 | + try: |
985 | + with open(self._path_to(image_id)) as metadata_file: |
986 | + metadata = json.load(metadata_file) |
987 | + with open(self._path_to(image_id, 'image')) as image_file: |
988 | + shutil.copyfileobj(image_file, data) |
989 | + except (IOError, ValueError): |
990 | + raise exception.NotFound |
991 | + return metadata |
992 | + |
993 | + def create(self, context, metadata, data=None): |
994 | + """Store the image data and return the new image.""" |
995 | + image_id = random.randint(0, 2 ** 31 - 1) |
996 | + image_path = self._path_to(image_id, None) |
997 | + if not os.path.exists(image_path): |
998 | + os.mkdir(image_path) |
999 | + return self.update(context, image_id, metadata, data) |
1000 | + |
1001 | + def update(self, context, image_id, metadata, data=None): |
1002 | """Replace the contents of the given image with the new data.""" |
1003 | + metadata['id'] = image_id |
1004 | try: |
1005 | - pickle.dump(data, open(self._path_to(image_id), 'w')) |
1006 | - except IOError: |
1007 | + if data: |
1008 | + location = self._path_to(image_id, 'image') |
1009 | + with open(location, 'w') as image_file: |
1010 | + shutil.copyfileobj(data, image_file) |
1011 | + # NOTE(vish): update metadata similarly to glance |
1012 | + metadata['status'] = 'active' |
1013 | + metadata['location'] = location |
1014 | + with open(self._path_to(image_id), 'w') as metadata_file: |
1015 | + json.dump(metadata, metadata_file) |
1016 | + except (IOError, ValueError): |
1017 | raise exception.NotFound |
1018 | + return metadata |
1019 | |
1020 | def delete(self, context, image_id): |
1021 | """Delete the given image. |
1022 | @@ -73,18 +126,11 @@ |
1023 | |
1024 | """ |
1025 | try: |
1026 | - os.unlink(self._path_to(image_id)) |
1027 | - except IOError: |
1028 | + shutil.rmtree(self._path_to(image_id, None)) |
1029 | + except (IOError, ValueError): |
1030 | raise exception.NotFound |
1031 | |
1032 | def delete_all(self): |
1033 | """Clears out all images in local directory.""" |
1034 | - for id in self._ids(): |
1035 | - os.unlink(self._path_to(id)) |
1036 | - |
1037 | - def delete_imagedir(self): |
1038 | - """Deletes the local directory. |
1039 | - Raises OSError if directory is not empty. |
1040 | - |
1041 | - """ |
1042 | - os.rmdir(self._path) |
1043 | + for image_id in self._ids(): |
1044 | + shutil.rmtree(self._path_to(image_id, None)) |
1045 | |
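The reworked `LocalImageService` stores each image in a directory named by the zero-padded hex id, with metadata in `info.json`. A sketch of that layout and of how `_ids()` recovers the integer ids (paths and values are illustrative):

```python
import json
import os
import tempfile

# One directory per image, named by the zero-padded hex id, holding an
# 'info.json' metadata file (plus an 'image' file once data is uploaded).
path = tempfile.mkdtemp()
image_id = 42
image_dir = os.path.join(path, '%08x' % image_id)
os.mkdir(image_dir)
with open(os.path.join(image_dir, 'info.json'), 'w') as metadata_file:
    json.dump({'id': image_id, 'name': 'test'}, metadata_file)

# _ids() recovers the integer ids by parsing directory names as base 16.
ids = [int(name, 16) for name in os.listdir(path)]
print(ids)  # [42]
```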
1046 | === modified file 'nova/image/s3.py' |
1047 | --- nova/image/s3.py 2011-02-15 00:51:51 +0000 |
1048 | +++ nova/image/s3.py 2011-03-10 05:15:12 +0000 |
1049 | @@ -21,8 +21,13 @@ |
1050 | objectstore service. |
1051 | """ |
1052 | |
1053 | -import json |
1054 | -import urllib |
1055 | +import binascii |
1056 | +import eventlet |
1057 | +import os |
1058 | +import shutil |
1059 | +import tarfile |
1060 | +import tempfile |
1061 | +from xml.etree import ElementTree |
1062 | |
1063 | import boto.s3.connection |
1064 | |
1065 | @@ -31,84 +36,78 @@ |
1066 | from nova import utils |
1067 | from nova.auth import manager |
1068 | from nova.image import service |
1069 | +from nova.api.ec2 import ec2utils |
1070 | |
1071 | |
1072 | FLAGS = flags.FLAGS |
1073 | - |
1074 | - |
1075 | -def map_s3_to_base(image): |
1076 | - """Convert from S3 format to format defined by BaseImageService.""" |
1077 | - i = {} |
1078 | - i['id'] = image.get('imageId') |
1079 | - i['name'] = image.get('imageId') |
1080 | - i['kernel_id'] = image.get('kernelId') |
1081 | - i['ramdisk_id'] = image.get('ramdiskId') |
1082 | - i['location'] = image.get('imageLocation') |
1083 | - i['owner_id'] = image.get('imageOwnerId') |
1084 | - i['status'] = image.get('imageState') |
1085 | - i['type'] = image.get('type') |
1086 | - i['is_public'] = image.get('isPublic') |
1087 | - i['architecture'] = image.get('architecture') |
1088 | - return i |
1089 | +flags.DEFINE_string('image_decryption_dir', '/tmp', |
1090 | + 'parent dir for tempdir used for image decryption') |
1091 | |
1092 | |
1093 | class S3ImageService(service.BaseImageService): |
1094 | - |
1095 | - def modify(self, context, image_id, operation): |
1096 | - self._conn(context).make_request( |
1097 | - method='POST', |
1098 | - bucket='_images', |
1099 | - query_args=self._qs({'image_id': image_id, |
1100 | - 'operation': operation})) |
1101 | - return True |
1102 | - |
1103 | - def update(self, context, image_id, attributes): |
1104 | - """update an image's attributes / info.json""" |
1105 | - attributes.update({"image_id": image_id}) |
1106 | - self._conn(context).make_request( |
1107 | - method='POST', |
1108 | - bucket='_images', |
1109 | - query_args=self._qs(attributes)) |
1110 | - return True |
1111 | - |
1112 | - def register(self, context, image_location): |
1113 | - """ rpc call to register a new image based from a manifest """ |
1114 | - image_id = utils.generate_uid('ami') |
1115 | - self._conn(context).make_request( |
1116 | - method='PUT', |
1117 | - bucket='_images', |
1118 | - query_args=self._qs({'image_location': image_location, |
1119 | - 'image_id': image_id})) |
1120 | - return image_id |
1121 | + def __init__(self, service=None, *args, **kwargs): |
1122 | + if service is None: |
1123 | + service = utils.import_object(FLAGS.image_service) |
1124 | + self.service = service |
1125 | + self.service.__init__(*args, **kwargs) |
1126 | + |
1127 | + def create(self, context, metadata, data=None): |
1128 | + """metadata['properties'] should contain image_location""" |
1129 | + image = self._s3_create(context, metadata) |
1130 | + return image |
1131 | + |
1132 | + def delete(self, context, image_id): |
1133 | + # FIXME(vish): call to show is to check filter |
1134 | + self.show(context, image_id) |
1135 | + self.service.delete(context, image_id) |
1136 | + |
1137 | + def update(self, context, image_id, metadata, data=None): |
1138 | + # FIXME(vish): call to show is to check filter |
1139 | + self.show(context, image_id) |
1140 | + image = self.service.update(context, image_id, metadata, data) |
1141 | + return image |
1142 | |
1143 | def index(self, context): |
1144 | - """Return a list of all images that a user can see.""" |
1145 | - response = self._conn(context).make_request( |
1146 | - method='GET', |
1147 | - bucket='_images') |
1148 | - images = json.loads(response.read()) |
1149 | - return [map_s3_to_base(i) for i in images] |
1150 | + images = self.service.index(context) |
1151 | + # FIXME(vish): index doesn't filter so we do it manually |
1152 | + return self._filter(context, images) |
1153 | + |
1154 | + def detail(self, context): |
1155 | + images = self.service.detail(context) |
1156 | + # FIXME(vish): detail doesn't filter so we do it manually |
1157 | + return self._filter(context, images) |
1158 | + |
1159 | + @classmethod |
1160 | + def _is_visible(cls, context, image): |
1161 | + return (context.is_admin |
1162 | + or context.project_id == image['properties']['owner_id'] |
1163 | + or image['properties']['is_public'] == 'True') |
1164 | + |
1165 | + @classmethod |
1166 | + def _filter(cls, context, images): |
1167 | + filtered = [] |
1168 | + for image in images: |
1169 | + if not cls._is_visible(context, image): |
1170 | + continue |
1171 | + filtered.append(image) |
1172 | + return filtered |
1173 | |
1174 | def show(self, context, image_id): |
1175 | - """return a image object if the context has permissions""" |
1176 | - if FLAGS.connection_type == 'fake': |
1177 | - return {'imageId': 'bar'} |
1178 | - result = self.index(context) |
1179 | - result = [i for i in result if i['id'] == image_id] |
1180 | - if not result: |
1181 | - raise exception.NotFound(_('Image %s could not be found') |
1182 | - % image_id) |
1183 | - image = result[0] |
1184 | - return image |
1185 | - |
1186 | - def deregister(self, context, image_id): |
1187 | - """ unregister an image """ |
1188 | - self._conn(context).make_request( |
1189 | - method='DELETE', |
1190 | - bucket='_images', |
1191 | - query_args=self._qs({'image_id': image_id})) |
1192 | - |
1193 | - def _conn(self, context): |
1194 | + image = self.service.show(context, image_id) |
1195 | + if not self._is_visible(context, image): |
1196 | + raise exception.NotFound |
1197 | + return image |
1198 | + |
1199 | + def show_by_name(self, context, name): |
1200 | + image = self.service.show_by_name(context, name) |
1201 | + if not self._is_visible(context, image): |
1202 | + raise exception.NotFound |
1203 | + return image |
1204 | + |
1205 | + @staticmethod |
1206 | + def _conn(context): |
1207 | + # TODO(vish): is there a better way to get creds to sign |
1208 | + # for the user? |
1209 | access = manager.AuthManager().get_access_key(context.user, |
1210 | context.project) |
1211 | secret = str(context.user.secret) |
1212 | @@ -120,8 +119,152 @@ |
1213 | port=FLAGS.s3_port, |
1214 | host=FLAGS.s3_host) |
1215 | |
1216 | - def _qs(self, params): |
1217 | - pairs = [] |
1218 | - for key in params.keys(): |
1219 | - pairs.append(key + '=' + urllib.quote(params[key])) |
1220 | - return '&'.join(pairs) |
1221 | + @staticmethod |
1222 | + def _download_file(bucket, filename, local_dir): |
1223 | + key = bucket.get_key(filename) |
1224 | + local_filename = os.path.join(local_dir, filename) |
1225 | + key.get_contents_to_filename(local_filename) |
1226 | + return local_filename |
1227 | + |
1228 | + def _s3_create(self, context, metadata): |
1229 | + """Gets a manifest from S3 and makes an image""" |
1230 | + |
1231 | + image_path = tempfile.mkdtemp(dir=FLAGS.image_decryption_dir) |
1232 | + |
1233 | + image_location = metadata['properties']['image_location'] |
1234 | + bucket_name = image_location.split("/")[0] |
1235 | + manifest_path = image_location[len(bucket_name) + 1:] |
1236 | + bucket = self._conn(context).get_bucket(bucket_name) |
1237 | + key = bucket.get_key(manifest_path) |
1238 | + manifest = key.get_contents_as_string() |
1239 | + |
1240 | + manifest = ElementTree.fromstring(manifest) |
1241 | + image_format = 'ami' |
1242 | + image_type = 'machine' |
1243 | + |
1244 | + try: |
1245 | + kernel_id = manifest.find("machine_configuration/kernel_id").text |
1246 | + if kernel_id == 'true': |
1247 | + image_format = 'aki' |
1248 | + image_type = 'kernel' |
1249 | + kernel_id = None |
1250 | + except Exception: |
1251 | + kernel_id = None |
1252 | + |
1253 | + try: |
1254 | + ramdisk_id = manifest.find("machine_configuration/ramdisk_id").text |
1255 | + if ramdisk_id == 'true': |
1256 | + image_format = 'ari' |
1257 | + image_type = 'ramdisk' |
1258 | + ramdisk_id = None |
1259 | + except Exception: |
1260 | + ramdisk_id = None |
1261 | + |
1262 | + try: |
1263 | + arch = manifest.find("machine_configuration/architecture").text |
1264 | + except Exception: |
1265 | + arch = 'x86_64' |
1266 | + |
1267 | + properties = metadata['properties'] |
1268 | + properties['owner_id'] = context.project_id |
1269 | + properties['architecture'] = arch |
1270 | + |
1271 | + if kernel_id: |
1272 | + properties['kernel_id'] = ec2utils.ec2_id_to_id(kernel_id) |
1273 | + |
1274 | + if ramdisk_id: |
1275 | + properties['ramdisk_id'] = ec2utils.ec2_id_to_id(ramdisk_id) |
1276 | + |
1277 | + properties['is_public'] = False |
1278 | + properties['type'] = image_type |
1279 | + metadata.update({'disk_format': image_format, |
1280 | + 'container_format': image_format, |
1281 | + 'status': 'queued', |
1282 | + 'is_public': True, |
1283 | + 'properties': properties}) |
1284 | + metadata['properties']['image_state'] = 'pending' |
1285 | + image = self.service.create(context, metadata) |
1286 | + image_id = image['id'] |
1287 | + |
1288 | + def delayed_create(): |
1289 | + """This handles the fetching and decrypting of the part files.""" |
1290 | + parts = [] |
1291 | + for fn_element in manifest.find("image").getiterator("filename"): |
1292 | + part = self._download_file(bucket, fn_element.text, image_path) |
1293 | + parts.append(part) |
1294 | + |
1295 | + # NOTE(vish): this may be suboptimal, should we use cat? |
1296 | + encrypted_filename = os.path.join(image_path, 'image.encrypted') |
1297 | + with open(encrypted_filename, 'w') as combined: |
1298 | + for filename in parts: |
1299 | + with open(filename) as part: |
1300 | + shutil.copyfileobj(part, combined) |
1301 | + |
1302 | + metadata['properties']['image_state'] = 'decrypting' |
1303 | + self.service.update(context, image_id, metadata) |
1304 | + |
1305 | + hex_key = manifest.find("image/ec2_encrypted_key").text |
1306 | + encrypted_key = binascii.a2b_hex(hex_key) |
1307 | + hex_iv = manifest.find("image/ec2_encrypted_iv").text |
1308 | + encrypted_iv = binascii.a2b_hex(hex_iv) |
1309 | + |
1310 | + # FIXME(vish): grab key from common service so this can run on |
1311 | + # any host. |
1312 | + cloud_pk = os.path.join(FLAGS.ca_path, "private/cakey.pem") |
1313 | + |
1314 | + decrypted_filename = os.path.join(image_path, 'image.tar.gz') |
1315 | + self._decrypt_image(encrypted_filename, encrypted_key, |
1316 | + encrypted_iv, cloud_pk, decrypted_filename) |
1317 | + |
1318 | + metadata['properties']['image_state'] = 'untarring' |
1319 | + self.service.update(context, image_id, metadata) |
1320 | + |
1321 | + unz_filename = self._untarzip_image(image_path, decrypted_filename) |
1322 | + |
1323 | + metadata['properties']['image_state'] = 'uploading' |
1324 | + with open(unz_filename) as image_file: |
1325 | + self.service.update(context, image_id, metadata, image_file) |
1326 | + metadata['properties']['image_state'] = 'available' |
1327 | + self.service.update(context, image_id, metadata) |
1328 | + |
1329 | + shutil.rmtree(image_path) |
1330 | + |
1331 | + eventlet.spawn_n(delayed_create) |
1332 | + |
1333 | + return image |
1334 | + |
1335 | + @staticmethod |
1336 | + def _decrypt_image(encrypted_filename, encrypted_key, encrypted_iv, |
1337 | + cloud_private_key, decrypted_filename): |
1338 | + key, err = utils.execute( |
1339 | + 'openssl rsautl -decrypt -inkey %s' % cloud_private_key, |
1340 | + process_input=encrypted_key, |
1341 | + check_exit_code=False) |
1342 | + if err: |
1343 | + raise exception.Error(_("Failed to decrypt private key: %s") |
1344 | + % err) |
1345 | + iv, err = utils.execute( |
1346 | + 'openssl rsautl -decrypt -inkey %s' % cloud_private_key, |
1347 | + process_input=encrypted_iv, |
1348 | + check_exit_code=False) |
1349 | + if err: |
1350 | + raise exception.Error(_("Failed to decrypt initialization " |
1351 | + "vector: %s") % err) |
1352 | + |
1353 | + _out, err = utils.execute( |
1354 | + 'openssl enc -d -aes-128-cbc -in %s -K %s -iv %s -out %s' |
1355 | + % (encrypted_filename, key, iv, decrypted_filename), |
1356 | + check_exit_code=False) |
1357 | + if err: |
1358 | + raise exception.Error(_("Failed to decrypt image file " |
1359 | + "%(image_file)s: %(err)s") % |
1360 | + {'image_file': encrypted_filename, |
1361 | + 'err': err}) |
1362 | + |
1363 | + @staticmethod |
1364 | + def _untarzip_image(path, filename): |
1365 | + tar_file = tarfile.open(filename, "r|gz") |
1366 | + tar_file.extractall(path) |
1367 | + image_file = tar_file.getnames()[0] |
1368 | + tar_file.close() |
1369 | + return os.path.join(path, image_file) |
1370 | |
1371 | === modified file 'nova/image/service.py' |
1372 | --- nova/image/service.py 2010-11-18 21:27:52 +0000 |
1373 | +++ nova/image/service.py 2011-03-10 05:15:12 +0000 |
1374 | @@ -56,9 +56,9 @@ |
1375 | """ |
1376 | raise NotImplementedError |
1377 | |
1378 | - def show(self, context, id): |
1379 | + def show(self, context, image_id): |
1380 | """ |
1381 | - Returns a dict containing image data for the given opaque image id. |
1382 | + Returns a dict containing image metadata for the given opaque image id. |
1383 | |
1384 | :retval a mapping with the following signature: |
1385 | |
1386 | @@ -76,17 +76,27 @@ |
1387 | """ |
1388 | raise NotImplementedError |
1389 | |
1390 | - def create(self, context, data): |
1391 | - """ |
1392 | - Store the image data and return the new image id. |
1393 | + def get(self, context, data): |
1394 | + """ |
1395 | + Returns a dict containing image metadata and writes image data to data. |
1396 | + |
1397 | + :param data: a file-like object to hold binary image data |
1398 | + |
1399 | + :raises NotFound if the image does not exist |
1400 | + """ |
1401 | + raise NotImplementedError |
1402 | + |
1403 | + def create(self, context, metadata, data=None): |
1404 | + """ |
1405 | + Store the image metadata and data and return the new image id. |
1406 | |
1407 | :raises AlreadyExists if the image already exist. |
1408 | |
1409 | """ |
1410 | raise NotImplementedError |
1411 | |
1412 | - def update(self, context, image_id, data): |
1413 | - """Replace the contents of the given image with the new data. |
1414 | + def update(self, context, image_id, metadata, data=None): |
1415 | + """Update the given image with the new metadata and data. |
1416 | |
1417 | :raises NotFound if the image does not exist. |
1418 | |
1419 | |
1420 | === modified file 'nova/objectstore/image.py' |
1421 | --- nova/objectstore/image.py 2011-02-19 06:49:13 +0000 |
1422 | +++ nova/objectstore/image.py 2011-03-10 05:15:12 +0000 |
1423 | @@ -37,8 +37,7 @@ |
1424 | |
1425 | |
1426 | FLAGS = flags.FLAGS |
1427 | -flags.DEFINE_string('images_path', '$state_path/images', |
1428 | - 'path to decrypted images') |
1429 | +flags.DECLARE('images_path', 'nova.image.local') |
1430 | |
1431 | |
1432 | class Image(object): |
1433 | |
1434 | === modified file 'nova/tests/api/openstack/fakes.py' |
1435 | --- nova/tests/api/openstack/fakes.py 2011-02-25 04:51:17 +0000 |
1436 | +++ nova/tests/api/openstack/fakes.py 2011-03-10 05:15:12 +0000 |
1437 | @@ -25,6 +25,7 @@ |
1438 | from paste import urlmap |
1439 | |
1440 | from glance import client as glance_client |
1441 | +from glance.common import exception as glance_exc |
1442 | |
1443 | from nova import auth |
1444 | from nova import context |
1445 | @@ -149,25 +150,26 @@ |
1446 | for f in self.fixtures: |
1447 | if f['id'] == image_id: |
1448 | return f |
1449 | - return None |
1450 | + raise glance_exc.NotFound |
1451 | |
1452 | - def fake_add_image(self, image_meta): |
1453 | + def fake_add_image(self, image_meta, data=None): |
1454 | id = ''.join(random.choice(string.letters) for _ in range(20)) |
1455 | image_meta['id'] = id |
1456 | self.fixtures.append(image_meta) |
1457 | - return id |
1458 | + return image_meta |
1459 | |
1460 | - def fake_update_image(self, image_id, image_meta): |
1461 | + def fake_update_image(self, image_id, image_meta, data=None): |
1462 | f = self.fake_get_image_meta(image_id) |
1463 | if not f: |
1464 | - raise exc.NotFound |
1465 | + raise glance_exc.NotFound |
1466 | |
1467 | f.update(image_meta) |
1468 | + return f |
1469 | |
1470 | def fake_delete_image(self, image_id): |
1471 | f = self.fake_get_image_meta(image_id) |
1472 | if not f: |
1473 | - raise exc.NotFound |
1474 | + raise glance_exc.NotFound |
1475 | |
1476 | self.fixtures.remove(f) |
1477 | |
1478 | |
1479 | === modified file 'nova/tests/api/openstack/test_images.py' |
1480 | --- nova/tests/api/openstack/test_images.py 2011-02-23 19:56:37 +0000 |
1481 | +++ nova/tests/api/openstack/test_images.py 2011-03-10 05:15:12 +0000 |
1482 | @@ -22,6 +22,8 @@ |
1483 | |
1484 | import json |
1485 | import datetime |
1486 | +import shutil |
1487 | +import tempfile |
1488 | |
1489 | import stubout |
1490 | import webob |
1491 | @@ -54,7 +56,7 @@ |
1492 | |
1493 | num_images = len(self.service.index(self.context)) |
1494 | |
1495 | - id = self.service.create(self.context, fixture) |
1496 | + id = self.service.create(self.context, fixture)['id'] |
1497 | |
1498 | self.assertNotEquals(None, id) |
1499 | self.assertEquals(num_images + 1, |
1500 | @@ -71,7 +73,7 @@ |
1501 | |
1502 | num_images = len(self.service.index(self.context)) |
1503 | |
1504 | - id = self.service.create(self.context, fixture) |
1505 | + id = self.service.create(self.context, fixture)['id'] |
1506 | |
1507 | self.assertNotEquals(None, id) |
1508 | |
1509 | @@ -89,7 +91,7 @@ |
1510 | 'instance_id': None, |
1511 | 'progress': None} |
1512 | |
1513 | - id = self.service.create(self.context, fixture) |
1514 | + id = self.service.create(self.context, fixture)['id'] |
1515 | |
1516 | fixture['status'] = 'in progress' |
1517 | |
1518 | @@ -118,7 +120,7 @@ |
1519 | |
1520 | ids = [] |
1521 | for fixture in fixtures: |
1522 | - new_id = self.service.create(self.context, fixture) |
1523 | + new_id = self.service.create(self.context, fixture)['id'] |
1524 | ids.append(new_id) |
1525 | |
1526 | num_images = len(self.service.index(self.context)) |
1527 | @@ -137,14 +139,15 @@ |
1528 | |
1529 | def setUp(self): |
1530 | super(LocalImageServiceTest, self).setUp() |
1531 | + self.tempdir = tempfile.mkdtemp() |
1532 | + self.flags(images_path=self.tempdir) |
1533 | self.stubs = stubout.StubOutForTesting() |
1534 | service_class = 'nova.image.local.LocalImageService' |
1535 | self.service = utils.import_object(service_class) |
1536 | self.context = context.RequestContext(None, None) |
1537 | |
1538 | def tearDown(self): |
1539 | - self.service.delete_all() |
1540 | - self.service.delete_imagedir() |
1541 | + shutil.rmtree(self.tempdir) |
1542 | self.stubs.UnsetAll() |
1543 | super(LocalImageServiceTest, self).tearDown() |
1544 | |
1545 | |
1546 | === modified file 'nova/tests/fake_flags.py' |
1547 | --- nova/tests/fake_flags.py 2011-02-23 23:32:31 +0000 |
1548 | +++ nova/tests/fake_flags.py 2011-03-10 05:15:12 +0000 |
1549 | @@ -32,6 +32,7 @@ |
1550 | FLAGS.network_size = 8 |
1551 | FLAGS.num_networks = 2 |
1552 | FLAGS.fake_network = True |
1553 | +FLAGS.image_service = 'nova.image.local.LocalImageService' |
1554 | flags.DECLARE('num_shelves', 'nova.volume.driver') |
1555 | flags.DECLARE('blades_per_shelf', 'nova.volume.driver') |
1556 | flags.DECLARE('iscsi_num_targets', 'nova.volume.driver') |
1557 | |
1558 | === modified file 'nova/tests/test_cloud.py' |
1559 | --- nova/tests/test_cloud.py 2011-03-03 15:30:19 +0000 |
1560 | +++ nova/tests/test_cloud.py 2011-03-10 05:15:12 +0000 |
1561 | @@ -38,6 +38,8 @@ |
1562 | from nova.auth import manager |
1563 | from nova.compute import power_state |
1564 | from nova.api.ec2 import cloud |
1565 | +from nova.api.ec2 import ec2utils |
1566 | +from nova.image import local |
1567 | from nova.objectstore import image |
1568 | |
1569 | |
1570 | @@ -76,6 +78,12 @@ |
1571 | project=self.project) |
1572 | host = self.network.get_network_host(self.context.elevated()) |
1573 | |
1574 | + def fake_show(meh, context, id): |
1575 | + return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}} |
1576 | + |
1577 | + self.stubs.Set(local.LocalImageService, 'show', fake_show) |
1578 | + self.stubs.Set(local.LocalImageService, 'show_by_name', fake_show) |
1579 | + |
1580 | def tearDown(self): |
1581 | network_ref = db.project_get_network(self.context, |
1582 | self.project.id) |
1583 | @@ -122,7 +130,7 @@ |
1584 | self.cloud.allocate_address(self.context) |
1585 | inst = db.instance_create(self.context, {'host': self.compute.host}) |
1586 | fixed = self.network.allocate_fixed_ip(self.context, inst['id']) |
1587 | - ec2_id = cloud.id_to_ec2_id(inst['id']) |
1588 | + ec2_id = ec2utils.id_to_ec2_id(inst['id']) |
1589 | self.cloud.associate_address(self.context, |
1590 | instance_id=ec2_id, |
1591 | public_ip=address) |
1592 | @@ -158,12 +166,12 @@ |
1593 | vol2 = db.volume_create(self.context, {}) |
1594 | result = self.cloud.describe_volumes(self.context) |
1595 | self.assertEqual(len(result['volumeSet']), 2) |
1596 | - volume_id = cloud.id_to_ec2_id(vol2['id'], 'vol-%08x') |
1597 | + volume_id = ec2utils.id_to_ec2_id(vol2['id'], 'vol-%08x') |
1598 | result = self.cloud.describe_volumes(self.context, |
1599 | volume_id=[volume_id]) |
1600 | self.assertEqual(len(result['volumeSet']), 1) |
1601 | self.assertEqual( |
1602 | - cloud.ec2_id_to_id(result['volumeSet'][0]['volumeId']), |
1603 | + ec2utils.ec2_id_to_id(result['volumeSet'][0]['volumeId']), |
1604 | vol2['id']) |
1605 | db.volume_destroy(self.context, vol1['id']) |
1606 | db.volume_destroy(self.context, vol2['id']) |
1607 | @@ -188,8 +196,10 @@ |
1608 | def test_describe_instances(self): |
1609 | """Makes sure describe_instances works and filters results.""" |
1610 | inst1 = db.instance_create(self.context, {'reservation_id': 'a', |
1611 | + 'image_id': 1, |
1612 | 'host': 'host1'}) |
1613 | inst2 = db.instance_create(self.context, {'reservation_id': 'a', |
1614 | + 'image_id': 1, |
1615 | 'host': 'host2'}) |
1616 | comp1 = db.service_create(self.context, {'host': 'host1', |
1617 | 'availability_zone': 'zone1', |
1618 | @@ -200,7 +210,7 @@ |
1619 | result = self.cloud.describe_instances(self.context) |
1620 | result = result['reservationSet'][0] |
1621 | self.assertEqual(len(result['instancesSet']), 2) |
1622 | - instance_id = cloud.id_to_ec2_id(inst2['id']) |
1623 | + instance_id = ec2utils.id_to_ec2_id(inst2['id']) |
1624 | result = self.cloud.describe_instances(self.context, |
1625 | instance_id=[instance_id]) |
1626 | result = result['reservationSet'][0] |
1627 | @@ -215,10 +225,9 @@ |
1628 | db.service_destroy(self.context, comp2['id']) |
1629 | |
1630 | def test_console_output(self): |
1631 | - image_id = FLAGS.default_image |
1632 | instance_type = FLAGS.default_instance_type |
1633 | max_count = 1 |
1634 | - kwargs = {'image_id': image_id, |
1635 | + kwargs = {'image_id': 'ami-1', |
1636 | 'instance_type': instance_type, |
1637 | 'max_count': max_count} |
1638 | rv = self.cloud.run_instances(self.context, **kwargs) |
1639 | @@ -234,8 +243,7 @@ |
1640 | greenthread.sleep(0.3) |
1641 | |
1642 | def test_ajax_console(self): |
1643 | - image_id = FLAGS.default_image |
1644 | - kwargs = {'image_id': image_id} |
1645 | + kwargs = {'image_id': 'ami-1'} |
1646 | rv = self.cloud.run_instances(self.context, **kwargs) |
1647 | instance_id = rv['instancesSet'][0]['instanceId'] |
1648 | greenthread.sleep(0.3) |
1649 | @@ -347,7 +355,7 @@ |
1650 | |
1651 | def test_update_of_instance_display_fields(self): |
1652 | inst = db.instance_create(self.context, {}) |
1653 | - ec2_id = cloud.id_to_ec2_id(inst['id']) |
1654 | + ec2_id = ec2utils.id_to_ec2_id(inst['id']) |
1655 | self.cloud.update_instance(self.context, ec2_id, |
1656 | display_name='c00l 1m4g3') |
1657 | inst = db.instance_get(self.context, inst['id']) |
1658 | @@ -365,7 +373,7 @@ |
1659 | def test_update_of_volume_display_fields(self): |
1660 | vol = db.volume_create(self.context, {}) |
1661 | self.cloud.update_volume(self.context, |
1662 | - cloud.id_to_ec2_id(vol['id'], 'vol-%08x'), |
1663 | + ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x'), |
1664 | display_name='c00l v0lum3') |
1665 | vol = db.volume_get(self.context, vol['id']) |
1666 | self.assertEqual('c00l v0lum3', vol['display_name']) |
1667 | @@ -374,7 +382,7 @@ |
1668 | def test_update_of_volume_wont_update_private_fields(self): |
1669 | vol = db.volume_create(self.context, {}) |
1670 | self.cloud.update_volume(self.context, |
1671 | - cloud.id_to_ec2_id(vol['id'], 'vol-%08x'), |
1672 | + ec2utils.id_to_ec2_id(vol['id'], 'vol-%08x'), |
1673 | mountpoint='/not/here') |
1674 | vol = db.volume_get(self.context, vol['id']) |
1675 | self.assertEqual(None, vol['mountpoint']) |
1676 | |
1677 | === modified file 'nova/tests/test_compute.py' |
1678 | --- nova/tests/test_compute.py 2011-02-28 21:42:54 +0000 |
1679 | +++ nova/tests/test_compute.py 2011-03-10 05:15:12 +0000 |
1680 | @@ -31,7 +31,7 @@ |
1681 | from nova import utils |
1682 | from nova.auth import manager |
1683 | from nova.compute import instance_types |
1684 | - |
1685 | +from nova.image import local |
1686 | |
1687 | LOG = logging.getLogger('nova.tests.compute') |
1688 | FLAGS = flags.FLAGS |
1689 | @@ -52,6 +52,11 @@ |
1690 | self.project = self.manager.create_project('fake', 'fake', 'fake') |
1691 | self.context = context.RequestContext('fake', 'fake', False) |
1692 | |
1693 | + def fake_show(meh, context, id): |
1694 | + return {'id': 1, 'properties': {'kernel_id': 1, 'ramdisk_id': 1}} |
1695 | + |
1696 | + self.stubs.Set(local.LocalImageService, 'show', fake_show) |
1697 | + |
1698 | def tearDown(self): |
1699 | self.manager.delete_user(self.user) |
1700 | self.manager.delete_project(self.project) |
1701 | @@ -60,7 +65,7 @@ |
1702 | def _create_instance(self, params={}): |
1703 | """Create a test instance""" |
1704 | inst = {} |
1705 | - inst['image_id'] = 'ami-test' |
1706 | + inst['image_id'] = 1 |
1707 | inst['reservation_id'] = 'r-fakeres' |
1708 | inst['launch_time'] = '10' |
1709 | inst['user_id'] = self.user.id |
1710 | |
1711 | === modified file 'nova/tests/test_console.py' |
1712 | --- nova/tests/test_console.py 2011-02-21 07:16:10 +0000 |
1713 | +++ nova/tests/test_console.py 2011-03-10 05:15:12 +0000 |
1714 | @@ -57,7 +57,7 @@ |
1715 | inst = {} |
1716 | #inst['host'] = self.host |
1717 | #inst['name'] = 'instance-1234' |
1718 | - inst['image_id'] = 'ami-test' |
1719 | + inst['image_id'] = 1 |
1720 | inst['reservation_id'] = 'r-fakeres' |
1721 | inst['launch_time'] = '10' |
1722 | inst['user_id'] = self.user.id |
1723 | |
1724 | === modified file 'nova/tests/test_direct.py' |
1725 | --- nova/tests/test_direct.py 2011-03-03 22:21:21 +0000 |
1726 | +++ nova/tests/test_direct.py 2011-03-10 05:15:12 +0000 |
1727 | @@ -93,8 +93,7 @@ |
1728 | class DirectCloudTestCase(test_cloud.CloudTestCase): |
1729 | def setUp(self): |
1730 | super(DirectCloudTestCase, self).setUp() |
1731 | - compute_handle = compute.API(image_service=self.cloud.image_service, |
1732 | - network_api=self.cloud.network_api, |
1733 | + compute_handle = compute.API(network_api=self.cloud.network_api, |
1734 | volume_api=self.cloud.volume_api) |
1735 | direct.register_service('compute', compute_handle) |
1736 | self.router = direct.JsonParamsMiddleware(direct.Router()) |
1737 | |
1738 | === modified file 'nova/tests/test_quota.py' |
1739 | --- nova/tests/test_quota.py 2011-03-01 00:28:46 +0000 |
1740 | +++ nova/tests/test_quota.py 2011-03-10 05:15:12 +0000 |
1741 | @@ -20,11 +20,12 @@ |
1742 | from nova import context |
1743 | from nova import db |
1744 | from nova import flags |
1745 | +from nova import network |
1746 | from nova import quota |
1747 | from nova import test |
1748 | from nova import utils |
1749 | +from nova import volume |
1750 | from nova.auth import manager |
1751 | -from nova.api.ec2 import cloud |
1752 | from nova.compute import instance_types |
1753 | |
1754 | |
1755 | @@ -41,7 +42,6 @@ |
1756 | quota_gigabytes=20, |
1757 | quota_floating_ips=1) |
1758 | |
1759 | - self.cloud = cloud.CloudController() |
1760 | self.manager = manager.AuthManager() |
1761 | self.user = self.manager.create_user('admin', 'admin', 'admin', True) |
1762 | self.project = self.manager.create_project('admin', 'admin', 'admin') |
1763 | @@ -57,7 +57,7 @@ |
1764 | def _create_instance(self, cores=2): |
1765 | """Create a test instance""" |
1766 | inst = {} |
1767 | - inst['image_id'] = 'ami-test' |
1768 | + inst['image_id'] = 1 |
1769 | inst['reservation_id'] = 'r-fakeres' |
1770 | inst['user_id'] = self.user.id |
1771 | inst['project_id'] = self.project.id |
1772 | @@ -118,12 +118,12 @@ |
1773 | for i in range(FLAGS.quota_instances): |
1774 | instance_id = self._create_instance() |
1775 | instance_ids.append(instance_id) |
1776 | - self.assertRaises(quota.QuotaError, self.cloud.run_instances, |
1777 | + self.assertRaises(quota.QuotaError, compute.API().create, |
1778 | self.context, |
1779 | min_count=1, |
1780 | max_count=1, |
1781 | instance_type='m1.small', |
1782 | - image_id='fake') |
1783 | + image_id=1) |
1784 | for instance_id in instance_ids: |
1785 | db.instance_destroy(self.context, instance_id) |
1786 | |
1787 | @@ -131,12 +131,12 @@ |
1788 | instance_ids = [] |
1789 | instance_id = self._create_instance(cores=4) |
1790 | instance_ids.append(instance_id) |
1791 | - self.assertRaises(quota.QuotaError, self.cloud.run_instances, |
1792 | + self.assertRaises(quota.QuotaError, compute.API().create, |
1793 | self.context, |
1794 | min_count=1, |
1795 | max_count=1, |
1796 | instance_type='m1.small', |
1797 | - image_id='fake') |
1798 | + image_id=1) |
1799 | for instance_id in instance_ids: |
1800 | db.instance_destroy(self.context, instance_id) |
1801 | |
1802 | @@ -145,9 +145,12 @@ |
1803 | for i in range(FLAGS.quota_volumes): |
1804 | volume_id = self._create_volume() |
1805 | volume_ids.append(volume_id) |
1806 | - self.assertRaises(quota.QuotaError, self.cloud.create_volume, |
1807 | - self.context, |
1808 | - size=10) |
1809 | + self.assertRaises(quota.QuotaError, |
1810 | + volume.API().create, |
1811 | + self.context, |
1812 | + size=10, |
1813 | + name='', |
1814 | + description='') |
1815 | for volume_id in volume_ids: |
1816 | db.volume_destroy(self.context, volume_id) |
1817 | |
1818 | @@ -156,9 +159,11 @@ |
1819 | volume_id = self._create_volume(size=20) |
1820 | volume_ids.append(volume_id) |
1821 | self.assertRaises(quota.QuotaError, |
1822 | - self.cloud.create_volume, |
1823 | + volume.API().create, |
1824 | self.context, |
1825 | - size=10) |
1826 | + size=10, |
1827 | + name='', |
1828 | + description='') |
1829 | for volume_id in volume_ids: |
1830 | db.volume_destroy(self.context, volume_id) |
1831 | |
1832 | @@ -172,7 +177,8 @@ |
1833 | # make an rpc.call, the test just finishes with OK. It |
1834 | # appears to be something in the magic inline callbacks |
1835 | # that is breaking. |
1836 | - self.assertRaises(quota.QuotaError, self.cloud.allocate_address, |
1837 | + self.assertRaises(quota.QuotaError, |
1838 | + network.API().allocate_floating_ip, |
1839 | self.context) |
1840 | db.floating_ip_destroy(context.get_admin_context(), address) |
1841 | |
1842 | |
1843 | === modified file 'nova/tests/test_scheduler.py' |
1844 | --- nova/tests/test_scheduler.py 2011-02-23 19:56:37 +0000 |
1845 | +++ nova/tests/test_scheduler.py 2011-03-10 05:15:12 +0000 |
1846 | @@ -155,7 +155,7 @@ |
1847 | def _create_instance(self, **kwargs): |
1848 | """Create a test instance""" |
1849 | inst = {} |
1850 | - inst['image_id'] = 'ami-test' |
1851 | + inst['image_id'] = 1 |
1852 | inst['reservation_id'] = 'r-fakeres' |
1853 | inst['user_id'] = self.user.id |
1854 | inst['project_id'] = self.project.id |
1855 | @@ -169,8 +169,6 @@ |
1856 | def _create_volume(self): |
1857 | """Create a test volume""" |
1858 | vol = {} |
1859 | - vol['image_id'] = 'ami-test' |
1860 | - vol['reservation_id'] = 'r-fakeres' |
1861 | vol['size'] = 1 |
1862 | vol['availability_zone'] = 'test' |
1863 | return db.volume_create(self.context, vol)['id'] |
1864 | |
1865 | === modified file 'nova/tests/test_volume.py' |
1866 | --- nova/tests/test_volume.py 2011-01-04 05:23:35 +0000 |
1867 | +++ nova/tests/test_volume.py 2011-03-10 05:15:12 +0000 |
1868 | @@ -99,7 +99,7 @@ |
1869 | def test_run_attach_detach_volume(self): |
1870 | """Make sure volume can be attached and detached from instance.""" |
1871 | inst = {} |
1872 | - inst['image_id'] = 'ami-test' |
1873 | + inst['image_id'] = 1 |
1874 | inst['reservation_id'] = 'r-fakeres' |
1875 | inst['launch_time'] = '10' |
1876 | inst['user_id'] = 'fake' |
1877 | |
1878 | === modified file 'nova/virt/images.py' |
1879 | --- nova/virt/images.py 2011-03-08 06:01:41 +0000 |
1880 | +++ nova/virt/images.py 2011-03-10 05:15:12 +0000 |
1881 | @@ -28,29 +28,32 @@ |
1882 | import urllib2 |
1883 | import urlparse |
1884 | |
1885 | +from nova import context |
1886 | from nova import flags |
1887 | from nova import log as logging |
1888 | from nova import utils |
1889 | from nova.auth import manager |
1890 | from nova.auth import signer |
1891 | -from nova.objectstore import image |
1892 | |
1893 | |
1894 | FLAGS = flags.FLAGS |
1895 | -flags.DEFINE_bool('use_s3', True, |
1896 | - 'whether to get images from s3 or use local copy') |
1897 | - |
1898 | LOG = logging.getLogger('nova.virt.images') |
1899 | |
1900 | |
1901 | -def fetch(image, path, user, project): |
1902 | - if FLAGS.use_s3: |
1903 | - f = _fetch_s3_image |
1904 | - else: |
1905 | - f = _fetch_local_image |
1906 | - return f(image, path, user, project) |
1907 | - |
1908 | - |
1909 | +def fetch(image_id, path, _user, _project): |
1910 | + # TODO(vish): Improve context handling and add owner and auth data |
1911 | + # when it is added to glance. Right now there is no |
1912 | + # auth checking in glance, so we assume that access was |
1913 | + # checked before we got here. |
1914 | + image_service = utils.import_object(FLAGS.image_service) |
1915 | + with open(path, "wb") as image_file: |
1916 | + elevated = context.get_admin_context() |
1917 | + metadata = image_service.get(elevated, image_id, image_file) |
1918 | + return metadata |
1919 | + |
1920 | + |
1921 | +# NOTE(vish): The methods below should be unnecessary, but I'm leaving |
1922 | +# them in case the glance client does not work on windows. |
1923 | def _fetch_image_no_curl(url, path, headers): |
1924 | request = urllib2.Request(url) |
1925 | for (k, v) in headers.iteritems(): |
1926 | @@ -109,6 +112,8 @@ |
1927 | return os.path.join(FLAGS.images_path, path) |
1928 | |
1929 | |
1930 | +# TODO(vish): xenapi should use the glance client code directly instead |
1931 | +# of retrieving the image using this method. |
1932 | def image_url(image): |
1933 | if FLAGS.image_service == "nova.image.glance.GlanceImageService": |
1934 | return "http://%s:%s/images/%s" % (FLAGS.glance_host, |
1935 | |
1936 | === modified file 'nova/virt/libvirt_conn.py' |
1937 | --- nova/virt/libvirt_conn.py 2011-03-09 23:45:00 +0000 |
1938 | +++ nova/virt/libvirt_conn.py 2011-03-10 05:15:12 +0000 |
1939 | @@ -591,21 +591,23 @@ |
1940 | 'ramdisk_id': inst['ramdisk_id']} |
1941 | |
1942 | if disk_images['kernel_id']: |
1943 | + fname = '%08x' % int(disk_images['kernel_id']) |
1944 | self._cache_image(fn=self._fetch_image, |
1945 | target=basepath('kernel'), |
1946 | - fname=disk_images['kernel_id'], |
1947 | + fname=fname, |
1948 | image_id=disk_images['kernel_id'], |
1949 | user=user, |
1950 | project=project) |
1951 | if disk_images['ramdisk_id']: |
1952 | + fname = '%08x' % int(disk_images['ramdisk_id']) |
1953 | self._cache_image(fn=self._fetch_image, |
1954 | target=basepath('ramdisk'), |
1955 | - fname=disk_images['ramdisk_id'], |
1956 | + fname=fname, |
1957 | image_id=disk_images['ramdisk_id'], |
1958 | user=user, |
1959 | project=project) |
1960 | |
1961 | - root_fname = disk_images['image_id'] |
1962 | + root_fname = '%08x' % int(disk_images['image_id']) |
1963 | size = FLAGS.minimum_root_size |
1964 | if inst['instance_type'] == 'm1.tiny' or suffix == '.rescue': |
1965 | size = None |
226 + kernel_id=kwargs.get('kernel_id'),
So we seem to go back and forth on this a lot. :) Is kernel_id required or not these days?
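For reviewers tracing the new delayed_create flow: after the parts are pulled from s3, they are simply concatenated, decrypted, and untarred before being handed to the wrapped image service. Here is a stdlib-only sketch of the concatenate and untar steps (the decrypt step shells out to openssl and is omitted; the file names and helper names below are made up for illustration, not taken from the branch):

```python
import os
import shutil
import tarfile
import tempfile


def combine_parts(parts, combined_filename):
    """Concatenate downloaded manifest parts into one encrypted file,
    as S3ImageService.delayed_create does before decryption."""
    with open(combined_filename, 'wb') as combined:
        for filename in parts:
            with open(filename, 'rb') as part:
                shutil.copyfileobj(part, combined)


def untarzip_image(path, filename):
    """Mirror of the _untarzip_image helper: extract a .tar.gz archive
    and return the path of its first member."""
    tar_file = tarfile.open(filename, 'r|gz')
    tar_file.extractall(path)
    image_file = tar_file.getnames()[0]
    tar_file.close()
    return os.path.join(path, image_file)


# Round-trip demo with throwaway data.
workdir = tempfile.mkdtemp()
part_names = []
for i, chunk in enumerate([b'image ', b'bytes']):
    name = os.path.join(workdir, 'image.part.%d' % i)
    with open(name, 'wb') as f:
        f.write(chunk)
    part_names.append(name)

# Step 1: concatenate the parts into a single file.
combined = os.path.join(workdir, 'image.img')
combine_parts(part_names, combined)

# Step 2 (decryption) skipped; pretend the combined file was decrypted,
# then pack it into a tarball the way an EC2 bundle would arrive.
tarball = os.path.join(workdir, 'image.tar.gz')
with tarfile.open(tarball, 'w:gz') as tf:
    tf.add(combined, arcname='machine.img')

# Step 3: untar and recover the disk image.
extracted = untarzip_image(workdir, tarball)
with open(extracted, 'rb') as f:
    result = f.read()
shutil.rmtree(workdir)
```

Note that `getnames()` on a stream-mode (`'r|gz'`) tarfile only returns the members that have already been read, which is why it is called after `extractall`.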