Merge lp:~jaypipes/glance/bug713154 into lp:~hudson-openstack/glance/trunk
| Status | Merged |
|---|---|
| Approved by | Brian Waldon |
| Approved revision | 148 |
| Merged at revision | 170 |
| Proposed branch | lp:~jaypipes/glance/bug713154 |
| Merge into | lp:~hudson-openstack/glance/trunk |
| To merge this branch | bzr merge lp:~jaypipes/glance/bug713154 |

Diff against target: 1523 lines (+1276/-84), 10 files modified

- doc/source/configuring.rst (+82/-0)
- etc/glance-api.conf (+23/-0)
- glance/api/v1/images.py (+5/-0)
- glance/store/s3.py (+259/-66)
- tests/functional/__init__.py (+8/-0)
- tests/functional/test_s3.conf (+21/-0)
- tests/functional/test_s3.py (+532/-0)
- tests/unit/test_s3_store.py (+345/-0)
- tests/unit/test_stores.py (+0/-18)
- tools/pip-requires (+1/-0)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Brian Waldon (community) | Approve | | |
| Brian Lamar (community) | Approve | | |
| Dan Prince | Pending | | |
| Christopher MacGown | Pending | | |
| Rick Harris | Pending | | |

Review via email: mp+68863@code.launchpad.net
Commit message
Description of the change
Completes the S3 storage backend. It turned out that the original code did not actually match the boto API, and the stubs in the unit tests were hiding this fact.
Adds unit tests that bring S3 testing up to par with the Swift backend.
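A minimal sketch of how an over-permissive stub can hide an API mismatch like the one described above (all names here are hypothetical illustrations, not the actual Glance test code):

```python
class PermissiveStub:
    """An over-permissive test double: it accepts *any* method call.

    If production code calls a method the real library does not have,
    a stub like this still lets the test pass silently.
    """
    def __getattr__(self, name):
        return lambda *args, **kwargs: PermissiveStub()


class RealConnection:
    """Stand-in for the real library: only get_bucket() exists."""
    def get_bucket(self, name):
        return "bucket:%s" % name


def fetch_bucket(conn, name):
    # Bug: lookup_bucket() is not part of the "real" API above
    return conn.lookup_bucket(name)


stub_result = fetch_bucket(PermissiveStub(), "images")  # passes silently
try:
    fetch_bucket(RealConnection(), "images")
    real_api_accepted = True
except AttributeError:
    real_api_accepted = False  # the real API rejects the call
```

Exercising the real API (or a faithful fake of it) is what surfaces the mismatch, which is exactly what the new functional tests do.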
Brian Waldon (bcwaldon) wrote:
If I try to upload to s3 with s3_store_
Jay Pipes (jaypipes) wrote:
> I could successfully upload and download images through glance-cli and the api
> with this code over http. There are a few things I would like to bring up:
>
> 1) The suggested s3_store_bucket value did not work for me. Amazon rejected it
> complaining about uppercase characters.
Fixed. Added a note in the docs and in the configuration file.
> 2) When I tried https (s3_store_host=http://
> but download failed with the following exception: "UnsupportedBac
> backend found for 's3+https'"
Fixed.
> 3) Are the packages going to be updated with the boto dependency?
Yes, though the larger question is how best to keep these kinds of things in sync...
> 4) Can you update this doc page w/ s3 configuration?
> http://
Fixed.
Cheers,
jay
Jay Pipes (jaypipes) wrote:
> If I try to upload to s3 with s3_store_
> get the following output: "Error uploading image: global name 'httplib' is not
> defined". It correctly prevents the creation of the bucket, it just looks like
> bad error handling:
Fixed that, too. Just missing an import.
-jay
Brian Waldon (bcwaldon) wrote:
> > If I try to upload to s3 with s3_store_
> > get the following output: "Error uploading image: global name 'httplib' is
> not
> > defined". It correctly prevents the creation of the bucket, it just looks
> like
> > bad error handling:
>
> Fixed that, too. Just missing an import.
Now I'm getting "Error uploading image: global name 'config' is not defined" with that flag set to True or False with a non-existent bucket. Looks like this line in glance/store/s3.py is the culprit:
add_bucket = config.get_option(options, 's3_store_create_bucket_on_put', type='bool', default=False)
Brian Waldon (bcwaldon) wrote:
Thanks for the documentation. It should be much clearer now. However, I'm still having the same issue with #2 above (uploading with https). Any idea what it might be?
Jay Pipes (jaypipes) wrote:
> Now I'm getting "Error uploading image: global name 'config' is not defined"
> with that flag set to True or False with a non-existent bucket. Looks like
> this line in glance/store/s3.py is the culprit:
>
> add_bucket = config.get_option(options,
> 's3_store_create_bucket_on_put',
> type='bool', default=False)
Strange. This doesn't come up for me...
I think this might be a MacOSX thingie. I'll log into our MacOSX server testing box to see what I can reproduce.
jay
Jay Pipes (jaypipes) wrote:
> Now I'm getting "Error uploading image: global name 'config' is not defined"
> with that flag set to True or False with a non-existent bucket. Looks like
> this line in glance/store/s3.py is the culprit:
>
> add_bucket = config.get_option(options,
> 's3_store_create_bucket_on_put',
> type='bool', default=False)
Hmm, so I wasn't hitting this because the bucket I was using in testing was already created. I added the import for common.config and this should be fixed. I'll still look into the MacOSX testing.
-jay
Brian Waldon (bcwaldon) wrote:
> > Now I'm getting "Error uploading image: global name 'config' is not defined"
> > with that flag set to True or False with a non-existent bucket. Looks like
> > this line in glance/store/s3.py is the culprit:
> >
> > add_bucket = config.get_option(options,
> > 's3_store_create_bucket_on_put',
> > type='bool', default=False)
>
> Strange. This doesn't come up for me...
>
> I think this might be a MacOSX thingie. I'll log into our MacOSX server
> testing box to see what I can reproduce.
>
> jay
I was testing on Maverick, btw.
Jay Pipes (jaypipes) wrote:
Could you try it again, Brian?
Brian Waldon (bcwaldon) wrote:
> Could you try it again, Brian?
The s3_store_
Brian Waldon (bcwaldon) wrote:
Lamar and I came up with this diff to address the other outstanding problem:
=== modified file 'glance/
--- glance/
+++ glance/
@@ -78,7 +78,8 @@
pieces = urlparse.
if pieces.scheme not in SCHEME_
raise exception.
- loc = Location(
+ store_name = SCHEME_
+ loc = Location(
return loc
Brian Waldon (bcwaldon) wrote:
Adding an image with a non-existent object works, but subsequent requests for that image fail with this traceback:
2011-07-25 19:36:11 DEBUG [eventlet.
File "/usr/lib/
for data in result:
File "/usr/lib/
options=
File "/usr/lib/
return backend_
File "/usr/lib/
key = get_key(bucket_obj, loc.key)
File "/usr/lib/
if not key.exists():
AttributeError: 'NoneType' object has no attribute 'exists'
Brian Waldon (bcwaldon) wrote:
> Adding an image with a non-existent object works, but subsequent requests for
> that image fail with this traceback:
>
>
> 2011-07-25 19:36:11 DEBUG [eventlet.
> call last):
> File "/usr/lib/
> handle_one_response
> for data in result:
> File "/usr/lib/
> image_iterator
> options=
> File "/usr/lib/
> get_from_backend
> return backend_
> File "/usr/lib/
> key = get_key(bucket_obj, loc.key)
> File "/usr/lib/
> if not key.exists():
> AttributeError: 'NoneType' object has no attribute 'exists'
To be clear, I was using glance add and a location option to specify a valid s3 connection string to an invalid object identifier.
Brian Waldon (bcwaldon) wrote:
Trying to delete an image with a non-existent s3 object also fails with the same traceback. This raises the following question: should we be trying to delete s3 objects when we delete an image from the glance api? I understand we could just remove the image from the registry, but our command-line tool doesn't support that.
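The traceback above comes from boto returning None for a missing key. A minimal sketch of the guard that turns that None into a proper error (the bucket class here is a hypothetical stand-in for boto's, and the NotFound name follows Glance's exception convention):

```python
class NotFound(Exception):
    """Mirrors the role of glance.common.exception.NotFound."""


class FakeBucket:
    """Hypothetical stand-in for boto.s3.bucket.Bucket."""
    def __init__(self, objects):
        self.objects = objects

    def get_key(self, name):
        # boto returns None for a missing key rather than raising
        return self.objects.get(name)


def get_key(bucket, obj):
    """Fetch a key, raising NotFound instead of handing back None."""
    key = bucket.get_key(obj)
    if not key:
        raise NotFound("Could not find key %s in bucket" % obj)
    return key
```

With a check like this, a GET or DELETE against a missing object produces a clean NotFound instead of an AttributeError on None.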
Jay Pipes (jaypipes) wrote:
> Lamar and I came up with this diff to address the other outstanding problem:
>
> === modified file 'glance/
> --- glance/
> +++ glance/
> @@ -78,7 +78,8 @@
> pieces = urlparse.
> if pieces.scheme not in SCHEME_
> raise exception.
> - loc = Location(
> + store_name = SCHEME_
> + loc = Location(
> return loc
Hmm, actually, the code in Location.
Pushed a fix and a test case testing variations of URLs in s3_store_host. Please re-verify. Thanks for your patience, Brian^2.
-jay
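The URL-variation handling being tested can be sketched roughly as follows (a simplified illustration of the approach, not the exact patch code):

```python
def normalize_s3_host(s3_host):
    """Split an s3_store_host value into (scheme, bare_host).

    An explicit https:// prefix selects the s3+https scheme; either
    prefix and any trailing slashes are stripped so the S3 client
    receives a bare hostname.
    """
    scheme = 's3'
    if s3_host.startswith('https://'):
        scheme = 's3+https'
        s3_host = s3_host[len('https://'):]
    elif s3_host.startswith('http://'):
        s3_host = s3_host[len('http://'):]
    return scheme, s3_host.strip('/')
```

This is why both `s3_store_host = s3.amazonaws.com` and `s3_store_host = https://s3.amazonaws.com/` end up producing a usable connection.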
Jay Pipes (jaypipes) wrote:
Hold on, still working through some issues...
Brian Waldon (bcwaldon) wrote:
I can now fetch images over s3+https. Only outstanding issue is deletion of images with s3 store where object is non-existent.
Jay Pipes (jaypipes) wrote:
Added a functional test case for the non-existing image. I had already fixed that in a prior patch, but this should verify the behaviour is fixed.
Please do a final review :)
-jay
Brian Waldon (bcwaldon) wrote:
Deletion is now handled properly! I discovered that you can't successfully add a remote image (using a location attribute) with glance-cli. You can't set the size property, so you get zero data back! I could successfully add a remote image through the glance api. Maybe this is a bug to address in the future?
I have two minor comments, then this branch is good to go in:
1) You can't have a plus sign in your bucket name. The suggestion in the config file is very misleading :)
2) I know you didn't add this in your patch, but can you add a space between 'option' and 'to' in this error message: "Please set the s3_store_
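Both bucket-name pitfalls raised in this review (uppercase characters, a '+' sign) could be caught by a simple up-front check; this validator is purely illustrative and not part of the patch:

```python
import re


def is_valid_s3_bucket_name(name):
    """Rough check covering only the pitfalls hit in this review:
    lowercase letters, digits, dots, and hyphens -- so uppercase
    characters and '+' signs are rejected. (Amazon's full bucket
    naming rules are stricter than this.)
    """
    return bool(re.match(r'^[a-z0-9.\-]+$', name))
```

For example, the recommended `abcdefghijklmnopqrstglance` style passes, while a mixed-case name or one containing '+' does not.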
Brian Waldon (bcwaldon) wrote:
And you need to merge trunk :)
Jay Pipes (jaypipes) wrote:
On Tue, Jul 26, 2011 at 10:02 AM, Brian Waldon
<email address hidden> wrote:
> And you need to merge trunk :)
OK, fixed and merged. Please try again. fingers crossed...
-jay
Brian Waldon (bcwaldon) wrote:
Excellent work, Jay! All of my tests pass.
Brian Lamar (blamar) wrote:
These are notes really, rather than a review, because the code looks good to me and the tests look awesome:
309 + from boto.s3.connection import S3Connection
354 + from boto.s3.connection import S3Connection
Any reason for these imports to be where they are? Waldon says this is probably a dependency-management thing (you don't want to require boto if you're not using S3, right?), but if that is the case I think there should be a blueprint for better optional importing in both Nova and Glance, because it's currently done several different ways.
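The deferred-import pattern under discussion keeps boto an optional dependency: the import only happens when the S3 backend is actually exercised. A generic sketch of the idea (the helper name is hypothetical):

```python
import importlib


def optional_import(module_path, attr):
    """Import a module lazily, returning the named attribute, or None
    if the dependency is absent. The caller can then raise a clear
    configuration error only when the backend is actually selected.
    """
    try:
        module = importlib.import_module(module_path)
    except ImportError:
        return None
    return getattr(module, attr)
```

A blueprint could standardize on one such helper instead of scattering bare `from boto... import ...` statements inside methods.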
252 + try:
253 + while True:
254 + chunk = self.fp.
255 + if chunk:
256 + yield chunk
257 + else:
258 + break
259 + finally:
260 + self.close()
Might consider shorter "with" syntax here, I think it would just be:
with self.fp:
yield self.fp.
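For reference, the try/finally loop and the proposed `with` form do the same cleanup job, but the read loop still has to live inside the context manager, since a single bare `yield` would emit only one chunk. A sketch, assuming a file-like object that supports the context-manager protocol:

```python
import io

CHUNKSIZE = 65536


def iter_chunks(fp, chunksize=CHUNKSIZE):
    """Yield fixed-size chunks and guarantee fp is closed, even if
    the consumer abandons the iterator early. Note the read loop
    stays inside the with block: a lone yield would emit only the
    first chunk.
    """
    with fp:
        while True:
            chunk = fp.read(chunksize)
            if not chunk:
                break
            yield chunk
```

Abandoning the generator raises GeneratorExit inside it, which unwinds through the `with` block and closes the file, matching the behaviour of the original try/finally.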
Jay Pipes (jaypipes) wrote:
On Tue, Jul 26, 2011 at 11:34 AM, Brian Lamar <email address hidden> wrote:
> Review: Approve
> These are notes really, rather than a review, because the code looks good to me and the tests look awesome:
>
> 309 +        from boto.s3.connection import S3Connection
> 354 +        from boto.s3.connection import S3Connection
>
> Any reason for these imports to be where it is? Waldon says this is probably a dependency management thing (you don't want to require boto if they're not using S3, right?), but I think there should be a blueprint for better optional importing in both Nova and Glance if this is the case because it's done several different ways.
Yup, and they get cleaned up in my refactor-stores branch...
> 252 +        try:
> 253 +            while True:
> 254 +                chunk = self.fp.
> 255 +                if chunk:
> 256 +                    yield chunk
> 257 +                else:
> 258 +                    break
> 259 +        finally:
> 260 +            self.close()
>
> Might consider shorter "with" syntax here, I think it would just be:
>
> with self.fp:
>     yield self.fp.
Noted. I'll clean that up in the refactor-stores branch, too :)
Cheers!
jay
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~jaypipes/glance/bug713154 into lp:glance failed. Below is the output from the failed tests.
running test
running egg_info
creating glance.egg-info
writing glance.egg-info/PKG-INFO
writing top-level names to glance.egg-info/top_level.txt
writing dependency_links to glance.egg-info/dependency_links.txt
writing manifest file 'glance.egg-info/SOURCES.txt'
reading manifest file 'glance.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'ChangeLog'
writing manifest file 'glance.egg-info/SOURCES.txt'
running build_ext
We test the following: ... ok
We test the following: ... ok
Test for LP Bugs #736295, #767203 ... ok
We test conditions that produced LP Bug #768969, where an image ... ok
Set up three test images and ensure each query param filter works ... ok
We test the following sequential series of actions: ... ok
Ensure marker and limit query params work ... ok
Set up three test images and ensure each query param filter works ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
A test that errors coming from the POST API do not ... ok
We test that various calls to the images and root endpoints are ... ok
Upload initial image, then attempt to upload duplicate image ... ok
Set up four test images and ensure each query param filter works ... ok
We test the following sequential series of actions: ... ok
Ensure marker and limit query params work ... ok
Set up three test images and ensure each query param filter works ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
A test that errors coming from the POST API do not ... ok
We test that various calls to the images and root endpoints are ... ok
Test logging output proper when verbose and debug ... ok
Test logging output proper when verbose and debug ... ok
A test for LP bug #704854 -- Exception thrown by registry ... ok
We test the following: ... ERROR
We test the following: ... ERROR
test that images don't get deleted immediatly and that the scrubber ... ok
test that images get deleted immediately by default ... ok
Tests raises BadRequest for invalid store header ... ok
Tests to add a basic image in the file store ... ok
Tests creates a queued image for no body and no loc header ... ok
Tests creates a queued image for no body and no loc header ... ok
Test that the image contents are checksummed properly ... ok
test_bad_
test_bad_
test_delete_image (tests.
test_delete_
Here, we try to delete an image that is in the queued state. ... ok
Test that the ETag header matches the x-image-
Tests that the /images/detail registry API returns a 400 ... ok
Tests that the /images registry API returns list of ... ok
Test that the image contents are checksummed properly ... ok
Test for HEAD /images/<ID> ... ok
test_show_
- 148. By Jay Pipes

  Fix for boto1.9b issue 540 (http://code.google.com/p/boto/issues/detail?id=540)
Jay Pipes (jaypipes) wrote:
I pushed a fix for boto bug 540: http://code.google.com/p/boto/issues/detail?id=540
Please re-review.
-jay
Brian Waldon (bcwaldon) wrote:
This definitely handles the boto issue. Tested at r147 w/ boto==1.9b, tests fail. Tested at r148 w/ boto==1.9b, tests pass.
Preview Diff
1 | === modified file 'doc/source/configuring.rst' |
2 | --- doc/source/configuring.rst 2011-05-11 23:52:06 +0000 |
3 | +++ doc/source/configuring.rst 2011-07-29 18:46:57 +0000 |
4 | @@ -150,6 +150,9 @@ |
5 | Sets the storage backend to use by default when storing images in Glance. |
6 | Available options for this option are (``file``, ``swift``, or ``s3``). |
7 | |
8 | +Configuring the Filesystem Storage Backend |
9 | +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
10 | + |
11 | * ``filesystem_store_datadir=PATH`` |
12 | |
13 | Optional. Default: ``/var/lib/glance/images/`` |
14 | @@ -163,6 +166,9 @@ |
15 | not exist. Ensure that the user that ``glance-api`` runs under has write |
16 | permissions to this directory. |
17 | |
18 | +Configuring the Swift Storage Backend |
19 | +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
20 | + |
21 | * ``swift_store_auth_address=URL`` |
22 | |
23 | Required when using the Swift storage backend. |
24 | @@ -219,6 +225,82 @@ |
25 | If true, Glance will attempt to create the container ``swift_store_container`` |
26 | if it does not exist. |
27 | |
28 | +Configuring the S3 Storage Backend |
29 | +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
30 | + |
31 | +* ``s3_store_host=URL`` |
32 | + |
33 | +Required when using the S3 storage backend. |
34 | + |
35 | +Can only be specified in configuration files. |
36 | + |
37 | +`This option is specific to the S3 storage backend.` |
38 | + |
39 | +Default: s3.amazonaws.com |
40 | + |
41 | +Sets the main service URL supplied to S3 when making calls to its storage |
42 | +system. For more information about the S3 authentication system, please |
43 | +see the `S3 documentation <http://aws.amazon.com/documentation/s3/>`_ |
44 | + |
45 | +* ``s3_store_access_key=ACCESS_KEY`` |
46 | + |
47 | +Required when using the S3 storage backend. |
48 | + |
49 | +Can only be specified in configuration files. |
50 | + |
51 | +`This option is specific to the S3 storage backend.` |
52 | + |
53 | +Sets the access key to authenticate against the ``s3_store_host`` with. |
54 | + |
55 | +You should set this to your 20-character Amazon AWS access key. |
56 | + |
57 | +* ``s3_store_secret_key=SECRET_KEY`` |
58 | + |
59 | +Required when using the S3 storage backend. |
60 | + |
61 | +Can only be specified in configuration files. |
62 | + |
63 | +`This option is specific to the S3 storage backend.` |
64 | + |
65 | +Sets the secret key to authenticate against the |
66 | +``s3_store_host`` with for the access key ``s3_store_access_key``. |
67 | + |
68 | +You should set this to your 40-character Amazon AWS secret key. |
69 | + |
70 | +* ``s3_store_bucket=BUCKET`` |
71 | + |
72 | +Required when using the S3 storage backend. |
73 | + |
74 | +Can only be specified in configuration files. |
75 | + |
76 | +`This option is specific to the S3 storage backend.` |
77 | + |
78 | +Sets the name of the bucket to use for Glance images in S3. |
79 | + |
80 | +Note that the namespace for S3 buckets is **global**, and |
81 | +therefore you must use a name for the bucket that is unique. It |
82 | +is recommended that you use a combination of your AWS access key, |
83 | +**lowercased** with "glance". |
84 | + |
85 | +For instance if your Amazon AWS access key is: |
86 | + |
87 | +``ABCDEFGHIJKLMNOPQRST`` |
88 | + |
89 | +then make your bucket value be: |
90 | + |
91 | +``abcdefghijklmnopqrstglance`` |
92 | + |
93 | +* ``s3_store_create_bucket_on_put`` |
94 | + |
95 | +Optional. Default: ``False`` |
96 | + |
97 | +Can only be specified in configuration files. |
98 | + |
99 | +`This option is specific to the S3 storage backend.` |
100 | + |
101 | +If true, Glance will attempt to create the bucket ``s3_store_bucket`` |
102 | +if it does not exist. |
103 | + |
104 | Configuring the Glance Registry |
105 | ------------------------------- |
106 | |
107 | |
108 | === modified file 'etc/glance-api.conf' |
109 | --- etc/glance-api.conf 2011-07-28 00:50:04 +0000 |
110 | +++ etc/glance-api.conf 2011-07-29 18:46:57 +0000 |
111 | @@ -51,6 +51,29 @@ |
112 | # Do we create the container if it does not exist? |
113 | swift_store_create_container_on_put = False |
114 | |
115 | +# ============ S3 Store Options ============================= |
116 | + |
117 | +# Address where the S3 authentication service lives |
118 | +s3_store_host = 127.0.0.1:8080/v1.0/ |
119 | + |
120 | +# User to authenticate against the S3 authentication service |
121 | +s3_store_access_key = <20-char AWS access key> |
122 | + |
123 | +# Auth key for the user authenticating against the |
124 | +# S3 authentication service |
125 | +s3_store_secret_key = <40-char AWS secret key> |
126 | + |
127 | +# Container within the account that the account should use |
128 | +# for storing images in S3. Note that S3 has a flat namespace, |
129 | +# so you need a unique bucket name for your glance images. An |
130 | +# easy way to do this is append your AWS access key to "glance". |
131 | +# S3 buckets in AWS *must* be lowercased, so remember to lowercase |
132 | +# your AWS access key if you use it in your bucket name below! |
133 | +s3_store_bucket = <lowercased 20-char aws access key>glance |
134 | + |
135 | +# Do we create the bucket if it does not exist? |
136 | +s3_store_create_bucket_on_put = False |
137 | + |
138 | # ============ Image Cache Options ======================== |
139 | |
140 | image_cache_enabled = False |
141 | |
142 | === modified file 'glance/api/v1/images.py' |
143 | --- glance/api/v1/images.py 2011-07-28 00:44:29 +0000 |
144 | +++ glance/api/v1/images.py 2011-07-29 18:46:57 +0000 |
145 | @@ -331,6 +331,7 @@ |
146 | try: |
147 | logger.debug("Uploading image data for image %(image_id)s " |
148 | "to %(store_name)s store", locals()) |
149 | + req.make_body_seekable() |
150 | location, size, checksum = store.add(image_meta['id'], |
151 | req.body_file, |
152 | self.options) |
153 | @@ -436,6 +437,10 @@ |
154 | :retval Mapping of updated image data |
155 | """ |
156 | image_id = image_meta['id'] |
157 | + # This is necessary because of a bug in Webob 1.0.2 - 1.0.7 |
158 | + # See: https://bitbucket.org/ianb/webob/ |
159 | + # issue/12/fix-for-issue-6-broke-chunked-transfer |
160 | + req.is_body_readable = True |
161 | location = self._upload(req, image_meta) |
162 | return self._activate(req, image_id, location) |
163 | |
164 | |
165 | === modified file 'glance/store/s3.py' |
166 | --- glance/store/s3.py 2011-07-27 18:15:47 +0000 |
167 | +++ glance/store/s3.py 2011-07-29 18:46:57 +0000 |
168 | @@ -15,14 +15,19 @@ |
169 | # License for the specific language governing permissions and limitations |
170 | # under the License. |
171 | |
172 | -"""The s3 backend adapter""" |
173 | +"""Storage backend for S3 or Storage Servers that follow the S3 Protocol""" |
174 | |
175 | +import logging |
176 | +import httplib |
177 | import urlparse |
178 | |
179 | +from glance.common import config |
180 | from glance.common import exception |
181 | import glance.store |
182 | import glance.store.location |
183 | |
184 | +logger = logging.getLogger('glance.store.s3') |
185 | + |
186 | glance.store.location.add_scheme_map({'s3': 's3', |
187 | 's3+http': 's3', |
188 | 's3+https': 's3'}) |
189 | @@ -45,10 +50,17 @@ |
190 | self.scheme = self.specs.get('scheme', 's3') |
191 | self.accesskey = self.specs.get('accesskey') |
192 | self.secretkey = self.specs.get('secretkey') |
193 | - self.s3serviceurl = self.specs.get('s3serviceurl') |
194 | + s3_host = self.specs.get('s3serviceurl') |
195 | self.bucket = self.specs.get('bucket') |
196 | self.key = self.specs.get('key') |
197 | |
198 | + if s3_host.startswith('https://'): |
199 | + self.scheme = 's3+https' |
200 | + s3_host = s3_host[8:].strip('/') |
201 | + elif s3_host.startswith('http://'): |
202 | + s3_host = s3_host[7:].strip('/') |
203 | + self.s3serviceurl = s3_host.strip('/') |
204 | + |
205 | def _get_credstring(self): |
206 | if self.accesskey: |
207 | return '%s:%s@' % (self.accesskey, self.secretkey) |
208 | @@ -117,7 +129,7 @@ |
209 | self.key = path_parts.pop() |
210 | self.bucket = path_parts.pop() |
211 | if len(path_parts) > 0: |
212 | - self.s3serviceurl = '/'.join(path_parts) |
213 | + self.s3serviceurl = '/'.join(path_parts).strip('/') |
214 | else: |
215 | reason = "Badly formed S3 URI. Missing s3 service URL." |
216 | raise exception.BadStoreUri(uri, reason) |
217 | @@ -126,13 +138,63 @@ |
218 | raise exception.BadStoreUri(uri, reason) |
219 | |
220 | |
221 | +class ChunkedFile(object): |
222 | + |
223 | + """ |
224 | + We send this back to the Glance API server as |
225 | + something that can iterate over a ``boto.s3.key.Key`` |
226 | + """ |
227 | + |
228 | + CHUNKSIZE = 65536 |
229 | + |
230 | + def __init__(self, fp): |
231 | + self.fp = fp |
232 | + |
233 | + def __iter__(self): |
234 | + """Return an iterator over the image file""" |
235 | + try: |
236 | + while True: |
237 | + chunk = self.fp.read(ChunkedFile.CHUNKSIZE) |
238 | + if chunk: |
239 | + yield chunk |
240 | + else: |
241 | + break |
242 | + finally: |
243 | + self.close() |
244 | + |
245 | + def getvalue(self): |
246 | + """Return entire string value... used in testing""" |
247 | + data = "" |
248 | + self.len = 0 |
249 | + for chunk in self: |
250 | + read_bytes = len(chunk) |
251 | + data = data + chunk |
252 | + self.len = self.len + read_bytes |
253 | + return data |
254 | + |
255 | + def close(self): |
256 | + """Close the internal file pointer""" |
257 | + if self.fp: |
258 | + self.fp.close() |
259 | + self.fp = None |
260 | + |
261 | + |
262 | class S3Backend(glance.store.Backend): |
263 | """An implementation of the s3 adapter.""" |
264 | |
265 | - EXAMPLE_URL = "s3://ACCESS_KEY:SECRET_KEY@s3_url/bucket/file.gz.0" |
266 | - |
267 | - @classmethod |
268 | - def get(cls, location, expected_size, conn_class=None): |
269 | + EXAMPLE_URL = "s3://<ACCESS_KEY>:<SECRET_KEY>@<S3_URL>/<BUCKET>/<OBJ>" |
270 | + |
271 | + @classmethod |
272 | + def _option_get(cls, options, param): |
273 | + result = options.get(param) |
274 | + if not result: |
275 | + msg = ("Could not find %s in configuration options." % param) |
276 | + logger.error(msg) |
277 | + raise glance.store.BackendException(msg) |
278 | + return result |
279 | + |
280 | + @classmethod |
281 | + def get(cls, location, expected_size=None, options=None): |
282 | """ |
283 | Takes a `glance.store.location.Location` object that indicates |
284 | where to find the image file, and returns a generator from S3 |
285 | @@ -141,71 +203,202 @@ |
286 | :location `glance.store.location.Location` object, supplied |
287 | from glance.store.location.get_location_from_uri() |
288 | """ |
289 | - if conn_class: |
290 | - pass |
291 | + loc = location.store_location |
292 | + from boto.s3.connection import S3Connection |
293 | + |
294 | + s3_conn = S3Connection(loc.accesskey, loc.secretkey, |
295 | + host=loc.s3serviceurl, |
296 | + is_secure=(loc.scheme == 's3+https')) |
297 | + bucket_obj = get_bucket(s3_conn, loc.bucket) |
298 | + |
299 | + key = get_key(bucket_obj, loc.key) |
300 | + |
301 | + logger.debug("Retrieved image object from S3 using " |
302 | + "(s3_host=%s, access_key=%s, " |
303 | + "bucket=%s, key=%s)" % (loc.s3serviceurl, loc.accesskey, |
304 | + loc.bucket, loc.key)) |
305 | + |
306 | + if expected_size and (key.size != expected_size): |
307 | + msg = "Expected %s bytes, got %s" % (expected_size, key.size) |
308 | + logger.error(msg) |
309 | + raise glance.store.BackendException(msg) |
310 | + |
311 | + key.BufferSize = cls.CHUNKSIZE |
312 | + return ChunkedFile(key) |
313 | + |
314 | + @classmethod |
315 | + def add(cls, id, data, options): |
316 | + """ |
317 | + Stores image data to S3 and returns a location that the image was |
318 | + written to. |
319 | + |
320 | + S3 writes the image data using the scheme: |
321 | + s3://<ACCESS_KEY>:<SECRET_KEY>@<S3_URL>/<BUCKET>/<OBJ> |
322 | + where: |
323 | + <USER> = ``s3_store_user`` |
324 | + <KEY> = ``s3_store_key`` |
325 | + <S3_HOST> = ``s3_store_host`` |
326 | + <BUCKET> = ``s3_store_bucket`` |
327 | + <ID> = The id of the image being added |
328 | + |
329 | + :param id: The opaque image identifier |
330 | + :param data: The image data to write, as a file-like object |
331 | + :param options: Conf mapping |
332 | + |
333 | + :retval Tuple with (location, size) |
334 | + The location that was written, |
335 | + and the size in bytes of the data written |
336 | + """ |
337 | + from boto.s3.connection import S3Connection |
338 | + |
339 | + # TODO(jaypipes): This needs to be checked every time |
340 | + # because of the decision to make glance.store.Backend's |
341 | + # interface all @classmethods. This is inefficient. Backend |
342 | + # should be a stateful object with options parsed once in |
343 | + # a constructor. |
344 | + s3_host = cls._option_get(options, 's3_store_host') |
345 | + access_key = cls._option_get(options, 's3_store_access_key') |
346 | + secret_key = cls._option_get(options, 's3_store_secret_key') |
347 | + # NOTE(jaypipes): Need to encode to UTF-8 here because of a |
348 | + # bug in the HMAC library that boto uses. |
349 | + # See: http://bugs.python.org/issue5285 |
350 | + # See: http://trac.edgewall.org/ticket/8083 |
351 | + access_key = access_key.encode('utf-8') |
352 | + secret_key = secret_key.encode('utf-8') |
353 | + bucket = cls._option_get(options, 's3_store_bucket') |
354 | + |
355 | + scheme = 's3' |
356 | + if s3_host.startswith('https://'): |
357 | + scheme = 'swift+https' |
358 | + full_s3_host = s3_host |
359 | + elif s3_host.startswith('http://'): |
360 | + full_s3_host = s3_host |
361 | else: |
362 | - import boto.s3.connection |
363 | - conn_class = boto.s3.connection.S3Connection |
364 | - |
365 | - loc = location.store_location |
366 | - |
367 | - # Close the connection when we're through. |
368 | - with conn_class(loc.accesskey, loc.secretkey, |
369 | - host=loc.s3serviceurl) as s3_conn: |
370 | - bucket = cls._get_bucket(s3_conn, loc.bucket) |
371 | - |
372 | - # Close the key when we're through. |
373 | - with cls._get_key(bucket, loc.obj) as key: |
374 | - if not key.size == expected_size: |
375 | - raise glance.store.BackendException( |
376 | - "Expected %s bytes, got %s" % |
377 | - (expected_size, key.size)) |
378 | - |
379 | - key.BufferSize = cls.CHUNKSIZE |
380 | - for chunk in key: |
381 | - yield chunk |
382 | + full_s3_host = 'http://' + s3_host # Defaults http |
383 | + |
384 | + loc = StoreLocation({'scheme': scheme, |
385 | + 'bucket': bucket, |
386 | + 'key': id, |
387 | + 's3serviceurl': full_s3_host, |
388 | + 'accesskey': access_key, |
389 | + 'secretkey': secret_key}) |
390 | + |
391 | + s3_conn = S3Connection(loc.accesskey, loc.secretkey, |
392 | + host=loc.s3serviceurl, |
393 | + is_secure=(loc.scheme == 's3+https')) |
394 | + |
395 | + create_bucket_if_missing(bucket, s3_conn, options) |
396 | + |
397 | + bucket_obj = get_bucket(s3_conn, bucket) |
398 | + obj_name = str(id) |
399 | + |
400 | + key = bucket_obj.get_key(obj_name) |
401 | + if key and key.exists(): |
402 | + raise exception.Duplicate("S3 already has an image at " |
403 | + "location %s" % loc.get_uri()) |
404 | + |
405 | + logger.debug("Adding image object to S3 using " |
406 | + "(s3_host=%(s3_host)s, access_key=%(access_key)s, " |
407 | + "bucket=%(bucket)s, key=%(obj_name)s)" % locals()) |
408 | + |
409 | + key = bucket_obj.new_key(obj_name) |
410 | + |
411 | + # OK, now upload the data into the key |
412 | + obj_md5, _base64_digest = key.compute_md5(data) |
413 | + key.set_contents_from_file(data, replace=False) |
414 | + size = key.size |
415 | + |
416 | + return (loc.get_uri(), size, obj_md5) |
417 | |
418 | @classmethod |
419 | - def delete(cls, location, conn_class=None): |
420 | + def delete(cls, location, options=None): |
421 | """ |
422 | - Takes a `glance.store.location.Location` object that indicates |
423 | - where to find the image file to delete |
424 | + Delete an object in a specific location |
425 | |
426 | :location `glance.store.location.Location` object, supplied |
427 | from glance.store.location.get_location_from_uri() |
428 | """ |
429 | - if conn_class: |
430 | - pass |
431 | - else: |
432 | - conn_class = boto.s3.connection.S3Connection |
433 | - |
434 | loc = location.store_location |
435 | - |
436 | - # Close the connection when we're through. |
437 | - with conn_class(loc.accesskey, loc.secretkey, |
438 | - host=loc.s3serviceurl) as s3_conn: |
439 | - bucket = cls._get_bucket(s3_conn, loc.bucket) |
440 | - |
441 | - # Close the key when we're through. |
442 | - with cls._get_key(bucket, loc.obj) as key: |
443 | - return key.delete() |
444 | - |
445 | - @classmethod |
446 | - def _get_bucket(cls, conn, bucket_id): |
447 | - """Get a bucket from an s3 connection""" |
448 | - |
449 | - bucket = conn.get_bucket(bucket_id) |
450 | - if not bucket: |
451 | - raise glance.store.BackendException("Could not find bucket: %s" % |
452 | - bucket_id) |
453 | - |
454 | - return bucket |
455 | - |
456 | - @classmethod |
457 | - def _get_key(cls, bucket, obj): |
458 | - """Get a key from a bucket""" |
459 | - |
460 | - key = bucket.get_key(obj) |
461 | - if not key: |
462 | - raise glance.store.BackendException("Could not get key: %s" % key) |
463 | - return key |
464 | + from boto.s3.connection import S3Connection |
465 | + s3_conn = S3Connection(loc.accesskey, loc.secretkey, |
466 | + host=loc.s3serviceurl, |
467 | + is_secure=(loc.scheme == 's3+https')) |
468 | + bucket_obj = get_bucket(s3_conn, loc.bucket) |
469 | + |
470 | + # Close the key when we're through. |
471 | + key = get_key(bucket_obj, loc.key) |
472 | + |
473 | + logger.debug("Deleting image object from S3 using " |
474 | + "(s3_host=%s, access_key=%s, " |
475 | + "bucket=%s, key=%s)" % (loc.s3serviceurl, loc.accesskey, |
476 | + loc.bucket, loc.key)) |
477 | + |
478 | + return key.delete() |
479 | + |
480 | + |
481 | +def get_bucket(conn, bucket_id): |
482 | + """ |
483 | + Get a bucket from an s3 connection |
484 | + |
485 | + :param conn: The ``boto.s3.connection.S3Connection`` |
486 | + :param bucket_id: ID of the bucket to fetch |
487 | + :raises ``glance.exception.NotFound`` if bucket is not found. |
488 | + """ |
489 | + |
490 | + bucket = conn.get_bucket(bucket_id) |
491 | + if not bucket: |
492 | + msg = ("Could not find bucket with ID %(bucket_id)s") % locals() |
493 | + logger.error(msg) |
494 | + raise exception.NotFound(msg) |
495 | + |
496 | + return bucket |
497 | + |
498 | + |
499 | +def create_bucket_if_missing(bucket, s3_conn, options): |
500 | + """ |
501 | + Creates a missing bucket in S3 if the |
502 | + ``s3_store_create_bucket_on_put`` option is set. |
503 | + |
504 | + :param bucket: Name of bucket to create |
505 | + :param s3_conn: Connection to S3 |
506 | + :param options: Option mapping |
507 | + """ |
508 | + from boto.exception import S3ResponseError |
509 | + try: |
510 | + s3_conn.get_bucket(bucket) |
511 | + except S3ResponseError, e: |
512 | + if e.status == httplib.NOT_FOUND: |
513 | + add_bucket = config.get_option(options, |
514 | + 's3_store_create_bucket_on_put', |
515 | + type='bool', default=False) |
516 | + if add_bucket: |
517 | + try: |
518 | + s3_conn.create_bucket(bucket) |
519 | + except S3ResponseError, e: |
520 | + msg = ("Failed to add bucket to S3.\n" |
521 | + "Got error from S3: %(e)s" % locals()) |
522 | + raise glance.store.BackendException(msg) |
523 | + else: |
524 | + msg = ("The bucket %(bucket)s does not exist in " |
525 | + "S3. Please set the " |
526 | + "s3_store_create_bucket_on_put option " |
527 | + "to add bucket to S3 automatically." |
528 | + % locals()) |
529 | + raise glance.store.BackendException(msg) |
530 | + |
531 | + |
532 | +def get_key(bucket, obj): |
533 | + """ |
534 | + Get a key from a bucket |
535 | + |
536 | + :param bucket: The ``boto.s3.Bucket`` |
537 | + :param obj: Object to get the key for |
538 | + :raises ``glance.exception.NotFound`` if key is not found. |
539 | + """ |
540 | + |
541 | + key = bucket.get_key(obj) |
542 | + if not key or not key.exists(): |
543 | + msg = ("Could not find key %(obj)s in bucket %(bucket)s") % locals() |
544 | + logger.error(msg) |
545 | + raise exception.NotFound(msg) |
546 | + return key |
547 | |
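The new `create_bucket_if_missing` helper above only auto-creates the target bucket when `s3_store_create_bucket_on_put` is enabled; otherwise it raises a backend error telling the operator to set that option. A minimal sketch of that option-gated flow, where `FakeConn` and `NotFoundError` are hypothetical stand-ins for boto's `S3Connection` and a 404 `S3ResponseError`:

```python
class NotFoundError(Exception):
    """Stand-in for boto's S3ResponseError with a 404 status."""


class FakeConn:
    """Stand-in S3 connection that tracks which buckets exist."""
    def __init__(self):
        self.buckets = set()

    def get_bucket(self, name):
        if name not in self.buckets:
            raise NotFoundError(name)
        return name

    def create_bucket(self, name):
        self.buckets.add(name)


def create_bucket_if_missing(bucket, conn, options):
    """Create the bucket only when the config option allows it."""
    try:
        conn.get_bucket(bucket)
    except NotFoundError:
        if options.get("s3_store_create_bucket_on_put", False):
            conn.create_bucket(bucket)
        else:
            raise RuntimeError("bucket %s does not exist" % bucket)


conn = FakeConn()
create_bucket_if_missing("glance", conn,
                         {"s3_store_create_bucket_on_put": True})
print("glance" in conn.buckets)  # True
```

With the option unset, the same call raises instead of silently creating the bucket, which is the behavior the error message in the diff describes.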
548 | === modified file 'tests/functional/__init__.py' |
549 | --- tests/functional/__init__.py 2011-07-23 03:38:20 +0000 |
550 | +++ tests/functional/__init__.py 2011-07-29 18:46:57 +0000 |
551 | @@ -137,6 +137,10 @@ |
552 | "api.pid") |
553 | self.log_file = os.path.join(self.test_dir, "api.log") |
554 | self.registry_port = registry_port |
555 | + self.s3_store_host = "s3.amazonaws.com" |
556 | + self.s3_store_access_key = "" |
557 | + self.s3_store_secret_key = "" |
558 | + self.s3_store_bucket = "" |
559 | self.delayed_delete = delayed_delete |
560 | self.conf_base = """[DEFAULT] |
561 | verbose = %(verbose)s |
562 | @@ -148,6 +152,10 @@ |
563 | registry_host = 0.0.0.0 |
564 | registry_port = %(registry_port)s |
565 | log_file = %(log_file)s |
566 | +s3_store_host = %(s3_store_host)s |
567 | +s3_store_access_key = %(s3_store_access_key)s |
568 | +s3_store_secret_key = %(s3_store_secret_key)s |
569 | +s3_store_bucket = %(s3_store_bucket)s |
570 | delayed_delete = %(delayed_delete)s |
571 | |
572 | [pipeline:glance-api] |
573 | |
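The four new `s3_store_*` attributes above reach the generated config file through Python's `%(name)s` mapping interpolation of `conf_base` against the test server's attribute dict. A small illustrative sketch of that mechanism (the template and values here are ours, not the full Glance template):

```python
conf_base = """[DEFAULT]
s3_store_host = %(s3_store_host)s
s3_store_bucket = %(s3_store_bucket)s
"""

settings = {"s3_store_host": "s3.amazonaws.com",
            "s3_store_bucket": "examplebucket"}

# Each %(name)s placeholder is looked up in the mapping by key.
rendered = conf_base % settings
print(rendered)
```

Any attribute added to the server object (as `setUp` does for the S3 options) thus becomes available to the template without further plumbing.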
574 | === added file 'tests/functional/test_s3.conf' |
575 | --- tests/functional/test_s3.conf 1970-01-01 00:00:00 +0000 |
576 | +++ tests/functional/test_s3.conf 2011-07-29 18:46:57 +0000 |
577 | @@ -0,0 +1,21 @@ |
578 | +[DEFAULT] |
579 | + |
580 | +# Set the following to Amazon S3 credentials that you want |
581 | +# to use while functional testing the S3 backend. |
582 | + |
583 | +# Address where the S3 authentication service lives |
584 | +s3_store_host = s3.amazonaws.com |
585 | + |
586 | +# User to authenticate against the S3 authentication service |
587 | +s3_store_access_key = <20-char AWS access key> |
588 | + |
589 | +# Auth key for the user authenticating against the |
590 | +# S3 authentication service |
591 | +s3_store_secret_key = <40-char AWS secret key> |
592 | + |
593 | +# Bucket within the account that Glance should use |
594 | +# for storing images in S3. Note that S3 has a flat namespace, |
595 | +# so you need a globally unique bucket name for your glance |
596 | +# images. An easy way to do this is to append "glance" to your |
597 | +# lower-cased AWS access key |
598 | +s3_store_bucket = <20-char AWS access key - lowercased>glance |
599 | |
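The functional test below loads this file with `RawConfigParser` and copies every `[DEFAULT]` entry onto the test instance. A runnable sketch of that loading step, shown with Python 3's `configparser` (the diff itself targets Python 2's `ConfigParser`), with illustrative values:

```python
import configparser

conf_text = """[DEFAULT]
s3_store_host = s3.amazonaws.com
s3_store_bucket = examplebucket
"""

cp = configparser.RawConfigParser()
cp.read_string(conf_text)

# defaults() returns the [DEFAULT] section as a plain mapping,
# which setUp copies into the test object's __dict__.
settings = dict(cp.defaults())
print(settings["s3_store_bucket"])  # examplebucket
```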
600 | === added file 'tests/functional/test_s3.py' |
601 | --- tests/functional/test_s3.py 1970-01-01 00:00:00 +0000 |
602 | +++ tests/functional/test_s3.py 2011-07-29 18:46:57 +0000 |
603 | @@ -0,0 +1,532 @@ |
604 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
605 | + |
606 | +# Copyright 2011 OpenStack, LLC |
607 | +# All Rights Reserved. |
608 | +# |
609 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
610 | +# not use this file except in compliance with the License. You may obtain |
611 | +# a copy of the License at |
612 | +# |
613 | +# http://www.apache.org/licenses/LICENSE-2.0 |
614 | +# |
615 | +# Unless required by applicable law or agreed to in writing, software |
616 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
617 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
618 | +# License for the specific language governing permissions and limitations |
619 | +# under the License. |
620 | + |
621 | +""" |
622 | +Tests a Glance API server which uses an S3 backend by default |
623 | + |
624 | +This test requires that a real S3 account is available. It looks |
625 | +in a file /tests/functional/test_s3.conf for the credentials to |
626 | +use. |
627 | + |
628 | +Note that this test clears the entire bucket from the S3 account |
629 | +for use by the test case, so make sure you supply credentials for |
630 | +test accounts only. |
631 | + |
632 | +If a connection cannot be established, all the test cases are |
633 | +skipped. |
634 | +""" |
635 | + |
636 | +import ConfigParser |
637 | +import json |
638 | +import os |
639 | +import tempfile |
640 | +import unittest |
641 | + |
642 | +from tests import functional |
643 | +from tests.utils import execute |
644 | + |
645 | +FIVE_KB = 5 * 1024 |
646 | +FIVE_GB = 5 * 1024 * 1024 * 1024 |
647 | + |
648 | + |
649 | +class TestS3(functional.FunctionalTest): |
650 | + |
651 | + """Functional tests for the S3 backend""" |
652 | + |
653 | + # Test machines can set the GLANCE_TEST_S3_CONF variable |
654 | + # to override the location of the config file for S3 testing |
655 | + CONFIG_FILE_PATH = os.environ.get('GLANCE_TEST_S3_CONF', |
656 | + os.path.join('tests', 'functional', |
657 | + 'test_s3.conf')) |
658 | + |
659 | + def setUp(self): |
660 | + """ |
661 | + Test a connection to an S3 store using the credentials |
662 | + found in the environs or /tests/functional/test_s3.conf, if found. |
663 | + If the connection fails, mark all tests to skip. |
664 | + """ |
665 | + self.inited = False |
666 | + self.skip_tests = True |
667 | + |
668 | + if self.inited: |
669 | + return |
670 | + |
671 | + if os.path.exists(TestS3.CONFIG_FILE_PATH): |
672 | + cp = ConfigParser.RawConfigParser() |
673 | + try: |
674 | + cp.read(TestS3.CONFIG_FILE_PATH) |
675 | + defaults = cp.defaults() |
676 | + for key, value in defaults.items(): |
677 | + self.__dict__[key] = value |
678 | + except ConfigParser.ParsingError, e: |
679 | + print ("Failed to read test_s3.conf config file. " |
680 | + "Got error: %s" % e) |
681 | + super(TestS3, self).setUp() |
682 | + self.inited = True |
683 | + return |
684 | + |
685 | + from boto.s3.connection import S3Connection |
686 | + from boto.exception import S3ResponseError |
687 | + |
688 | + try: |
689 | + s3_host = self.s3_store_host |
690 | + access_key = self.s3_store_access_key |
691 | + secret_key = self.s3_store_secret_key |
692 | + bucket_name = self.s3_store_bucket |
693 | + except AttributeError, e: |
694 | + print ("Failed to find required configuration options for " |
695 | + "S3 store. Got error: %s" % e) |
696 | + self.inited = True |
697 | + super(TestS3, self).setUp() |
698 | + return |
699 | + |
700 | + s3_conn = S3Connection(access_key, secret_key, host=s3_host) |
701 | + |
702 | + self.bucket = None |
703 | + try: |
704 | + buckets = s3_conn.get_all_buckets() |
705 | + for bucket in buckets: |
706 | + if bucket.name == bucket_name: |
707 | + self.bucket = bucket |
708 | + except S3ResponseError, e: |
709 | + print ("Failed to connect to S3 with credentials " |
710 | + "to find bucket. Got error: %s" % e) |
711 | + self.inited = True |
712 | + super(TestS3, self).setUp() |
713 | + return |
714 | + except TypeError, e: |
715 | + # This hack is necessary because of a bug in boto 1.9b: |
716 | + # http://code.google.com/p/boto/issues/detail?id=540 |
717 | + print ("Failed to connect to S3 with credentials. " |
718 | + "Got error: %s" % e) |
719 | + self.inited = True |
720 | + super(TestS3, self).setUp() |
721 | + return |
722 | + |
723 | + self.s3_conn = s3_conn |
724 | + |
725 | + if not self.bucket: |
726 | + try: |
727 | + self.bucket = s3_conn.create_bucket(bucket_name) |
728 | + except S3ResponseError, e: |
729 | + print ("Failed to create bucket. Got error: %s" % e) |
730 | + self.inited = True |
731 | + super(TestS3, self).setUp() |
732 | + return |
733 | + else: |
734 | + self.clear_bucket() |
735 | + |
736 | + self.skip_tests = False |
737 | + self.inited = True |
738 | + self.default_store = 's3' |
739 | + |
740 | + super(TestS3, self).setUp() |
741 | + |
742 | + def tearDown(self): |
743 | + if not self.skip_tests: |
744 | + self.clear_bucket() |
745 | + super(TestS3, self).tearDown() |
746 | + |
747 | + def clear_bucket(self): |
748 | + # It's not possible to simply clear a bucket. You |
749 | + # need to loop over all the keys and delete them |
750 | + # all first... |
751 | + keys = self.bucket.list() |
752 | + for key in keys: |
753 | + key.delete() |
754 | + |
755 | + def test_add_list_delete_list(self): |
756 | + """ |
757 | + We test the following: |
758 | + |
759 | + 0. GET /images |
760 | + - Verify no public images |
761 | + 1. GET /images/detail |
762 | + - Verify no public images |
763 | + 2. HEAD /images/1 |
764 | + - Verify 404 returned |
765 | + 3. POST /images with public image named Image1 with image data |
766 | + and no custom properties |
767 | + - Verify 201 returned |
768 | + 4. HEAD /images/1 |
769 | + - Verify HTTP headers have correct information we just added |
770 | + 5. GET /images/1 |
771 | + - Verify all information on image we just added is correct |
772 | + 6. GET /images |
773 | + - Verify the image we just added is returned |
774 | + 7. GET /images/detail |
775 | + - Verify the image we just added is returned |
776 | + 8. PUT /images/1 with custom properties of "distro" and "arch" |
777 | + - Verify 200 returned |
778 | + 9. GET /images/detail |
779 | + - Verify updated information about image was stored |
780 | + 10. PUT /images/1 |
781 | + - Remove a previously existing property. |
782 | + 11. PUT /images/1 |
783 | + - Add a previously deleted property. |
784 | + """ |
785 | + if self.skip_tests: |
786 | + return True |
787 | + |
788 | + self.cleanup() |
789 | + self.start_servers(**self.__dict__.copy()) |
790 | + |
791 | + api_port = self.api_port |
792 | + registry_port = self.registry_port |
793 | + |
794 | + # 0. GET /images |
795 | + # Verify no public images |
796 | + cmd = "curl http://0.0.0.0:%d/v1/images" % api_port |
797 | + |
798 | + exitcode, out, err = execute(cmd) |
799 | + |
800 | + self.assertEqual(0, exitcode) |
801 | + self.assertEqual('{"images": []}', out.strip()) |
802 | + |
803 | + # 1. GET /images/detail |
804 | + # Verify no public images |
805 | + cmd = "curl http://0.0.0.0:%d/v1/images/detail" % api_port |
806 | + |
807 | + exitcode, out, err = execute(cmd) |
808 | + |
809 | + self.assertEqual(0, exitcode) |
810 | + self.assertEqual('{"images": []}', out.strip()) |
811 | + |
812 | + # 2. HEAD /images/1 |
813 | + # Verify 404 returned |
814 | + cmd = "curl -i -X HEAD http://0.0.0.0:%d/v1/images/1" % api_port |
815 | + |
816 | + exitcode, out, err = execute(cmd) |
817 | + |
818 | + self.assertEqual(0, exitcode) |
819 | + |
820 | + lines = out.split("\r\n") |
821 | + status_line = lines[0] |
822 | + |
823 | + self.assertEqual("HTTP/1.1 404 Not Found", status_line) |
824 | + |
825 | + # 3. POST /images with public image named Image1 |
826 | + # and no custom properties. Verify a 201 Created is returned |
827 | + image_data = "*" * FIVE_KB |
828 | + |
829 | + cmd = ("curl -i -X POST " |
830 | + "-H 'Expect: ' " # Necessary otherwise sends 100 Continue |
831 | + "-H 'Content-Type: application/octet-stream' " |
832 | + "-H 'X-Image-Meta-Name: Image1' " |
833 | + "-H 'X-Image-Meta-Is-Public: True' " |
834 | + "--data-binary \"%s\" " |
835 | + "http://0.0.0.0:%d/v1/images") % (image_data, api_port) |
836 | + |
837 | + exitcode, out, err = execute(cmd) |
838 | + self.assertEqual(0, exitcode) |
839 | + |
840 | + lines = out.split("\r\n") |
841 | + status_line = lines[0] |
842 | + |
843 | + self.assertEqual("HTTP/1.1 201 Created", status_line, out) |
844 | + |
845 | + # 4. HEAD /images/1 |
846 | + # Verify image found now |
847 | + cmd = "curl -i -X HEAD http://0.0.0.0:%d/v1/images/1" % api_port |
848 | + |
849 | + exitcode, out, err = execute(cmd) |
850 | + |
851 | + self.assertEqual(0, exitcode) |
852 | + |
853 | + lines = out.split("\r\n") |
854 | + status_line = lines[0] |
855 | + |
856 | + self.assertEqual("HTTP/1.1 200 OK", status_line) |
857 | + self.assertTrue("X-Image-Meta-Name: Image1" in out) |
858 | + |
859 | + # 5. GET /images/1 |
860 | + # Verify all information on image we just added is correct |
861 | + |
862 | + cmd = "curl -i http://0.0.0.0:%d/v1/images/1" % api_port |
863 | + |
864 | + exitcode, out, err = execute(cmd) |
865 | + |
866 | + self.assertEqual(0, exitcode) |
867 | + |
868 | + lines = out.split("\r\n") |
869 | + |
870 | + self.assertEqual("HTTP/1.1 200 OK", lines.pop(0)) |
871 | + |
872 | + # Handle the headers |
873 | + image_headers = {} |
874 | + std_headers = {} |
875 | + other_lines = [] |
876 | + for line in lines: |
877 | + if line.strip() == '': |
878 | + continue |
879 | + if line.startswith("X-Image"): |
880 | + pieces = line.split(":") |
881 | + key = pieces[0].strip() |
882 | + value = ":".join(pieces[1:]).strip() |
883 | + image_headers[key] = value |
884 | + elif ':' in line: |
885 | + pieces = line.split(":") |
886 | + key = pieces[0].strip() |
887 | + value = ":".join(pieces[1:]).strip() |
888 | + std_headers[key] = value |
889 | + else: |
890 | + other_lines.append(line) |
891 | + |
892 | + expected_image_headers = { |
893 | + 'X-Image-Meta-Id': '1', |
894 | + 'X-Image-Meta-Name': 'Image1', |
895 | + 'X-Image-Meta-Is_public': 'True', |
896 | + 'X-Image-Meta-Status': 'active', |
897 | + 'X-Image-Meta-Disk_format': '', |
898 | + 'X-Image-Meta-Container_format': '', |
899 | + 'X-Image-Meta-Size': str(FIVE_KB), |
900 | + 'X-Image-Meta-Location': 's3://%s:%s@%s/%s/1' % ( |
901 | + self.s3_store_access_key, |
902 | + self.s3_store_secret_key, |
903 | + self.s3_store_host, |
904 | + self.s3_store_bucket)} |
905 | + |
906 | + expected_std_headers = { |
907 | + 'Content-Length': str(FIVE_KB), |
908 | + 'Content-Type': 'application/octet-stream'} |
909 | + |
910 | + for expected_key, expected_value in expected_image_headers.items(): |
911 | + self.assertTrue(expected_key in image_headers, |
912 | + "Failed to find key %s in image_headers" |
913 | + % expected_key) |
914 | + self.assertEqual(expected_value, image_headers[expected_key], |
915 | + "For key '%s' expected header value '%s'. Got '%s'" |
916 | + % (expected_key, |
917 | + expected_value, |
918 | + image_headers[expected_key])) |
919 | + |
920 | + for expected_key, expected_value in expected_std_headers.items(): |
921 | + self.assertTrue(expected_key in std_headers, |
922 | + "Failed to find key %s in std_headers" |
923 | + % expected_key) |
924 | + self.assertEqual(expected_value, std_headers[expected_key], |
925 | + "For key '%s' expected header value '%s'. Got '%s'" |
926 | + % (expected_key, |
927 | + expected_value, |
928 | + std_headers[expected_key])) |
929 | + |
930 | + # Now the image data... |
931 | + expected_image_data = "*" * FIVE_KB |
932 | + |
933 | + # Should only be a single "line" left, and |
934 | + # that's the image data |
935 | + self.assertEqual(1, len(other_lines)) |
936 | + self.assertEqual(expected_image_data, other_lines[0]) |
937 | + |
938 | + # 6. GET /images |
939 | + # Verify the image we just added is returned |
940 | + cmd = "curl http://0.0.0.0:%d/v1/images" % api_port |
941 | + |
942 | + exitcode, out, err = execute(cmd) |
943 | + |
944 | + self.assertEqual(0, exitcode) |
945 | + |
946 | + expected_result = {"images": [ |
947 | + {"container_format": None, |
948 | + "disk_format": None, |
949 | + "id": 1, |
950 | + "name": "Image1", |
951 | + "checksum": "c2e5db72bd7fd153f53ede5da5a06de3", |
952 | + "size": 5120}]} |
953 | + self.assertEqual(expected_result, json.loads(out.strip())) |
954 | + |
955 | + # 7. GET /images/detail |
956 | + # Verify image and all its metadata |
957 | + cmd = "curl http://0.0.0.0:%d/v1/images/detail" % api_port |
958 | + |
959 | + exitcode, out, err = execute(cmd) |
960 | + |
961 | + self.assertEqual(0, exitcode) |
962 | + |
963 | + expected_image = { |
964 | + "status": "active", |
965 | + "name": "Image1", |
966 | + "deleted": False, |
967 | + "container_format": None, |
968 | + "disk_format": None, |
969 | + "id": 1, |
970 | + 'location': 's3://%s:%s@%s/%s/1' % ( |
971 | + self.s3_store_access_key, |
972 | + self.s3_store_secret_key, |
973 | + self.s3_store_host, |
974 | + self.s3_store_bucket), |
975 | + "is_public": True, |
976 | + "deleted_at": None, |
977 | + "properties": {}, |
978 | + "size": 5120} |
979 | + |
980 | + image = json.loads(out.strip())['images'][0] |
981 | + |
982 | + for expected_key, expected_value in expected_image.items(): |
983 | + self.assertTrue(expected_key in image, |
984 | + "Failed to find key %s in image" |
985 | + % expected_key) |
986 | + self.assertEqual(expected_value, image[expected_key], |
987 | + "For key '%s' expected header value '%s'. Got '%s'" |
988 | + % (expected_key, |
989 | + expected_value, |
990 | + image[expected_key])) |
991 | + |
992 | + # 8. PUT /images/1 with custom properties of "distro" and "arch" |
993 | + # Verify 200 returned |
994 | + |
995 | + cmd = ("curl -i -X PUT " |
996 | + "-H 'X-Image-Meta-Property-Distro: Ubuntu' " |
997 | + "-H 'X-Image-Meta-Property-Arch: x86_64' " |
998 | + "http://0.0.0.0:%d/v1/images/1") % api_port |
999 | + |
1000 | + exitcode, out, err = execute(cmd) |
1001 | + self.assertEqual(0, exitcode) |
1002 | + |
1003 | + lines = out.split("\r\n") |
1004 | + status_line = lines[0] |
1005 | + |
1006 | + self.assertEqual("HTTP/1.1 200 OK", status_line) |
1007 | + |
1008 | + # 9. GET /images/detail |
1009 | + # Verify image and all its metadata |
1010 | + cmd = "curl http://0.0.0.0:%d/v1/images/detail" % api_port |
1011 | + |
1012 | + exitcode, out, err = execute(cmd) |
1013 | + |
1014 | + self.assertEqual(0, exitcode) |
1015 | + |
1016 | + expected_image = { |
1017 | + "status": "active", |
1018 | + "name": "Image1", |
1019 | + "deleted": False, |
1020 | + "container_format": None, |
1021 | + "disk_format": None, |
1022 | + "id": 1, |
1023 | + 'location': 's3://%s:%s@%s/%s/1' % ( |
1024 | + self.s3_store_access_key, |
1025 | + self.s3_store_secret_key, |
1026 | + self.s3_store_host, |
1027 | + self.s3_store_bucket), |
1028 | + "is_public": True, |
1029 | + "deleted_at": None, |
1030 | + "properties": {'distro': 'Ubuntu', 'arch': 'x86_64'}, |
1031 | + "size": 5120} |
1032 | + |
1033 | + image = json.loads(out.strip())['images'][0] |
1034 | + |
1035 | + for expected_key, expected_value in expected_image.items(): |
1036 | + self.assertTrue(expected_key in image, |
1037 | + "Failed to find key %s in image" |
1038 | + % expected_key) |
1039 | + self.assertEqual(expected_value, image[expected_key], |
1040 | + "For key '%s' expected header value '%s'. Got '%s'" |
1041 | + % (expected_key, |
1042 | + expected_value, |
1043 | + image[expected_key])) |
1044 | + |
1045 | + # 10. PUT /images/1 and remove a previously existing property. |
1046 | + cmd = ("curl -i -X PUT " |
1047 | + "-H 'X-Image-Meta-Property-Arch: x86_64' " |
1048 | + "http://0.0.0.0:%d/v1/images/1") % api_port |
1049 | + |
1050 | + exitcode, out, err = execute(cmd) |
1051 | + self.assertEqual(0, exitcode) |
1052 | + |
1053 | + lines = out.split("\r\n") |
1054 | + status_line = lines[0] |
1055 | + |
1056 | + self.assertEqual("HTTP/1.1 200 OK", status_line) |
1057 | + |
1058 | + cmd = "curl http://0.0.0.0:%d/v1/images/detail" % api_port |
1059 | + |
1060 | + exitcode, out, err = execute(cmd) |
1061 | + |
1062 | + self.assertEqual(0, exitcode) |
1063 | + |
1064 | + image = json.loads(out.strip())['images'][0] |
1065 | + self.assertEqual(1, len(image['properties'])) |
1066 | + self.assertEqual('x86_64', image['properties']['arch']) |
1067 | + |
1068 | + # 11. PUT /images/1 and add a previously deleted property. |
1069 | + cmd = ("curl -i -X PUT " |
1070 | + "-H 'X-Image-Meta-Property-Distro: Ubuntu' " |
1071 | + "-H 'X-Image-Meta-Property-Arch: x86_64' " |
1072 | + "http://0.0.0.0:%d/v1/images/1") % api_port |
1073 | + |
1074 | + exitcode, out, err = execute(cmd) |
1075 | + self.assertEqual(0, exitcode) |
1076 | + |
1077 | + lines = out.split("\r\n") |
1078 | + status_line = lines[0] |
1079 | + |
1080 | + self.assertEqual("HTTP/1.1 200 OK", status_line) |
1081 | + |
1082 | + cmd = "curl http://0.0.0.0:%d/v1/images/detail" % api_port |
1083 | + |
1084 | + exitcode, out, err = execute(cmd) |
1085 | + |
1086 | + self.assertEqual(0, exitcode) |
1087 | + |
1088 | + image = json.loads(out.strip())['images'][0] |
1089 | + self.assertEqual(2, len(image['properties'])) |
1090 | + self.assertEqual('x86_64', image['properties']['arch']) |
1091 | + self.assertEqual('Ubuntu', image['properties']['distro']) |
1092 | + |
1093 | + self.stop_servers() |
1094 | + |
1095 | + def test_delete_not_existing(self): |
1096 | + """ |
1097 | + We test the following: |
1098 | + |
1099 | + 0. GET /images/1 |
1100 | + - Verify 404 |
1101 | + 1. DELETE /images/1 |
1102 | + - Verify 404 |
1103 | + """ |
1104 | + if self.skip_tests: |
1105 | + return True |
1106 | + |
1107 | + self.cleanup() |
1108 | + self.start_servers(**self.__dict__.copy()) |
1109 | + |
1110 | + api_port = self.api_port |
1111 | + registry_port = self.registry_port |
1112 | + |
1113 | + # 0. GET /images/1 |
1114 | + # Verify 404 returned |
1115 | + cmd = "curl -i http://0.0.0.0:%d/v1/images/1" % api_port |
1116 | + |
1117 | + exitcode, out, err = execute(cmd) |
1118 | + |
1119 | + lines = out.split("\r\n") |
1120 | + status_line = lines[0] |
1121 | + |
1122 | + self.assertEqual("HTTP/1.1 404 Not Found", status_line) |
1123 | + |
1124 | + # 1. DELETE /images/1 |
1125 | + # Verify 404 returned |
1126 | + cmd = "curl -i -X DELETE http://0.0.0.0:%d/v1/images/1" % api_port |
1127 | + |
1128 | + exitcode, out, err = execute(cmd) |
1129 | + |
1130 | + lines = out.split("\r\n") |
1131 | + status_line = lines[0] |
1132 | + |
1133 | + self.assertEqual("HTTP/1.1 404 Not Found", status_line) |
1134 | + |
1135 | + self.stop_servers() |
1136 | |
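The unit test below stubs `compute_md5` to checksum the upload in fixed-size chunks, which lets the store checksum images too large to hold in memory. Chunked hashing produces the same digest as hashing the whole buffer at once, as this sketch (ours, not Glance's) shows for the same 5 KB payload the tests upload:

```python
import hashlib
import io


def chunked_md5(fp, buffer_size=1024):
    """md5 hex digest computed by reading fp in buffer_size chunks."""
    checksum = hashlib.md5()
    chunk = fp.read(buffer_size)
    while chunk:
        checksum.update(chunk)
        chunk = fp.read(buffer_size)
    return checksum.hexdigest()


data = b"*" * 5 * 1024  # same "*"-filled 5 KB image the tests use

# Identical to hashing the full buffer in one call:
print(chunked_md5(io.BytesIO(data)) == hashlib.md5(data).hexdigest())  # True
```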
1137 | === added file 'tests/unit/test_s3_store.py' |
1138 | --- tests/unit/test_s3_store.py 1970-01-01 00:00:00 +0000 |
1139 | +++ tests/unit/test_s3_store.py 2011-07-29 18:46:57 +0000 |
1140 | @@ -0,0 +1,345 @@ |
1141 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
1142 | + |
1143 | +# Copyright 2011 OpenStack, LLC |
1144 | +# All Rights Reserved. |
1145 | +# |
1146 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
1147 | +# not use this file except in compliance with the License. You may obtain |
1148 | +# a copy of the License at |
1149 | +# |
1150 | +# http://www.apache.org/licenses/LICENSE-2.0 |
1151 | +# |
1152 | +# Unless required by applicable law or agreed to in writing, software |
1153 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
1154 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
1155 | +# License for the specific language governing permissions and limitations |
1156 | +# under the License. |
1157 | + |
1158 | +"""Tests the S3 backend store""" |
1159 | + |
1160 | +import StringIO |
1161 | +import hashlib |
1162 | +import httplib |
1163 | +import sys |
1164 | +import unittest |
1165 | +import urlparse |
1166 | + |
1167 | +import stubout |
1168 | +import boto.s3.connection |
1169 | + |
1170 | +from glance.common import exception |
1171 | +from glance.store import BackendException, UnsupportedBackend |
1172 | +from glance.store.location import get_location_from_uri |
1173 | +from glance.store.s3 import S3Backend |
1174 | + |
1175 | +FIVE_KB = (5 * 1024) |
1176 | +S3_OPTIONS = {'verbose': True, |
1177 | + 'debug': True, |
1178 | + 's3_store_access_key': 'user', |
1179 | + 's3_store_secret_key': 'key', |
1180 | + 's3_store_host': 'localhost:8080', |
1181 | + 's3_store_bucket': 'glance'} |
1182 | + |
1183 | + |
1184 | +# We stub out as little as possible to ensure that the code paths |
1185 | +# between glance.store.s3 and boto.s3.connection are tested |
1186 | +# thoroughly |
1187 | +def stub_out_s3(stubs): |
1188 | + |
1189 | + class FakeKey: |
1190 | + """ |
1191 | + Acts like a ``boto.s3.key.Key`` |
1192 | + """ |
1193 | + def __init__(self, bucket, name): |
1194 | + self.bucket = bucket |
1195 | + self.name = name |
1196 | + self.data = None |
1197 | + self.size = 0 |
1198 | + self.BufferSize = 1024 |
1199 | + |
1200 | + def close(self): |
1201 | + pass |
1202 | + |
1203 | + def exists(self): |
1204 | + return self.bucket.exists(self.name) |
1205 | + |
1206 | + def delete(self): |
1207 | + self.bucket.delete(self.name) |
1208 | + |
1209 | + def compute_md5(self, data): |
1210 | + chunk = data.read(self.BufferSize) |
1211 | + checksum = hashlib.md5() |
1212 | + while chunk: |
1213 | + checksum.update(chunk) |
1214 | + chunk = data.read(self.BufferSize) |
1215 | + return checksum.hexdigest(), None |
1216 | + |
1217 | + def set_contents_from_file(self, fp, replace=False, **kwargs): |
1218 | + self.data = StringIO.StringIO() |
1219 | + self.data.write(fp.getvalue()) |
1220 | + self.size = self.data.len |
1221 | + # Reset the buffer to start |
1222 | + self.data.seek(0) |
1223 | + self.read = self.data.read |
1224 | + |
1225 | + def get_file(self): |
1226 | + return self.data |
1227 | + |
1228 | + class FakeBucket: |
1229 | + """ |
1230 | + Acts like a ``boto.s3.bucket.Bucket`` |
1231 | + """ |
1232 | + def __init__(self, name, keys=None): |
1233 | + self.name = name |
1234 | + self.keys = keys or {} |
1235 | + |
1236 | + def __str__(self): |
1237 | + return self.name |
1238 | + |
1239 | + def exists(self, key): |
1240 | + return key in self.keys |
1241 | + |
1242 | + def delete(self, key): |
1243 | + del self.keys[key] |
1244 | + |
1245 | + def get_key(self, key_name, **kwargs): |
1246 | + key = self.keys.get(key_name) |
1247 | + if not key: |
1248 | + return FakeKey(self, key_name) |
1249 | + return key |
1250 | + |
1251 | + def new_key(self, key_name): |
1252 | + new_key = FakeKey(self, key_name) |
1253 | + self.keys[key_name] = new_key |
1254 | + return new_key |
1255 | + |
1256 | + fixture_buckets = {'glance': FakeBucket('glance')} |
1257 | + b = fixture_buckets['glance'] |
1258 | + k = b.new_key('2') |
1259 | + k.set_contents_from_file(StringIO.StringIO("*" * FIVE_KB)) |
1260 | + |
1261 | + def fake_connection_constructor(self, *args, **kwargs): |
1262 | + host = kwargs.get('host') |
1263 | + if host.startswith('http://') or host.startswith('https://'): |
1264 | + raise UnsupportedBackend(host) |
1265 | + |
1266 | + def fake_get_bucket(conn, bucket_id): |
1267 | + bucket = fixture_buckets.get(bucket_id) |
1268 | + if not bucket: |
1269 | + bucket = FakeBucket(bucket_id) |
1270 | + return bucket |
1271 | + |
1272 | + stubs.Set(boto.s3.connection.S3Connection, |
1273 | + '__init__', fake_connection_constructor) |
1274 | + stubs.Set(boto.s3.connection.S3Connection, |
1275 | + 'get_bucket', fake_get_bucket) |
1276 | + |
1277 | + |
1278 | +def format_s3_location(user, key, authurl, bucket, obj): |
1279 | + """ |
1280 | + Helper method that returns a S3 store URI given |
1281 | + the component pieces. |
1282 | + """ |
1283 | + scheme = 's3' |
1284 | + if authurl.startswith('https://'): |
1285 | + scheme = 's3+https' |
1286 | + authurl = authurl[8:] |
1287 | + elif authurl.startswith('http://'): |
1288 | + authurl = authurl[7:] |
1289 | + authurl = authurl.strip('/') |
1290 | + return "%s://%s:%s@%s/%s/%s" % (scheme, user, key, authurl, |
1291 | + bucket, obj) |
1292 | + |
1293 | + |
1294 | +class TestS3Backend(unittest.TestCase): |
1295 | + |
1296 | + def setUp(self): |
1297 | + """Establish a clean test environment""" |
1298 | + self.stubs = stubout.StubOutForTesting() |
1299 | + stub_out_s3(self.stubs) |
1300 | + |
1301 | + def tearDown(self): |
1302 | + """Clear the test environment""" |
1303 | + self.stubs.UnsetAll() |
1304 | + |
1305 | + def test_get(self): |
1306 | + """Test a "normal" retrieval of an image in chunks""" |
1307 | + loc = get_location_from_uri( |
1308 | + "s3://user:key@auth_address/glance/2") |
1309 | + image_s3 = S3Backend.get(loc) |
1310 | + |
1311 | + expected_data = "*" * FIVE_KB |
1312 | + data = "" |
1313 | + |
1314 | + for chunk in image_s3: |
1315 | + data += chunk |
1316 | + self.assertEqual(expected_data, data) |
1317 | + |
1318 | + def test_get_mismatched_expected_size(self): |
1319 | + """ |
1320 | + Test retrieval of an image with wrong expected_size param |
1321 | + raises an exception |
1322 | + """ |
1323 | + loc = get_location_from_uri( |
1324 | + "s3://user:key@auth_address/glance/2") |
1325 | + self.assertRaises(BackendException, |
1326 | + S3Backend.get, |
1327 | + loc, |
1328 | + {'expected_size': 42}) |
1329 | + |
1330 | + def test_get_non_existing(self): |
1331 | + """ |
1332 | + Test that trying to retrieve an image that doesn't exist |
1333 | + raises an error |
1334 | + """ |
1335 | + loc = get_location_from_uri( |
1336 | + "s3://user:key@auth_address/badbucket/2") |
1337 | + self.assertRaises(exception.NotFound, |
1338 | + S3Backend.get, |
1339 | + loc) |
1340 | + |
1341 | + loc = get_location_from_uri( |
1342 | + "s3://user:key@auth_address/glance/noexist") |
1343 | + self.assertRaises(exception.NotFound, |
1344 | + S3Backend.get, |
1345 | + loc) |
1346 | + |
1347 | + def test_add(self): |
1348 | + """Test that we can add an image via the s3 backend""" |
1349 | + expected_image_id = 42 |
1350 | + expected_s3_size = FIVE_KB |
1351 | + expected_s3_contents = "*" * expected_s3_size |
1352 | + expected_checksum = hashlib.md5(expected_s3_contents).hexdigest() |
1353 | + expected_location = format_s3_location( |
1354 | + S3_OPTIONS['s3_store_access_key'], |
1355 | + S3_OPTIONS['s3_store_secret_key'], |
1356 | + S3_OPTIONS['s3_store_host'], |
1357 | + S3_OPTIONS['s3_store_bucket'], |
1358 | + expected_image_id) |
1359 | + image_s3 = StringIO.StringIO(expected_s3_contents) |
1360 | + |
1361 | + location, size, checksum = S3Backend.add(42, image_s3, S3_OPTIONS) |
1362 | + |
1363 | + self.assertEquals(expected_location, location) |
1364 | + self.assertEquals(expected_s3_size, size) |
1365 | + self.assertEquals(expected_checksum, checksum) |
1366 | + |
1367 | + loc = get_location_from_uri(expected_location) |
1368 | + new_image_s3 = S3Backend.get(loc) |
1369 | + new_image_contents = StringIO.StringIO() |
1370 | + for chunk in new_image_s3: |
1371 | + new_image_contents.write(chunk) |
1372 | + new_image_s3_size = new_image_contents.len |
1373 | + |
1374 | + self.assertEquals(expected_s3_contents, new_image_contents.getvalue()) |
1375 | + self.assertEquals(expected_s3_size, new_image_s3_size) |
1376 | + |
1377 | + def test_add_host_variations(self): |
1378 | + """ |
1379 | + Test that having http(s):// in the s3serviceurl in config |
1380 | + options works as expected. |
1381 | + """ |
1382 | + variations = ['http://localhost:80', |
1383 | + 'http://localhost', |
1384 | + 'http://localhost/v1', |
1385 | + 'http://localhost/v1/', |
1386 | + 'https://localhost', |
1387 | + 'https://localhost:8080', |
1388 | + 'https://localhost/v1', |
1389 | + 'https://localhost/v1/', |
1390 | + 'localhost', |
1391 | + 'localhost:8080/v1'] |
1392 | + i = 42 |
1393 | + for variation in variations: |
1394 | + expected_image_id = i |
1395 | + expected_s3_size = FIVE_KB |
1396 | + expected_s3_contents = "*" * expected_s3_size |
1397 | + expected_checksum = \ |
1398 | + hashlib.md5(expected_s3_contents).hexdigest() |
1399 | + new_options = S3_OPTIONS.copy() |
1400 | + new_options['s3_store_host'] = variation |
1401 | + expected_location = format_s3_location( |
1402 | + new_options['s3_store_access_key'], |
1403 | + new_options['s3_store_secret_key'], |
1404 | + new_options['s3_store_host'], |
1405 | + new_options['s3_store_bucket'], |
1406 | + expected_image_id) |
1407 | + image_s3 = StringIO.StringIO(expected_s3_contents) |
1408 | + |
1409 | + location, size, checksum = S3Backend.add(i, image_s3, |
1410 | + new_options) |
1411 | + |
1412 | + self.assertEquals(expected_location, location) |
1413 | + self.assertEquals(expected_s3_size, size) |
1414 | + self.assertEquals(expected_checksum, checksum) |
1415 | + |
1416 | + loc = get_location_from_uri(expected_location) |
1417 | + new_image_s3 = S3Backend.get(loc) |
1418 | + new_image_contents = new_image_s3.getvalue() |
1419 | + new_image_s3_size = new_image_s3.len |
1420 | + |
1421 | + self.assertEquals(expected_s3_contents, new_image_contents) |
1422 | + self.assertEquals(expected_s3_size, new_image_s3_size) |
1423 | + i = i + 1 |
1424 | + |
1425 | + def test_add_already_existing(self): |
1426 | + """ |
1427 | + Tests that adding an image with an existing identifier |
1428 | + raises an appropriate exception |
1429 | + """ |
1430 | + image_s3 = StringIO.StringIO("nevergonnamakeit") |
1431 | + self.assertRaises(exception.Duplicate, |
1432 | + S3Backend.add, |
1433 | + 2, image_s3, S3_OPTIONS) |
1434 | + |
1435 | + def _assertOptionRequiredForS3(self, key): |
1436 | + image_s3 = StringIO.StringIO("nevergonnamakeit") |
1437 | + options = S3_OPTIONS.copy() |
1438 | + del options[key] |
1439 | + self.assertRaises(BackendException, S3Backend.add, |
1440 | + 2, image_s3, options) |
1441 | + |
1442 | + def test_add_no_user(self): |
1443 | + """ |
1444 | + Tests that adding options without user raises |
1445 | + an appropriate exception |
1446 | + """ |
1447 | + self._assertOptionRequiredForS3('s3_store_access_key') |
1448 | + |
1449 | + def test_no_key(self): |
1450 | + """ |
1451 | + Tests that adding options without key raises |
1452 | + an appropriate exception |
1453 | + """ |
1454 | + self._assertOptionRequiredForS3('s3_store_secret_key') |
1455 | + |
1456 | + def test_add_no_host(self): |
1457 | + """ |
1458 | + Tests that adding options without host raises |
1459 | + an appropriate exception |
1460 | + """ |
1461 | + self._assertOptionRequiredForS3('s3_store_host') |
1462 | + |
1463 | + def test_delete(self): |
1464 | + """ |
1465 | + Test we can delete an existing image in the s3 store |
1466 | + """ |
1467 | + loc = get_location_from_uri( |
1468 | + "s3://user:key@auth_address/glance/2") |
1469 | + |
1470 | + S3Backend.delete(loc) |
1471 | + |
1472 | + self.assertRaises(exception.NotFound, |
1473 | + S3Backend.get, |
1474 | + loc) |
1475 | + |
1476 | + def test_delete_non_existing(self): |
1477 | + """ |
1478 | + Test that trying to delete a s3 that doesn't exist |
1479 | + raises an error |
1480 | + """ |
1481 | + loc = get_location_from_uri( |
1482 | + "s3://user:key@auth_address/glance/noexist") |
1483 | + self.assertRaises(exception.NotFound, |
1484 | + S3Backend.delete, |
1485 | + loc) |
1486 | |
1487 | === modified file 'tests/unit/test_stores.py' |
1488 | --- tests/unit/test_stores.py 2011-07-13 20:15:46 +0000 |
1489 | +++ tests/unit/test_stores.py 2011-07-29 18:46:57 +0000 |
1490 | @@ -63,21 +63,3 @@ |
1491 | |
1492 | chunks = [c for c in fetcher] |
1493 | self.assertEqual(chunks, expected_returns) |
1494 | - |
1495 | - |
1496 | -class TestS3Backend(TestBackend): |
1497 | - def setUp(self): |
1498 | - super(TestS3Backend, self).setUp() |
1499 | - stubs.stub_out_s3_backend(self.stubs) |
1500 | - |
1501 | - def test_get(self): |
1502 | - s3_uri = "s3://user:password@localhost/bucket1/file.tar.gz" |
1503 | - |
1504 | - expected_returns = ['I ', 'am', ' a', ' t', 'ea', 'po', 't,', ' s', |
1505 | - 'ho', 'rt', ' a', 'nd', ' s', 'to', 'ut', '\n'] |
1506 | - fetcher = get_from_backend(s3_uri, |
1507 | - expected_size=8, |
1508 | - conn_class=S3Backend) |
1509 | - |
1510 | - chunks = [c for c in fetcher] |
1511 | - self.assertEqual(chunks, expected_returns) |
1512 | |
1513 | === modified file 'tools/pip-requires' |
1514 | --- tools/pip-requires 2011-07-29 15:31:43 +0000 |
1515 | +++ tools/pip-requires 2011-07-29 18:46:57 +0000 |
1516 | @@ -12,6 +12,7 @@ |
1517 | sphinx |
1518 | argparse |
1519 | mox==0.5.0 |
1520 | +boto |
1521 | swift |
1522 | -f http://pymox.googlecode.com/files/mox-0.5.0.tar.gz |
1523 | sqlalchemy-migrate>=0.6,<0.7 |
I could successfully upload and download images through glance-cli and the API with this code over HTTP. There are a few things I would like to bring up:
1) The suggested s3_store_bucket value did not work for me. Amazon rejected it complaining about uppercase characters.
2) When I tried https (s3_store_host=http://s3.amazonaws.com), upload worked but download failed with the following exception: "UnsupportedBackend: No backend found for 's3+https'"
3) Are the packages going to be updated with the boto dependency?
4) Can you update this doc page w/ s3 configuration? http://glance.openstack.org/configuring.html
Overall, great work, Jay. I'd love to get this in for D3.
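For context on review point 2 and the host variations exercised by test_add_host_variations above: the configured s3_store_host may or may not carry an http:// or https:// scheme, and the scheme determines whether image locations register under 's3' or 's3+https'. A minimal sketch of that normalization — a hypothetical helper for illustration, not the actual glance.store.s3 code:

```python
def parse_s3_host(s3_store_host):
    """Return (location_scheme, bare_host) for a configured s3_store_host.

    Strips an optional http:// or https:// prefix; an https prefix
    selects the 's3+https' location scheme, anything else plain 's3'.
    (Hypothetical helper, sketching the behavior the tests exercise.)
    """
    scheme = 's3'
    host = s3_store_host
    if host.startswith('https://'):
        scheme = 's3+https'
        host = host[len('https://'):]
    elif host.startswith('http://'):
        host = host[len('http://'):]
    # Drop any trailing slash so 'localhost/v1/' and 'localhost/v1' match
    return scheme, host.rstrip('/')

# The same variations the unit test walks through:
assert parse_s3_host('http://localhost:80') == ('s3', 'localhost:80')
assert parse_s3_host('https://localhost/v1/') == ('s3+https', 'localhost/v1')
assert parse_s3_host('localhost:8080/v1') == ('s3', 'localhost:8080/v1')
```

Under this sketch, the 's3+https' failure in point 2 would mean the https-derived scheme was produced on upload but never registered as a retrievable backend scheme.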