Merge lp:~jaypipes/glance/bug713154 into lp:~hudson-openstack/glance/trunk

Proposed by Jay Pipes
Status: Superseded
Proposed branch: lp:~jaypipes/glance/bug713154
Merge into: lp:~hudson-openstack/glance/trunk
Diff against target: 1353 lines (+1155/-91)
9 files modified
etc/glance-api.conf (+21/-0)
glance/api/v1/images.py (+8/-0)
glance/store/s3.py (+299/-73)
tests/functional/__init__.py (+8/-0)
tests/functional/test_s3.conf (+21/-0)
tests/functional/test_s3.py (+482/-0)
tests/unit/test_s3_store.py (+315/-0)
tests/unit/test_stores.py (+0/-18)
tools/pip-requires (+1/-0)
To merge this branch: bzr merge lp:~jaypipes/glance/bug713154
Reviewer (review type): Status
Brian Waldon (community): Needs Fixing
Dan Prince (community): Approve
Rick Harris (community): Approve
Christopher MacGown (community): Approve
Review via email: mp+59110@code.launchpad.net

Commit message

Completes the S3 storage backend.

Description of the change

Completes the S3 storage backend. It turned out that the original code did not
actually match the boto API, and the stubs in the unit tests were hiding this
fact.

Adds a unit test that gets S3 testing up to snuff with the Swift backend.

Revision history for this message
Brian Waldon (bcwaldon) wrote :

40: You don't appear to use "config"

99: Feels a little weird to call it "parsed_uri" when the first thing you do is give it to parse_s3_tokens. How about resource_uri, or connection_uri? There are a bunch of places like this across the diff; I was just too lazy to point them all out. Actually, how would you feel about replacing parse_s3_tokens and format_s3_location with an S3ConnectionString class (that name sucks) that handles the same functionality?

162/187: Can we make these match?

190: Can we handle full_s3_host and s3_host a little differently? I would prefer to split it up into separate tokens rather than have s3_host encapsulated by full_s3_host.
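
(For illustration, a hypothetical split along those lines; the names are invented:)

# keep the scheme and the bare host as separate config tokens
s3_scheme = 'https'                      # e.g. a new s3_store_scheme option
s3_host = 's3.amazonaws.com'             # bare host, no scheme
full_s3_host = '%s://%s' % (s3_scheme, s3_host)  # assembled only where needed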

review: Needs Fixing
Revision history for this message
Jay Pipes (jaypipes) wrote :

> 40: You don't appear to use "config"

Hmm, yes. Removed.

> 99: Feels a little weird to call it "parsed_uri" when the first thing you do
> is give it to parse_s3_tokens. How about resource_uri, or connection_uri?
> There are a bunch of places across the diff, I was just too lazy to point them
> all out. Actually, how would you feel about replacing parse_s3_tokens and
> format_s3_location with an S3ConnectionString class (that name sucks) that
> handles the same functionality?

This is mostly due to copy/paste fail from the Swift driver. I think I'd prefer to keep the code as-is and then refactor the URI parsing in both the Swift and S3 drivers into a common lib (possibly into glance.store.__init__ or glance.utils) in a separate patch. I agree it's messy. Would a separate patch be OK with you?

> 162/187: Can we make these match?

Done.

> 190: Can we handle full_s3_host and s3_host a little differently? I would
> prefer to split it up into separate tokens rather than have s3_host
> encapsulated by full_s3_host.

Hmm, I think I'd prefer to do this in a separate patch as well. I agree it might be useful to separate the scheme from the host, but since the same code exists in the Swift driver, I'd prefer to put it in a separate patch. Thoughts?

-jay
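
(A rough sketch of what that shared helper could look like; hypothetical, modeled on the parsing the two drivers do today:)

import urlparse

class BackendException(Exception):
    """Stand-in for glance.store.BackendException."""
    pass

def parse_uri_tokens(uri, example_url):
    """Split scheme://user:key@host/.../bucket/obj into its tokens."""
    parsed = urlparse.urlparse(uri)
    # Older urlparse versions leave netloc empty for unknown schemes,
    # so reassemble and strip any leading '//' before splitting.
    entire_path = ("%s%s" % (parsed.netloc, parsed.path)).lstrip('/')
    try:
        creds, path = entire_path.split('@')
        user, key = creds.split(':', 1)
        path_parts = path.split('/')
        obj = path_parts.pop()
        bucket = path_parts.pop()
    except (ValueError, IndexError):
        raise BackendException("Badly formed URI, expected something "
                               "like: %s" % example_url)
    return user, key, '/'.join(path_parts), bucket, obj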

Revision history for this message
Brian Waldon (bcwaldon) wrote :

> [...] Would a separate patch be OK with you?
>
> [...] I'd prefer to put it in a separate patch. Thoughts?

Sounds good.

review: Approve
Revision history for this message
Christopher MacGown (0x44) wrote :

looks good

review: Approve
Revision history for this message
Diego Lalo (diego-lalo) wrote :

Hi Jay,

First, thanks for your answer. Second: how do I pull the branch and use it?

Cheers
Diego

Revision history for this message
Dan Prince (dan-prince) wrote :

Hey Jay,

First time I've messed w/ S3 and glance. Couple of things. These may or may not all be related to this patch:

1) Should we add example default S3 configs to the glance.conf file? (glance-api.conf in the versioned branch)

2) Why do we use the get_key function in s3.py's 'get' and then call bucket.get_key directly in 'add'? Should both of these make use of get_key?

Dan

review: Needs Information
Revision history for this message
Jay Pipes (jaypipes) wrote :

> Hey Jay,
>
> First time I've messed w/ S3 and glance. Couple of things. These may or may
> not all be related to this patch:
>
> 1) Should we add example default S3 configs to the glance.conf file?
> (glance-api.conf in the versioned branch)

I did... :)

> 2) Why do we use the get_key function in s3.py's 'get' and then call
> bucket.get_key directly in 'add'? Should both of these make use of get_key?

The reason is the way the boto/s3 API works, and because we want to check for existence and raise a different exception in each case (NotFound for get vs. Duplicate for add).

-jay
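
(Schematically, with boto objects assumed and the names used in the diff below:)

# get: a *missing* key is the error
key = get_key(bucket_obj, obj_name)   # raises exception.NotFound if absent

# add: an *existing* key is the error
key = bucket_obj.get_key(obj_name)    # plain boto call; returns None if absent
if key and key.exists():
    raise exception.Duplicate("S3 already has an image at %s" % location)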

Revision history for this message
Jay Pipes (jaypipes) wrote :

On Wed, May 11, 2011 at 2:35 PM, Diego Lalo <email address hidden> wrote:
> First, thanks for your answer. Second: how do I pull the branch and use it?

on a test machine/environment, you would do:

bzr branch lp:~jaypipes/glance/bug713154
cd bug713154
sudo python setup.py install

# Edit your glance.conf and add appropriate values for the new S3 stuff:

# ============ S3 Store Options ================================

# Address where the S3 authentication service lives
s3_store_host = 127.0.0.1:3333/

# User to authenticate against the S3 authentication service
s3_store_access_key = jdoe

# Auth key for the user authenticating against the
# S3 authentication service
s3_store_secret_key = a86850deb2742ec3cb41518e26aa2d89

# Bucket to store images on the S3 host.
s3_store_bucket = glance

Then repeat your calls to register images in Glance with the S3 backend :)

-jay

Revision history for this message
Dan Prince (dan-prince) wrote :

Just tried this with my S3 account. I'm doing a glance add to upload a ramdisk into S3 using Glance.

Doing this, it appears that line 201:

if key.exists():

needs to be something like:

if key and key.exists():

After making this change I still get the following error:

Error uploading image: 'Input' object has no attribute 'seek'

review: Needs Fixing
Revision history for this message
Jay Pipes (jaypipes) wrote :

OK, wow. So I added a functional test case that uses Amazon S3 settings, and as Dan found, it uncovered some issues with the store code:

* As Dan mentioned, I needed if key and key.exists() in one place
* "S3-style" URLs cause urlparse.urlparse() to utterly barf because AWS secret keys stupidly allow a forward slash. Had to rewrite the parse_s3_tokens() and add another unit test for this.
* The md5 kwarg to boto.s3.key.set_contents_from_file() doesn't work as advertised. Had to work around this by calculating the MD5 with compute_md5() and just not passing it to that method
* The key.get_file() method call wasn't necessary. Just needed to pass the Key object (which is an iterator itself) to the ChunkedFile constructor.
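
(A quick demonstration of the failure; the credentials are fake:)

import urlparse

uri = "s3://AKIAEXAMPLEKEY:abc/def+ghi@s3.amazonaws.com/bucket/obj"
parsed = urlparse.urlparse(uri)
# The '/' inside the secret key terminates the netloc early, so the
# credentials and host come out scrambled:
print parsed.netloc   # 'AKIAEXAMPLEKEY:abc'
print parsed.path     # '/def+ghi@s3.amazonaws.com/bucket/obj'
# Hence parse_s3_tokens() now reassembles netloc + path and splits
# on the '@' itself instead of trusting urlparse.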

Fun! :)

Please re-review.

Revision history for this message
Jay Pipes (jaypipes) wrote :

Also, I fixed the issue with Input having no attribute 'seek' that Dan mentioned above. That was a pretty easy fix: just change webob.Request.body_file to body_file_seekable... yay for docs \o/
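
(Schematically: the quick fix described here, plus the equivalent spelling the diff below settles on because older webob lacks the attribute:)

# the quick fix: works on webob >= 1.0.3 only
store.add(image_meta['id'], req.body_file_seekable, self.options)

# the diff's equivalent, which webob 0.9.8 also supports
req.make_body_seekable()
store.add(image_meta['id'], req.body_file, self.options)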

Revision history for this message
Christopher MacGown (0x44) wrote :

Hi Jay,

The functional tests work, but the S3 unit test is failing:

======================================================================
ERROR: test_add (tests.unit.test_s3_store.TestS3Backend)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/chris/Projects/review_day/bug713154/tests/unit/test_s3_store.py", line 244, in test_add
    for chunk in new_image_s3:
  File "/Users/chris/Projects/review_day/bug713154/glance/store/s3.py", line 53, in __iter__
    self.close()
  File "/Users/chris/Projects/review_day/bug713154/glance/store/s3.py", line 58, in close
    self.fp.close()
AttributeError: FakeKey instance has no attribute 'close'

======================================================================
ERROR: test_get (tests.unit.test_s3_store.TestS3Backend)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/chris/Projects/review_day/bug713154/tests/unit/test_s3_store.py", line 188, in test_get
    for chunk in image_s3:
  File "/Users/chris/Projects/review_day/bug713154/glance/store/s3.py", line 53, in __iter__
    self.close()
  File "/Users/chris/Projects/review_day/bug713154/glance/store/s3.py", line 58, in close
    self.fp.close()
AttributeError: FakeKey instance has no attribute 'close'

review: Needs Fixing
Revision history for this message
Jay Pipes (jaypipes) wrote :

thx Chris, I'll get right on that.

Revision history for this message
Jay Pipes (jaypipes) wrote :

OK, all fixed. We still have to deal with skipping tests better, but the bug mentioned above is now fixed.

Revision history for this message
Christopher MacGown (0x44) wrote :

Looks good

review: Approve
Revision history for this message
Dan Prince (dan-prince) wrote :

Hey Jay,

I just tried this and I'm still getting:

2011-05-31 13:57:41 DEBUG [glance.registry] 1 custom properties...
2011-05-31 13:57:41 DEBUG [glance.registry] type: ramdisk
2011-05-31 13:57:41 DEBUG [glance.api.v1.images] Uploading image data for image 4 to s3 store
2011-05-31 13:57:41 ERROR [glance.api.v1.images] Error uploading image: body_file_seekable

What I'm doing for this test case is putting my S3 credentials into /etc/glance/glance-api.conf, starting glance and then running the following command:

root@glance1:~# glance add name="ari-tty" type="ramdisk" disk_format="ari" container_format="ari" is_public=true < /tmp/tty_linux/ramdisk
Failed to add image. Got error:
400 Bad Request

The server could not comply with the request since it is either malformed or otherwise incorrect.

 Error uploading image: body_file_seekable
Note: Your image metadata may still be in the registry, but the image's status will likely be 'killed'.

---

Am I doing something wrong here?

review: Needs Fixing
Revision history for this message
Jay Pipes (jaypipes) wrote :

> I just tried this and I'm still getting:
>
> 2011-05-31 13:57:41 ERROR [glance.api.v1.images] Error uploading image:
> body_file_seekable
>
> [...]
>
> Am I doing something wrong here?

Heya Dan,

Hmm, so I tried doing that and I see the same thing *only* when not operating in a virtualenv. I'm going to check whether the webob version is the issue, since only newer versions of webob.Request have body_file_seekable.

Check for an update in a short while. Cheers,
jay

Revision history for this message
Jay Pipes (jaypipes) wrote :

OK, latest commit passes all tests in -N or -V mode. Please re-test/review. Thanks much.

Revision history for this message
Jay Pipes (jaypipes) wrote :

Added a workaround for webob issue #12 and fixed the DELETE operation in S3, where URL parsing was broken. Please re-check/review. Thanks again.

Revision history for this message
Rick Harris (rconradharris) wrote :

Great stuff Jay.

One small suggestion:

> 79 +class ChunkedFile(object):

I think we could DRY up the code a bit by moving the generally useful
ChunkedFile class out to glance/store/__init__.py or perhaps
glance/store/utils.py.

The one small modification we'd need to make is changing the implementation in
filesystem backend to accept a file-obj rather than a filename. That's
probably a good idea anyway.
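
(Schematically, the call-site change that implies in the filesystem backend; hypothetical code:)

# before: ChunkedFile opens the path itself
return ChunkedFile(filepath)

# after: the caller opens the file and the shared ChunkedFile
# just wraps any file-like object
return ChunkedFile(open(filepath, 'rb'))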

review: Approve
Revision history for this message
Jay Pipes (jaypipes) wrote :

@dprince:

Are you still seeing any issues with this branch?

Thanks,
jay

Revision history for this message
Jay Pipes (jaypipes) wrote :

> I think we could DRY up the code a bit by moving the generally useful
> ChunkedFile class out to glance/store/__init__.py or perhaps
> glance/store/utils.py. [...]

Agreed. I'll be working on refactoring some of that stuff here:

https://blueprints.launchpad.net/glance/+spec/refactor-stores

Cheers!
jay

Revision history for this message
Dan Prince (dan-prince) wrote :

Yes. I'm still seeing issues.

Using bzr134. I'm able to successfully upload images to glance with the S3 backend. When I look in S3 however the images all have a size of 0.

Dan

> @dprince:
>
> Are you still seeing any issues with this branch?
>
> Thanks,
> jay

Revision history for this message
Jay Pipes (jaypipes) wrote :

OK, so bug 794718 is the real culprit for the issues seen here. I'm proposing to depend on webob 1.0.7+. Swift has already done this (actually, they depend on *either* 0.9.8 OR 1.0.7+). Unfortunately, we cannot use 0.9.8 and still have the S3 driver, because 0.9.8's handling of make_body_seekable() is buggy for chunked-transfer requests (which our client uses).
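
(Presumably a one-line change in tools/pip-requires, something like:)

webob>=1.0.7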

If we merge this, we need to coordinate packaging dependency changes as well. I've spoken with Andrey about the RPM packaging. Dan, can you handle the Deb stuff?

-jay

Revision history for this message
Dan Prince (dan-prince) wrote :

> Dan, can you handle the Deb stuff?
>
> -jay

Sure. I'll work w/ Soren on that.

Revision history for this message
Brian Waldon (bcwaldon) wrote :

So are we good on the webob pre-reqs now? Can this fix go in?

Revision history for this message
Dan Prince (dan-prince) wrote :

> So are we good on the webob pre-reqs now? Can this fix go in?

Hitting the following conflict w/ Glance trunk:

Text conflict in tools/pip-requires
1 conflicts encountered.

Revision history for this message
Jay Pipes (jaypipes) wrote :

OK, the conflict is fixed and the branch is merged up with trunk. Dan, could you please smoke-test this?

Revision history for this message
Dan Prince (dan-prince) wrote :

Looks good now.

review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~jaypipes/glance/bug713154 into lp:glance failed. Below is the output from the failed tests.

running test
running egg_info
creating glance.egg-info
writing glance.egg-info/PKG-INFO
writing top-level names to glance.egg-info/top_level.txt
writing dependency_links to glance.egg-info/dependency_links.txt
writing manifest file 'glance.egg-info/SOURCES.txt'
reading manifest file 'glance.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'ChangeLog'
writing manifest file 'glance.egg-info/SOURCES.txt'
running build_ext

We test the following: ... ok
We test the following: ... ok
Test for LP Bugs #736295, #767203 ... ok
We test conditions that produced LP Bug #768969, where an image ... ok
Set up three test images and ensure each query param filter works ... ok
We test the following sequential series of actions: ... ok
Ensure marker and limit query params work ... ok
We test the process flow where a user registers an image ... ok
A test against the actual datastore backend for the registry ... ok
A test that errors coming from the POST API do not ... ok
We test that various calls to the images and root endpoints are ... ok
Test logging output proper when verbose and debug ... ok
Test logging output proper when verbose and debug ... ok
A test for LP bug #704854 -- Exception thrown by registry ... ok
We test the following: ... ERROR
Tests raises BadRequest for invalid store header ... ok
Tests to add a basic image in the file store ... ok
Tests creates a queued image for no body and no loc header ... ok
Tests creates a queued image for no body and no loc header ... ok
Test that the image contents are checksummed properly ... ok
test_bad_container_format (tests.unit.test_api.TestGlanceAPI) ... ok
test_bad_disk_format (tests.unit.test_api.TestGlanceAPI) ... ok
test_delete_image (tests.unit.test_api.TestGlanceAPI) ... ok
test_delete_non_exists_image (tests.unit.test_api.TestGlanceAPI) ... ok
Here, we try to delete an image that is in the queued state. ... ok
Test that the ETag header matches the x-image-meta-checksum ... ok
Test that the image contents are checksummed properly ... ok
Test for HEAD /images/<ID> ... ok
test_show_image_basic (tests.unit.test_api.TestGlanceAPI) ... ok
test_show_non_exists_image (tests.unit.test_api.TestGlanceAPI) ... ok
Tests that the /images POST registry API creates the image ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad disk_format is set ... ok
Tests proper exception is raised if a bad status is set ... ok
Tests that exception raised for bad matching disk and container ... ok
Tests that the /images DELETE registry API deletes the image ... ok
Tests proper exception is raised if attempt to delete non-existing ... ok
Tests that the /images/detail registry API returns ... ok
Tests that the /images/detail registry API returns list of ... ok
Tests that the /images/detail registry API returns list of ... ok
Tests that the /images/detail registry API returns list of ... ok
Tests that the /images/detail registry API returns list of ... ok
Tests that the /images/detail registry API returns lis...

Revision history for this message
Brian Waldon (bcwaldon) wrote :

I'm getting a test failure, Jay. I ran the tests with a clean venv:

======================================================================
FAIL: test_parse_s3_tokens (tests.unit.test_s3_store.TestS3Backend)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/brian.waldon/valhalla/glance/bug713154/tests/unit/test_s3_store.py", line 157, in test_parse_s3_tokens
    self.assertEqual("user", user)
AssertionError: 'user' != '//user'

Revision history for this message
Jay Pipes (jaypipes) wrote :

I'll look into it. In SFO in meetings all day today... I'll try to
look at it tomorrow morning early.

On Fri, Jun 24, 2011 at 12:37 PM, Brian Waldon
<email address hidden> wrote:
> I'm getting a test failure, Jay. I ran the tests with a clean venv:
> [...]

Revision history for this message
Christopher MacGown (0x44) wrote :

I just ran this with a clean venv and all tests pass for me.

On Fri, Jun 24, 2011 at 2:42 PM, Jay Pipes <email address hidden> wrote:

> I'll look into it. In SFO in meetings all day today... I'll try to
> look at it tomorrow morning early.

Revision history for this message
Brian Waldon (bcwaldon) wrote :

The problem is the call to urlparse.urlparse() in tests/unit/test_s3_store.py. The netloc and path attributes are parsed differently depending on your python version. Here's the sample script I used:

import urlparse
uri = "s3://user:key@localhost/v1.0/bucket/objname"
urlparse.urlparse(uri)

On python 2.6.1 (my version), it returns the following:

ParseResult(scheme='s3', netloc='', path='//user:key@localhost/v1.0/bucket/objname', params='', query='', fragment='')

On python 2.6.6 and 2.7.1, it returns this:

ParseResult(scheme='s3', netloc='user:key@localhost', path='/v1.0/bucket/objname', params='', query='', fragment='')

Revision history for this message
Christopher MacGown (0x44) wrote :

I thought we already had a fix for the 2.6/2.7 urlparse?

On Fri, Jun 24, 2011 at 3:56 PM, Brian Waldon <email address hidden> wrote:

> The problem is the call to urlparse.urlparse() in
> tests/unit/test_s3_store.py. The netloc and path attributes are parsed
> differently depending on your python version. [...]

Revision history for this message
Brian Waldon (bcwaldon) wrote :

Not that I know of. I don't see anything in the code about it.

Revision history for this message
Jay Pipes (jaypipes) wrote :

On Fri, Jun 24, 2011 at 2:01 PM, Christopher MacGown <email address hidden> wrote:
> I thought we already had a fix for the 2.6/2.7 urlparse?

We do. I just forgot to copy the fix into the S3 parse_tokens function. :)

I'll fix it.
-jay
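
(For reference, the usual trick for old urlparse versions; a sketch, not necessarily the exact fix glance carries:)

import urlparse

# Older urlsplit (e.g. 2.6.1) only breaks out netloc for schemes it
# knows about; registering the scheme makes it behave like 2.6.6+/2.7:
if 's3' not in urlparse.uses_netloc:
    urlparse.uses_netloc.append('s3')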

Revision history for this message
Christopher MacGown (0x44) wrote :

In an old branch, I'd merged the parse_tokens functions… do you think it's worth
rewriting for inclusion?

On Sat, Jun 25, 2011 at 11:01 AM, Jay Pipes <email address hidden> wrote:

> On Fri, Jun 24, 2011 at 2:01 PM, Christopher MacGown <email address hidden>
> wrote:
> > I thought we already had a fix for the 2.6/2.7 urlparse?
>
> We do. I just forgot to copy the fix into the S3 parse_tokens function. :)
>
> I'll fix it..
> -jay

Revision history for this message
Jay Pipes (jaypipes) wrote :

I had to rewrite it for this particular driver, unfortunately, because
of S3's silly insistence on allowing forward slashes in passwords, so
http://user:<email address hidden>/bucket/ blows up urlparse :(

-jay

Revision history for this message
Brian Waldon (bcwaldon) wrote :

Looks like there is a little left to do here. Setting to WIP for now.

review: Needs Fixing
Revision history for this message
Jay Pipes (jaypipes) wrote :

FYI, this is the issue this is now blocked on: https://lists.launchpad.net/openstack-testing/msg00002.html

Revision history for this message
Chuck Short (zulcss) wrote :

> FYI, this is the issue this is now blocked on: https://lists.launchpad.net
> /openstack-testing/msg00002.html

Are we any closer to getting this merged?

Thanks
chuck

Revision history for this message
Jay Pipes (jaypipes) wrote :

Hey Chuck,

AFAIK, this actually can be merged fine. The S3 functional tests won't run in the Jenkins job, of course, but we can deal with that later.

If people are OK with that, feel free to approve and merge it.

-jay

Revision history for this message
Chuck Short (zulcss) wrote :

LGTM, although the documentation will need to be updated.

Revision history for this message
Brian Waldon (bcwaldon) wrote :

Still seeing that test failure. Are you going to be able to address that?

Revision history for this message
Jay Pipes (jaypipes) wrote :

Uh, sorry, not sure why I set it back to Needs Review. I need to get
hold of a Python 2.6 dev env to do these fixes. I'm on vacation until
Monday, so if someone can pick that up, it would help get this through...

-jay

On Fri, Jul 15, 2011 at 11:54 AM, Brian Waldon
<email address hidden> wrote:
> Still seeing that test failure. Are you going to be able to address that?

lp:~jaypipes/glance/bug713154 updated
138. By Jay Pipes

Merge trunk and resolve conflicts. All tests passing for me...

139. By Jay Pipes

Fixed review stuff from Brian

140. By Jay Pipes

Missing import of common.config in S3 driver

141. By Jay Pipes

Add test case for S3 s3_store_host variations and fixes for URL bug

142. By Jay Pipes

Forgot to put back fix for the get_backend_class problem...

143. By Jay Pipes

Add functional test case for checking delete and get of non-existing image

144. By Jay Pipes

Typo in error condition for create_bucket_on_put, make body seekable in req object, and remove +glance from docs and configs

145. By Jay Pipes

Merge trunk

146. By Jay Pipes

Merge latest trunk

147. By Jay Pipes

Merge trunk

148. By Jay Pipes

Fix for boto1.9b issue 540 (http://code.google.com/p/boto/issues/detail?id=540)


Preview Diff

1=== modified file 'etc/glance-api.conf'
2--- etc/glance-api.conf 2011-05-11 23:03:51 +0000
3+++ etc/glance-api.conf 2011-07-06 21:36:36 +0000
4@@ -51,6 +51,27 @@
5 # Do we create the container if it does not exist?
6 swift_store_create_container_on_put = False
7
8+# ============ S3 Store Options =============================
9+
10+# Address where the S3 authentication service lives
11+s3_store_host = 127.0.0.1:8080/v1.0/
12+
13+# User to authenticate against the S3 authentication service
14+s3_store_access_key = <20-char AWS access key>
15+
16+# Auth key for the user authenticating against the
17+# S3 authentication service
18+s3_store_secret_key = <40-char AWS secret key>
19+
20+# Container within the account that the account should use
21+# for storing images in S3. Note that S3 has a flat namespace,
22+# so you need a unique bucket name for your glance images. An
23+# easy way to do this is append your AWS access key to "+glance"
24+s3_store_bucket = <20-char AWS access key>+glance
25+
26+# Do we create the bucket if it does not exist?
27+s3_store_create_bucket_on_put = False
28+
29 [pipeline:glance-api]
30 pipeline = versionnegotiation apiv1app
31
32
33=== modified file 'glance/api/v1/images.py'
34--- glance/api/v1/images.py 2011-06-29 18:10:39 +0000
35+++ glance/api/v1/images.py 2011-07-06 21:36:36 +0000
36@@ -272,6 +272,10 @@
37 try:
38 logger.debug("Uploading image data for image %(image_id)s "
39 "to %(store_name)s store" % locals())
40+ # NOTE(jaypipes): webob 1.0.3 has body_file_seekable attr
41+ # but webob 0.9.8 does not; it has the make_body_seekable() method
42+ # which newer versions have also, so we use the old method
43+ req.make_body_seekable()
44 location, size, checksum = store.add(image_meta['id'],
45 req.body_file,
46 self.options)
47@@ -367,6 +371,10 @@
48 :retval Mapping of updated image data
49 """
50 image_id = image_meta['id']
51+ # This is necessary because of a bug in Webob 1.0.2 - 1.0.7
52+ # See: https://bitbucket.org/ianb/webob/
53+ # issue/12/fix-for-issue-6-broke-chunked-transfer
54+ req.is_body_readable = True
55 location = self._upload(req, image_meta)
56 return self._activate(req, image_id, location)
57
58
59=== modified file 'glance/store/s3.py'
60--- glance/store/s3.py 2011-01-27 04:19:13 +0000
61+++ glance/store/s3.py 2011-07-06 21:36:36 +0000
62@@ -15,95 +15,321 @@
63 # License for the specific language governing permissions and limitations
64 # under the License.
65
66-"""The s3 backend adapter"""
67-
68+"""Storage backend for S3 or Storage Servers that follow the S3 Protocol"""
69+
70+import logging
71+import urlparse
72+
73+from glance.common import exception
74 import glance.store
75
76+logger = logging.getLogger('glance.store.s3')
77+
78+
79+class ChunkedFile(object):
80+
81+ """
82+ We send this back to the Glance API server as
83+ something that can iterate over a ``boto.s3.key.Key``
84+ """
85+
86+ CHUNKSIZE = 65536
87+
88+ def __init__(self, fp):
89+ self.fp = fp
90+
91+ def __iter__(self):
92+ """Return an iterator over the image file"""
93+ try:
94+ while True:
95+ chunk = self.fp.read(ChunkedFile.CHUNKSIZE)
96+ if chunk:
97+ yield chunk
98+ else:
99+ break
100+ finally:
101+ self.close()
102+
103+ def close(self):
104+ """Close the internal file pointer"""
105+ if self.fp:
106+ self.fp.close()
107+ self.fp = None
108+
109
110 class S3Backend(glance.store.Backend):
111 """An implementation of the s3 adapter."""
112
113- EXAMPLE_URL = "s3://ACCESS_KEY:SECRET_KEY@s3_url/bucket/file.gz.0"
114-
115- @classmethod
116- def get(cls, parsed_uri, expected_size, conn_class=None):
117+ EXAMPLE_URL = "s3://<ACCESS_KEY>:<SECRET_KEY>@<S3_URL>/<BUCKET>/<OBJ>"
118+
119+ @classmethod
120+ def _option_get(cls, options, param):
121+ result = options.get(param)
122+ if not result:
123+ msg = ("Could not find %s in configuration options." % param)
124+ logger.error(msg)
125+ raise glance.store.BackendException(msg)
126+ return result
127+
128+ @classmethod
129+ def get(cls, parsed_uri, expected_size=None, options=None):
130 """
131 Takes a parsed_uri in the format of:
132 s3://access_key:secret_key@s3.amazonaws.com/bucket/file.gz.0, connects
133 to s3 and downloads the file. Returns the generator resp_body provided
134 by get_object.
135 """
136-
137- if conn_class:
138- pass
139- else:
140- import boto.s3.connection
141- conn_class = boto.s3.connection.S3Connection
142-
143- (access_key, secret_key, host, bucket, obj) = \
144- cls._parse_s3_tokens(parsed_uri)
145-
146- # Close the connection when we're through.
147- with conn_class(access_key, secret_key, host=host) as s3_conn:
148- bucket = cls._get_bucket(s3_conn, bucket)
149-
150- # Close the key when we're through.
151- with cls._get_key(bucket, obj) as key:
152- if not key.size == expected_size:
153- raise glance.store.BackendException(
154- "Expected %s bytes, got %s" %
155- (expected_size, key.size))
156-
157- key.BufferSize = cls.CHUNKSIZE
158- for chunk in key:
159- yield chunk
160-
161- @classmethod
162- def delete(cls, parsed_uri, conn_class=None):
163+ from boto.s3.connection import S3Connection
164+
165+ (access_key, secret_key, s3_host, bucket, obj_name) = \
166+ parse_s3_tokens(parsed_uri)
167+
168+ # This is annoying. If I pass http://s3.amazonaws.com to Boto, it
169+ # dies. If I pass s3.amazonaws.com it works fine. :(
170+ s3_host_only = urlparse.urlparse(s3_host).netloc
171+
172+ s3_conn = S3Connection(access_key, secret_key, host=s3_host_only)
173+ bucket_obj = get_bucket(s3_conn, bucket)
174+
175+ key = get_key(bucket_obj, obj_name)
176+
177+ logger.debug("Retrieved image object from S3 using "
178+ "(s3_host=%(s3_host)s, access_key=%(access_key)s, "
179+ "bucket=%(bucket)s, key=%(obj_name)s)" % locals())
180+
181+ if expected_size and key.size != expected_size:
182+ msg = "Expected %s bytes, got %s" % (expected_size, key.size)
183+ logger.error(msg)
184+ raise glance.store.BackendException(msg)
185+
186+ key.BufferSize = cls.CHUNKSIZE
187+ return ChunkedFile(key)
188+
189+ @classmethod
190+ def add(cls, id, data, options):
191+ """
192+ Stores image data to S3 and returns a location that the image was
193+ written to.
194+
195+ S3 writes the image data using the scheme:
196+ s3://<ACCESS_KEY>:<SECRET_KEY>@<S3_URL>/<BUCKET>/<OBJ>
197+ where:
198+ <USER> = ``s3_store_user``
199+ <KEY> = ``s3_store_key``
200+ <S3_HOST> = ``s3_store_host``
201+ <BUCKET> = ``s3_store_bucket``
202+ <ID> = The id of the image being added
203+
204+ :param id: The opaque image identifier
205+ :param data: The image data to write, as a file-like object
206+ :param options: Conf mapping
207+
208+ :retval Tuple with (location, size)
209+ The location that was written,
210+ and the size in bytes of the data written
211+ """
212+ from boto.s3.connection import S3Connection
213+
214+ # TODO(jaypipes): This needs to be checked every time
215+ # because of the decision to make glance.store.Backend's
216+ # interface all @classmethods. This is inefficient. Backend
217+ # should be a stateful object with options parsed once in
218+ # a constructor.
219+ s3_host = cls._option_get(options, 's3_store_host')
220+ access_key = cls._option_get(options, 's3_store_access_key')
221+ secret_key = cls._option_get(options, 's3_store_secret_key')
222+ # NOTE(jaypipes): Need to encode to UTF-8 here because of a
223+ # bug in the HMAC library that boto uses.
224+ # See: http://bugs.python.org/issue5285
225+ # See: http://trac.edgewall.org/ticket/8083
226+ access_key = access_key.encode('utf-8')
227+ secret_key = secret_key.encode('utf-8')
228+ bucket = cls._option_get(options, 's3_store_bucket')
229+
230+ full_s3_host = s3_host
231+ if not full_s3_host.startswith('http'):
232+ full_s3_host = 'https://' + full_s3_host
233+
234+ s3_conn = S3Connection(access_key, secret_key, host=s3_host)
235+
236+ create_bucket_if_missing(bucket, s3_conn, options)
237+
238+ bucket_obj = get_bucket(s3_conn, bucket)
239+ obj_name = str(id)
240+ location = format_s3_location(access_key, secret_key, s3_host,
241+ bucket, obj_name)
242+
243+ key = bucket_obj.get_key(obj_name)
244+ if key and key.exists():
245+ raise exception.Duplicate("S3 already has an image at "
246+ "location %(location)s" % locals())
247+
248+ logger.debug("Adding image object to S3 using "
249+ "(s3_host=%(s3_host)s, access_key=%(access_key)s, "
250+ "bucket=%(bucket)s, key=%(obj_name)s)" % locals())
251+
252+ key = bucket_obj.new_key(obj_name)
253+
254+ # OK, now upload the data into the key
255+ obj_md5, _base64_digest = key.compute_md5(data)
256+ key.set_contents_from_file(data, replace=False)
257+ size = key.size
258+
259+ return (location, size, obj_md5)
260+
261+ @classmethod
262+ def delete(cls, parsed_uri, options=None):
263 """
264 Takes a parsed_uri in the format of:
265 s3://access_key:secret_key@s3.amazonaws.com/bucket/file.gz.0, connects
266 to s3 and deletes the file. Returns whatever boto.s3.key.Key.delete()
267 returns.
268 """
269-
270- if conn_class:
271- pass
272+ from boto.s3.connection import S3Connection
273+
274+ (access_key, secret_key, s3_host, bucket, obj_name) = \
275+ parse_s3_tokens(parsed_uri)
276+
277+ # This is annoying. If I pass http://s3.amazonaws.com to Boto, it
278+ # dies. If I pass s3.amazonaws.com it works fine. :(
279+ s3_host_only = urlparse.urlparse(s3_host).netloc
280+
281+ # Close the connection when we're through.
282+ s3_conn = S3Connection(access_key, secret_key, host=s3_host_only)
283+ bucket_obj = get_bucket(s3_conn, bucket)
284+
285+ # Close the key when we're through.
286+ key = get_key(bucket_obj, obj_name)
287+
288+ logger.debug("Deleting image object from S3 using "
289+ "(s3_host=%(s3_host)s, user=%(access_key)s, "
290+ "bucket=%(bucket)s, key=%(obj_name)s)" % locals())
291+
292+ return key.delete()
293+
294+
295+def get_bucket(conn, bucket_id):
296+ """
297+ Get a bucket from an s3 connection
298+
299+ :param conn: The ``boto.s3.connection.S3Connection``
300+ :param bucket_id: ID of the bucket to fetch
301+ :raises ``glance.exception.NotFound`` if bucket is not found.
302+ """
303+
304+ bucket = conn.get_bucket(bucket_id)
305+ if not bucket:
306+ msg = ("Could not find bucket with ID %(bucket_id)s") % locals()
307+ logger.error(msg)
308+ raise exception.NotFound(msg)
309+
310+ return bucket
311+
312+
313+def create_bucket_if_missing(bucket, s3_conn, options):
314+ """
315+ Creates a missing bucket in S3 if the
316+ ``s3_store_create_bucket_on_put`` option is set.
317+
318+ :param bucket: Name of bucket to create
319+ :param s3_conn: Connection to S3
320+ :param options: Option mapping
321+ """
322+ from boto.exception import S3ResponseError
323+ try:
324+ s3_conn.get_bucket(bucket)
325+ except S3ResponseError, e:
326+ if e.status == httplib.NOT_FOUND:
327+ add_bucket = config.get_option(options,
328+ 's3_store_create_bucket_on_put',
329+ type='bool', default=False)
330+ if add_bucket:
331+ try:
332+ s3_conn.create_bucket(bucket)
333+ except S3ResponseError, e:
334+ msg = ("Failed to add bucket to S3.\n"
335+ "Got error from S3: %(e)s" % locals())
336+ raise glance.store.BackendException(msg)
337+ else:
338+ msg = ("The bucket %(bucket)s does not exist in "
339+ "S3. Please set the "
340+ "s3_store_create_bucket_on_put option"
341+ "to add bucket to S3 automatically."
342+ % locals())
343+ raise glance.store.BackendException(msg)
344 else:
345- conn_class = boto.s3.connection.S3Connection
346-
347- (access_key, secret_key, host, bucket, obj) = \
348- cls._parse_s3_tokens(parsed_uri)
349-
350- # Close the connection when we're through.
351- with conn_class(access_key, secret_key, host=host) as s3_conn:
352- bucket = cls._get_bucket(s3_conn, bucket)
353-
354- # Close the key when we're through.
355- with cls._get_key(bucket, obj) as key:
356- return key.delete()
357-
358- @classmethod
359- def _get_bucket(cls, conn, bucket_id):
360- """Get a bucket from an s3 connection"""
361-
362- bucket = conn.get_bucket(bucket_id)
363- if not bucket:
364- raise glance.store.BackendException("Could not find bucket: %s" %
365- bucket_id)
366-
367- return bucket
368-
369- @classmethod
370- def _get_key(cls, bucket, obj):
371- """Get a key from a bucket"""
372-
373- key = bucket.get_key(obj)
374- if not key:
375- raise glance.store.BackendException("Could not get key: %s" % key)
376- return key
377-
378- @classmethod
379- def _parse_s3_tokens(cls, parsed_uri):
380- """Parse tokens from the parsed_uri"""
381- return glance.store.parse_uri_tokens(parsed_uri, cls.EXAMPLE_URL)
382+ raise
383+
384+
385+def get_key(bucket, obj):
386+ """
387+ Get a key from a bucket
388+
389+ :param bucket: The ``boto.s3.Bucket``
390+ :param obj: Object to get the key for
391+ :raises ``glance.exception.NotFound`` if key is not found.
392+ """
393+
394+ key = bucket.get_key(obj)
395+ if not key.exists():
396+ msg = ("Could not find key %(obj)s in bucket %(bucket)s") % locals()
397+ logger.error(msg)
398+ raise exception.NotFound(msg)
399+ return key
400+
401+
402+def format_s3_location(user, key, s3_host, bucket, obj_name):
403+ """
404+ Returns the s3 URI in the format:
405+ s3://<USER_KEY>:<SECRET_KEY>@<S3_HOST>/<BUCKET>/<OBJNAME>
406+
407+ :param user: The s3 user key to authenticate with
408+ :param key: The s3 secret key for the authenticating user
409+ :param s3_host: The base URL for the s3 service
410+ :param bucket: The name of the bucket
411+ :param obj_name: The name of the object
412+ """
413+ return "s3://%(user)s:%(key)s@%(s3_host)s/"\
414+ "%(bucket)s/%(obj_name)s" % locals()
415+
416+
417+def parse_s3_tokens(parsed_uri):
418+ """
419+ Return the various tokens used by S3.
420+
421+ Note that an Amazon AWS secret key can contain the forward slash,
422+ which is entirely retarded, and breaks urlparse miserably.
423+ This function works around that issue.
424+
425+ :param parsed_uri: The pieces of a URI returned by urlparse
426+ :retval A tuple of (access_key, secret_key, s3_host, bucket, obj_name)
427+ """
428+
429+ # TODO(jaypipes): Do parsing in the stores. Don't call urlparse in the
430+ # base get_backend_class routine...
431+ entire_path = "%s%s" % (parsed_uri.netloc, parsed_uri.path)
432+
433+ try:
434+ creds, path = entire_path.split('@')
435+ cred_parts = creds.split(':')
436+
437+ access_key = cred_parts[0]
438+ secret_key = cred_parts[1]
439+ path_parts = path.split('/')
440+ obj = path_parts.pop()
441+ bucket = path_parts.pop()
442+ except (ValueError, IndexError):
443+ raise glance.store.BackendException(
444+ "Expected four values to unpack in: s3:%s. "
445+ "Should have received something like: %s."
446+ % (parsed_uri.path, S3Backend.EXAMPLE_URL))
447+
448+ # NOTE(jaypipes): Need to encode to UTF-8 here because of a
449+ # bug in the HMAC library that boto uses.
450+ # See: http://bugs.python.org/issue5285
451+ # See: http://trac.edgewall.org/ticket/8083
452+ access_key = access_key.encode('utf-8')
453+ secret_key = secret_key.encode('utf-8')
454+ s3_host = "https://%s" % '/'.join(path_parts)
455+
456+ return access_key, secret_key, s3_host, bucket, obj
457
458=== modified file 'tests/functional/__init__.py'
459--- tests/functional/__init__.py 2011-05-27 02:24:55 +0000
460+++ tests/functional/__init__.py 2011-07-06 21:36:36 +0000
461@@ -137,6 +137,10 @@
462 "api.pid")
463 self.log_file = os.path.join(self.test_dir, "api.log")
464 self.registry_port = registry_port
465+ self.s3_store_host = "s3.amazonaws.com"
466+ self.s3_store_access_key = ""
467+ self.s3_store_secret_key = ""
468+ self.s3_store_bucket = ""
469 self.conf_base = """[DEFAULT]
470 verbose = %(verbose)s
471 debug = %(debug)s
472@@ -147,6 +151,10 @@
473 registry_host = 0.0.0.0
474 registry_port = %(registry_port)s
475 log_file = %(log_file)s
476+s3_store_host = %(s3_store_host)s
477+s3_store_access_key = %(s3_store_access_key)s
478+s3_store_secret_key = %(s3_store_secret_key)s
479+s3_store_bucket = %(s3_store_bucket)s
480
481 [pipeline:glance-api]
482 pipeline = versionnegotiation apiv1app
483
484=== added file 'tests/functional/test_s3.conf'
485--- tests/functional/test_s3.conf 1970-01-01 00:00:00 +0000
486+++ tests/functional/test_s3.conf 2011-07-06 21:36:36 +0000
487@@ -0,0 +1,21 @@
488+[DEFAULT]
489+
490+# Set the following to Amazon S3 credentials that you want
491+# to use while functional testing the S3 backend.
492+
493+# Address where the S3 authentication service lives
494+s3_store_host = s3.amazonaws.com
495+
496+# User to authenticate against the S3 authentication service
497+s3_store_access_key = <20-char AWS access key>
498+
499+# Auth key for the user authenticating against the
500+# S3 authentication service
501+s3_store_secret_key = <40-char AWS secret key>
502+
503+# Container within the account that the account should use
504+# for storing images in S3. Note that S3 has a flat namespace,
505+# so you need a unique bucket name for your glance images. An
506+# easy way to do this is append your lower-cased AWS access key
507+# to "glance"
508+s3_store_bucket = <20-char AWS access key - lowercased>glance
509
510=== added file 'tests/functional/test_s3.py'
511--- tests/functional/test_s3.py 1970-01-01 00:00:00 +0000
512+++ tests/functional/test_s3.py 2011-07-06 21:36:36 +0000
513@@ -0,0 +1,482 @@
514+# vim: tabstop=4 shiftwidth=4 softtabstop=4
515+
516+# Copyright 2011 OpenStack, LLC
517+# All Rights Reserved.
518+#
519+# Licensed under the Apache License, Version 2.0 (the "License"); you may
520+# not use this file except in compliance with the License. You may obtain
521+# a copy of the License at
522+#
523+# http://www.apache.org/licenses/LICENSE-2.0
524+#
525+# Unless required by applicable law or agreed to in writing, software
526+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
527+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
528+# License for the specific language governing permissions and limitations
529+# under the License.
530+
531+"""
532+Tests a Glance API server which uses an S3 backend by default
533+
534+This test requires that a real S3 account is available. It looks
535+in a file /tests/functional/test_s3.conf for the credentials to
536+use.
537+
538+Note that this test clears the entire bucket from the S3 account
539+for use by the test case, so make sure you supply credentials for
540+test accounts only.
541+
542+If a connection cannot be established, all the test cases are
543+skipped.
544+"""
545+
546+import ConfigParser
547+import json
548+import os
549+import tempfile
550+import unittest
551+
552+from tests import functional
553+from tests.utils import execute
554+
555+FIVE_KB = 5 * 1024
556+FIVE_GB = 5 * 1024 * 1024 * 1024
557+
558+
559+class TestS3(functional.FunctionalTest):
560+
561+ """Functional tests for the S3 backend"""
562+
563+ # Test machines can set the GLANCE_TEST_MIGRATIONS_CONF variable
564+ # to override the location of the config file for migration testing
565+ CONFIG_FILE_PATH = os.environ.get('GLANCE_TEST_S3_CONF',
566+ os.path.join('tests', 'functional',
567+ 'test_s3.conf'))
568+
569+ def setUp(self):
570+ """
571+ Test a connection to an S3 store using the credentials
572+ found in the environs or /tests/functional/test_s3.conf, if found.
573+ If the connection fails, mark all tests to skip.
574+ """
575+ self.inited = False
576+ self.skip_tests = True
577+
578+ if self.inited:
579+ return
580+
581+ if os.path.exists(TestS3.CONFIG_FILE_PATH):
582+ cp = ConfigParser.RawConfigParser()
583+ try:
584+ cp.read(TestS3.CONFIG_FILE_PATH)
585+ defaults = cp.defaults()
586+ for key, value in defaults.items():
587+ self.__dict__[key] = value
588+ except ConfigParser.ParsingError, e:
589+ print ("Failed to read test_s3.conf config file. "
590+ "Got error: %s" % e)
591+ super(TestS3, self).setUp()
592+ self.inited = True
593+ return
594+
595+ from boto.s3.connection import S3Connection
596+ from boto.exception import S3ResponseError
597+
598+ try:
599+ s3_host = self.s3_store_host
600+ access_key = self.s3_store_access_key
601+ secret_key = self.s3_store_secret_key
602+ bucket_name = self.s3_store_bucket
603+ except AttributeError, e:
604+ print ("Failed to find required configuration options for "
605+ "S3 store. Got error: %s" % e)
606+ self.inited = True
607+ super(TestS3, self).setUp()
608+ return
609+
610+ s3_conn = S3Connection(access_key, secret_key, host=s3_host)
611+
612+ self.bucket = None
613+ try:
614+ buckets = s3_conn.get_all_buckets()
615+ for bucket in buckets:
616+ if bucket.name == bucket_name:
617+ self.bucket = bucket
618+ except S3ResponseError, e:
619+ print ("Failed to connect to S3 with credentials,"
620+ "to find bucket. Got error: %s" % e)
621+ self.inited = True
622+ super(TestS3, self).setUp()
623+ return
624+
625+ self.s3_conn = s3_conn
626+
627+ if not self.bucket:
628+ try:
629+ self.bucket = s3_conn.create_bucket(bucket_name)
630+ except boto.exception.S3ResponseError, e:
631+ print ("Failed to create bucket. Got error: %s" % e)
632+ self.inited = True
633+ super(TestS3, self).setUp()
634+ return
635+ else:
636+ self.clear_bucket()
637+
638+ self.skip_tests = False
639+ self.inited = True
640+ self.default_store = 's3'
641+
642+ super(TestS3, self).setUp()
643+
644+ def tearDown(self):
645+ if not self.skip_tests:
646+ self.clear_bucket()
647+ super(TestS3, self).tearDown()
648+
649+ def clear_bucket(self):
650+ # It's not possible to simply clear a bucket. You
651+ # need to loop over all the keys and delete them
652+ # all first...
653+ keys = self.bucket.list()
654+ for key in keys:
655+ key.delete()
656+
657+ def test_add_list_delete_list(self):
658+ """
659+ We test the following:
660+
661+ 0. GET /images
662+ - Verify no public images
663+ 1. GET /images/detail
664+ - Verify no public images
665+ 2. HEAD /images/1
666+ - Verify 404 returned
667+ 3. POST /images with public image named Image1 with a location
668+ attribute and no custom properties
669+ - Verify 201 returned
670+ 4. HEAD /images/1
671+ - Verify HTTP headers have correct information we just added
672+ 5. GET /images/1
673+ - Verify all information on image we just added is correct
674+ 6. GET /images
675+ - Verify the image we just added is returned
676+ 7. GET /images/detail
677+ - Verify the image we just added is returned
678+ 8. PUT /images/1 with custom properties of "distro" and "arch"
679+ - Verify 200 returned
680+ 9. GET /images/1
681+ - Verify updated information about image was stored
682+ 10. PUT /images/1
683+ - Remove a previously existing property.
684+ 11. PUT /images/1
685+ - Add a previously deleted property.
686+ """
687+ if self.skip_tests:
688+ return True
689+
690+ self.cleanup()
691+ self.start_servers(**self.__dict__.copy())
692+
693+ api_port = self.api_port
694+ registry_port = self.registry_port
695+
696+ # 0. GET /images
697+ # Verify no public images
698+ cmd = "curl http://0.0.0.0:%d/v1/images" % api_port
699+
700+ exitcode, out, err = execute(cmd)
701+
702+ self.assertEqual(0, exitcode)
703+ self.assertEqual('{"images": []}', out.strip())
704+
705+ # 1. GET /images/detail
706+ # Verify no public images
707+ cmd = "curl http://0.0.0.0:%d/v1/images/detail" % api_port
708+
709+ exitcode, out, err = execute(cmd)
710+
711+ self.assertEqual(0, exitcode)
712+ self.assertEqual('{"images": []}', out.strip())
713+
714+ # 2. HEAD /images/1
715+ # Verify 404 returned
716+ cmd = "curl -i -X HEAD http://0.0.0.0:%d/v1/images/1" % api_port
717+
718+ exitcode, out, err = execute(cmd)
719+
720+ self.assertEqual(0, exitcode)
721+
722+ lines = out.split("\r\n")
723+ status_line = lines[0]
724+
725+ self.assertEqual("HTTP/1.1 404 Not Found", status_line)
726+
727+ # 3. POST /images with public image named Image1
728+ # attribute and no custom properties. Verify a 200 OK is returned
729+ image_data = "*" * FIVE_KB
730+
731+ cmd = ("curl -i -X POST "
732+ "-H 'Expect: ' " # Necessary otherwise sends 100 Continue
733+ "-H 'Content-Type: application/octet-stream' "
734+ "-H 'X-Image-Meta-Name: Image1' "
735+ "-H 'X-Image-Meta-Is-Public: True' "
736+ "--data-binary \"%s\" "
737+ "http://0.0.0.0:%d/v1/images") % (image_data, api_port)
738+
739+ exitcode, out, err = execute(cmd)
740+ self.assertEqual(0, exitcode)
741+
742+ lines = out.split("\r\n")
743+ status_line = lines[0]
744+
745+ self.assertEqual("HTTP/1.1 201 Created", status_line)
746+
747+ # 4. HEAD /images
748+ # Verify image found now
749+ cmd = "curl -i -X HEAD http://0.0.0.0:%d/v1/images/1" % api_port
750+
751+ exitcode, out, err = execute(cmd)
752+
753+ self.assertEqual(0, exitcode)
754+
755+ lines = out.split("\r\n")
756+ status_line = lines[0]
757+
758+ self.assertEqual("HTTP/1.1 200 OK", status_line)
759+ self.assertTrue("X-Image-Meta-Name: Image1" in out)
760+
761+ # 5. GET /images/1
762+ # Verify all information on image we just added is correct
763+
764+ cmd = "curl -i http://0.0.0.0:%d/v1/images/1" % api_port
765+
766+ exitcode, out, err = execute(cmd)
767+
768+ self.assertEqual(0, exitcode)
769+
770+ lines = out.split("\r\n")
771+
772+ self.assertEqual("HTTP/1.1 200 OK", lines.pop(0))
773+
774+ # Handle the headers
775+ image_headers = {}
776+ std_headers = {}
777+ other_lines = []
778+ for line in lines:
779+ if line.strip() == '':
780+ continue
781+ if line.startswith("X-Image"):
782+ pieces = line.split(":")
783+ key = pieces[0].strip()
784+ value = ":".join(pieces[1:]).strip()
785+ image_headers[key] = value
786+ elif ':' in line:
787+ pieces = line.split(":")
788+ key = pieces[0].strip()
789+ value = ":".join(pieces[1:]).strip()
790+ std_headers[key] = value
791+ else:
792+ other_lines.append(line)
793+
794+ expected_image_headers = {
795+ 'X-Image-Meta-Id': '1',
796+ 'X-Image-Meta-Name': 'Image1',
797+ 'X-Image-Meta-Is_public': 'True',
798+ 'X-Image-Meta-Status': 'active',
799+ 'X-Image-Meta-Disk_format': '',
800+ 'X-Image-Meta-Container_format': '',
801+ 'X-Image-Meta-Size': str(FIVE_KB),
802+ 'X-Image-Meta-Location': 's3://%s:%s@%s/%s/1' % (
803+ self.s3_store_access_key,
804+ self.s3_store_secret_key,
805+ self.s3_store_host,
806+ self.s3_store_bucket)}
807+
808+ expected_std_headers = {
809+ 'Content-Length': str(FIVE_KB),
810+ 'Content-Type': 'application/octet-stream'}
811+
812+ for expected_key, expected_value in expected_image_headers.items():
813+ self.assertTrue(expected_key in image_headers,
814+ "Failed to find key %s in image_headers"
815+ % expected_key)
816+ self.assertEqual(expected_value, image_headers[expected_key],
817+ "For key '%s' expected header value '%s'. Got '%s'"
818+ % (expected_key,
819+ expected_value,
820+ image_headers[expected_key]))
821+
822+ for expected_key, expected_value in expected_std_headers.items():
823+ self.assertTrue(expected_key in std_headers,
824+ "Failed to find key %s in std_headers"
825+ % expected_key)
826+ self.assertEqual(expected_value, std_headers[expected_key],
827+ "For key '%s' expected header value '%s'. Got '%s'"
828+ % (expected_key,
829+ expected_value,
830+ std_headers[expected_key]))
831+
832+ # Now the image data...
833+ expected_image_data = "*" * FIVE_KB
834+
835+ # Should only be a single "line" left, and
836+ # that's the image data
837+ self.assertEqual(1, len(other_lines))
838+ self.assertEqual(expected_image_data, other_lines[0])
839+
840+ # 6. GET /images
841+ # Verify no public images
842+ cmd = "curl http://0.0.0.0:%d/v1/images" % api_port
843+
844+ exitcode, out, err = execute(cmd)
845+
846+ self.assertEqual(0, exitcode)
847+
848+ expected_result = {"images": [
849+ {"container_format": None,
850+ "disk_format": None,
851+ "id": 1,
852+ "name": "Image1",
853+ "checksum": "c2e5db72bd7fd153f53ede5da5a06de3",
854+ "size": 5120}]}
855+ self.assertEqual(expected_result, json.loads(out.strip()))
856+
857+ # 7. GET /images/detail
858+ # Verify image and all its metadata
859+ cmd = "curl http://0.0.0.0:%d/v1/images/detail" % api_port
860+
861+ exitcode, out, err = execute(cmd)
862+
863+ self.assertEqual(0, exitcode)
864+
865+ expected_image = {
866+ "status": "active",
867+ "name": "Image1",
868+ "deleted": False,
869+ "container_format": None,
870+ "disk_format": None,
871+ "id": 1,
872+ 'location': 's3://%s:%s@%s/%s/1' % (
873+ self.s3_store_access_key,
874+ self.s3_store_secret_key,
875+ self.s3_store_host,
876+ self.s3_store_bucket),
877+ "is_public": True,
878+ "deleted_at": None,
879+ "properties": {},
880+ "size": 5120}
881+
882+ image = json.loads(out.strip())['images'][0]
883+
884+ for expected_key, expected_value in expected_image.items():
885+ self.assertTrue(expected_key in image,
886+ "Failed to find key %s in image"
887+ % expected_key)
888+ self.assertEqual(expected_value, expected_image[expected_key],
889+ "For key '%s' expected header value '%s'. Got '%s'"
890+ % (expected_key,
891+ expected_value,
892+ image[expected_key]))
893+
894+ # 8. PUT /images/1 with custom properties of "distro" and "arch"
895+ # Verify 200 returned
896+
897+ cmd = ("curl -i -X PUT "
898+ "-H 'X-Image-Meta-Property-Distro: Ubuntu' "
899+ "-H 'X-Image-Meta-Property-Arch: x86_64' "
900+ "http://0.0.0.0:%d/v1/images/1") % api_port
901+
902+ exitcode, out, err = execute(cmd)
903+ self.assertEqual(0, exitcode)
904+
905+ lines = out.split("\r\n")
906+ status_line = lines[0]
907+
908+ self.assertEqual("HTTP/1.1 200 OK", status_line)
909+
910+ # 9. GET /images/detail
911+ # Verify image and all its metadata
912+ cmd = "curl http://0.0.0.0:%d/v1/images/detail" % api_port
913+
914+ exitcode, out, err = execute(cmd)
915+
916+ self.assertEqual(0, exitcode)
917+
918+ expected_image = {
919+ "status": "active",
920+ "name": "Image1",
921+ "deleted": False,
922+ "container_format": None,
923+ "disk_format": None,
924+ "id": 1,
925+ 'location': 's3://%s:%s@%s/%s/1' % (
926+ self.s3_store_access_key,
927+ self.s3_store_secret_key,
928+ self.s3_store_host,
929+ self.s3_store_bucket),
930+ "is_public": True,
931+ "deleted_at": None,
932+ "properties": {'distro': 'Ubuntu', 'arch': 'x86_64'},
933+ "size": 5120}
934+
935+ image = json.loads(out.strip())['images'][0]
936+
937+ for expected_key, expected_value in expected_image.items():
938+ self.assertTrue(expected_key in image,
939+ "Failed to find key %s in image"
940+ % expected_key)
941+ self.assertEqual(expected_value, image[expected_key],
942+                             "For key '%s' expected value '%s'. Got '%s'"
943+ % (expected_key,
944+ expected_value,
945+ image[expected_key]))
946+
947+ # 10. PUT /images/1 and remove a previously existing property.
948+ cmd = ("curl -i -X PUT "
949+ "-H 'X-Image-Meta-Property-Arch: x86_64' "
950+ "http://0.0.0.0:%d/v1/images/1") % api_port
951+
952+ exitcode, out, err = execute(cmd)
953+ self.assertEqual(0, exitcode)
954+
955+ lines = out.split("\r\n")
956+ status_line = lines[0]
957+
958+ self.assertEqual("HTTP/1.1 200 OK", status_line)
959+
960+ cmd = "curl http://0.0.0.0:%d/v1/images/detail" % api_port
961+
962+ exitcode, out, err = execute(cmd)
963+
964+ self.assertEqual(0, exitcode)
965+
966+ image = json.loads(out.strip())['images'][0]
967+ self.assertEqual(1, len(image['properties']))
968+ self.assertEqual('x86_64', image['properties']['arch'])
969+
970+ # 11. PUT /images/1 and add a previously deleted property.
971+ cmd = ("curl -i -X PUT "
972+ "-H 'X-Image-Meta-Property-Distro: Ubuntu' "
973+ "-H 'X-Image-Meta-Property-Arch: x86_64' "
974+ "http://0.0.0.0:%d/v1/images/1") % api_port
975+
976+ exitcode, out, err = execute(cmd)
977+ self.assertEqual(0, exitcode)
978+
979+ lines = out.split("\r\n")
980+ status_line = lines[0]
981+
982+ self.assertEqual("HTTP/1.1 200 OK", status_line)
983+
984+ cmd = "curl http://0.0.0.0:%d/v1/images/detail" % api_port
985+
986+ exitcode, out, err = execute(cmd)
987+
988+ self.assertEqual(0, exitcode)
989+
990+ image = json.loads(out.strip())['images'][0]
991+ self.assertEqual(2, len(image['properties']))
992+ self.assertEqual('x86_64', image['properties']['arch'])
993+ self.assertEqual('Ubuntu', image['properties']['distro'])
994+
995+ self.stop_servers()
996
997=== added file 'tests/unit/test_s3_store.py'
998--- tests/unit/test_s3_store.py 1970-01-01 00:00:00 +0000
999+++ tests/unit/test_s3_store.py 2011-07-06 21:36:36 +0000
1000@@ -0,0 +1,315 @@
1001+# vim: tabstop=4 shiftwidth=4 softtabstop=4
1002+
1003+# Copyright 2011 OpenStack, LLC
1004+# All Rights Reserved.
1005+#
1006+# Licensed under the Apache License, Version 2.0 (the "License"); you may
1007+# not use this file except in compliance with the License. You may obtain
1008+# a copy of the License at
1009+#
1010+# http://www.apache.org/licenses/LICENSE-2.0
1011+#
1012+# Unless required by applicable law or agreed to in writing, software
1013+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
1014+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1015+# License for the specific language governing permissions and limitations
1016+# under the License.
1017+
1018+"""Tests the S3 backend store"""
1019+
1020+import StringIO
1021+import hashlib
1022+import httplib
1023+import sys
1024+import unittest
1025+import urlparse
1026+
1027+import stubout
1028+import boto.s3.connection
1029+
1030+from glance.common import exception
1031+from glance.store import BackendException
1032+from glance.store.s3 import (S3Backend,
1033+ format_s3_location,
1034+ parse_s3_tokens)
1035+
1036+FIVE_KB = (5 * 1024)
1037+S3_OPTIONS = {'verbose': True,
1038+ 'debug': True,
1039+ 's3_store_access_key': 'user',
1040+ 's3_store_secret_key': 'key',
1041+ 's3_store_host': 'localhost:8080',
1042+ 's3_store_bucket': 'glance'}
1043+
1044+
1045+# We stub out as little as possible to ensure that the code paths
1046+# between glance.store.s3 and boto.s3.connection are tested
1047+# thoroughly
1048+def stub_out_s3(stubs):
1049+
1050+ class FakeKey:
1051+ """
1052+ Acts like a ``boto.s3.key.Key``
1053+ """
1054+ def __init__(self, bucket, name):
1055+ self.bucket = bucket
1056+ self.name = name
1057+ self.data = None
1058+ self.size = 0
1059+ self.BufferSize = 1024
1060+
1061+ def close(self):
1062+ pass
1063+
1064+ def exists(self):
1065+ return self.bucket.exists(self.name)
1066+
1067+ def delete(self):
1068+ self.bucket.delete(self.name)
1069+
1070+ def compute_md5(self, data):
1071+ chunk = data.read(self.BufferSize)
1072+ checksum = hashlib.md5()
1073+ while chunk:
1074+ checksum.update(chunk)
1075+ chunk = data.read(self.BufferSize)
1076+ return checksum.hexdigest(), None
1077+
1078+ def set_contents_from_file(self, fp, replace=False, **kwargs):
1079+ self.data = StringIO.StringIO()
1080+ self.data.write(fp.getvalue())
1081+ self.size = self.data.len
1082+ # Reset the buffer to start
1083+ self.data.seek(0)
1084+ self.read = self.data.read
1085+
1086+ def get_file(self):
1087+ return self.data
1088+
1089+ class FakeBucket:
1090+ """
1091+ Acts like a ``boto.s3.bucket.Bucket``
1092+ """
1093+ def __init__(self, name, keys=None):
1094+ self.name = name
1095+ self.keys = keys or {}
1096+
1097+ def __str__(self):
1098+ return self.name
1099+
1100+ def exists(self, key):
1101+ return key in self.keys
1102+
1103+ def delete(self, key):
1104+ del self.keys[key]
1105+
1106+ def get_key(self, key_name, **kwargs):
1107+ key = self.keys.get(key_name)
1108+ if not key:
1109+ return FakeKey(self, key_name)
1110+ return key
1111+
1112+ def new_key(self, key_name):
1113+ new_key = FakeKey(self, key_name)
1114+ self.keys[key_name] = new_key
1115+ return new_key
1116+
1117+ fixture_buckets = {'glance': FakeBucket('glance')}
1118+ b = fixture_buckets['glance']
1119+ k = b.new_key('2')
1120+ k.set_contents_from_file(StringIO.StringIO("*" * FIVE_KB))
1121+
1122+ def fake_connection_constructor(self, *args, **kwargs):
1123+ pass
1124+
1125+ def fake_get_bucket(conn, bucket_id):
1126+ bucket = fixture_buckets.get(bucket_id)
1127+ if not bucket:
1128+ bucket = FakeBucket(bucket_id)
1129+ return bucket
1130+
1131+ stubs.Set(boto.s3.connection.S3Connection,
1132+ '__init__', fake_connection_constructor)
1133+ stubs.Set(boto.s3.connection.S3Connection,
1134+ 'get_bucket', fake_get_bucket)
1135+
1136+
1137+class TestS3Backend(unittest.TestCase):
1138+
1139+ def setUp(self):
1140+ """Establish a clean test environment"""
1141+ self.stubs = stubout.StubOutForTesting()
1142+ stub_out_s3(self.stubs)
1143+
1144+ def tearDown(self):
1145+ """Clear the test environment"""
1146+ self.stubs.UnsetAll()
1147+
1148+ def test_parse_s3_tokens(self):
1149+ """
1150+ Test that the parse_s3_tokens function returns
1151+ user, key, authurl, bucket, and objname properly
1152+ """
1153+ uri = "s3://user:key@localhost/v1.0/bucket/objname"
1154+ url_pieces = urlparse.urlparse(uri)
1155+ user, key, authurl, bucket, objname =\
1156+ parse_s3_tokens(url_pieces)
1157+ self.assertEqual("user", user)
1158+ self.assertEqual("key", key)
1159+ self.assertEqual("https://localhost/v1.0", authurl)
1160+ self.assertEqual("bucket", bucket)
1161+ self.assertEqual("objname", objname)
1162+
1163+ uri = "s3://user:key@localhost:9090/v1.0/bucket/objname"
1164+ url_pieces = urlparse.urlparse(uri)
1165+ user, key, authurl, bucket, objname =\
1166+ parse_s3_tokens(url_pieces)
1167+ self.assertEqual("user", user)
1168+ self.assertEqual("key", key)
1169+ self.assertEqual("https://localhost:9090/v1.0", authurl)
1170+ self.assertEqual("bucket", bucket)
1171+ self.assertEqual("objname", objname)
1172+
1173+ uri = "s3://user:key/part@s3.amazonaws.com/bucket/objname"
1174+ url_pieces = urlparse.urlparse(uri)
1175+ user, key, authurl, bucket, objname =\
1176+ parse_s3_tokens(url_pieces)
1177+ self.assertEqual("user", user)
1178+ self.assertEqual("key/part", key)
1179+ self.assertEqual("https://s3.amazonaws.com", authurl)
1180+ self.assertEqual("bucket", bucket)
1181+ self.assertEqual("objname", objname)
1182+
1183+ def test_get(self):
1184+ """Test a "normal" retrieval of an image in chunks"""
1185+ url_pieces = urlparse.urlparse(
1186+ "s3://user:key@auth_address/glance/2")
1187+ image_s3 = S3Backend.get(url_pieces)
1188+
1189+ expected_data = "*" * FIVE_KB
1190+ data = ""
1191+
1192+ for chunk in image_s3:
1193+ data += chunk
1194+ self.assertEqual(expected_data, data)
1195+
1196+ def test_get_mismatched_expected_size(self):
1197+ """
1198+        Test that retrieval of an image with a wrong expected_size param
1199+ raises an exception
1200+ """
1201+ url_pieces = urlparse.urlparse(
1202+ "s3://user:key@auth_address/glance/2")
1203+ self.assertRaises(BackendException,
1204+ S3Backend.get,
1205+ url_pieces,
1206+ {'expected_size': 42})
1207+
1208+ def test_get_non_existing(self):
1209+ """
1210+        Test that trying to retrieve an s3 object that doesn't exist
1211+ raises an error
1212+ """
1213+ url_pieces = urlparse.urlparse(
1214+ "s3://user:key@auth_address/badbucket")
1215+ self.assertRaises(exception.NotFound,
1216+ S3Backend.get,
1217+ url_pieces)
1218+
1219+ url_pieces = urlparse.urlparse(
1220+ "s3://user:key@auth_address/glance/noexist")
1221+ self.assertRaises(exception.NotFound,
1222+ S3Backend.get,
1223+ url_pieces)
1224+
1225+ def test_add(self):
1226+ """Test that we can add an image via the s3 backend"""
1227+ expected_image_id = 42
1228+ expected_s3_size = FIVE_KB
1229+ expected_s3_contents = "*" * expected_s3_size
1230+ expected_checksum = hashlib.md5(expected_s3_contents).hexdigest()
1231+ expected_location = format_s3_location(
1232+ S3_OPTIONS['s3_store_access_key'],
1233+ S3_OPTIONS['s3_store_secret_key'],
1234+ S3_OPTIONS['s3_store_host'],
1235+ S3_OPTIONS['s3_store_bucket'],
1236+ expected_image_id)
1237+ image_s3 = StringIO.StringIO(expected_s3_contents)
1238+
1239+ location, size, checksum = S3Backend.add(42, image_s3, S3_OPTIONS)
1240+
1241+        self.assertEqual(expected_location, location)
1242+        self.assertEqual(expected_s3_size, size)
1243+        self.assertEqual(expected_checksum, checksum)
1244+
1245+ url_pieces = urlparse.urlparse(expected_location)
1246+ new_image_s3 = S3Backend.get(url_pieces)
1247+ new_image_contents = StringIO.StringIO()
1248+ for chunk in new_image_s3:
1249+ new_image_contents.write(chunk)
1250+ new_image_s3_size = new_image_contents.len
1251+
1252+        self.assertEqual(expected_s3_contents, new_image_contents.getvalue())
1253+        self.assertEqual(expected_s3_size, new_image_s3_size)
1254+
1255+ def test_add_already_existing(self):
1256+ """
1257+ Tests that adding an image with an existing identifier
1258+ raises an appropriate exception
1259+ """
1260+ image_s3 = StringIO.StringIO("nevergonnamakeit")
1261+ self.assertRaises(exception.Duplicate,
1262+ S3Backend.add,
1263+ 2, image_s3, S3_OPTIONS)
1264+
1265+ def _assertOptionRequiredForS3(self, key):
1266+ image_s3 = StringIO.StringIO("nevergonnamakeit")
1267+ options = S3_OPTIONS.copy()
1268+ del options[key]
1269+ self.assertRaises(BackendException, S3Backend.add,
1270+ 2, image_s3, options)
1271+
1272+ def test_add_no_user(self):
1273+ """
1274+        Tests that trying to add an image without an access key
1275+        raises an appropriate exception
1276+ """
1277+ self._assertOptionRequiredForS3('s3_store_access_key')
1278+
1279+    def test_add_no_key(self):
1280+        """
1281+        Tests that trying to add an image without a secret key
1282+        raises an appropriate exception
1283+ """
1284+ self._assertOptionRequiredForS3('s3_store_secret_key')
1285+
1286+ def test_add_no_host(self):
1287+ """
1288+        Tests that trying to add an image without an s3 host
1289+        raises an appropriate exception
1290+ """
1291+ self._assertOptionRequiredForS3('s3_store_host')
1292+
1293+ def test_delete(self):
1294+ """
1295+        Test that we can delete an existing image from the s3 store
1296+ """
1297+ url_pieces = urlparse.urlparse(
1298+ "s3://user:key@auth_address/glance/2")
1299+
1300+ S3Backend.delete(url_pieces)
1301+
1302+ self.assertRaises(exception.NotFound,
1303+ S3Backend.get,
1304+ url_pieces)
1305+
1306+ def test_delete_non_existing(self):
1307+ """
1308+        Test that trying to delete an s3 object that doesn't exist
1309+ raises an error
1310+ """
1311+ url_pieces = urlparse.urlparse(
1312+ "s3://user:key@auth_address/noexist")
1313+ self.assertRaises(exception.NotFound,
1314+ S3Backend.delete,
1315+ url_pieces)
1316
1317=== modified file 'tests/unit/test_stores.py'
1318--- tests/unit/test_stores.py 2011-03-05 00:02:26 +0000
1319+++ tests/unit/test_stores.py 2011-07-06 21:36:36 +0000
1320@@ -64,21 +64,3 @@
1321
1322 chunks = [c for c in fetcher]
1323 self.assertEqual(chunks, expected_returns)
1324-
1325-
1326-class TestS3Backend(TestBackend):
1327- def setUp(self):
1328- super(TestS3Backend, self).setUp()
1329- stubs.stub_out_s3_backend(self.stubs)
1330-
1331- def test_get(self):
1332- s3_uri = "s3://user:password@localhost/bucket1/file.tar.gz"
1333-
1334- expected_returns = ['I ', 'am', ' a', ' t', 'ea', 'po', 't,', ' s',
1335- 'ho', 'rt', ' a', 'nd', ' s', 'to', 'ut', '\n']
1336- fetcher = get_from_backend(s3_uri,
1337- expected_size=8,
1338- conn_class=S3Backend)
1339-
1340- chunks = [c for c in fetcher]
1341- self.assertEqual(chunks, expected_returns)
1342
1343=== modified file 'tools/pip-requires'
1344--- tools/pip-requires 2011-06-22 14:19:33 +0000
1345+++ tools/pip-requires 2011-07-06 21:36:36 +0000
1346@@ -12,6 +12,7 @@
1347 sphinx
1348 argparse
1349 mox==0.5.0
1350+boto
1351 swift
1352 -f http://pymox.googlecode.com/files/mox-0.5.0.tar.gz
1353 sqlalchemy-migrate>=0.6,<0.7
