Merge lp:~termie/nova/eventlet_objectstore into lp:~hudson-openstack/nova/trunk
Status: Merged
Approved by: termie
Approved revision: 889
Merged at revision: 884
Proposed branch: lp:~termie/nova/eventlet_objectstore
Merge into: lp:~hudson-openstack/nova/trunk
Prerequisite: lp:~vishvananda/nova/kill-objectstore
Diff against target: 1998 lines (+426/-1332), 11 files modified
  bin/nova-objectstore (+9/-6)
  nova/objectstore/bucket.py (+0/-181)
  nova/objectstore/handler.py (+0/-478)
  nova/objectstore/image.py (+0/-296)
  nova/objectstore/s3server.py (+335/-0)
  nova/objectstore/stored.py (+0/-63)
  nova/test.py (+31/-5)
  nova/tests/integrated/integrated_helpers.py (+0/-42)
  nova/tests/integrated/test_login.py (+3/-3)
  nova/tests/test_cloud.py (+4/-47)
  nova/tests/test_objectstore.py (+44/-211)
To merge this branch: bzr merge lp:~termie/nova/eventlet_objectstore
Related bugs:
Reviewers:
  Jay Pipes (community): Approve
  justinsb (community): Approve
  Vish Ishaya (community): Approve
  Soren Hansen: Pending
Review via email: mp+52468@code.launchpad.net
This proposal supersedes a proposal from 2011-03-04.
Commit message
Description of the change
Ports the Tornado version of an S3 server to eventlet and WSGI, the first step in deprecating the twistd-based objectstore.
This is a trivial implementation, never meant for production; it exists to provide an S3 look-alike objectstore for use when developing/testing things related to the Amazon APIs (euca2ools, etc.). Any production deployment would be expected to use Swift + an S3 interface.
In later patches I expect to be able to remove the old objectstore code entirely.
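As a hedged illustration of what the port to WSGI means here (this is not the actual Nova code; the handler below is invented for the example), a WSGI application is just a callable that eventlet can then serve:

```python
# Minimal sketch of the WSGI contract the ported s3server speaks
# (illustrative only, not the actual Nova s3server code).

def app(environ, start_response):
    # Each request becomes a plain function call; the return value is
    # an iterable of body chunks.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']

# Under eventlet, this could be served with something like:
#   import eventlet
#   from eventlet import wsgi
#   wsgi.server(eventlet.listen(('127.0.0.1', 8080)), app)
```

This request-in, response-out shape is the "functional" style discussed later in the thread, in contrast to Tornado's stateful handler objects.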
Vish Ishaya (vishvananda) wrote (posted in a previous version of this proposal):
Soren Hansen (soren) wrote (posted in a previous version of this proposal):
In nova/test.py, you import eventlet.greenpool, yet you don't use it. Is that a mistake or does importing the greenpool module make eventlet do something differently behind the scenes? If so, it would probably be good to add a comment to that effect.
I wonder what the rationale behind not removing the Twisted-based handler is?
You're removing a bunch of tests for the objectstore (the ones that do not directly pertain to the S3 API). Why is that?
termie (termie) wrote:
Soren:
re: greenpool, just a leftover, good catch, removed it.
re: not removing the twisted code yet, just to keep the patch more concise. I plan on removing the twisted code once this goes in; all the code in that patch will just be related to removing the twisted code, whereas this is code for adding the alternate path. Seem reasonable?
re: removing tests, those tests were testing specific implementation aspects of the twisted code (buckets, objects, etc.) that are no longer valuable. This implementation is meant only as a trivial S3 interface, not as our general-purpose objectstore, so only the S3 interface exists and only it needs to be tested.
Jay Pipes (jaypipes) wrote:
this branch is obsolete with Vish's kill-objectstore branch, right?
Soren Hansen (soren) wrote:
2011/3/7 termie <email address hidden>:
> Soren:
> re: greenpool, just a leftover, good catch, removed it.
>
> re: not removing the twisted code yet, just to keep the patch more concise, I plan on removing the twisted code once this goes in, all the code in that patch will just be related to removing the twisted code, whereas this is code for adding the alternate path. Seem reasonable?
Almost :)
> re: removing tests, those tests were testing specific implementation aspects of the twisted code (buckets, objects, etc) that are no longer valuable, this implementation is meant only as a trivial s3 interface not as our general purpose objectstore so only the s3 interface exists and only it needs to be tested.
If you're not going to remove the twisted code, I don't think you
should remove the corresponding tests.
Perhaps it's because I don't fully understand how much code you intend
to remove afterwards, but when I look at the ObjectStoreTestCase
you're removing, it doesn't look related to the Twisted implementation
at all. Rather, it looks related to the generic backend stuff that's
in nova.objectstore.
--
Soren Hansen | http://
Ubuntu Developer | http://
OpenStack Developer | http://
Vish Ishaya (vishvananda) wrote:
Soren: My patch moves the image handling out of objectstore and puts it into the ec2_api/glance, so that code isn't really needed any more. The eventlet S3 server is just so people have something they can run that is smaller than Swift with S3 bindings, so that euca-upload-bundle still works. The plan is for all of the other objectstore code to come out.
Vish
On Mar 9, 2011, at 12:19 PM, Soren Hansen wrote:
> 2011/3/7 termie <email address hidden>:
>> Soren:
>> re: greenpool, just a leftover, good catch, removed it.
>>
> re: not removing the twisted code yet, just to keep the patch more concise, I plan on removing the twisted code once this goes in, all the code in that patch will just be related to removing the twisted code, whereas this is code for adding the alternate path. Seem reasonable?
>
> Almost :)
>
>> re: removing tests, those tests were testing specific implementation aspects of the twisted code (buckets, objects, etc) that are no longer valuable, this implementation is meant only as a trivial s3 interface not as our general purpose objectstore so only the s3 interface exists and only it needs to be tested.
>
> If you're not going to remove the twisted code, I don't think you
> should remove the corresponding tests.
>
> Perhaps it's because I don't fully understand how much code you intend
> to remove afterwards, but when I look at the ObjectStoreTestCase
> you're removing, it doesn't look related to the Twisted implementation
> at all. Rather, it looks related to the generic backend stuff that's
> in nova.objectstore.
>
> --
> Soren Hansen | http://
> Ubuntu Developer | http://
> OpenStack Developer | http://
>
> https:/
> You are requested to review the proposed merge of lp:~termie/nova/eventlet_objectstore into lp:nova.
termie (termie) wrote:
> this branch is obsolete with Vish's kill-objectstore branch, right?
No, Vish just names his branches through some sort of stream-of-thought process; kill-objectstore is a dependency of this branch.
termie (termie) wrote:
> 2011/3/7 termie <email address hidden>:
> > Soren:
> > re: greenpool, just a leftover, good catch, removed it.
> >
> > re: not removing the twisted code yet, just to keep the patch more concise,
> I plan on removing the twisted code once this goes in, all the code in that
> patch will just be related to removing the twisted code, whereas this is code
> for adding the alternate path. Seem reasonable?
>
> Almost :)
>
> > re: removing tests, those tests were testing specific implementation aspects
> of the twisted code (buckets, objects, etc) that are no longer valuable, this
> implementation is meant only as a trivial s3 interface not as our general
> purpose objectstore so only the s3 interface exists and only it needs to be
> tested.
>
> If you're not going to remove the twisted code, I don't think you
> should remove the corresponding tests.
>
> Perhaps it's because I don't fully understand how much code you intend
> to remove afterwards, but when I look at the ObjectStoreTestCase
> you're removing, it doesn't look related to the Twisted implementation
> at all. Rather, it looks related to the generic backend stuff that's
> in nova.objectstore.
It is an S3-specific trivial implementation rather than an actual object store implementation; it is just the shim for use during development and testing when using the EC2 API.
But whatever, I'll just remove all the twisted code too, then.
>
> --
> Soren Hansen | http://
> Ubuntu Developer | http://
> OpenStack Developer | http://
termie (termie) wrote:
removed the twisted objectstore code, ready for review again
Vish Ishaya (vishvananda) wrote:
LGTM. Looks like you also fixed lp:731668, so I linked the bug.
justinsb (justin-fathomdb) wrote:
LGTM
Looking forward to getting this merged - it does indeed fix bug 731668, as we figured out on IRC yesterday.
Jay Pipes (jaypipes) wrote:
Hi Andy! Good stuff.
1076 + mapper.connect('/',
1077 + controller=lambda *a, **kw: RootHandler(
1078 + mapper.
1079 + controller=lambda *a, **kw: ObjectHandler(
1080 + mapper.
1081 + controller=lambda *a, **kw: BucketHandler(
What was the purpose of that? Seems overly complicated and I'm not sure what the use of doing that was versus something like:
root_handler = RootHandler()
obj_handler = ObjectHandler()
bucket_handler = BucketHandler()
mapper.connect('/', controller=
mapper.
mapper.
Could you please explain? Thanks!
-jay
termie (termie) wrote:
> Hi Andy! Good stuff.
>
> 1076 + mapper.connect('/',
> 1077 + controller=lambda *a, **kw: RootHandler(
> 1078 + mapper.
> 1079 + controller=lambda *a, **kw: ObjectHandler(
> 1080 + mapper.
> 1081 + controller=lambda *a, **kw: BucketHandler(
>
> What was the purpose of that? Seems overly complicated and I'm not sure what
> the use of doing that was versus something like:
>
> root_handler = RootHandler()
> obj_handler = ObjectHandler()
> bucket_handler = BucketHandler()
> mapper.connect('/', controller=
> mapper.
> mapper.
>
> Could you please explain? Thanks!
>
Each handler has to be a new object: since this is a direct port of Tornado's code, it expects to modify local state (through calls such as "set_header") rather than operate on a request object directly, and it expects to have access to a parent application object.
> -jay
Jay Pipes (jaypipes) wrote:
> > Hi Andy! Good stuff.
> >
> > 1076 + mapper.connect('/',
> > 1077 + controller=lambda *a, **kw: RootHandler(
> > 1078 + mapper.
> > 1079 + controller=lambda *a, **kw: ObjectHandler(
> > 1080 + mapper.
> > 1081 + controller=lambda *a, **kw: BucketHandler(
> >
> > What was the purpose of that? Seems overly complicated and I'm not sure what
> > the use of doing that was versus something like:
> >
> > root_handler = RootHandler()
> > obj_handler = ObjectHandler()
> > bucket_handler = BucketHandler()
> > mapper.connect('/', controller=
> > mapper.
> > mapper.
> >
> > Could you please explain? Thanks!
> >
>
> Each handler has to be a new object, as this is a direct port of Tornado's
> code it expects to modify local state (through such calls as "set_header")
> rather than operate on a request object directly, and expects to have access
> to a parent application object.
Hmm, you lost me on that one...
-jay
termie (termie) wrote:
Alright, another way...
This is a direct port of Tornado's implementation, so some key decisions about how the code interacts have already been chosen.
The two most common ways of designing async web frameworks can be classified as object-oriented and functional.
Tornado's is on the OO side: a response is built up in, and using, the shared state of an object, and one of the object's methods will eventually trigger the "finishing" of the response asynchronously.
Most WSGI stuff is on the functional side: we pass a request object to every call down a chain, and the eventual return value is the response.
Part of the function of the routing code in S3Application, as well as the code in BaseRequestHandler, is to tie these two styles together enough that the Tornado code can work without extensive modifications.
To do that, it needs to give the Tornado-style code clean objects whose state it can modify for each request that is processed. So we use a very simple factory lambda to create new state for each request (that's the stuff in the router), let the Tornado code modify that object to handle the request, and then return the response it generated. This wouldn't work the same if Tornado were being more async'y and doing other callbacks throughout the process, but since Tornado is being relatively simple here we can be satisfied that the response will be complete by the end of the get/post method.
If that makes sense I'll add it to the docstring in BaseRequestHandler.
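The factory-lambda pattern described above can be sketched roughly like this (all names here are illustrative, not the actual s3server code): the router stores a factory rather than a shared handler instance, so every request gets fresh Tornado-style state:

```python
# Illustrative sketch of routing to per-request handler objects; the
# class and route names are invented, not the real s3server code.

class BaseHandler(object):
    def __init__(self):
        self.headers = {}   # per-request state, mutated Tornado-style
        self.body = b''

    def set_header(self, name, value):
        self.headers[name] = value

    def finish(self, body):
        self.body = body


class RootHandler(BaseHandler):
    def get(self):
        self.set_header('Content-Type', 'application/xml')
        self.finish(b'<ListAllMyBucketsResult/>')


# The router maps each path to a factory (here a lambda), not to a
# shared instance, so state never leaks between requests.
routes = {'/': lambda: RootHandler()}


def handle(path):
    handler = routes[path]()    # fresh object for this request
    handler.get()               # Tornado-style code mutates its state...
    return handler.headers, handler.body   # ...then we read off the response
```

Jay's suggested alternative of instantiating each handler once would share `self.headers` and `self.body` across concurrent requests, which is why the port keeps the factory indirection.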
termie (termie) wrote:
Could probably remove the word 'async' in the "designing web frameworks" line; the WSGI approach is a sync approach (we just make it async).
Vish Ishaya (vishvananda) wrote:
Very clear explanation!
On Mar 24, 2011 5:34 PM, "termie" <email address hidden> wrote:
> Alright, another way...
>
> This is a direct port of Tornado's implementation, so some key decisions
about how the code interacts have already been chosen.
>
> The two most common ways of designing async web frameworks can be
classified as object-oriented and functional.
>
> Tornado's is on the OO side because a response is built up in and using
the shared state of an object and one of the object's methods will
eventually trigger the "finishing" of the response asynchronously.
>
> Most WSGI stuff is in the functional side, we pass a request object to
every call down a chain and the eventual return value will be a response.
>
> Part of the function of the routing code in S3Application as well as the
code in BaseRequestHandler is to tie these two styles together enough
that the Tornado code can work without extensive modifications.
>
> To do that it needs to give the Tornado-style code clean objects that it
can modify the state of for each request that is processed, so we use a very
simple factory lambda to create new state for each request, that's the stuff
in the router, and when we let the Tornado code modify that object to handle
the request, then we return the response it generated. This wouldn't work
the same if Tornado was being more async'y and doing other callbacks
throughout the process, but since Tornado is being relatively simple here we
can be satisfied that the response will be complete by the end of the
get/post method.
>
> If that makes sense I'll add it to the docstring in BaseRequestHandler.
> --
> https:/
> You are reviewing the proposed merge of
lp:~termie/nova/eventlet_objectstore into lp:nova.
Jay Pipes (jaypipes) wrote:
Thanks very much for the excellent follow-up explanation, Andy. As mentioned on IRC, yes, I think adding that as a comment in the source file would be a great idea. Thanks!
termie (termie) wrote:
added big docstring
Trey Morris (tr3buchet) wrote:
:)
OpenStack Infra (hudson-openstack) wrote:
Attempt to merge into lp:nova failed due to conflicts:
text conflict in nova/test.py
882. By termie: add s3server, pre-modifications
883. By termie: port s3server to eventlet/wsgi
884. By termie: rename objectstore tests
885. By termie: update test base class to monkey patch wsgi
886. By termie: port the objectstore tests to the new tests
887. By termie: remove twisted objectstore
888. By termie: don't require integrated tests to recycle connections
889. By termie: add descriptive docstring
termie (termie) wrote:
fixed conflict
Preview Diff
1 | === modified file 'bin/nova-objectstore' |
2 | --- bin/nova-objectstore 2011-03-18 13:56:05 +0000 |
3 | +++ bin/nova-objectstore 2011-03-24 23:51:35 +0000 |
4 | @@ -36,9 +36,10 @@ |
5 | gettext.install('nova', unicode=1) |
6 | |
7 | from nova import flags |
8 | +from nova import log as logging |
9 | from nova import utils |
10 | -from nova import twistd |
11 | -from nova.objectstore import handler |
12 | +from nova import wsgi |
13 | +from nova.objectstore import s3server |
14 | |
15 | |
16 | FLAGS = flags.FLAGS |
17 | @@ -46,7 +47,9 @@ |
18 | |
19 | if __name__ == '__main__': |
20 | utils.default_flagfile() |
21 | - twistd.serve(__file__) |
22 | - |
23 | -if __name__ == '__builtin__': |
24 | - application = handler.get_application() # pylint: disable=C0103 |
25 | + FLAGS(sys.argv) |
26 | + logging.setup() |
27 | + router = s3server.S3Application(FLAGS.buckets_path) |
28 | + server = wsgi.Server() |
29 | + server.start(router, FLAGS.s3_port, host=FLAGS.s3_host) |
30 | + server.wait() |
31 | |
32 | === removed file 'nova/objectstore/bucket.py' |
33 | --- nova/objectstore/bucket.py 2011-02-19 06:49:13 +0000 |
34 | +++ nova/objectstore/bucket.py 1970-01-01 00:00:00 +0000 |
35 | @@ -1,181 +0,0 @@ |
36 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
37 | - |
38 | -# Copyright 2010 United States Government as represented by the |
39 | -# Administrator of the National Aeronautics and Space Administration. |
40 | -# All Rights Reserved. |
41 | -# |
42 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
43 | -# not use this file except in compliance with the License. You may obtain |
44 | -# a copy of the License at |
45 | -# |
46 | -# http://www.apache.org/licenses/LICENSE-2.0 |
47 | -# |
48 | -# Unless required by applicable law or agreed to in writing, software |
49 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
50 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
51 | -# License for the specific language governing permissions and limitations |
52 | -# under the License. |
53 | - |
54 | -""" |
55 | -Simple object store using Blobs and JSON files on disk. |
56 | -""" |
57 | - |
58 | -import bisect |
59 | -import datetime |
60 | -import glob |
61 | -import json |
62 | -import os |
63 | - |
64 | -from nova import exception |
65 | -from nova import flags |
66 | -from nova import utils |
67 | -from nova.objectstore import stored |
68 | - |
69 | - |
70 | -FLAGS = flags.FLAGS |
71 | -flags.DEFINE_string('buckets_path', '$state_path/buckets', |
72 | - 'path to s3 buckets') |
73 | - |
74 | - |
75 | -class Bucket(object): |
76 | - def __init__(self, name): |
77 | - self.name = name |
78 | - self.path = os.path.abspath(os.path.join(FLAGS.buckets_path, name)) |
79 | - if not self.path.startswith(os.path.abspath(FLAGS.buckets_path)) or \ |
80 | - not os.path.isdir(self.path): |
81 | - raise exception.NotFound() |
82 | - |
83 | - self.ctime = os.path.getctime(self.path) |
84 | - |
85 | - def __repr__(self): |
86 | - return "<Bucket: %s>" % self.name |
87 | - |
88 | - @staticmethod |
89 | - def all(): |
90 | - """ list of all buckets """ |
91 | - buckets = [] |
92 | - for fn in glob.glob("%s/*.json" % FLAGS.buckets_path): |
93 | - try: |
94 | - json.load(open(fn)) |
95 | - name = os.path.split(fn)[-1][:-5] |
96 | - buckets.append(Bucket(name)) |
97 | - except: |
98 | - pass |
99 | - |
100 | - return buckets |
101 | - |
102 | - @staticmethod |
103 | - def create(bucket_name, context): |
104 | - """Create a new bucket owned by a project. |
105 | - |
106 | - @bucket_name: a string representing the name of the bucket to create |
107 | - @context: a nova.auth.api.ApiContext object representing who owns the |
108 | - bucket. |
109 | - |
110 | - Raises: |
111 | - NotAuthorized: if the bucket is already exists or has invalid name |
112 | - """ |
113 | - path = os.path.abspath(os.path.join( |
114 | - FLAGS.buckets_path, bucket_name)) |
115 | - if not path.startswith(os.path.abspath(FLAGS.buckets_path)) or \ |
116 | - os.path.exists(path): |
117 | - raise exception.NotAuthorized() |
118 | - |
119 | - os.makedirs(path) |
120 | - |
121 | - with open(path + '.json', 'w') as f: |
122 | - json.dump({'ownerId': context.project_id}, f) |
123 | - |
124 | - @property |
125 | - def metadata(self): |
126 | - """ dictionary of metadata around bucket, |
127 | - keys are 'Name' and 'CreationDate' |
128 | - """ |
129 | - |
130 | - return { |
131 | - "Name": self.name, |
132 | - "CreationDate": datetime.datetime.utcfromtimestamp(self.ctime), |
133 | - } |
134 | - |
135 | - @property |
136 | - def owner_id(self): |
137 | - try: |
138 | - with open(self.path + '.json') as f: |
139 | - return json.load(f)['ownerId'] |
140 | - except: |
141 | - return None |
142 | - |
143 | - def is_authorized(self, context): |
144 | - try: |
145 | - return context.is_admin or \ |
146 | - self.owner_id == context.project_id |
147 | - except Exception, e: |
148 | - return False |
149 | - |
150 | - def list_keys(self, prefix='', marker=None, max_keys=1000, terse=False): |
151 | - object_names = [] |
152 | - path_length = len(self.path) |
153 | - for root, dirs, files in os.walk(self.path): |
154 | - for file_name in files: |
155 | - object_name = os.path.join(root, file_name)[path_length + 1:] |
156 | - object_names.append(object_name) |
157 | - object_names.sort() |
158 | - contents = [] |
159 | - |
160 | - start_pos = 0 |
161 | - if marker: |
162 | - start_pos = bisect.bisect_right(object_names, marker, start_pos) |
163 | - if prefix: |
164 | - start_pos = bisect.bisect_left(object_names, prefix, start_pos) |
165 | - |
166 | - truncated = False |
167 | - for object_name in object_names[start_pos:]: |
168 | - if not object_name.startswith(prefix): |
169 | - break |
170 | - if len(contents) >= max_keys: |
171 | - truncated = True |
172 | - break |
173 | - object_path = self._object_path(object_name) |
174 | - c = {"Key": object_name} |
175 | - if not terse: |
176 | - info = os.stat(object_path) |
177 | - c.update({ |
178 | - "LastModified": datetime.datetime.utcfromtimestamp( |
179 | - info.st_mtime), |
180 | - "Size": info.st_size, |
181 | - }) |
182 | - contents.append(c) |
183 | - marker = object_name |
184 | - |
185 | - return { |
186 | - "Name": self.name, |
187 | - "Prefix": prefix, |
188 | - "Marker": marker, |
189 | - "MaxKeys": max_keys, |
190 | - "IsTruncated": truncated, |
191 | - "Contents": contents, |
192 | - } |
193 | - |
194 | - def _object_path(self, object_name): |
195 | - fn = os.path.join(self.path, object_name) |
196 | - |
197 | - if not fn.startswith(self.path): |
198 | - raise exception.NotAuthorized() |
199 | - |
200 | - return fn |
201 | - |
202 | - def delete(self): |
203 | - if len(os.listdir(self.path)) > 0: |
204 | - raise exception.NotEmpty() |
205 | - os.rmdir(self.path) |
206 | - os.remove(self.path + '.json') |
207 | - |
208 | - def __getitem__(self, key): |
209 | - return stored.Object(self, key) |
210 | - |
211 | - def __setitem__(self, key, value): |
212 | - with open(self._object_path(key), 'wb') as f: |
213 | - f.write(value) |
214 | - |
215 | - def __delitem__(self, key): |
216 | - stored.Object(self, key).delete() |
217 | |
218 | === removed file 'nova/objectstore/handler.py' |
219 | --- nova/objectstore/handler.py 2011-03-18 13:56:05 +0000 |
220 | +++ nova/objectstore/handler.py 1970-01-01 00:00:00 +0000 |
221 | @@ -1,478 +0,0 @@ |
222 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
223 | -# |
224 | -# Copyright 2010 OpenStack LLC. |
225 | -# Copyright 2010 United States Government as represented by the |
226 | -# Administrator of the National Aeronautics and Space Administration. |
227 | -# All Rights Reserved. |
228 | -# |
229 | -# Copyright 2009 Facebook |
230 | -# |
231 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
232 | -# not use this file except in compliance with the License. You may obtain |
233 | -# a copy of the License at |
234 | -# |
235 | -# http://www.apache.org/licenses/LICENSE-2.0 |
236 | -# |
237 | -# Unless required by applicable law or agreed to in writing, software |
238 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
239 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
240 | -# License for the specific language governing permissions and limitations |
241 | -# under the License. |
242 | - |
243 | -""" |
244 | -Implementation of an S3-like storage server based on local files. |
245 | - |
246 | -Useful to test features that will eventually run on S3, or if you want to |
247 | -run something locally that was once running on S3. |
248 | - |
249 | -We don't support all the features of S3, but it does work with the |
250 | -standard S3 client for the most basic semantics. To use the standard |
251 | -S3 client with this module:: |
252 | - |
253 | - c = S3.AWSAuthConnection("", "", server="localhost", port=8888, |
254 | - is_secure=False) |
255 | - c.create_bucket("mybucket") |
256 | - c.put("mybucket", "mykey", "a value") |
257 | - print c.get("mybucket", "mykey").body |
258 | - |
259 | -""" |
260 | - |
261 | -import datetime |
262 | -import json |
263 | -import multiprocessing |
264 | -import os |
265 | -import urllib |
266 | - |
267 | -from twisted.application import internet |
268 | -from twisted.application import service |
269 | -from twisted.web import error |
270 | -from twisted.web import resource |
271 | -from twisted.web import server |
272 | -from twisted.web import static |
273 | - |
274 | -from nova import context |
275 | -from nova import exception |
276 | -from nova import flags |
277 | -from nova import log as logging |
278 | -from nova import utils |
279 | -from nova.auth import manager |
280 | -from nova.objectstore import bucket |
281 | -from nova.objectstore import image |
282 | - |
283 | - |
284 | -LOG = logging.getLogger('nova.objectstore.handler') |
285 | -FLAGS = flags.FLAGS |
286 | -flags.DEFINE_string('s3_listen_host', '', 'Host to listen on.') |
287 | - |
288 | - |
289 | -def render_xml(request, value): |
290 | - """Writes value as XML string to request""" |
291 | - assert isinstance(value, dict) and len(value) == 1 |
292 | - request.setHeader("Content-Type", "application/xml; charset=UTF-8") |
293 | - |
294 | - name = value.keys()[0] |
295 | - request.write('<?xml version="1.0" encoding="UTF-8"?>\n') |
296 | - request.write('<' + utils.utf8(name) + |
297 | - ' xmlns="http://doc.s3.amazonaws.com/2006-03-01">') |
298 | - _render_parts(value.values()[0], request.write) |
299 | - request.write('</' + utils.utf8(name) + '>') |
300 | - request.finish() |
301 | - |
302 | - |
303 | -def finish(request, content=None): |
304 | - """Finalizer method for request""" |
305 | - if content: |
306 | - request.write(content) |
307 | - request.finish() |
308 | - |
309 | - |
310 | -def _render_parts(value, write_cb): |
311 | - """Helper method to render different Python objects to XML""" |
312 | - if isinstance(value, basestring): |
313 | - write_cb(utils.xhtml_escape(value)) |
314 | - elif isinstance(value, int) or isinstance(value, long): |
315 | - write_cb(str(value)) |
316 | - elif isinstance(value, datetime.datetime): |
317 | - write_cb(value.strftime("%Y-%m-%dT%H:%M:%S.000Z")) |
318 | - elif isinstance(value, dict): |
319 | - for name, subvalue in value.iteritems(): |
320 | - if not isinstance(subvalue, list): |
321 | - subvalue = [subvalue] |
322 | - for subsubvalue in subvalue: |
323 | - write_cb('<' + utils.utf8(name) + '>') |
324 | - _render_parts(subsubvalue, write_cb) |
325 | - write_cb('</' + utils.utf8(name) + '>') |
326 | - else: |
327 | - raise Exception(_("Unknown S3 value type %r"), value) |
328 | - |
329 | - |
330 | -def get_argument(request, key, default_value): |
331 | - """Returns the request's value at key, or default_value |
332 | - if not found |
333 | - """ |
334 | - if key in request.args: |
335 | - return request.args[key][0] |
336 | - return default_value |
337 | - |
338 | - |
339 | -def get_context(request): |
340 | - """Returns the supplied request's context object""" |
341 | - try: |
342 | - # Authorization Header format: 'AWS <access>:<secret>' |
343 | - authorization_header = request.getHeader('Authorization') |
344 | - if not authorization_header: |
345 | - raise exception.NotAuthorized() |
346 | - auth_header_value = authorization_header.split(' ')[1] |
347 | - access, _ignored, secret = auth_header_value.rpartition(':') |
348 | - am = manager.AuthManager() |
349 | - (user, project) = am.authenticate(access, |
350 | - secret, |
351 | - {}, |
352 | - request.method, |
353 | - request.getRequestHostname(), |
354 | - request.uri, |
355 | - headers=request.getAllHeaders(), |
356 | - check_type='s3') |
357 | - rv = context.RequestContext(user, project) |
358 | - LOG.audit(_("Authenticated request"), context=rv) |
359 | - return rv |
360 | - except exception.Error as ex: |
361 | - LOG.debug(_("Authentication Failure: %s"), ex) |
362 | - raise exception.NotAuthorized() |
363 | - |
364 | - |
365 | -class ErrorHandlingResource(resource.Resource): |
366 | - """Maps exceptions to 404 / 401 codes. Won't work for |
367 | - exceptions thrown after NOT_DONE_YET is returned. |
368 | - """ |
369 | - # TODO(unassigned) (calling-all-twisted-experts): This needs to be |
370 | - # plugged in to the right place in twisted... |
371 | - # This doesn't look like it's the right place |
372 | - # (consider exceptions in getChild; or after |
373 | - # NOT_DONE_YET is returned |
374 | - def render(self, request): |
375 | - """Renders the response as XML""" |
376 | - try: |
377 | - return resource.Resource.render(self, request) |
378 | - except exception.NotFound: |
379 | - request.setResponseCode(404) |
380 | - return '' |
381 | - except exception.NotAuthorized: |
382 | - request.setResponseCode(403) |
383 | - return '' |
384 | - |
385 | - |
386 | -class S3(ErrorHandlingResource): |
387 | - """Implementation of an S3-like storage server based on local files.""" |
388 | - def __init__(self): |
389 | - ErrorHandlingResource.__init__(self) |
390 | - |
391 | - def getChild(self, name, request): # pylint: disable=C0103 |
392 | - """Returns either the image or bucket resource""" |
393 | - request.context = get_context(request) |
394 | - if name == '': |
395 | - return self |
396 | - elif name == '_images': |
397 | - return ImagesResource() |
398 | - else: |
399 | - return BucketResource(name) |
400 | - |
401 | - def render_GET(self, request): # pylint: disable=R0201 |
402 | - """Renders the GET request for a list of buckets as XML""" |
403 | - LOG.debug(_('List of buckets requested'), context=request.context) |
404 | - buckets = [b for b in bucket.Bucket.all() |
405 | - if b.is_authorized(request.context)] |
406 | - |
407 | - render_xml(request, {"ListAllMyBucketsResult": { |
408 | - "Buckets": {"Bucket": [b.metadata for b in buckets]}, |
409 | - }}) |
410 | - return server.NOT_DONE_YET |
411 | - |
412 | - |
413 | -class BucketResource(ErrorHandlingResource): |
414 | - """A web resource containing an S3-like bucket""" |
415 | - def __init__(self, name): |
416 | - resource.Resource.__init__(self) |
417 | - self.name = name |
418 | - |
419 | - def getChild(self, name, request): |
420 | - """Returns the bucket resource itself, or the object resource |
421 | - the bucket contains if a name is supplied |
422 | - """ |
423 | - if name == '': |
424 | - return self |
425 | - else: |
426 | - return ObjectResource(bucket.Bucket(self.name), name) |
427 | - |
428 | - def render_GET(self, request): |
429 | - "Returns the keys for the bucket resource""" |
430 | - LOG.debug(_("List keys for bucket %s"), self.name) |
431 | - |
432 | - try: |
433 | - bucket_object = bucket.Bucket(self.name) |
434 | - except exception.NotFound: |
435 | - return error.NoResource(message="No such bucket").render(request) |
436 | - |
437 | - if not bucket_object.is_authorized(request.context): |
438 | - LOG.audit(_("Unauthorized attempt to access bucket %s"), |
439 | - self.name, context=request.context) |
440 | - raise exception.NotAuthorized() |
441 | - |
442 | - prefix = get_argument(request, "prefix", u"") |
443 | - marker = get_argument(request, "marker", u"") |
444 | - max_keys = int(get_argument(request, "max-keys", 1000)) |
445 | - terse = int(get_argument(request, "terse", 0)) |
446 | - |
447 | - results = bucket_object.list_keys(prefix=prefix, |
448 | - marker=marker, |
449 | - max_keys=max_keys, |
450 | - terse=terse) |
451 | - render_xml(request, {"ListBucketResult": results}) |
452 | - return server.NOT_DONE_YET |
453 | - |
454 | - def render_PUT(self, request): |
455 | - "Creates the bucket resource""" |
456 | - LOG.debug(_("Creating bucket %s"), self.name) |
457 | - LOG.debug("calling bucket.Bucket.create(%r, %r)", |
458 | - self.name, |
459 | - request.context) |
460 | - bucket.Bucket.create(self.name, request.context) |
461 | - request.finish() |
462 | - return server.NOT_DONE_YET |
463 | - |
464 | - def render_DELETE(self, request): |
465 | - """Deletes the bucket resource""" |
466 | - LOG.debug(_("Deleting bucket %s"), self.name) |
467 | - bucket_object = bucket.Bucket(self.name) |
468 | - |
469 | - if not bucket_object.is_authorized(request.context): |
470 | - LOG.audit(_("Unauthorized attempt to delete bucket %s"), |
471 | - self.name, context=request.context) |
472 | - raise exception.NotAuthorized() |
473 | - |
474 | - bucket_object.delete() |
475 | - request.setResponseCode(204) |
476 | - return '' |
477 | - |
478 | - |
479 | -class ObjectResource(ErrorHandlingResource): |
480 | - """The resource returned from a bucket""" |
481 | - def __init__(self, bucket, name): |
482 | - resource.Resource.__init__(self) |
483 | - self.bucket = bucket |
484 | - self.name = name |
485 | - |
486 | - def render_GET(self, request): |
487 | - """Returns the object |
488 | - |
489 | - Raises NotAuthorized if user in request context is not |
490 | - authorized to delete the object. |
491 | - """ |
492 | - bname = self.bucket.name |
493 | - nm = self.name |
494 | - LOG.debug(_("Getting object: %(bname)s / %(nm)s") % locals()) |
495 | - |
496 | - if not self.bucket.is_authorized(request.context): |
497 | - LOG.audit(_("Unauthorized attempt to get object %(nm)s" |
498 | - " from bucket %(bname)s") % locals(), |
499 | - context=request.context) |
500 | - raise exception.NotAuthorized() |
501 | - |
502 | - obj = self.bucket[urllib.unquote(self.name)] |
503 | - request.setHeader("Content-Type", "application/unknown") |
504 | - request.setHeader("Last-Modified", |
505 | - datetime.datetime.utcfromtimestamp(obj.mtime)) |
506 | - request.setHeader("Etag", '"' + obj.md5 + '"') |
507 | - return static.File(obj.path).render_GET(request) |
508 | - |
509 | - def render_PUT(self, request): |
510 | - """Modifies/inserts the object and returns a result code |
511 | - |
512 | - Raises NotAuthorized if user in request context is not |
513 | - authorized to delete the object. |
514 | - """ |
515 | - nm = self.name |
516 | - bname = self.bucket.name |
517 | - LOG.debug(_("Putting object: %(bname)s / %(nm)s") % locals()) |
518 | - |
519 | - if not self.bucket.is_authorized(request.context): |
520 | - LOG.audit(_("Unauthorized attempt to upload object %(nm)s to" |
521 | - " bucket %(bname)s") % locals(), context=request.context) |
522 | - raise exception.NotAuthorized() |
523 | - |
524 | - key = urllib.unquote(self.name) |
525 | - request.content.seek(0, 0) |
526 | - self.bucket[key] = request.content.read() |
527 | - request.setHeader("Etag", '"' + self.bucket[key].md5 + '"') |
528 | - finish(request) |
529 | - return server.NOT_DONE_YET |
530 | - |
531 | - def render_DELETE(self, request): |
532 | - """Deletes the object and returns a result code |
533 | - |
534 | - Raises NotAuthorized if user in request context is not |
535 | - authorized to delete the object. |
536 | - """ |
537 | - nm = self.name |
538 | - bname = self.bucket.name |
539 | - LOG.debug(_("Deleting object: %(bname)s / %(nm)s") % locals(), |
540 | - context=request.context) |
541 | - |
542 | - if not self.bucket.is_authorized(request.context): |
543 | - LOG.audit(_("Unauthorized attempt to delete object %(nm)s from " |
544 | - "bucket %(bname)s") % locals(), context=request.context) |
545 | - raise exception.NotAuthorized() |
546 | - |
547 | - del self.bucket[urllib.unquote(self.name)] |
548 | - request.setResponseCode(204) |
549 | - return '' |
550 | - |
551 | - |
552 | -class ImageResource(ErrorHandlingResource): |
553 | - """A web resource representing a single image""" |
554 | - isLeaf = True |
555 | - |
556 | - def __init__(self, name): |
557 | - resource.Resource.__init__(self) |
558 | - self.img = image.Image(name) |
559 | - |
560 | - def render_GET(self, request): |
561 | - """Returns the image file""" |
562 | - if not self.img.is_authorized(request.context, True): |
563 | - raise exception.NotAuthorized() |
564 | - return static.File(self.img.image_path, |
565 | - defaultType='application/octet-stream').\ |
566 | - render_GET(request) |
567 | - |
568 | - |
569 | -class ImagesResource(resource.Resource): |
570 | - """A web resource representing a list of images""" |
571 | - |
572 | - def getChild(self, name, _request): |
573 | - """Returns itself or an ImageResource if no name given""" |
574 | - if name == '': |
575 | - return self |
576 | - else: |
577 | - return ImageResource(name) |
578 | - |
579 | - def render_GET(self, request): # pylint: disable=R0201 |
580 | - """ returns a json listing of all images |
581 | - that a user has permissions to see """ |
582 | - |
583 | - images = [i for i in image.Image.all() \ |
584 | - if i.is_authorized(request.context, readonly=True)] |
585 | - |
586 | - # Bug #617776: |
587 | - # We used to have 'type' in the image metadata, but this field |
588 | - # should be called 'imageType', as per the EC2 specification. |
589 | - # For compat with old metadata files we copy type to imageType if |
590 | - # imageType is not present. |
591 | - # For compat with euca2ools (and any other clients using the |
592 | - # incorrect name) we copy imageType to type. |
593 | - # imageType is primary if we end up with both in the metadata file |
594 | - # (which should never happen). |
595 | - def decorate(m): |
596 | - if 'imageType' not in m and 'type' in m: |
597 | - m[u'imageType'] = m['type'] |
598 | - elif 'imageType' in m: |
599 | - m[u'type'] = m['imageType'] |
600 | - if 'displayName' not in m: |
601 | - m[u'displayName'] = u'' |
602 | - return m |
603 | - |
604 | - request.write(json.dumps([decorate(i.metadata) for i in images])) |
605 | - request.finish() |
606 | - return server.NOT_DONE_YET |
607 | - |
608 | - def render_PUT(self, request): # pylint: disable=R0201 |
609 | - """ create a new registered image """ |
610 | - |
611 | - image_id = get_argument(request, 'image_id', u'') |
612 | - image_location = get_argument(request, 'image_location', u'') |
613 | - |
614 | - image_path = os.path.join(FLAGS.images_path, image_id) |
615 | - if ((not image_path.startswith(FLAGS.images_path)) or |
616 | - os.path.exists(image_path)): |
617 | - LOG.audit(_("Not authorized to upload image: invalid directory " |
618 | - "%s"), |
619 | - image_path, context=request.context) |
620 | - raise exception.NotAuthorized() |
621 | - |
622 | - bucket_object = bucket.Bucket(image_location.split("/")[0]) |
623 | - |
624 | - if not bucket_object.is_authorized(request.context): |
625 | - LOG.audit(_("Not authorized to upload image: unauthorized " |
626 | - "bucket %s"), bucket_object.name, |
627 | - context=request.context) |
628 | - raise exception.NotAuthorized() |
629 | - |
630 | - LOG.audit(_("Starting image upload: %s"), image_id, |
631 | - context=request.context) |
632 | - p = multiprocessing.Process(target=image.Image.register_aws_image, |
633 | - args=(image_id, image_location, request.context)) |
634 | - p.start() |
635 | - return '' |
636 | - |
637 | - def render_POST(self, request): # pylint: disable=R0201 |
638 | - """Update image attributes: public/private""" |
639 | - |
640 | - # image_id required for all requests |
641 | - image_id = get_argument(request, 'image_id', u'') |
642 | - image_object = image.Image(image_id) |
643 | - if not image_object.is_authorized(request.context): |
644 | - LOG.audit(_("Not authorized to update attributes of image %s"), |
645 | - image_id, context=request.context) |
646 | - raise exception.NotAuthorized() |
647 | - |
648 | - operation = get_argument(request, 'operation', u'') |
649 | - if operation: |
650 | - # operation implies publicity toggle |
651 | - newstatus = (operation == 'add') |
652 | - LOG.audit(_("Toggling publicity flag of image %(image_id)s" |
653 | - " %(newstatus)r") % locals(), context=request.context) |
654 | - image_object.set_public(newstatus) |
655 | - else: |
656 | - # other attributes imply update |
657 | - LOG.audit(_("Updating user fields on image %s"), image_id, |
658 | - context=request.context) |
659 | - clean_args = {} |
660 | - for arg in request.args.keys(): |
661 | - clean_args[arg] = request.args[arg][0] |
662 | - image_object.update_user_editable_fields(clean_args) |
663 | - return '' |
664 | - |
665 | - def render_DELETE(self, request): # pylint: disable=R0201 |
666 | - """Delete a registered image""" |
667 | - image_id = get_argument(request, "image_id", u"") |
668 | - image_object = image.Image(image_id) |
669 | - |
670 | - if not image_object.is_authorized(request.context): |
671 | - LOG.audit(_("Unauthorized attempt to delete image %s"), |
672 | - image_id, context=request.context) |
673 | - raise exception.NotAuthorized() |
674 | - |
675 | - image_object.delete() |
676 | - LOG.audit(_("Deleted image: %s"), image_id, context=request.context) |
677 | - |
678 | - request.setResponseCode(204) |
679 | - return '' |
680 | - |
681 | - |
682 | -def get_site(): |
683 | - """Support for WSGI-like interfaces""" |
684 | - root = S3() |
685 | - site = server.Site(root) |
686 | - return site |
687 | - |
688 | - |
689 | -def get_application(): |
690 | - """Support WSGI-like interfaces""" |
691 | - factory = get_site() |
692 | - application = service.Application("objectstore") |
693 | - # Disabled because of lack of proper introspection in Twisted |
694 | - # or possibly different versions of twisted? |
695 | - # pylint: disable=E1101 |
696 | - objectStoreService = internet.TCPServer(FLAGS.s3_port, factory, |
697 | - interface=FLAGS.s3_listen_host) |
698 | - objectStoreService.setServiceParent(application) |
699 | - return application |
700 | |
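The `decorate` compatibility shim in the removed handler above (Bug #617776) maps between the legacy `type` field and the EC2-spec `imageType` field. Restated as a standalone function, the deleted logic looks like this (a sketch, not part of the diff):

```python
def decorate(m):
    # Bug #617776: old metadata files used 'type'; the EC2 spec calls the
    # field 'imageType'. Copy whichever side is missing, with 'imageType'
    # taking precedence if both are present.
    if 'imageType' not in m and 'type' in m:
        m[u'imageType'] = m['type']
    elif 'imageType' in m:
        m[u'type'] = m['imageType']
    if 'displayName' not in m:
        m[u'displayName'] = u''
    return m
```

Old metadata gains `imageType`, new metadata gains `type` for euca2ools, and every entry ends up with a `displayName`.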
701 | === removed file 'nova/objectstore/image.py' |
702 | --- nova/objectstore/image.py 2011-03-10 10:59:50 +0000 |
703 | +++ nova/objectstore/image.py 1970-01-01 00:00:00 +0000 |
704 | @@ -1,296 +0,0 @@ |
705 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
706 | - |
707 | -# Copyright 2010 United States Government as represented by the |
708 | -# Administrator of the National Aeronautics and Space Administration. |
709 | -# All Rights Reserved. |
710 | -# |
711 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
712 | -# not use this file except in compliance with the License. You may obtain |
713 | -# a copy of the License at |
714 | -# |
715 | -# http://www.apache.org/licenses/LICENSE-2.0 |
716 | -# |
717 | -# Unless required by applicable law or agreed to in writing, software |
718 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
719 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
720 | -# License for the specific language governing permissions and limitations |
721 | -# under the License. |
722 | - |
723 | -""" |
724 | -Take uploaded bucket contents and register them as disk images (AMIs). |
725 | -Requires decryption using keys in the manifest. |
726 | -""" |
727 | - |
728 | - |
729 | -import binascii |
730 | -import glob |
731 | -import json |
732 | -import os |
733 | -import shutil |
734 | -import tarfile |
735 | -from xml.etree import ElementTree |
736 | - |
737 | -from nova import exception |
738 | -from nova import flags |
739 | -from nova import utils |
740 | -from nova.objectstore import bucket |
741 | - |
742 | - |
743 | -FLAGS = flags.FLAGS |
744 | -flags.DECLARE('images_path', 'nova.image.local') |
745 | - |
746 | - |
747 | -class Image(object): |
748 | - def __init__(self, image_id): |
749 | - self.image_id = image_id |
750 | - self.path = os.path.abspath(os.path.join(FLAGS.images_path, image_id)) |
751 | - if not self.path.startswith(os.path.abspath(FLAGS.images_path)) or \ |
752 | - not os.path.isdir(self.path): |
753 | - raise exception.NotFound |
754 | - |
755 | - @property |
756 | - def image_path(self): |
757 | - return os.path.join(self.path, 'image') |
758 | - |
759 | - def delete(self): |
760 | - for fn in ['info.json', 'image']: |
761 | - try: |
762 | - os.unlink(os.path.join(self.path, fn)) |
763 | - except: |
764 | - pass |
765 | - try: |
766 | - os.rmdir(self.path) |
767 | - except: |
768 | - pass |
769 | - |
770 | - def is_authorized(self, context, readonly=False): |
771 | - # NOTE(devcamcar): Public images can be read by anyone, |
772 | - # but only modified by admin or owner. |
773 | - try: |
774 | - return (self.metadata['isPublic'] and readonly) or \ |
775 | - context.is_admin or \ |
776 | - self.metadata['imageOwnerId'] == context.project_id |
777 | - except: |
778 | - return False |
779 | - |
780 | - def set_public(self, state): |
781 | - md = self.metadata |
782 | - md['isPublic'] = state |
783 | - with open(os.path.join(self.path, 'info.json'), 'w') as f: |
784 | - json.dump(md, f) |
785 | - |
786 | - def update_user_editable_fields(self, args): |
787 | - """args is from the request parameters, so requires extra cleaning""" |
788 | - fields = {'display_name': 'displayName', 'description': 'description'} |
789 | - info = self.metadata |
790 | - for field in fields.keys(): |
791 | - if field in args: |
792 | - info[fields[field]] = args[field] |
793 | - with open(os.path.join(self.path, 'info.json'), 'w') as f: |
794 | - json.dump(info, f) |
795 | - |
796 | - @staticmethod |
797 | - def all(): |
798 | - images = [] |
799 | - for fn in glob.glob("%s/*/info.json" % FLAGS.images_path): |
800 | - try: |
801 | - image_id = fn.split('/')[-2] |
802 | - images.append(Image(image_id)) |
803 | - except: |
804 | - pass |
805 | - return images |
806 | - |
807 | - @property |
808 | - def owner_id(self): |
809 | - return self.metadata['imageOwnerId'] |
810 | - |
811 | - @property |
812 | - def metadata(self): |
813 | - with open(os.path.join(self.path, 'info.json')) as f: |
814 | - return json.load(f) |
815 | - |
816 | - @staticmethod |
817 | - def add(src, description, kernel=None, ramdisk=None, public=True): |
818 | - """adds an image to imagestore |
819 | - |
820 | - @type src: str |
821 | - @param src: location of the partition image on disk |
822 | - |
823 | - @type description: str |
824 | - @param description: string describing the image contents |
825 | - |
826 | - @type kernel: bool or str |
827 | - @param kernel: either TRUE meaning this partition is a kernel image or |
828 | - a string of the image id for the kernel |
829 | - |
830 | - @type ramdisk: bool or str |
831 | - @param ramdisk: either TRUE meaning this partition is a ramdisk image |
832 | - or a string of the image id for the ramdisk |
833 | - |
834 | - |
835 | - @type public: bool |
836 | - @param public: determine if this is a public image or private |
837 | - |
838 | - @rtype: str |
839 | - @return: a string with the image id |
840 | - """ |
841 | - |
842 | - image_type = 'machine' |
843 | - image_id = utils.generate_uid('ami') |
844 | - |
845 | - if kernel is True: |
846 | - image_type = 'kernel' |
847 | - image_id = utils.generate_uid('aki') |
848 | - if ramdisk is True: |
849 | - image_type = 'ramdisk' |
850 | - image_id = utils.generate_uid('ari') |
851 | - |
852 | - image_path = os.path.join(FLAGS.images_path, image_id) |
853 | - os.makedirs(image_path) |
854 | - |
855 | - shutil.copyfile(src, os.path.join(image_path, 'image')) |
856 | - |
857 | - info = { |
858 | - 'imageId': image_id, |
859 | - 'imageLocation': description, |
860 | - 'imageOwnerId': 'system', |
861 | - 'isPublic': public, |
862 | - 'architecture': 'x86_64', |
863 | - 'imageType': image_type, |
864 | - 'state': 'available'} |
865 | - |
866 | - if type(kernel) is str and len(kernel) > 0: |
867 | - info['kernelId'] = kernel |
868 | - |
869 | - if type(ramdisk) is str and len(ramdisk) > 0: |
870 | - info['ramdiskId'] = ramdisk |
871 | - |
872 | - with open(os.path.join(image_path, 'info.json'), "w") as f: |
873 | - json.dump(info, f) |
874 | - |
875 | - return image_id |
876 | - |
877 | - @staticmethod |
878 | - def register_aws_image(image_id, image_location, context): |
879 | - image_path = os.path.join(FLAGS.images_path, image_id) |
880 | - os.makedirs(image_path) |
881 | - |
882 | - bucket_name = image_location.split("/")[0] |
883 | - manifest_path = image_location[len(bucket_name) + 1:] |
884 | - bucket_object = bucket.Bucket(bucket_name) |
885 | - |
886 | - manifest = ElementTree.fromstring(bucket_object[manifest_path].read()) |
887 | - image_type = 'machine' |
888 | - |
889 | - try: |
890 | - kernel_id = manifest.find("machine_configuration/kernel_id").text |
891 | - if kernel_id == 'true': |
892 | - image_type = 'kernel' |
893 | - except: |
894 | - kernel_id = None |
895 | - |
896 | - try: |
897 | - ramdisk_id = manifest.find("machine_configuration/ramdisk_id").text |
898 | - if ramdisk_id == 'true': |
899 | - image_type = 'ramdisk' |
900 | - except: |
901 | - ramdisk_id = None |
902 | - |
903 | - try: |
904 | - arch = manifest.find("machine_configuration/architecture").text |
905 | - except: |
906 | - arch = 'x86_64' |
907 | - |
908 | - info = { |
909 | - 'imageId': image_id, |
910 | - 'imageLocation': image_location, |
911 | - 'imageOwnerId': context.project_id, |
912 | - 'isPublic': False, # FIXME: grab public from manifest |
913 | - 'architecture': arch, |
914 | - 'imageType': image_type} |
915 | - |
916 | - if kernel_id: |
917 | - info['kernelId'] = kernel_id |
918 | - |
919 | - if ramdisk_id: |
920 | - info['ramdiskId'] = ramdisk_id |
921 | - |
922 | - def write_state(state): |
923 | - info['imageState'] = state |
924 | - with open(os.path.join(image_path, 'info.json'), "w") as f: |
925 | - json.dump(info, f) |
926 | - |
927 | - write_state('pending') |
928 | - |
929 | - encrypted_filename = os.path.join(image_path, 'image.encrypted') |
930 | - with open(encrypted_filename, 'w') as f: |
931 | - for filename in manifest.find("image").getiterator("filename"): |
932 | - shutil.copyfileobj(bucket_object[filename.text].file, f) |
933 | - |
934 | - write_state('decrypting') |
935 | - |
936 | - # FIXME: grab kernelId and ramdiskId from bundle manifest |
937 | - hex_key = manifest.find("image/ec2_encrypted_key").text |
938 | - encrypted_key = binascii.a2b_hex(hex_key) |
939 | - hex_iv = manifest.find("image/ec2_encrypted_iv").text |
940 | - encrypted_iv = binascii.a2b_hex(hex_iv) |
941 | - cloud_private_key = os.path.join(FLAGS.ca_path, "private/cakey.pem") |
942 | - |
943 | - decrypted_filename = os.path.join(image_path, 'image.tar.gz') |
944 | - Image.decrypt_image(encrypted_filename, encrypted_key, encrypted_iv, |
945 | - cloud_private_key, decrypted_filename) |
946 | - |
947 | - write_state('untarring') |
948 | - |
949 | - image_file = Image.untarzip_image(image_path, decrypted_filename) |
950 | - shutil.move(os.path.join(image_path, image_file), |
951 | - os.path.join(image_path, 'image')) |
952 | - |
953 | - write_state('available') |
954 | - os.unlink(decrypted_filename) |
955 | - os.unlink(encrypted_filename) |
956 | - |
957 | - @staticmethod |
958 | - def decrypt_image(encrypted_filename, encrypted_key, encrypted_iv, |
959 | - cloud_private_key, decrypted_filename): |
960 | - key, err = utils.execute('openssl', |
961 | - 'rsautl', |
962 | - '-decrypt', |
963 | - '-inkey', '%s' % cloud_private_key, |
964 | - process_input=encrypted_key, |
965 | - check_exit_code=False) |
966 | - if err: |
967 | - raise exception.Error(_("Failed to decrypt private key: %s") |
968 | - % err) |
969 | - iv, err = utils.execute('openssl', |
970 | - 'rsautl', |
971 | - '-decrypt', |
972 | - '-inkey', '%s' % cloud_private_key, |
973 | - process_input=encrypted_iv, |
974 | - check_exit_code=False) |
975 | - if err: |
976 | - raise exception.Error(_("Failed to decrypt initialization " |
977 | - "vector: %s") % err) |
978 | - |
979 | - _out, err = utils.execute('openssl', |
980 | - 'enc', |
981 | - '-d', |
982 | - '-aes-128-cbc', |
983 | - '-in', '%s' % (encrypted_filename,), |
984 | - '-K', '%s' % (key,), |
985 | - '-iv', '%s' % (iv,), |
986 | - '-out', '%s' % (decrypted_filename,), |
987 | - check_exit_code=False) |
988 | - if err: |
989 | - raise exception.Error(_("Failed to decrypt image file " |
990 | - "%(image_file)s: %(err)s") % |
991 | - {'image_file': encrypted_filename, |
992 | - 'err': err}) |
993 | - |
994 | - @staticmethod |
995 | - def untarzip_image(path, filename): |
996 | - tar_file = tarfile.open(filename, "r|gz") |
997 | - tar_file.extractall(path) |
998 | - image_file = tar_file.getnames()[0] |
999 | - tar_file.close() |
1000 | - return image_file |
1001 | |
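When a bucket depth is configured, the s3server added below shards objects under md5-prefix directories rather than one flat directory per bucket. A standalone sketch of that `_object_path` scheme (the root path here is illustrative; `.encode()` is added so the sketch runs on Python 3 as well):

```python
import hashlib
import os.path


def object_path(root, bucket, object_name, bucket_depth=0):
    # With bucket_depth < 1, objects live directly under the bucket dir.
    if bucket_depth < 1:
        return os.path.abspath(os.path.join(root, bucket, object_name))
    # Otherwise nest the object under md5-prefix directories, two extra
    # hex characters per level, to keep any one directory small.
    digest = hashlib.md5(object_name.encode()).hexdigest()
    path = os.path.abspath(os.path.join(root, bucket))
    for i in range(bucket_depth):
        path = os.path.join(path, digest[:2 * (i + 1)])
    return os.path.join(path, object_name)
```

With depth 2, an object `key` lands under `<bucket>/<hh>/<hhhh>/key`, where `hh`/`hhhh` are the first two and four hex digits of `md5("key")`.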
1002 | === added file 'nova/objectstore/s3server.py' |
1003 | --- nova/objectstore/s3server.py 1970-01-01 00:00:00 +0000 |
1004 | +++ nova/objectstore/s3server.py 2011-03-24 23:51:35 +0000 |
1005 | @@ -0,0 +1,335 @@ |
1006 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
1007 | +# |
1008 | +# Copyright 2010 United States Government as represented by the |
1009 | +# Administrator of the National Aeronautics and Space Administration. |
1010 | +# Copyright 2010 OpenStack LLC. |
1011 | +# Copyright 2009 Facebook |
1012 | +# |
1013 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
1014 | +# not use this file except in compliance with the License. You may obtain |
1015 | +# a copy of the License at |
1016 | +# |
1017 | +# http://www.apache.org/licenses/LICENSE-2.0 |
1018 | +# |
1019 | +# Unless required by applicable law or agreed to in writing, software |
1020 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
1021 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
1022 | +# License for the specific language governing permissions and limitations |
1023 | +# under the License. |
1024 | + |
1025 | +"""Implementation of an S3-like storage server based on local files. |
1026 | + |
1027 | +Useful to test features that will eventually run on S3, or if you want to |
1028 | +run something locally that was once running on S3. |
1029 | + |
1030 | +It doesn't support all the features of S3, but it works with the |
1031 | +standard S3 client for the most basic semantics. To use the standard |
1032 | +S3 client with this module: |
1033 | + |
1034 | + c = S3.AWSAuthConnection("", "", server="localhost", port=8888, |
1035 | + is_secure=False) |
1036 | + c.create_bucket("mybucket") |
1037 | + c.put("mybucket", "mykey", "a value") |
1038 | + print c.get("mybucket", "mykey").body |
1039 | + |
1040 | +""" |
1041 | + |
1042 | +import bisect |
1043 | +import datetime |
1044 | +import hashlib |
1045 | +import os |
1046 | +import os.path |
1047 | +import urllib |
1048 | + |
1049 | +import routes |
1050 | +import webob |
1051 | + |
1052 | +from nova import flags |
1053 | +from nova import log as logging |
1054 | +from nova import utils |
1055 | +from nova import wsgi |
1056 | + |
1057 | + |
1058 | +FLAGS = flags.FLAGS |
1059 | +flags.DEFINE_string('buckets_path', '$state_path/buckets', |
1060 | + 'path to s3 buckets') |
1061 | + |
1062 | + |
1063 | +class S3Application(wsgi.Router): |
1064 | + """Implementation of an S3-like storage server based on local files. |
1065 | + |
1066 | + If bucket depth is given, we break files up into multiple directories |
1067 | +    to prevent hitting file system limits on the number of files in each |
1068 | +    directory. 1 means one level of directories, 2 means two, etc. |
1069 | + |
1070 | + """ |
1071 | + |
1072 | + def __init__(self, root_directory, bucket_depth=0, mapper=None): |
1073 | + if mapper is None: |
1074 | + mapper = routes.Mapper() |
1075 | + |
1076 | + mapper.connect('/', |
1077 | + controller=lambda *a, **kw: RootHandler(self)(*a, **kw)) |
1078 | + mapper.connect('/{bucket}/{object_name}', |
1079 | + controller=lambda *a, **kw: ObjectHandler(self)(*a, **kw)) |
1080 | + mapper.connect('/{bucket_name}/', |
1081 | + controller=lambda *a, **kw: BucketHandler(self)(*a, **kw)) |
1082 | + self.directory = os.path.abspath(root_directory) |
1083 | + if not os.path.exists(self.directory): |
1084 | + os.makedirs(self.directory) |
1085 | + self.bucket_depth = bucket_depth |
1086 | + super(S3Application, self).__init__(mapper) |
1087 | + |
1088 | + |
1089 | +class BaseRequestHandler(wsgi.Controller): |
1090 | + """Base class emulating Tornado's web framework pattern in WSGI. |
1091 | + |
1092 | + This is a direct port of Tornado's implementation, so some key decisions |
1093 | + about how the code interacts have already been chosen. |
1094 | + |
1095 | + The two most common ways of designing web frameworks can be |
1096 | + classified as async object-oriented and sync functional. |
1097 | + |
1098 | + Tornado's is on the OO side because a response is built up in and using |
1099 | + the shared state of an object and one of the object's methods will |
1100 | + eventually trigger the "finishing" of the response asynchronously. |
1101 | + |
1102 | +    Most WSGI frameworks are on the functional side: we pass a request object |
1103 | +    down a chain of calls, and the eventual return value is the response. |
1104 | + |
1105 | + Part of the function of the routing code in S3Application as well as the |
1106 | + code in BaseRequestHandler's __call__ method is to merge those two styles |
1107 | + together enough that the Tornado code can work without extensive |
1108 | + modifications. |
1109 | + |
1110 | +    To do that it needs to give the Tornado-style code a clean object |
1111 | +    whose state it can modify for each request. We use a very simple |
1112 | +    factory lambda in the router to create that fresh state per request; |
1113 | +    once the Tornado code has handled the request by modifying the |
1114 | +    object, we return the response it generated. |
1115 | + This wouldn't work the same if Tornado was being more async'y and doing |
1116 | + other callbacks throughout the process, but since Tornado is being |
1117 | + relatively simple here we can be satisfied that the response will be |
1118 | + complete by the end of the get/post method. |
1119 | + |
1120 | + """ |
1121 | + |
1122 | + def __init__(self, application): |
1123 | + self.application = application |
1124 | + |
1125 | + @webob.dec.wsgify |
1126 | + def __call__(self, request): |
1127 | + method = request.method.lower() |
1128 | + f = getattr(self, method, self.invalid) |
1129 | + self.request = request |
1130 | + self.response = webob.Response() |
1131 | + params = request.environ['wsgiorg.routing_args'][1] |
1132 | + del params['controller'] |
1133 | + f(**params) |
1134 | + return self.response |
1135 | + |
1136 | + def get_argument(self, arg, default): |
1137 | + return self.request.str_params.get(arg, default) |
1138 | + |
1139 | + def set_header(self, header, value): |
1140 | + self.response.headers[header] = value |
1141 | + |
1142 | + def set_status(self, status_code): |
1143 | + self.response.status = status_code |
1144 | + |
1145 | + def finish(self, body=''): |
1146 | + self.response.body = utils.utf8(body) |
1147 | + |
1148 | + def invalid(self, **kwargs): |
1149 | + pass |
1150 | + |
1151 | + def render_xml(self, value): |
1152 | + assert isinstance(value, dict) and len(value) == 1 |
1153 | + self.set_header("Content-Type", "application/xml; charset=UTF-8") |
1154 | + name = value.keys()[0] |
1155 | + parts = [] |
1156 | + parts.append('<' + utils.utf8(name) + |
1157 | + ' xmlns="http://doc.s3.amazonaws.com/2006-03-01">') |
1158 | + self._render_parts(value.values()[0], parts) |
1159 | + parts.append('</' + utils.utf8(name) + '>') |
1160 | + self.finish('<?xml version="1.0" encoding="UTF-8"?>\n' + |
1161 | + ''.join(parts)) |
1162 | + |
1163 | + def _render_parts(self, value, parts=[]): |
1164 | + if isinstance(value, basestring): |
1165 | + parts.append(utils.xhtml_escape(value)) |
1166 | + elif isinstance(value, int) or isinstance(value, long): |
1167 | + parts.append(str(value)) |
1168 | + elif isinstance(value, datetime.datetime): |
1169 | + parts.append(value.strftime("%Y-%m-%dT%H:%M:%S.000Z")) |
1170 | + elif isinstance(value, dict): |
1171 | + for name, subvalue in value.iteritems(): |
1172 | + if not isinstance(subvalue, list): |
1173 | + subvalue = [subvalue] |
1174 | + for subsubvalue in subvalue: |
1175 | + parts.append('<' + utils.utf8(name) + '>') |
1176 | + self._render_parts(subsubvalue, parts) |
1177 | + parts.append('</' + utils.utf8(name) + '>') |
1178 | + else: |
1179 | +            raise Exception("Unknown S3 value type %r" % value) |
1180 | + |
1181 | + def _object_path(self, bucket, object_name): |
1182 | + if self.application.bucket_depth < 1: |
1183 | + return os.path.abspath(os.path.join( |
1184 | + self.application.directory, bucket, object_name)) |
1185 | + hash = hashlib.md5(object_name).hexdigest() |
1186 | + path = os.path.abspath(os.path.join( |
1187 | + self.application.directory, bucket)) |
1188 | + for i in range(self.application.bucket_depth): |
1189 | + path = os.path.join(path, hash[:2 * (i + 1)]) |
1190 | + return os.path.join(path, object_name) |
1191 | + |
1192 | + |
1193 | +class RootHandler(BaseRequestHandler): |
1194 | + def get(self): |
1195 | + names = os.listdir(self.application.directory) |
1196 | + buckets = [] |
1197 | + for name in names: |
1198 | + path = os.path.join(self.application.directory, name) |
1199 | + info = os.stat(path) |
1200 | + buckets.append({ |
1201 | + "Name": name, |
1202 | + "CreationDate": datetime.datetime.utcfromtimestamp( |
1203 | + info.st_ctime), |
1204 | + }) |
1205 | + self.render_xml({"ListAllMyBucketsResult": { |
1206 | + "Buckets": {"Bucket": buckets}, |
1207 | + }}) |
1208 | + |
1209 | + |
1210 | +class BucketHandler(BaseRequestHandler): |
1211 | + def get(self, bucket_name): |
1212 | + prefix = self.get_argument("prefix", u"") |
1213 | + marker = self.get_argument("marker", u"") |
1214 | + max_keys = int(self.get_argument("max-keys", 50000)) |
1215 | + path = os.path.abspath(os.path.join(self.application.directory, |
1216 | + bucket_name)) |
1217 | + terse = int(self.get_argument("terse", 0)) |
1218 | + if not path.startswith(self.application.directory) or \ |
1219 | + not os.path.isdir(path): |
1220 | + self.set_status(404) |
1221 | + return |
1222 | + object_names = [] |
1223 | + for root, dirs, files in os.walk(path): |
1224 | + for file_name in files: |
1225 | + object_names.append(os.path.join(root, file_name)) |
1226 | + skip = len(path) + 1 |
1227 | + for i in range(self.application.bucket_depth): |
1228 | + skip += 2 * (i + 1) + 1 |
1229 | + object_names = [n[skip:] for n in object_names] |
1230 | + object_names.sort() |
1231 | + contents = [] |
1232 | + |
1233 | + start_pos = 0 |
1234 | + if marker: |
1235 | + start_pos = bisect.bisect_right(object_names, marker, start_pos) |
1236 | + if prefix: |
1237 | + start_pos = bisect.bisect_left(object_names, prefix, start_pos) |
1238 | + |
1239 | + truncated = False |
1240 | + for object_name in object_names[start_pos:]: |
1241 | + if not object_name.startswith(prefix): |
1242 | + break |
1243 | + if len(contents) >= max_keys: |
1244 | + truncated = True |
1245 | + break |
1246 | + object_path = self._object_path(bucket_name, object_name) |
1247 | + c = {"Key": object_name} |
1248 | + if not terse: |
1249 | + info = os.stat(object_path) |
1250 | + c.update({ |
1251 | + "LastModified": datetime.datetime.utcfromtimestamp( |
1252 | + info.st_mtime), |
1253 | + "Size": info.st_size, |
1254 | + }) |
1255 | + contents.append(c) |
1256 | + marker = object_name |
1257 | + self.render_xml({"ListBucketResult": { |
1258 | + "Name": bucket_name, |
1259 | + "Prefix": prefix, |
1260 | + "Marker": marker, |
1261 | + "MaxKeys": max_keys, |
1262 | + "IsTruncated": truncated, |
1263 | + "Contents": contents, |
1264 | + }}) |
1265 | + |
1266 | + def put(self, bucket_name): |
1267 | + path = os.path.abspath(os.path.join( |
1268 | + self.application.directory, bucket_name)) |
1269 | + if not path.startswith(self.application.directory) or \ |
1270 | + os.path.exists(path): |
1271 | + self.set_status(403) |
1272 | + return |
1273 | + os.makedirs(path) |
1274 | + self.finish() |
1275 | + |
1276 | + def delete(self, bucket_name): |
1277 | + path = os.path.abspath(os.path.join( |
1278 | + self.application.directory, bucket_name)) |
1279 | + if not path.startswith(self.application.directory) or \ |
1280 | + not os.path.isdir(path): |
1281 | + self.set_status(404) |
1282 | + return |
1283 | + if len(os.listdir(path)) > 0: |
1284 | + self.set_status(403) |
1285 | + return |
1286 | + os.rmdir(path) |
1287 | + self.set_status(204) |
1288 | + self.finish() |
1289 | + |
1290 | + |
1291 | +class ObjectHandler(BaseRequestHandler): |
1292 | + def get(self, bucket, object_name): |
1293 | + object_name = urllib.unquote(object_name) |
1294 | + path = self._object_path(bucket, object_name) |
1295 | + if not path.startswith(self.application.directory) or \ |
1296 | + not os.path.isfile(path): |
1297 | + self.set_status(404) |
1298 | + return |
1299 | + info = os.stat(path) |
1300 | + self.set_header("Content-Type", "application/unknown") |
1301 | + self.set_header("Last-Modified", datetime.datetime.utcfromtimestamp( |
1302 | + info.st_mtime)) |
1303 | + object_file = open(path, "r") |
1304 | + try: |
1305 | + self.finish(object_file.read()) |
1306 | + finally: |
1307 | + object_file.close() |
1308 | + |
1309 | + def put(self, bucket, object_name): |
1310 | + object_name = urllib.unquote(object_name) |
1311 | + bucket_dir = os.path.abspath(os.path.join( |
1312 | + self.application.directory, bucket)) |
1313 | + if not bucket_dir.startswith(self.application.directory) or \ |
1314 | + not os.path.isdir(bucket_dir): |
1315 | + self.set_status(404) |
1316 | + return |
1317 | + path = self._object_path(bucket, object_name) |
1318 | + if not path.startswith(bucket_dir) or os.path.isdir(path): |
1319 | + self.set_status(403) |
1320 | + return |
1321 | + directory = os.path.dirname(path) |
1322 | + if not os.path.exists(directory): |
1323 | + os.makedirs(directory) |
1324 | + object_file = open(path, "w") |
1325 | + object_file.write(self.request.body) |
1326 | + object_file.close() |
1327 | + self.set_header('ETag', |
1328 | + '"%s"' % hashlib.md5(self.request.body).hexdigest()) |
1329 | + self.finish() |
1330 | + |
1331 | + def delete(self, bucket, object_name): |
1332 | + object_name = urllib.unquote(object_name) |
1333 | + path = self._object_path(bucket, object_name) |
1334 | + if not path.startswith(self.application.directory) or \ |
1335 | + not os.path.isfile(path): |
1336 | + self.set_status(404) |
1337 | + return |
1338 | + os.unlink(path) |
1339 | + self.set_status(204) |
1340 | + self.finish() |
1341 | |
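The new `BucketHandler.get` above implements S3-style listing pagination: keys are sorted, `bisect` locates the starting position implied by the `marker` and `prefix` parameters, and the result is truncated at `max-keys`. The core loop in isolation (the function name and signature here are illustrative, not part of the patch):

```python
import bisect

def list_bucket(object_names, prefix="", marker="", max_keys=1000):
    """Paginate a key listing the way BucketHandler.get does."""
    names = sorted(object_names)
    start_pos = 0
    if marker:
        # marker is exclusive: resume listing *after* it
        start_pos = bisect.bisect_right(names, marker, start_pos)
    if prefix:
        start_pos = bisect.bisect_left(names, prefix, start_pos)
    contents = []
    truncated = False
    for name in names[start_pos:]:
        if not name.startswith(prefix):
            break  # sorted order: nothing later can match the prefix
        if len(contents) >= max_keys:
            truncated = True  # caller should page again with marker=contents[-1]
            break
        contents.append(name)
    return contents, truncated
```

For example, `list_bucket(["a/1", "a/2", "b/1"], prefix="a/", marker="a/1")` returns `(["a/2"], False)`, matching how an S3 client would fetch the next page of keys under a prefix.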
1342 | === removed file 'nova/objectstore/stored.py' |
1343 | --- nova/objectstore/stored.py 2010-10-22 00:15:21 +0000 |
1344 | +++ nova/objectstore/stored.py 1970-01-01 00:00:00 +0000 |
1345 | @@ -1,63 +0,0 @@ |
1346 | -# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
1347 | - |
1348 | -# Copyright 2010 United States Government as represented by the |
1349 | -# Administrator of the National Aeronautics and Space Administration. |
1350 | -# All Rights Reserved. |
1351 | -# |
1352 | -# Licensed under the Apache License, Version 2.0 (the "License"); you may |
1353 | -# not use this file except in compliance with the License. You may obtain |
1354 | -# a copy of the License at |
1355 | -# |
1356 | -# http://www.apache.org/licenses/LICENSE-2.0 |
1357 | -# |
1358 | -# Unless required by applicable law or agreed to in writing, software |
1359 | -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
1360 | -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
1361 | -# License for the specific language governing permissions and limitations |
1362 | -# under the License. |
1363 | - |
1364 | -""" |
1365 | -Properties of an object stored within a bucket. |
1366 | -""" |
1367 | - |
1368 | -import os |
1369 | - |
1370 | -import nova.crypto |
1371 | -from nova import exception |
1372 | - |
1373 | - |
1374 | -class Object(object): |
1375 | - def __init__(self, bucket, key): |
1376 | - """ wrapper class of an existing key """ |
1377 | - self.bucket = bucket |
1378 | - self.key = key |
1379 | - self.path = bucket._object_path(key) |
1380 | - if not os.path.isfile(self.path): |
1381 | - raise exception.NotFound |
1382 | - |
1383 | - def __repr__(self): |
1384 | - return "<Object %s/%s>" % (self.bucket, self.key) |
1385 | - |
1386 | - @property |
1387 | - def md5(self): |
1388 | - """ computes the MD5 of the contents of file """ |
1389 | - with open(self.path, "r") as f: |
1390 | - return nova.crypto.compute_md5(f) |
1391 | - |
1392 | - @property |
1393 | - def mtime(self): |
1394 | - """ mtime of file """ |
1395 | - return os.path.getmtime(self.path) |
1396 | - |
1397 | - def read(self): |
1398 | - """ read all contents of key into memory and return """ |
1399 | - return self.file.read() |
1400 | - |
1401 | - @property |
1402 | - def file(self): |
1403 | - """ return a file object for the key """ |
1404 | - return open(self.path, 'rb') |
1405 | - |
1406 | - def delete(self): |
1407 | - """ deletes the file """ |
1408 | - os.unlink(self.path) |
1409 | |
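The removed `Object.md5` property and the new `ObjectHandler.put` both expose an object's MD5; in the replacement handler it is returned as an S3-style quoted `ETag` header. A minimal sketch of that computation (the helper name is illustrative):

```python
import hashlib

def etag_for(body):
    """S3-style ETag for a non-multipart upload: the quoted hex MD5
    of the object body, as set by ObjectHandler.put above."""
    return '"%s"' % hashlib.md5(body).hexdigest()
```

Clients such as boto compare this header against their own digest of the uploaded bytes to verify the transfer.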
1410 | === modified file 'nova/test.py' |
1411 | --- nova/test.py 2011-03-15 00:37:13 +0000 |
1412 | +++ nova/test.py 2011-03-24 23:51:35 +0000 |
1413 | @@ -24,6 +24,7 @@ |
1414 | |
1415 | |
1416 | import datetime |
1417 | +import functools |
1418 | import os |
1419 | import shutil |
1420 | import uuid |
1421 | @@ -32,6 +33,7 @@ |
1422 | import mox |
1423 | import shutil |
1424 | import stubout |
1425 | +from eventlet import greenthread |
1426 | |
1427 | from nova import context |
1428 | from nova import db |
1429 | @@ -39,6 +41,7 @@ |
1430 | from nova import flags |
1431 | from nova import rpc |
1432 | from nova import service |
1433 | +from nova import wsgi |
1434 | |
1435 | |
1436 | FLAGS = flags.FLAGS |
1437 | @@ -79,6 +82,7 @@ |
1438 | self.injected = [] |
1439 | self._services = [] |
1440 | self._monkey_patch_attach() |
1441 | + self._monkey_patch_wsgi() |
1442 | self._original_flags = FLAGS.FlagValuesDict() |
1443 | |
1444 | def tearDown(self): |
1445 | @@ -99,7 +103,8 @@ |
1446 | self.reset_flags() |
1447 | |
1448 | # Reset our monkey-patches |
1449 | - rpc.Consumer.attach_to_eventlet = self.originalAttach |
1450 | + rpc.Consumer.attach_to_eventlet = self.original_attach |
1451 | + wsgi.Server.start = self.original_start |
1452 | |
1453 | # Stop any timers |
1454 | for x in self.injected: |
1455 | @@ -141,16 +146,37 @@ |
1456 | return svc |
1457 | |
1458 | def _monkey_patch_attach(self): |
1459 | - self.originalAttach = rpc.Consumer.attach_to_eventlet |
1460 | + self.original_attach = rpc.Consumer.attach_to_eventlet |
1461 | |
1462 | - def _wrapped(innerSelf): |
1463 | - rv = self.originalAttach(innerSelf) |
1464 | + def _wrapped(inner_self): |
1465 | + rv = self.original_attach(inner_self) |
1466 | self.injected.append(rv) |
1467 | return rv |
1468 | |
1469 | - _wrapped.func_name = self.originalAttach.func_name |
1470 | + _wrapped.func_name = self.original_attach.func_name |
1471 | rpc.Consumer.attach_to_eventlet = _wrapped |
1472 | |
1473 | + def _monkey_patch_wsgi(self): |
1474 | + """Allow us to kill servers spawned by wsgi.Server.""" |
1475 | + # TODO(termie): change these patterns to use functools |
1476 | + self.original_start = wsgi.Server.start |
1477 | + |
1478 | + @functools.wraps(self.original_start) |
1479 | + def _wrapped_start(inner_self, *args, **kwargs): |
1480 | + original_spawn_n = inner_self.pool.spawn_n |
1481 | + |
1482 | + @functools.wraps(original_spawn_n) |
1483 | + def _wrapped_spawn_n(*args, **kwargs): |
1484 | + rv = greenthread.spawn(*args, **kwargs) |
1485 | + self._services.append(rv) |
1486 | + |
1487 | + inner_self.pool.spawn_n = _wrapped_spawn_n |
1488 | + self.original_start(inner_self, *args, **kwargs) |
1489 | + inner_self.pool.spawn_n = original_spawn_n |
1490 | + |
1491 | + _wrapped_start.func_name = self.original_start.func_name |
1492 | + wsgi.Server.start = _wrapped_start |
1493 | + |
1494 | # Useful assertions |
1495 | def assertDictMatch(self, d1, d2): |
1496 | """Assert two dicts are equivalent. |
1497 | |
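`_monkey_patch_wsgi` above wraps `wsgi.Server.start` so the test harness can record (and later kill) the green threads each server spawns, restoring the original method in `tearDown`. The same pattern in isolation, with stand-in names rather than nova's API:

```python
import functools

class Server(object):
    """Stand-in for wsgi.Server."""
    def start(self, app):
        return "started %s" % app

def patch_start(cls, spawned):
    """Replace cls.start with a wrapper that records each return value.
    functools.wraps keeps the original function's name and docstring,
    so tracebacks and debuggers still show 'start'. Returns the
    original method so a tearDown can restore it."""
    original = cls.start

    @functools.wraps(original)
    def _wrapped_start(self, *args, **kwargs):
        rv = original(self, *args, **kwargs)
        spawned.append(rv)  # in nova's version: track for later .kill()
        return rv

    cls.start = _wrapped_start
    return original
```

After `original = patch_start(Server, log)`, every `Server().start(...)` is logged, and `Server.start = original` undoes the patch, mirroring the restore in the diff's `tearDown`.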
1498 | === modified file 'nova/tests/integrated/integrated_helpers.py' |
1499 | --- nova/tests/integrated/integrated_helpers.py 2011-03-15 05:29:30 +0000 |
1500 | +++ nova/tests/integrated/integrated_helpers.py 2011-03-24 23:51:35 +0000 |
1501 | @@ -75,8 +75,6 @@ |
1502 | |
1503 | |
1504 | class IntegratedUnitTestContext(object): |
1505 | - __INSTANCE = None |
1506 | - |
1507 | def __init__(self): |
1508 | self.auth_manager = manager.AuthManager() |
1509 | |
1510 | @@ -92,7 +90,6 @@ |
1511 | |
1512 | def setup(self): |
1513 | self._start_services() |
1514 | - |
1515 | self._create_test_user() |
1516 | |
1517 | def _create_test_user(self): |
1518 | @@ -109,14 +106,6 @@ |
1519 | self._start_api_service() |
1520 | |
1521 | def cleanup(self): |
1522 | - for service in self.services: |
1523 | - service.kill() |
1524 | - self.services = [] |
1525 | - # TODO(justinsb): Shutdown WSGI & anything else we startup |
1526 | - # bug731668 |
1527 | - # WSGI shutdown broken :-( |
1528 | - # self.wsgi_server.terminate() |
1529 | - # self.wsgi_server = None |
1530 | self.test_user = None |
1531 | |
1532 | def _create_unittest_user(self): |
1533 | @@ -150,39 +139,8 @@ |
1534 | if not api_service: |
1535 | raise Exception("API Service was None") |
1536 | |
1537 | - # WSGI shutdown broken :-( |
1538 | - #self.services.append(volume_service) |
1539 | self.api_service = api_service |
1540 | |
1541 | self.auth_url = 'http://localhost:8774/v1.0' |
1542 | |
1543 | return api_service |
1544 | - |
1545 | - # WSGI shutdown broken :-( |
1546 | - # bug731668 |
1547 | - #@staticmethod |
1548 | - #def get(): |
1549 | - # if not IntegratedUnitTestContext.__INSTANCE: |
1550 | - # IntegratedUnitTestContext.startup() |
1551 | - # #raise Error("Must call IntegratedUnitTestContext::startup") |
1552 | - # return IntegratedUnitTestContext.__INSTANCE |
1553 | - |
1554 | - @staticmethod |
1555 | - def startup(): |
1556 | - # Because WSGI shutdown is broken at the moment, we have to recycle |
1557 | - # bug731668 |
1558 | - if IntegratedUnitTestContext.__INSTANCE: |
1559 | - #raise Error("Multiple calls to IntegratedUnitTestContext.startup") |
1560 | - IntegratedUnitTestContext.__INSTANCE.setup() |
1561 | - else: |
1562 | - IntegratedUnitTestContext.__INSTANCE = IntegratedUnitTestContext() |
1563 | - return IntegratedUnitTestContext.__INSTANCE |
1564 | - |
1565 | - @staticmethod |
1566 | - def shutdown(): |
1567 | - if not IntegratedUnitTestContext.__INSTANCE: |
1568 | - raise Error("Must call IntegratedUnitTestContext::startup") |
1569 | - IntegratedUnitTestContext.__INSTANCE.cleanup() |
1570 | - # WSGI shutdown broken :-( |
1571 | - # bug731668 |
1572 | - #IntegratedUnitTestContext.__INSTANCE = None |
1573 | |
1574 | === modified file 'nova/tests/integrated/test_login.py' |
1575 | --- nova/tests/integrated/test_login.py 2011-03-15 05:43:21 +0000 |
1576 | +++ nova/tests/integrated/test_login.py 2011-03-24 23:51:35 +0000 |
1577 | @@ -33,12 +33,12 @@ |
1578 | class LoginTest(test.TestCase): |
1579 | def setUp(self): |
1580 | super(LoginTest, self).setUp() |
1581 | - context = integrated_helpers.IntegratedUnitTestContext.startup() |
1582 | - self.user = context.test_user |
1583 | + self.context = integrated_helpers.IntegratedUnitTestContext() |
1584 | + self.user = self.context.test_user |
1585 | self.api = self.user.openstack_api |
1586 | |
1587 | def tearDown(self): |
1588 | - integrated_helpers.IntegratedUnitTestContext.shutdown() |
1589 | + self.context.cleanup() |
1590 | super(LoginTest, self).tearDown() |
1591 | |
1592 | def test_login(self): |
1593 | |
1594 | === modified file 'nova/tests/test_cloud.py' |
1595 | --- nova/tests/test_cloud.py 2011-03-11 16:55:28 +0000 |
1596 | +++ nova/tests/test_cloud.py 2011-03-24 23:51:35 +0000 |
1597 | @@ -35,31 +35,22 @@ |
1598 | from nova import rpc |
1599 | from nova import service |
1600 | from nova import test |
1601 | +from nova import utils |
1602 | from nova.auth import manager |
1603 | from nova.compute import power_state |
1604 | from nova.api.ec2 import cloud |
1605 | from nova.api.ec2 import ec2utils |
1606 | from nova.image import local |
1607 | -from nova.objectstore import image |
1608 | |
1609 | |
1610 | FLAGS = flags.FLAGS |
1611 | LOG = logging.getLogger('nova.tests.cloud') |
1612 | |
1613 | -# Temp dirs for working with image attributes through the cloud controller |
1614 | -# (stole this from objectstore_unittest.py) |
1615 | -OSS_TEMPDIR = tempfile.mkdtemp(prefix='test_oss-') |
1616 | -IMAGES_PATH = os.path.join(OSS_TEMPDIR, 'images') |
1617 | -os.makedirs(IMAGES_PATH) |
1618 | - |
1619 | - |
1620 | -# TODO(termie): these tests are rather fragile, they should at the lest be |
1621 | -# wiping database state after each run |
1622 | + |
1623 | class CloudTestCase(test.TestCase): |
1624 | def setUp(self): |
1625 | super(CloudTestCase, self).setUp() |
1626 | - self.flags(connection_type='fake', |
1627 | - images_path=IMAGES_PATH) |
1628 | + self.flags(connection_type='fake') |
1629 | |
1630 | self.conn = rpc.Connection.instance() |
1631 | |
1632 | @@ -70,6 +61,7 @@ |
1633 | self.compute = self.start_service('compute') |
1634 | self.scheduter = self.start_service('scheduler') |
1635 | self.network = self.start_service('network') |
1636 | + self.image_service = utils.import_object(FLAGS.image_service) |
1637 | |
1638 | self.manager = manager.AuthManager() |
1639 | self.user = self.manager.create_user('admin', 'admin', 'admin', True) |
1640 | @@ -318,41 +310,6 @@ |
1641 | LOG.debug(_("Terminating instance %s"), instance_id) |
1642 | rv = self.compute.terminate_instance(instance_id) |
1643 | |
1644 | - @staticmethod |
1645 | - def _fake_set_image_description(ctxt, image_id, description): |
1646 | - from nova.objectstore import handler |
1647 | - |
1648 | - class req: |
1649 | - pass |
1650 | - |
1651 | - request = req() |
1652 | - request.context = ctxt |
1653 | - request.args = {'image_id': [image_id], |
1654 | - 'description': [description]} |
1655 | - |
1656 | - resource = handler.ImagesResource() |
1657 | - resource.render_POST(request) |
1658 | - |
1659 | - def test_user_editable_image_endpoint(self): |
1660 | - pathdir = os.path.join(FLAGS.images_path, 'ami-testing') |
1661 | - os.mkdir(pathdir) |
1662 | - info = {'isPublic': False} |
1663 | - with open(os.path.join(pathdir, 'info.json'), 'w') as f: |
1664 | - json.dump(info, f) |
1665 | - img = image.Image('ami-testing') |
1666 | - # self.cloud.set_image_description(self.context, 'ami-testing', |
1667 | - # 'Foo Img') |
1668 | - # NOTE(vish): Above won't work unless we start objectstore or create |
1669 | - # a fake version of api/ec2/images.py conn that can |
1670 | - # call methods directly instead of going through boto. |
1671 | - # for now, just cheat and call the method directly |
1672 | - self._fake_set_image_description(self.context, 'ami-testing', |
1673 | - 'Foo Img') |
1674 | - self.assertEqual('Foo Img', img.metadata['description']) |
1675 | - self._fake_set_image_description(self.context, 'ami-testing', '') |
1676 | - self.assertEqual('', img.metadata['description']) |
1677 | - shutil.rmtree(pathdir) |
1678 | - |
1679 | def test_update_of_instance_display_fields(self): |
1680 | inst = db.instance_create(self.context, {}) |
1681 | ec2_id = ec2utils.id_to_ec2_id(inst['id']) |
1682 | |
1683 | === renamed file 'nova/tests/objectstore_unittest.py' => 'nova/tests/test_objectstore.py' |
1684 | --- nova/tests/objectstore_unittest.py 2011-03-18 13:56:38 +0000 |
1685 | +++ nova/tests/test_objectstore.py 2011-03-24 23:51:35 +0000 |
1686 | @@ -27,18 +27,16 @@ |
1687 | import shutil |
1688 | import tempfile |
1689 | |
1690 | -from boto.s3.connection import S3Connection, OrdinaryCallingFormat |
1691 | -from twisted.internet import reactor, threads, defer |
1692 | -from twisted.web import http, server |
1693 | +from boto import exception as boto_exception |
1694 | +from boto.s3 import connection as s3 |
1695 | |
1696 | from nova import context |
1697 | +from nova import exception |
1698 | from nova import flags |
1699 | -from nova import objectstore |
1700 | +from nova import wsgi |
1701 | from nova import test |
1702 | from nova.auth import manager |
1703 | -from nova.exception import NotEmpty, NotFound |
1704 | -from nova.objectstore import image |
1705 | -from nova.objectstore.handler import S3 |
1706 | +from nova.objectstore import s3server |
1707 | |
1708 | |
1709 | FLAGS = flags.FLAGS |
1710 | @@ -53,151 +51,15 @@ |
1711 | os.makedirs(os.path.join(OSS_TEMPDIR, 'buckets')) |
1712 | |
1713 | |
1714 | -class ObjectStoreTestCase(test.TestCase): |
1715 | - """Test objectstore API directly.""" |
1716 | - |
1717 | - def setUp(self): |
1718 | - """Setup users and projects.""" |
1719 | - super(ObjectStoreTestCase, self).setUp() |
1720 | - self.flags(buckets_path=os.path.join(OSS_TEMPDIR, 'buckets'), |
1721 | - images_path=os.path.join(OSS_TEMPDIR, 'images'), |
1722 | - ca_path=os.path.join(os.path.dirname(__file__), 'CA')) |
1723 | - |
1724 | - self.auth_manager = manager.AuthManager() |
1725 | - self.auth_manager.create_user('user1') |
1726 | - self.auth_manager.create_user('user2') |
1727 | - self.auth_manager.create_user('admin_user', admin=True) |
1728 | - self.auth_manager.create_project('proj1', 'user1', 'a proj', ['user1']) |
1729 | - self.auth_manager.create_project('proj2', 'user2', 'a proj', ['user2']) |
1730 | - self.context = context.RequestContext('user1', 'proj1') |
1731 | - |
1732 | - def tearDown(self): |
1733 | - """Tear down users and projects.""" |
1734 | - self.auth_manager.delete_project('proj1') |
1735 | - self.auth_manager.delete_project('proj2') |
1736 | - self.auth_manager.delete_user('user1') |
1737 | - self.auth_manager.delete_user('user2') |
1738 | - self.auth_manager.delete_user('admin_user') |
1739 | - super(ObjectStoreTestCase, self).tearDown() |
1740 | - |
1741 | - def test_buckets(self): |
1742 | - """Test the bucket API.""" |
1743 | - objectstore.bucket.Bucket.create('new_bucket', self.context) |
1744 | - bucket = objectstore.bucket.Bucket('new_bucket') |
1745 | - |
1746 | - # creator is authorized to use bucket |
1747 | - self.assert_(bucket.is_authorized(self.context)) |
1748 | - |
1749 | - # another user is not authorized |
1750 | - context2 = context.RequestContext('user2', 'proj2') |
1751 | - self.assertFalse(bucket.is_authorized(context2)) |
1752 | - |
1753 | - # admin is authorized to use bucket |
1754 | - admin_context = context.RequestContext('admin_user', None) |
1755 | - self.assertTrue(bucket.is_authorized(admin_context)) |
1756 | - |
1757 | - # new buckets are empty |
1758 | - self.assertTrue(bucket.list_keys()['Contents'] == []) |
1759 | - |
1760 | - # storing keys works |
1761 | - bucket['foo'] = "bar" |
1762 | - |
1763 | - self.assertEquals(len(bucket.list_keys()['Contents']), 1) |
1764 | - |
1765 | - self.assertEquals(bucket['foo'].read(), 'bar') |
1766 | - |
1767 | - # md5 of key works |
1768 | - self.assertEquals(bucket['foo'].md5, hashlib.md5('bar').hexdigest()) |
1769 | - |
1770 | - # deleting non-empty bucket should throw a NotEmpty exception |
1771 | - self.assertRaises(NotEmpty, bucket.delete) |
1772 | - |
1773 | - # deleting key |
1774 | - del bucket['foo'] |
1775 | - |
1776 | - # deleting empty bucket |
1777 | - bucket.delete() |
1778 | - |
1779 | - # accessing deleted bucket throws exception |
1780 | - self.assertRaises(NotFound, objectstore.bucket.Bucket, 'new_bucket') |
1781 | - |
1782 | - def test_images(self): |
1783 | - self.do_test_images('1mb.manifest.xml', True, |
1784 | - 'image_bucket1', 'i-testing1') |
1785 | - |
1786 | - def test_images_no_kernel_or_ramdisk(self): |
1787 | - self.do_test_images('1mb.no_kernel_or_ramdisk.manifest.xml', |
1788 | - False, 'image_bucket2', 'i-testing2') |
1789 | - |
1790 | - def do_test_images(self, manifest_file, expect_kernel_and_ramdisk, |
1791 | - image_bucket, image_name): |
1792 | - "Test the image API." |
1793 | - |
1794 | - # create a bucket for our bundle |
1795 | - objectstore.bucket.Bucket.create(image_bucket, self.context) |
1796 | - bucket = objectstore.bucket.Bucket(image_bucket) |
1797 | - |
1798 | - # upload an image manifest/parts |
1799 | - bundle_path = os.path.join(os.path.dirname(__file__), 'bundle') |
1800 | - for path in glob.glob(bundle_path + '/*'): |
1801 | - bucket[os.path.basename(path)] = open(path, 'rb').read() |
1802 | - |
1803 | - # register an image |
1804 | - image.Image.register_aws_image(image_name, |
1805 | - '%s/%s' % (image_bucket, manifest_file), |
1806 | - self.context) |
1807 | - |
1808 | - # verify image |
1809 | - my_img = image.Image(image_name) |
1810 | - result_image_file = os.path.join(my_img.path, 'image') |
1811 | - self.assertEqual(os.stat(result_image_file).st_size, 1048576) |
1812 | - |
1813 | - sha = hashlib.sha1(open(result_image_file).read()).hexdigest() |
1814 | - self.assertEqual(sha, '3b71f43ff30f4b15b5cd85dd9e95ebc7e84eb5a3') |
1815 | - |
1816 | - if expect_kernel_and_ramdisk: |
1817 | - # Verify the default kernel and ramdisk are set |
1818 | - self.assertEqual(my_img.metadata['kernelId'], 'aki-test') |
1819 | - self.assertEqual(my_img.metadata['ramdiskId'], 'ari-test') |
1820 | - else: |
1821 | - # Verify that the default kernel and ramdisk (the one from FLAGS) |
1822 | - # doesn't get embedded in the metadata |
1823 | - self.assertFalse('kernelId' in my_img.metadata) |
1824 | - self.assertFalse('ramdiskId' in my_img.metadata) |
1825 | - |
1826 | - # verify image permissions |
1827 | - context2 = context.RequestContext('user2', 'proj2') |
1828 | - self.assertFalse(my_img.is_authorized(context2)) |
1829 | - |
1830 | - # change user-editable fields |
1831 | - my_img.update_user_editable_fields({'display_name': 'my cool image'}) |
1832 | - self.assertEqual('my cool image', my_img.metadata['displayName']) |
1833 | - my_img.update_user_editable_fields({'display_name': ''}) |
1834 | - self.assert_(not my_img.metadata['displayName']) |
1835 | - |
1836 | - |
1837 | -class TestHTTPChannel(http.HTTPChannel): |
1838 | - """Dummy site required for twisted.web""" |
1839 | - |
1840 | - def checkPersistence(self, _, __): # pylint: disable=C0103 |
1841 | - """Otherwise we end up with an unclean reactor.""" |
1842 | - return False |
1843 | - |
1844 | - |
1845 | -class TestSite(server.Site): |
1846 | - """Dummy site required for twisted.web""" |
1847 | - protocol = TestHTTPChannel |
1848 | - |
1849 | - |
1850 | class S3APITestCase(test.TestCase): |
1851 | """Test objectstore through S3 API.""" |
1852 | |
1853 | def setUp(self): |
1854 | """Setup users, projects, and start a test server.""" |
1855 | super(S3APITestCase, self).setUp() |
1856 | - |
1857 | - FLAGS.auth_driver = 'nova.auth.ldapdriver.FakeLdapDriver' |
1858 | - FLAGS.buckets_path = os.path.join(OSS_TEMPDIR, 'buckets') |
1859 | + self.flags(auth_driver='nova.auth.ldapdriver.FakeLdapDriver', |
1860 | + buckets_path=os.path.join(OSS_TEMPDIR, 'buckets'), |
1861 | + s3_host='127.0.0.1') |
1862 | |
1863 | self.auth_manager = manager.AuthManager() |
1864 | self.admin_user = self.auth_manager.create_user('admin', admin=True) |
1865 | @@ -207,23 +69,20 @@ |
1866 | shutil.rmtree(FLAGS.buckets_path) |
1867 | os.mkdir(FLAGS.buckets_path) |
1868 | |
1869 | - root = S3() |
1870 | - self.site = TestSite(root) |
1871 | - # pylint: disable=E1101 |
1872 | - self.listening_port = reactor.listenTCP(0, self.site, |
1873 | - interface='127.0.0.1') |
1874 | - # pylint: enable=E1101 |
1875 | - self.tcp_port = self.listening_port.getHost().port |
1876 | + router = s3server.S3Application(FLAGS.buckets_path) |
1877 | + server = wsgi.Server() |
1878 | + server.start(router, FLAGS.s3_port, host=FLAGS.s3_host) |
1879 | |
1880 | if not boto.config.has_section('Boto'): |
1881 | boto.config.add_section('Boto') |
1882 | boto.config.set('Boto', 'num_retries', '0') |
1883 | - self.conn = S3Connection(aws_access_key_id=self.admin_user.access, |
1884 | - aws_secret_access_key=self.admin_user.secret, |
1885 | - host='127.0.0.1', |
1886 | - port=self.tcp_port, |
1887 | - is_secure=False, |
1888 | - calling_format=OrdinaryCallingFormat()) |
1889 | + conn = s3.S3Connection(aws_access_key_id=self.admin_user.access, |
1890 | + aws_secret_access_key=self.admin_user.secret, |
1891 | + host=FLAGS.s3_host, |
1892 | + port=FLAGS.s3_port, |
1893 | + is_secure=False, |
1894 | + calling_format=s3.OrdinaryCallingFormat()) |
1895 | + self.conn = conn |
1896 | |
1897 | def get_http_connection(host, is_secure): |
1898 | """Get a new S3 connection, don't attempt to reuse connections.""" |
1899 | @@ -243,27 +102,16 @@ |
1900 | |
1901 | def test_000_list_buckets(self): |
1902 | """Make sure we are starting with no buckets.""" |
1903 | - deferred = threads.deferToThread(self.conn.get_all_buckets) |
1904 | - deferred.addCallback(self._ensure_no_buckets) |
1905 | - return deferred |
1906 | + self._ensure_no_buckets(self.conn.get_all_buckets()) |
1907 | |
1908 | def test_001_create_and_delete_bucket(self): |
1909 | """Test bucket creation and deletion.""" |
1910 | bucket_name = 'testbucket' |
1911 | |
1912 | - deferred = threads.deferToThread(self.conn.create_bucket, bucket_name) |
1913 | - deferred.addCallback(lambda _: |
1914 | - threads.deferToThread(self.conn.get_all_buckets)) |
1915 | - |
1916 | - deferred.addCallback(self._ensure_one_bucket, bucket_name) |
1917 | - |
1918 | - deferred.addCallback(lambda _: |
1919 | - threads.deferToThread(self.conn.delete_bucket, |
1920 | - bucket_name)) |
1921 | - deferred.addCallback(lambda _: |
1922 | - threads.deferToThread(self.conn.get_all_buckets)) |
1923 | - deferred.addCallback(self._ensure_no_buckets) |
1924 | - return deferred |
1925 | + self.conn.create_bucket(bucket_name) |
1926 | + self._ensure_one_bucket(self.conn.get_all_buckets(), bucket_name) |
1927 | + self.conn.delete_bucket(bucket_name) |
1928 | + self._ensure_no_buckets(self.conn.get_all_buckets()) |
1929 | |
1930 | def test_002_create_bucket_and_key_and_delete_key_again(self): |
1931 | """Test key operations on buckets.""" |
1932 | @@ -271,45 +119,30 @@ |
1933 | key_name = 'somekey' |
1934 | key_contents = 'somekey' |
1935 | |
1936 | - deferred = threads.deferToThread(self.conn.create_bucket, bucket_name) |
1937 | - deferred.addCallback(lambda b: |
1938 | - threads.deferToThread(b.new_key, key_name)) |
1939 | - deferred.addCallback(lambda k: |
1940 | - threads.deferToThread(k.set_contents_from_string, |
1941 | - key_contents)) |
1942 | - |
1943 | - def ensure_key_contents(bucket_name, key_name, contents): |
1944 | - """Verify contents for a key in the given bucket.""" |
1945 | - bucket = self.conn.get_bucket(bucket_name) |
1946 | - key = bucket.get_key(key_name) |
1947 | - self.assertEquals(key.get_contents_as_string(), contents, |
1948 | - "Bad contents") |
1949 | - |
1950 | - deferred.addCallback(lambda _: |
1951 | - threads.deferToThread(ensure_key_contents, |
1952 | - bucket_name, key_name, |
1953 | - key_contents)) |
1954 | - |
1955 | - def delete_key(bucket_name, key_name): |
1956 | - """Delete a key for the given bucket.""" |
1957 | - bucket = self.conn.get_bucket(bucket_name) |
1958 | - key = bucket.get_key(key_name) |
1959 | - key.delete() |
1960 | - |
1961 | - deferred.addCallback(lambda _: |
1962 | - threads.deferToThread(delete_key, bucket_name, |
1963 | - key_name)) |
1964 | - deferred.addCallback(lambda _: |
1965 | - threads.deferToThread(self.conn.get_bucket, |
1966 | - bucket_name)) |
1967 | - deferred.addCallback(lambda b: threads.deferToThread(b.get_all_keys)) |
1968 | - deferred.addCallback(self._ensure_no_buckets) |
1969 | - return deferred |
1970 | + b = self.conn.create_bucket(bucket_name) |
1971 | + k = b.new_key(key_name) |
1972 | + k.set_contents_from_string(key_contents) |
1973 | + |
1974 | + bucket = self.conn.get_bucket(bucket_name) |
1975 | + |
1976 | + # make sure the contents are correct |
1977 | + key = bucket.get_key(key_name) |
1978 | + self.assertEquals(key.get_contents_as_string(), key_contents, |
1979 | + "Bad contents") |
1980 | + |
1981 | + # delete the key |
1982 | + key.delete() |
1983 | + |
1984 | + self._ensure_no_buckets(bucket.get_all_keys()) |
1985 | + |
1986 | + def test_unknown_bucket(self): |
1987 | + bucket_name = 'falalala' |
1988 | + self.assertRaises(boto_exception.S3ResponseError, |
1989 | + self.conn.get_bucket, |
1990 | + bucket_name) |
1991 | |
1992 | def tearDown(self): |
1993 | """Tear down auth and test server.""" |
1994 | self.auth_manager.delete_user('admin') |
1995 | self.auth_manager.delete_project('admin') |
1996 | - stop_listening = defer.maybeDeferred(self.listening_port.stopListening) |
1997 | super(S3APITestCase, self).tearDown() |
1998 | - return defer.DeferredList([stop_listening]) |
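The rewritten `S3APITestCase` no longer chains Twisted deferreds; it starts the WSGI server in-process and issues ordinary blocking boto calls against it. The same start-server-then-call-it shape using only the standard library (wsgiref and urllib stand in here for nova's eventlet-based `wsgi.Server` and for boto; Python 3 syntax shown for brevity):

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # trivial WSGI app standing in for s3server.S3Application
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

server = make_server("127.0.0.1", 0, app)   # port 0: OS picks a free port
port = server.server_port
worker = threading.Thread(target=server.handle_request)  # serve one request
worker.start()
body = urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()
worker.join()
server.server_close()
```

The synchronous style is what makes the simplified tests above possible: each `self.conn.*` call completes before the next assertion runs, with no callback plumbing.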
Code looks awesome. Unfortunately this breaks register_image until my patch here is merged: lp:~vishvananda/nova/kill-objectstore. To avoid having trunk temporarily broken, I suggest that you merge my patch and repropose the merge with that one as a prereq.