Merge lp:~statik/txaws/here-have-some-s4 into lp:~txawsteam/txaws/trunk

Proposed by Elliot Murphy
Status: Work in progress
Proposed branch: lp:~statik/txaws/here-have-some-s4
Merge into: lp:~txawsteam/txaws/trunk
To merge this branch: bzr merge lp:~statik/txaws/here-have-some-s4
Reviewer: txAWS Team, Review type: original, Status: Pending
Review via email: mp+10388@code.launchpad.net
Revision history for this message
Elliot Murphy (statik) wrote:

The ubuntu one team would like to contribute the S4 (Simple Storage Service Simulator) code that we use as a stub to test against when developing software that relies on S3.
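
For context, a typical test run points the bundled S3 client at a local
S4 instance. A rough sketch, using names from this branch
(AWSAuthConnection from contrib/S3.py, the default credentials from
s4.py) and assuming an S4 server is already listening on
localhost:8808 with the branch on PYTHONPATH:

    from txaws.s4.contrib.S3 import AWSAuthConnection, CallingFormat

    conn = AWSAuthConnection("aws_key", "aws_secret", is_secure=False,
                             server="localhost", port=8808,
                             calling_format=CallingFormat.PATH)
    conn.create_bucket("testbucket")  # a PUT on the bucket creates it
    conn.put("testbucket", "hello", "some test data")
    print conn.get("testbucket", "hello").object.data

The PATH calling format is used because the default SUBDOMAIN format
would try to resolve testbucket.localhost.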

Revision history for this message
Robert Collins (lifeless) wrote:

Wow, code explosion. Uhm, this perhaps would be easier to review in
parts...

Firstly, the module should be txaws.storage.simulator, I think. Or
storage.server.simulator.

Secondly, copyright headers. txaws does a much leaner one:

# Copyright (C) 2009 $PUT_YOUR_NAMES_HERE
# Licenced under the txaws licence available at /LICENSE in the txaws source.

Please use that - it prevents skew between different modules.

See txaws/ec2/client.py, for instance.

Thirdly, I'm not sure what python versions we're aiming for. I note that
the simulator code depends on 'with', which carries an implicit version
requirement. Please document the version that the simulator supports
both in /README and perhaps the docstring for the simulator.
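
(Concretely: 'with' only became a statement by default in Python 2.6;
on 2.5 every module using it needs the future import before any 'with'
block, as s4.py and S4.tac in this branch already do, so the effective
floor is Python 2.5.)

    from __future__ import with_statement

    # a 2.5-compatible 'with' block; the path here is illustrative only
    with open("/tmp/s4.port", "w") as f:
        f.write("8080\n")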

On Wed, 2009-08-19 at 14:45 +0000, Elliot Murphy wrote:
> === modified file 'README'
> --- README 2009-04-26 08:32:36 +0000
> +++ README 2009-08-19 14:36:56 +0000
> @@ -7,6 +7,10 @@
>
> * The epsilon python package (python-epsilon on Debian or similar
> systems)
>
> +* The S4 test server has a dependency on boto (python-boto on Debian
> or similar systems).
> + This dependency should go away in favor of using txaws
> infrastructure (s4 was
> + originally developed separately from txaws)

This is a problem :). I'd much rather see code land without having a
boto dependency at any point: boto is rather ugly, and the code will
likely be a lot nicer right from the get-go if we don't have it.
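
The only boto pieces s4.py actually pulls in are canonical_string() and
OrdinaryCallingFormat, and contrib/S3.py in this very branch already
carries a pure-python canonical string builder, so the replacement is
small. An untested sketch, modelled on contrib/S3.py and mirroring the
(method, path, headers) signature s4.py expects from
boto.utils.canonical_string:

    def canonical_path_string(method, path, headers):
        """Build the S3 string-to-sign for method/path/headers."""
        interesting = {}
        for key, value in headers.iteritems():
            lk = key.lower()
            if (lk in ("content-md5", "content-type", "date")
                    or lk.startswith("x-amz-")):
                interesting[lk] = value.strip()
        # these keys get empty strings if they don't exist
        for required in ("content-md5", "content-type", "date"):
            interesting.setdefault(required, "")
        # an x-amz-date header trumps the date header
        if "x-amz-date" in interesting:
            interesting["date"] = ""
        buf = "%s\n" % method
        for lk in sorted(interesting):
            if lk.startswith("x-amz-"):
                buf += "%s:%s\n" % (lk, interesting[lk])
            else:
                buf += "%s\n" % interesting[lk]
        return buf + path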

> === added file 'txaws/s4/README'

See above - the location doesn't fit with the txaws source layout. This
is a storage server module (contrast with txaws.storage.client).

Having a README in a python module is odd. The content would be better
put in the __init__.py's docstring, so that pydoctor etc can show it.
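
That is, something along these lines (a sketch only):

    # txaws/s4/__init__.py
    """S4 - an S3 storage system stub.

    Not all of S3 is implemented, just enough to test client code
    against; see the docstrings in s4.py for what is supported.
    """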

> --- txaws/s4/README 1970-01-01 00:00:00 +0000
> +++ txaws/s4/README 2009-08-19 14:36:56 +0000
> @@ -0,0 +1,30 @@
> +S4 - an S3 storage system stub
> +==============================
> +
> +The server comes with some sample scripts so you can see how to use
> it.
> +
> +Using twistd
> +------------
> +
> +to start: ./start-s4.sh
> +to stop: ./stop-s4.sh
> +
> +The sample S4.tac defaults to port 8080. If you want to change that
> you can create your own S4.tac.

Given that this is a twisted process, it would be nice for the docs to
say
'to start: twistd -FOO -BAR -BAZ' rather than referring to a shell
script which by its nature won't work on Windows.
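
Something like this in the README would cover it (these are stock
twistd flags: -n stays in the foreground, -y runs a .tac file):

    to start: twistd -ny S4.tac
    to stop: interrupt it, or when daemonized (no -n) kill `cat twistd.pid`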

>
> === added directory 'txaws/s4/contrib'
> === added file 'txaws/s4/contrib/S3.py'
> --- txaws/s4/contrib/S3.py 1970-01-01 00:00:00 +0000
> +++ txaws/s4/contrib/S3.py 2009-08-19 14:36:56 +0000

....

What is this file for? How is it used? It looks like a lot of
duplication with existing code in txaws.
The different (C) terms mean it will need to be mentioned in some way
at the top level.
I suspect we need to add an AUTHORS file too.

> === added file 'txaws/s4/s4.py'
> ...+if __name__ == "__main__":
> + root = Root()
> + site = server.Site(root)
> + reactor.listenTCP(8808, site)
> + reactor.run()

I've skipped most of this file pending the boto dependency being
removed. But I thought I'd mention that this fragment above is highl...
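
(For reference, the twistd-friendly shape of that fragment is the one
S4.tac elsewhere in this branch already uses, roughly:

    from twisted.application import internet, service
    from twisted.web import server

    from s4 import s4

    application = service.Application("s4")
    site = server.Site(s4.Root())
    internet.TCPServer(8808, site).setServiceParent(
        service.IServiceCollection(application))

run with 'twistd -ny', so twistd manages the reactor instead of the
module calling reactor.run() itself.)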


Revision history for this message
Elliot Murphy (statik) wrote:

Thanks a lot for the quick review. The code is very much in the state
it was being used internally, and I think your comments all make sense
and will improve the code. I differ on the license header thing - I
explicitly chose not to copy the existing indirect way of specifying
the license. You'd need to go back to the copyright holder to change
the license anyway, so specifying the license that way is not a good
idea IMO.

Just to set expectations, I don't expect to have time to remove the
boto dependency or work on the more involved changes requested before
Karmic ships. Duncan asked about the code and I got it published in
the spirit of jfdi; now that it is free to the world in this branch
I'm back to more pressing Karmic-related hacking. I think it's
entirely reasonable for you to want the boto dependency dropped before
it's merged; I just want to be up front and explain that this branch
will probably sit for a couple of months before Lucio or I or
Christian will be able to give it that level of attention. I'd
actually like to kill the whole contrib directory too.
--
Elliot Murphy

On Aug 19, 2009, at 9:45 PM, Robert Collins
<email address hidden> wrote:

> Wow, code explosion. Uhm, this perhaps would be easier to review in
> parts...
>
> Firstly, the module should be txaws.storage.simulator, I think. Or
> storage.server.simulator.
>
> Secondly, copyright headers. txaws does a much leaner one:
>
> # Copyright (C) 2009 $PUT_YOUR_NAMES_HERE
> # Licenced under the txaws licence available at /LICENSE in the txaws source.
>
> Please use that - it prevents skew between different modules.
>
> See txaws/ec2/client.py, for instance.
>
>
> Thirdly, I'm not sure what python versions we're aiming for. I note
> that
> the simulator code depends on 'with', which carries an implicit
> version
> requirement. Please document the version that the simulator supports
> both in /README and perhaps the docstring for the simulator.
>
>
>
> On Wed, 2009-08-19 at 14:45 +0000, Elliot Murphy wrote:
>> === modified file 'README'
>> --- README 2009-04-26 08:32:36 +0000
>> +++ README 2009-08-19 14:36:56 +0000
>> @@ -7,6 +7,10 @@
>>
>> * The epsilon python package (python-epsilon on Debian or similar
>> systems)
>>
>> +* The S4 test server has a dependency on boto (python-boto on
>> Debian
>> or similar systems).
>> + This dependency should go away in favor of using txaws
>> infrastructure (s4 was
>> + originally developed separately from txaws)
>
> This is a problem :). I'd much rather see code land without having a
> boto dependency at any point: boto is rather ugly, and the code will
> likely be a lot nicer right from the get-go if we don't have it.
>
>
>> === added file 'txaws/s4/README'
>
> See above - the location doesn't fit with the txaws source layout.
> This
> is a storage server module (contrast with txaws.storage.client).
>
> Having a README in a python module is odd. The content would be better
> put in the __init__.py's docstring, so that pydoctor etc can show it.
>
>
>> --- txaws/s4/README 1970-01-01 00:00:00 +0000
>> +++ txaws/s4/README 2009-08-19 14:36:56 +0000
>> @@ -0,0 +1,30 @@
>> +S4 - a S3...


Revision history for this message
Robert Collins (lifeless) wrote:

On Thu, 2009-08-20 at 03:36 +0000, Elliot Murphy wrote:
> Thanks a lot for the quick review. The code is very much in the state
> it was being used internally, and I think your comments all make sense
> and will improve the code. I differ on the license header thing - I
> explicitly chose not to copy the existing indirect way of specifying
> the license. You'd need to go back to the copyright holder to change
> the license anyway, so specifying the license that way is not a good
> idea IMO.

Licensing is important - after all, we build free/open source software
on the hack of using copyright to ensure the right to copy :). I think
you'll find all the existing code in txaws uses the pithy approach; the
work as a whole is what's licensed: the centralisation isn't to make
/changing/ the license easy - it's to make auditing, checking and
reading code easier. Debian, for instance, wants to be sure that all
relevant licences are listed; having the licence duplicated in many
source files makes that harder. There is also a DRY aspect to it.

If you believe there is a risk that the code won't be properly protected
(from what - the project licence is MIT - essentially public domain)
then we should certainly investigate that further. Otherwise, I don't
see what is gained by having the same text duplicated in each file, and
really think the shorter reference is much more pleasant. (When I first
joined the project, I'm not even sure there _was_ a license :)).

> Just to set expectations, ...o be up front and explain that this branch
> will probably sit for a couple of months before Lucio or I or
> Christian will be able to give it that level of attention. I'd
> actually like to kill the whole contrib directory too.
> --

That's fine with me too - now that it's out there, it's possible for
someone to stand up and clean it up too.

-Rob

Revision history for this message
Jamu Kakar (jkakar) wrote:

There's been no movement on this for quite some time, so I'm going to
mark it as 'Work in Progress'.

Unmerged revisions

9. By Elliot Murphy

Dropping start-s4 and stop-s4; those aren't really a good fit for
inclusion directly in txaws.

8. By Elliot Murphy

Contributing the initial version of S4 (previously unpublished, used
internally by Ubuntu One for testing since 2008).

Preview Diff

1=== modified file 'README'
2--- README 2009-04-26 08:32:36 +0000
3+++ README 2009-08-19 14:36:56 +0000
4@@ -7,6 +7,10 @@
5
6 * The epsilon python package (python-epsilon on Debian or similar systems)
7
8+* The S4 test server has a dependency on boto (python-boto on Debian or similar systems).
9+ This dependency should go away in favor of using txaws infrastructure (s4 was
10+ originally developed separately from txaws)
11+
12 Things present here
13 -------------------
14
15
16=== added directory 'txaws/s4'
17=== added file 'txaws/s4/README'
18--- txaws/s4/README 1970-01-01 00:00:00 +0000
19+++ txaws/s4/README 2009-08-19 14:36:56 +0000
20@@ -0,0 +1,30 @@
21+S4 - an S3 storage system stub
22+==============================
23+
24+The server comes with some sample scripts so you can see how to use it.
25+
26+Using twistd
27+------------
28+
29+to start: ./start-s4.sh
30+to stop: ./stop-s4.sh
31+
32+The sample S4.tac defaults to port 8080. If you want to change that you can create your own S4.tac.
33+
34+For tests or inside another script
35+----------------------------------
36+
37+See s4.tests.test_S4.S4TestBase
38+
39+All tests run on a random unused port.
40+
41+
42+
43+Notes:
44+======
45+Based on twisted.
46+Storage is in memory.
47+It's not optimal by any means; it's just for testing other code.
48+For now, it just implements REST PUT and GET.
49+It comes with a default /test/ bucket already created and a /size/ bucket with virtual objects the size of their name (i.e., /size/100 == "0"*100).
50+
51
52=== added file 'txaws/s4/S4.tac'
53--- txaws/s4/S4.tac 1970-01-01 00:00:00 +0000
54+++ txaws/s4/S4.tac 2009-08-19 14:36:56 +0000
55@@ -0,0 +1,74 @@
56+# -*- python -*-
57+# Copyright 2008-2009 Canonical Ltd.
58+# Permission is hereby granted, free of charge, to any person obtaining
59+# a copy of this software and associated documentation files (the
60+# "Software"), to deal in the Software without restriction, including
61+# without limitation the rights to use, copy, modify, merge, publish,
62+# distribute, sublicense, and/or sell copies of the Software, and to
63+# permit persons to whom the Software is furnished to do so, subject to
64+# the following conditions:
65+#
66+# The above copyright notice and this permission notice shall be
67+# included in all copies or substantial portions of the Software.
68+#
69+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
70+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
71+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
72+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
73+# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
74+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
75+# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
76+
77+from __future__ import with_statement
78+
79+import os, sys
80+import logging
81+from optparse import OptionParser
82+
83+import twisted.web.server
84+from twisted.internet import reactor
85+from twisted.application import internet, service
86+
87+from utils import get_arbitrary_port
88+from ubuntuone.config import config
89+
90+logger = logging.getLogger("UbuntuOne.S4")
91+logger.setLevel(config.general.log_level)
92+log_folder = config.general.log_folder
93+log_filename = config.s_four.log_filename
94+if log_folder is not None and log_filename is not None:
95+ if not os.access(log_folder, os.F_OK):
96+ os.mkdir(log_folder)
97+ s = logging.FileHandler(os.path.join(log_folder, log_filename))
98+else:
99+ s = logging.StreamHandler(sys.stderr)
100+s.setFormatter(logging.Formatter(config.general.log_format))
101+logger.addHandler(s)
102+
103+from s4 import s4
104+
105+if config.s_four.storagepath:
106+ storedir = os.path.join(config.root, config.s_four.storagepath)
107+else:
108+ storedir = os.path.join(config.root, "tmp", "s4storage")
109+if not os.path.exists(storedir):
110+ logger.debug("creating S4 storage directory %s" % storedir)
111+ os.mkdir(storedir)
112+application = service.Application('s4')
113+root = s4.Root(storagedir=storedir)
114+# make sure "the bucket" is created
115+root._add_bucket(config.api_server.s3_bucket)
116+site = twisted.web.server.Site(root)
117+
118+port = os.getenv('S4PORT', config.aws_s3.port)
119+if port:
120+ port = int(port)
121+# we test again in case the initial value was the "0" as a string
122+if not port:
123+ port = get_arbitrary_port()
124+
125+with open(os.path.join(config.root, "tmp", "s4.port"), "w") as s4pf:
126+ s4pf.write("%d\n" % port)
127+
128+internet.TCPServer(port, site).setServiceParent(
129+ service.IServiceCollection(application))
130
131=== added file 'txaws/s4/__init__.py'
132--- txaws/s4/__init__.py 1970-01-01 00:00:00 +0000
133+++ txaws/s4/__init__.py 2009-08-19 14:36:56 +0000
134@@ -0,0 +1,1 @@
135+""" S4 - a S3 storage system stub """
136
137=== added directory 'txaws/s4/contrib'
138=== added file 'txaws/s4/contrib/S3.py'
139--- txaws/s4/contrib/S3.py 1970-01-01 00:00:00 +0000
140+++ txaws/s4/contrib/S3.py 2009-08-19 14:36:56 +0000
141@@ -0,0 +1,627 @@
142+#!/usr/bin/env python
143+
144+# This software code is made available "AS IS" without warranties of any
145+# kind. You may copy, display, modify and redistribute the software
146+# code either by itself or as incorporated into your code; provided that
147+# you do not remove any proprietary notices. Your use of this software
148+# code is at your own risk and you waive any claim against Amazon
149+# Digital Services, Inc. or its affiliates with respect to your use of
150+# this software code. (c) 2006-2007 Amazon Digital Services, Inc. or its
151+# affiliates.
152+
153+import base64
154+import hmac
155+import httplib
156+import re
157+import sha
158+import sys
159+import time
160+import urllib
161+import urlparse
162+import xml.sax
163+
164+DEFAULT_HOST = 's3.amazonaws.com'
165+PORTS_BY_SECURITY = { True: 443, False: 80 }
166+METADATA_PREFIX = 'x-amz-meta-'
167+AMAZON_HEADER_PREFIX = 'x-amz-'
168+
169+# generates the aws canonical string for the given parameters
170+def canonical_string(method, bucket="", key="", query_args={}, headers={}, expires=None):
171+ interesting_headers = {}
172+ for header_key in headers:
173+ lk = header_key.lower()
174+ if lk in ['content-md5', 'content-type', 'date'] or lk.startswith(AMAZON_HEADER_PREFIX):
175+ interesting_headers[lk] = headers[header_key].strip()
176+
177+ # these keys get empty strings if they don't exist
178+ if not interesting_headers.has_key('content-type'):
179+ interesting_headers['content-type'] = ''
180+ if not interesting_headers.has_key('content-md5'):
181+ interesting_headers['content-md5'] = ''
182+
183+ # just in case someone used this. it's not necessary in this lib.
184+ if interesting_headers.has_key('x-amz-date'):
185+ interesting_headers['date'] = ''
186+
187+ # if you're using expires for query string auth, then it trumps date
188+ # (and x-amz-date)
189+ if expires:
190+ interesting_headers['date'] = str(expires)
191+
192+ sorted_header_keys = interesting_headers.keys()
193+ sorted_header_keys.sort()
194+
195+ buf = "%s\n" % method
196+ for header_key in sorted_header_keys:
197+ if header_key.startswith(AMAZON_HEADER_PREFIX):
198+ buf += "%s:%s\n" % (header_key, interesting_headers[header_key])
199+ else:
200+ buf += "%s\n" % interesting_headers[header_key]
201+
202+ # append the bucket if it exists
203+ if bucket != "":
204+ buf += "/%s" % bucket
205+
206+ # add the key. even if it doesn't exist, add the slash
207+ buf += "/%s" % urllib.quote_plus(key)
208+
209+ # handle special query string arguments
210+
211+ if query_args.has_key("acl"):
212+ buf += "?acl"
213+ elif query_args.has_key("torrent"):
214+ buf += "?torrent"
215+ elif query_args.has_key("logging"):
216+ buf += "?logging"
217+ elif query_args.has_key("location"):
218+ buf += "?location"
219+
220+ return buf
221+
222+# computes the base64'ed hmac-sha hash of the canonical string and the secret
223+# access key, optionally urlencoding the result
224+def encode(aws_secret_access_key, str, urlencode=False):
225+ b64_hmac = base64.encodestring(hmac.new(aws_secret_access_key, str, sha).digest()).strip()
226+ if urlencode:
227+ return urllib.quote_plus(b64_hmac)
228+ else:
229+ return b64_hmac
230+
231+def merge_meta(headers, metadata):
232+ final_headers = headers.copy()
233+ for k in metadata.keys():
234+ final_headers[METADATA_PREFIX + k] = metadata[k]
235+
236+ return final_headers
237+
238+# builds the query arg string
239+def query_args_hash_to_string(query_args):
240+ query_string = ""
241+ pairs = []
242+ for k, v in query_args.items():
243+ piece = k
244+ if v != None:
245+ piece += "=%s" % urllib.quote_plus(str(v))
246+ pairs.append(piece)
247+
248+ return '&'.join(pairs)
249+
250+
251+class CallingFormat:
252+ PATH = 1
253+ SUBDOMAIN = 2
254+ VANITY = 3
255+
256+ def build_url_base(protocol, server, port, bucket, calling_format):
257+ url_base = '%s://' % protocol
258+
259+ if bucket == '':
260+ url_base += server
261+ elif calling_format == CallingFormat.SUBDOMAIN:
262+ url_base += "%s.%s" % (bucket, server)
263+ elif calling_format == CallingFormat.VANITY:
264+ url_base += bucket
265+ else:
266+ url_base += server
267+
268+ url_base += ":%s" % port
269+
270+ if (bucket != '') and (calling_format == CallingFormat.PATH):
271+ url_base += "/%s" % bucket
272+
273+ return url_base
274+
275+ build_url_base = staticmethod(build_url_base)
276+
277+
278+
279+class Location:
280+ DEFAULT = None
281+ EU = 'EU'
282+
283+
284+
285+class AWSAuthConnection:
286+ def __init__(self, aws_access_key_id, aws_secret_access_key, is_secure=True,
287+ server=DEFAULT_HOST, port=None, calling_format=CallingFormat.SUBDOMAIN):
288+
289+ if not port:
290+ port = PORTS_BY_SECURITY[is_secure]
291+
292+ self.aws_access_key_id = aws_access_key_id
293+ self.aws_secret_access_key = aws_secret_access_key
294+ self.is_secure = is_secure
295+ self.server = server
296+ self.port = port
297+ self.calling_format = calling_format
298+
299+ def create_bucket(self, bucket, headers={}):
300+ return Response(self._make_request('PUT', bucket, '', {}, headers))
301+
302+ def create_located_bucket(self, bucket, location=Location.DEFAULT, headers={}):
303+ if location == Location.DEFAULT:
304+ body = ""
305+ else:
306+ body = "<CreateBucketConstraint><LocationConstraint>" + \
307+ location + \
308+ "</LocationConstraint></CreateBucketConstraint>"
309+ return Response(self._make_request('PUT', bucket, '', {}, headers, body))
310+
311+ def check_bucket_exists(self, bucket):
312+ return self._make_request('HEAD', bucket, '', {}, {})
313+
314+ def list_bucket(self, bucket, options={}, headers={}):
315+ return ListBucketResponse(self._make_request('GET', bucket, '', options, headers))
316+
317+ def delete_bucket(self, bucket, headers={}):
318+ return Response(self._make_request('DELETE', bucket, '', {}, headers))
319+
320+ def put(self, bucket, key, object, headers={}):
321+ if not isinstance(object, S3Object):
322+ object = S3Object(object)
323+
324+ return Response(
325+ self._make_request(
326+ 'PUT',
327+ bucket,
328+ key,
329+ {},
330+ headers,
331+ object.data,
332+ object.metadata))
333+
334+ def get(self, bucket, key, headers={}):
335+ return GetResponse(
336+ self._make_request('GET', bucket, key, {}, headers))
337+
338+ def delete(self, bucket, key, headers={}):
339+ return Response(
340+ self._make_request('DELETE', bucket, key, {}, headers))
341+
342+ def get_bucket_logging(self, bucket, headers={}):
343+ return GetResponse(self._make_request('GET', bucket, '', { 'logging': None }, headers))
344+
345+ def put_bucket_logging(self, bucket, logging_xml_doc, headers={}):
346+ return Response(self._make_request('PUT', bucket, '', { 'logging': None }, headers, logging_xml_doc))
347+
348+ def get_bucket_acl(self, bucket, headers={}):
349+ return self.get_acl(bucket, '', headers)
350+
351+ def get_acl(self, bucket, key, headers={}):
352+ return GetResponse(
353+ self._make_request('GET', bucket, key, { 'acl': None }, headers))
354+
355+ def put_bucket_acl(self, bucket, acl_xml_document, headers={}):
356+ return self.put_acl(bucket, '', acl_xml_document, headers)
357+
358+ def put_acl(self, bucket, key, acl_xml_document, headers={}):
359+ return Response(
360+ self._make_request(
361+ 'PUT',
362+ bucket,
363+ key,
364+ { 'acl': None },
365+ headers,
366+ acl_xml_document))
367+
368+ def list_all_my_buckets(self, headers={}):
369+ return ListAllMyBucketsResponse(self._make_request('GET', '', '', {}, headers))
370+
371+ def get_bucket_location(self, bucket):
372+ return LocationResponse(self._make_request('GET', bucket, '', {'location' : None}))
373+
374+ # end public methods
375+
376+ def _make_request(self, method, bucket='', key='', query_args={}, headers={}, data='', metadata={}):
377+
378+ server = ''
379+ if bucket == '':
380+ server = self.server
381+ elif self.calling_format == CallingFormat.SUBDOMAIN:
382+ server = "%s.%s" % (bucket, self.server)
383+ elif self.calling_format == CallingFormat.VANITY:
384+ server = bucket
385+ else:
386+ server = self.server
387+
388+ path = ''
389+
390+ if (bucket != '') and (self.calling_format == CallingFormat.PATH):
391+ path += "/%s" % bucket
392+
393+ # add the slash after the bucket regardless
394+ # the key will be appended if it is non-empty
395+ path += "/%s" % urllib.quote_plus(key)
396+
397+
398+ # build the path_argument string
399+ # add the ? in all cases since
400+ # signature and credentials follow path args
401+ if len(query_args):
402+ path += "?" + query_args_hash_to_string(query_args)
403+
404+ is_secure = self.is_secure
405+ host = "%s:%d" % (server, self.port)
406+ while True:
407+ if (is_secure):
408+ connection = httplib.HTTPSConnection(host)
409+ else:
410+ connection = httplib.HTTPConnection(host)
411+
412+ final_headers = merge_meta(headers, metadata);
413+ # add auth header
414+ self._add_aws_auth_header(final_headers, method, bucket, key, query_args)
415+
416+ connection.request(method, path, data, final_headers)
417+ resp = connection.getresponse()
418+ if resp.status < 300 or resp.status >= 400:
419+ return resp
420+ # handle redirect
421+ location = resp.getheader('location')
422+ if not location:
423+ return resp
424+ # (close connection)
425+ resp.read()
426+ scheme, host, path, params, query, fragment \
427+ = urlparse.urlparse(location)
428+ if scheme == "http": is_secure = False
429+ elif scheme == "https": is_secure = True
430+ else: raise ValueError("Not http/https: " + location)
431+ if query: path += "?" + query
432+ # retry with redirect
433+
434+ def _add_aws_auth_header(self, headers, method, bucket, key, query_args):
435+ if not headers.has_key('Date'):
436+ headers['Date'] = time.strftime("%a, %d %b %Y %X GMT", time.gmtime())
437+
438+ c_string = canonical_string(method, bucket, key, query_args, headers)
439+ headers['Authorization'] = \
440+ "AWS %s:%s" % (self.aws_access_key_id, encode(self.aws_secret_access_key, c_string))
441+
442+
443+class QueryStringAuthGenerator:
444+ # by default, expire in 1 minute
445+ DEFAULT_EXPIRES_IN = 60
446+
447+ def __init__(self, aws_access_key_id, aws_secret_access_key, is_secure=True,
448+ server=DEFAULT_HOST, port=None, calling_format=CallingFormat.SUBDOMAIN):
449+
450+ if not port:
451+ port = PORTS_BY_SECURITY[is_secure]
452+
453+ self.aws_access_key_id = aws_access_key_id
454+ self.aws_secret_access_key = aws_secret_access_key
455+ if (is_secure):
456+ self.protocol = 'https'
457+ else:
458+ self.protocol = 'http'
459+
460+ self.is_secure = is_secure
461+ self.server = server
462+ self.port = port
463+ self.calling_format = calling_format
464+ self.__expires_in = QueryStringAuthGenerator.DEFAULT_EXPIRES_IN
465+ self.__expires = None
466+
467+ # for backwards compatibility with older versions
468+ self.server_name = "%s:%s" % (self.server, self.port)
469+
470+ def set_expires_in(self, expires_in):
471+ self.__expires_in = expires_in
472+ self.__expires = None
473+
474+ def set_expires(self, expires):
475+ self.__expires = expires
476+ self.__expires_in = None
477+
478+ def create_bucket(self, bucket, headers={}):
479+ return self.generate_url('PUT', bucket, '', {}, headers)
480+
481+ def list_bucket(self, bucket, options={}, headers={}):
482+ return self.generate_url('GET', bucket, '', options, headers)
483+
484+ def delete_bucket(self, bucket, headers={}):
485+ return self.generate_url('DELETE', bucket, '', {}, headers)
486+
487+ def put(self, bucket, key, object, headers={}):
488+ if not isinstance(object, S3Object):
489+ object = S3Object(object)
490+
491+ return self.generate_url(
492+ 'PUT',
493+ bucket,
494+ key,
495+ {},
496+ merge_meta(headers, object.metadata))
497+
498+ def get(self, bucket, key, headers={}):
499+ return self.generate_url('GET', bucket, key, {}, headers)
500+
501+ def delete(self, bucket, key, headers={}):
502+ return self.generate_url('DELETE', bucket, key, {}, headers)
503+
504+ def get_bucket_logging(self, bucket, headers={}):
505+ return self.generate_url('GET', bucket, '', { 'logging': None }, headers)
506+
507+ def put_bucket_logging(self, bucket, logging_xml_doc, headers={}):
508+ return self.generate_url('PUT', bucket, '', { 'logging': None }, headers)
509+
510+ def get_bucket_acl(self, bucket, headers={}):
511+ return self.get_acl(bucket, '', headers)
512+
513+ def get_acl(self, bucket, key='', headers={}):
514+ return self.generate_url('GET', bucket, key, { 'acl': None }, headers)
515+
516+ def put_bucket_acl(self, bucket, acl_xml_document, headers={}):
517+ return self.put_acl(bucket, '', acl_xml_document, headers)
518+
519+ # don't really care what the doc is here.
520+ def put_acl(self, bucket, key, acl_xml_document, headers={}):
521+ return self.generate_url('PUT', bucket, key, { 'acl': None }, headers)
522+
523+ def list_all_my_buckets(self, headers={}):
524+ return self.generate_url('GET', '', '', {}, headers)
525+
526+ def make_bare_url(self, bucket, key=''):
527+ full_url = self.generate_url(self, bucket, key)
528+ return full_url[:full_url.index('?')]
529+
530+ def generate_url(self, method, bucket='', key='', query_args={}, headers={}):
531+ expires = 0
532+ if self.__expires_in != None:
533+ expires = int(time.time() + self.__expires_in)
534+ elif self.__expires != None:
535+ expires = int(self.__expires)
536+ else:
537+ raise "Invalid expires state"
538+
539+ canonical_str = canonical_string(method, bucket, key, query_args, headers, expires)
540+ encoded_canonical = encode(self.aws_secret_access_key, canonical_str)
541+
542+ url = CallingFormat.build_url_base(self.protocol, self.server, self.port, bucket, self.calling_format)
543+
544+ url += "/%s" % urllib.quote_plus(key)
545+
546+ query_args['Signature'] = encoded_canonical
547+ query_args['Expires'] = expires
548+ query_args['AWSAccessKeyId'] = self.aws_access_key_id
549+
550+ url += "?%s" % query_args_hash_to_string(query_args)
551+
552+ return url
553+
554+
555+class S3Object:
556+ def __init__(self, data, metadata={}):
557+ self.data = data
558+ self.metadata = metadata
559+
560+class Owner:
561+ def __init__(self, id='', display_name=''):
562+ self.id = id
563+ self.display_name = display_name
564+
565+class ListEntry:
566+ def __init__(self, key='', last_modified=None, etag='', size=0, storage_class='', owner=None):
567+ self.key = key
568+ self.last_modified = last_modified
569+ self.etag = etag
570+ self.size = size
571+ self.storage_class = storage_class
572+ self.owner = owner
573+
574+class CommonPrefixEntry:
575+ def __init__(self, prefix=''):
576+ self.prefix = prefix
577+
578+class Bucket:
579+ def __init__(self, name='', creation_date=''):
580+ self.name = name
581+ self.creation_date = creation_date
582+
583+class Response:
584+ def __init__(self, http_response):
585+ self.http_response = http_response
586+ # you have to do this read, even if you don't expect a body.
587+ # otherwise, the next request fails.
588+ self.body = http_response.read()
589+ if http_response.status >= 300 and self.body:
590+ self.message = self.body
591+ else:
592+ self.message = "%03d %s" % (http_response.status, http_response.reason)
593+
594+
595+
596+class ListBucketResponse(Response):
597+ def __init__(self, http_response):
598+ Response.__init__(self, http_response)
599+ if http_response.status < 300:
600+ handler = ListBucketHandler()
601+ xml.sax.parseString(self.body, handler)
602+ self.entries = handler.entries
603+ self.common_prefixes = handler.common_prefixes
604+ self.name = handler.name
605+ self.marker = handler.marker
606+ self.prefix = handler.prefix
607+ self.is_truncated = handler.is_truncated
608+ self.delimiter = handler.delimiter
609+ self.max_keys = handler.max_keys
610+ self.next_marker = handler.next_marker
611+ else:
612+ self.entries = []
613+
614+class ListAllMyBucketsResponse(Response):
615+ def __init__(self, http_response):
616+ Response.__init__(self, http_response)
617+ if http_response.status < 300:
618+ handler = ListAllMyBucketsHandler()
619+ xml.sax.parseString(self.body, handler)
620+ self.entries = handler.entries
621+ else:
622+ self.entries = []
623+
624+class GetResponse(Response):
625+ def __init__(self, http_response):
626+ Response.__init__(self, http_response)
627+ response_headers = http_response.msg # older pythons don't have getheaders
628+ metadata = self.get_aws_metadata(response_headers)
629+ self.object = S3Object(self.body, metadata)
630+
631+ def get_aws_metadata(self, headers):
632+ metadata = {}
633+ for hkey in headers.keys():
634+ if hkey.lower().startswith(METADATA_PREFIX):
635+ metadata[hkey[len(METADATA_PREFIX):]] = headers[hkey]
636+ del headers[hkey]
637+
638+ return metadata
639+
640+class LocationResponse(Response):
641+ def __init__(self, http_response):
642+ Response.__init__(self, http_response)
643+ if http_response.status < 300:
644+ handler = LocationHandler()
645+ xml.sax.parseString(self.body, handler)
646+ self.location = handler.location
647+
648+class ListBucketHandler(xml.sax.ContentHandler):
649+ def __init__(self):
650+ self.entries = []
651+ self.curr_entry = None
652+ self.curr_text = ''
653+ self.common_prefixes = []
654+ self.curr_common_prefix = None
655+ self.name = ''
656+ self.marker = ''
657+ self.prefix = ''
658+ self.is_truncated = False
659+ self.delimiter = ''
660+ self.max_keys = 0
661+ self.next_marker = ''
662+ self.is_echoed_prefix_set = False
663+
664+ def startElement(self, name, attrs):
665+ if name == 'Contents':
666+ self.curr_entry = ListEntry()
667+ elif name == 'Owner':
668+ self.curr_entry.owner = Owner()
669+ elif name == 'CommonPrefixes':
670+ self.curr_common_prefix = CommonPrefixEntry()
671+
672+
673+ def endElement(self, name):
674+ if name == 'Contents':
675+ self.entries.append(self.curr_entry)
676+ elif name == 'CommonPrefixes':
677+ self.common_prefixes.append(self.curr_common_prefix)
678+ elif name == 'Key':
679+ self.curr_entry.key = self.curr_text
680+ elif name == 'LastModified':
681+ self.curr_entry.last_modified = self.curr_text
682+ elif name == 'ETag':
683+ self.curr_entry.etag = self.curr_text
684+ elif name == 'Size':
685+ self.curr_entry.size = int(self.curr_text)
686+ elif name == 'ID':
687+ self.curr_entry.owner.id = self.curr_text
688+ elif name == 'DisplayName':
689+ self.curr_entry.owner.display_name = self.curr_text
690+ elif name == 'StorageClass':
691+ self.curr_entry.storage_class = self.curr_text
692+ elif name == 'Name':
693+ self.name = self.curr_text
694+ elif name == 'Prefix' and self.is_echoed_prefix_set:
695+ self.curr_common_prefix.prefix = self.curr_text
696+ elif name == 'Prefix':
697+ self.prefix = self.curr_text
698+ self.is_echoed_prefix_set = True
699+ elif name == 'Marker':
700+ self.marker = self.curr_text
701+ elif name == 'IsTruncated':
702+ self.is_truncated = self.curr_text == 'true'
703+ elif name == 'Delimiter':
704+ self.delimiter = self.curr_text
705+ elif name == 'MaxKeys':
706+ self.max_keys = int(self.curr_text)
707+ elif name == 'NextMarker':
708+ self.next_marker = self.curr_text
709+
710+ self.curr_text = ''
711+
712+ def characters(self, content):
713+ self.curr_text += content
714+
715+
716+class ListAllMyBucketsHandler(xml.sax.ContentHandler):
717+ def __init__(self):
718+ self.entries = []
719+ self.curr_entry = None
720+ self.curr_text = ''
721+
722+ def startElement(self, name, attrs):
723+ if name == 'Bucket':
724+ self.curr_entry = Bucket()
725+
726+ def endElement(self, name):
727+ if name == 'Name':
728+ self.curr_entry.name = self.curr_text
729+ elif name == 'CreationDate':
730+ self.curr_entry.creation_date = self.curr_text
731+ elif name == 'Bucket':
732+ self.entries.append(self.curr_entry)
733+
734+ def characters(self, content):
735+ self.curr_text = content
736+
737+
738+class LocationHandler(xml.sax.ContentHandler):
739+ def __init__(self):
740+ self.location = None
741+ self.state = 'init'
742+
743+ def startElement(self, name, attrs):
744+ if self.state == 'init':
745+ if name == 'LocationConstraint':
746+ self.state = 'tag_location'
747+ self.location = ''
748+ else: self.state = 'bad'
749+ else: self.state = 'bad'
750+
751+ def endElement(self, name):
752+ if self.state == 'tag_location' and name == 'LocationConstraint':
753+ self.state = 'done'
754+ else: self.state = 'bad'
755+
756+ def characters(self, content):
757+ if self.state == 'tag_location':
758+ self.location += content
759+
760+if __name__=="__main__":
761+ keys = raw_input("Enter access and secret key (separated by a space): ")
762+ access_key, secret_key = keys.split(" ")
763+ s3 = AWSAuthConnection(access_key, secret_key)
764+ bucket = "test_s3_lib"
765+ m = s3.put(bucket, "sample", "hola mundo", {"Content-Type":"text/lame"})
766+ print m.http_response.status, m.http_response.reason
767+ print m.http_response.getheaders()
768+ print m.body
769
770=== added file 'txaws/s4/contrib/__init__.py'
771=== added file 'txaws/s4/s4.py'
772--- txaws/s4/s4.py 1970-01-01 00:00:00 +0000
773+++ txaws/s4/s4.py 2009-08-19 14:36:56 +0000
774@@ -0,0 +1,742 @@
775+# Copyright 2009 Canonical Ltd.
776+#
777+# Permission is hereby granted, free of charge, to any person obtaining
778+# a copy of this software and associated documentation files (the
779+# "Software"), to deal in the Software without restriction, including
780+# without limitation the rights to use, copy, modify, merge, publish,
781+# distribute, sublicense, and/or sell copies of the Software, and to
782+# permit persons to whom the Software is furnished to do so, subject to
783+# the following conditions:
784+#
785+# The above copyright notice and this permission notice shall be
786+# included in all copies or substantial portions of the Software.
787+#
788+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
789+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
790+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
791+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
792+# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
793+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
794+# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
795+
796+""" S4 - a S3 storage system stub
797+
798+This module implementes a stub for Amazons S3 storage system.
799+
800+Not all functionality is provided, just enough to test the client.
801+
802+"""
803+from __future__ import with_statement
804+
805+import os
806+import hmac
807+import time
808+import base64
809+import logging
810+import hashlib
811+from urllib import urlencode
812+
813+# cPickle would be faster here, but working around its relative
814+# imports issues for this module requires extra hacking
815+import pickle
816+
817+from boto.utils import canonical_string as canonical_path_string
818+# pylint: disable-msg=W0611
819+from boto.s3.connection import OrdinaryCallingFormat as CallingFormat
820+
821+from twisted.web import server, resource, error, http
822+from twisted.internet import reactor, interfaces
823+# pylint and zope don't work
824+# pylint: disable-msg=E0611
825+# pylint: disable-msg=F0401
826+from zope.interface import implements
827+
828+# xml namespace response header required
829+XMLNS = "http://s3.amazonaws.com/doc/2006-03-01"
830+
831+AWS_DEFAULT_ACCESS_KEY_ID = 'aws_key'
832+AWS_DEFAULT_SECRET_ACCESS_KEY = 'aws_secret'
833+AMAZON_HEADER_PREFIX = 'x-amz-'
834+AMAZON_META_PREFIX = "x-amz-meta-"
835+
836+BLOCK_SIZE = 2**16
837+
838+S4_STATE_FILE = ".s4_state"
839+
840+logger = logging.getLogger('UbuntuOne.S4')
841+
842+# pylint: disable-msg=W0403
843+import s4_xml
844+
845+class S4StorageException(Exception):
846+ """ exception raised when S4 backend store runs into trouble """
847+
848+class FakeContent(object):
849+ """A content that can be accesed by slicing but will never exists in memory
850+ """
851+ def __init__(self, char, size):
852+ """Create the content as char*size."""
853+ self.char = char
854+ self.size = size
855+
856+ def __getitem__(self, slice):
857+ """Get a piece of the content."""
858+ size = min(slice.stop, self.size) - slice.start
859+ return self.char*size
860+
861+ def hexdigest(self):
862+ """Send a fake hexdigest. For big contents this takes too much time
863+ to calculate, so we just fake it."""
864+ block_size = BLOCK_SIZE
865+ start = 0
866+ data = self[start:start+block_size]
867+ md5calc = hashlib.md5()
868+ md5calc.update(data)
869+ return md5calc.hexdigest()
870+
871+ def __len__(self):
872+ """The size."""
873+ return self.size
874+
875+class ContentProducer(object):
876+ """A content producer used to stream big data."""
877+ implements(interfaces.IPullProducer)
878+
879+ def __init__(self, request, content, buffer_size=BLOCK_SIZE):
880+ """Create a producer for request, that produces content."""
881+ self.request = request
882+ self.content = content
883+ self.buffer_size = buffer_size
884+ self.position = 0
885+ self.paused = False
886+
887+ def startProducing(self):
888+ """IPullProducer api."""
889+ self.request.registerProducer(self, streaming=False)
890+
891+ def finished(self):
892+ """Called to finish the request after producing."""
893+ self.request.unregisterProducer()
894+ self.request.finish()
895+
896+ def resumeProducing(self):
897+ """IPullProducer api."""
898+ if self.position > len(self.content):
899+ self.finished()
900+ return
901+
902+ data = self.content[self.position:self.position+self.buffer_size]
903+
904+ self.position += self.buffer_size
905+ self.request.write(data)
906+
907+ def stopProducing(self):
908+ """IPullProducer api."""
909+ pass
910+
911+
912+def canonical_string(method, bucket="", key="", query_args=None, headers=None):
913+ """ compatibility S3 canonical string calculator for cases where passing in
914+ a bucket name, key name and a hash of query args is easier than using an
915+ S3 path """
916+ path = []
917+ if bucket:
918+ path.append("/%s" % bucket)
919+ path.append("/%s" % key)
920+ if query_args:
921+ path.append("?%s" % urlencode(query_args))
922+ path = "".join(path)
923+ if headers is None:
924+ headers = {}
925+ return canonical_path_string(method=method, path=path, headers=headers)
926+
927+def encode(secret_key, data):
928+ """base64encoded digest of data using secret_key"""
929+ encoded = hmac.new(secret_key, data, hashlib.sha1).digest()
930+ return base64.encodestring(encoded).strip()
931+
932+def parse_range_header(range):
933+ """modeled after twisted.web.static.File._parseRangeHeader()"""
934+ if '=' in range:
935+ type, value = range.split('=', 1)
936+ else:
937+ raise ValueError("Invalid range header, no '='")
938+ if type != 'bytes':
939+ raise ValueError("Invalid range header, must be a 'bytes' range")
940+ raw_ranges = [bytes.strip() for bytes in value.split(',')]
941+ ranges = []
942+ for current_range in raw_ranges:
943+ if '-' not in current_range:
944+ raise ValueError("Illegal byte range: %r" % current_range)
945+ begin, end = current_range.split('-')
946+ if begin:
947+ begin = int(begin)
948+ else:
949+ begin = None
950+ if end:
951+ end = int(end)
952+ else:
953+ end = None
954+ ranges.append((begin, end))
955+ return ranges
956+
957+class _ListResult(resource.Resource):
958+ """ base class for bulding lists of amazon results """
959+ isLeaf = True
960+ def __init__(self):
961+ resource.Resource.__init__(self)
962+ def add_headers(self, request, content):
963+ """ add standard headers to an amazon list result page reply """
964+ request.setHeader("x-amz-id-2", str(request))
965+ request.setHeader("x-amz-request-id", str(request))
966+ request.setHeader("Content-Type", "text/xml")
967+ request.setHeader("Content-Length", str(len(content)))
968+
969+
970+class ListAllMyBucketsResult(_ListResult):
971+ """ builds the result for list all buckets call """
972+ def __init__(self, buckets, owner=None):
973+ _ListResult.__init__(self)
974+ self.buckets = buckets
975+ if owner:
976+ self.owner = owner
977+ else:
978+ self.owner = dict(id = 0, name = "fakeuser")
979+
980+ def render_GET(self, request):
981+ """ render request for a GET listing """
982+ lambr = s4_xml.ListAllMyBucketsResult(self.owner, self.buckets)
983+ content = s4_xml.to_XML(lambr)
984+ self.add_headers(request, content)
985+ return content
986+
987+class ListBucketResult(_ListResult):
988+ """ encapsulates a list of items in a bucket """
989+ def __init__(self, bucket):
990+ _ListResult.__init__(self)
991+ self.bucket = bucket
992+
993+ def render_GET(self, request):
994+ """ Render response for a GET listing """
995+ # pylint: disable-msg=W0631
996+ children = self.bucket.bucket_children.copy()
997+ prefix = request.args.get("prefix", "")
998+ if prefix:
999+ children = dict([x for x in children.iteritems()
1000+ if x[0].startswith(prefix[0])])
1001+ maxkeys = request.args.get("max-keys", 0)
1002+ if maxkeys:
1003+ maxkeys = int(maxkeys[0])
1004+ ck = children.keys()[:maxkeys]
1005+ children = dict([x for x in children.iteritems() if x[0] in ck])
1006+ lbr = s4_xml.ListBucketResult(self.bucket, children)
1007+ s4_xml.add_props(lbr, Prefix=prefix, MaxKeys=maxkeys)
1008+ content = s4_xml.to_XML(lbr)
1009+ self.add_headers(request, content)
1010+ return content
1011+
1012+class BasicS3Object(object):
1013+ """ Basic S3 object class that takes care of contents and properties """
1014+ owner_id = 0
1015+ owner = "fakeuser"
1016+
1017+ def __init__(self, name, contents, content_type="binary/octet-stream",
1018+ content_md5=None):
1019+ self.name = name
1020+ self.content_type = content_type
1021+ self.contents = contents
1022+ if content_md5:
1023+ if isinstance(content_md5, str):
1024+ self._etag = content_md5
1025+ else:
1026+ self._etag = content_md5.hexdigest()
1027+ else:
1028+ self._etag = hashlib.md5(contents).hexdigest()
1029+ self._date = time.asctime()
1030+ self._meta = {}
1031+
1032+ def __getstate__(self):
1033+ d = self.__dict__.copy()
1034+ del d["children"]
1035+ return d
1036+
1037+ def get_etag(self):
1038+ " build an ETag value. Extra quites are mandated by standards "
1039+ return '"%s"' % self._etag
1040+ def set_date(self, datestr):
1041+ """ set the object's time """
1042+ self._date = datestr
1043+ def get_date(self):
1044+ """ get the object's time """
1045+ return self._date
1046+ def get_size(self):
1047+ """ returns size of object's contents """
1048+ return len(self.contents)
1049+ def get_owner(self):
1050+ """ query object's owner """
1051+ return self.owner
1052+ def get_owner_id(self):
1053+ """ query object's owner id """
1054+ return self.owner_id
1055+ def set_meta(self, name, val):
1056+ """ set metadata value for object """
1057+ m = self._meta.setdefault(name, [])
1058+ m.append(val)
1059+ def iter_meta(self):
1060+ """ iterate over object's metadata """
1061+ for k, vals in self._meta.iteritems():
1062+ for v in vals:
1063+ yield k, v
1064+ def delete(self):
1065+ """ clear storage used by object """
1066+ self.contents = None
1067+
1068+class S3Object(BasicS3Object, resource.Resource):
1069+ """ Storage Object
1070+ This object stores the data and metadata
1071+ """
1072+ isLeaf = True
1073+
1074+ def __init__(self, *args, **kw):
1075+ BasicS3Object.__init__(self, *args, **kw)
1076+ resource.Resource.__init__(self)
1077+
1078+ def _render(self, request):
1079+ """render the response for a GET or HEAD request on this object"""
1080+ request.setHeader("x-amz-id-2", str(request))
1081+ request.setHeader("x-amz-request-id", str(request))
1082+ request.setHeader("Content-Type", self.content_type)
1083+ request.setHeader("ETag", self._etag)
1084+ for k, v in self.iter_meta():
1085+ request.setHeader("%s%s" % (AMAZON_META_PREFIX, k), v)
1086+ range = request.getHeader("Range")
1087+ size = len(self.contents)
1088+ if request.method == 'HEAD':
1089+ request.setHeader("Content-Length", size)
1090+ return ""
1091+ if range:
1092+ ranges = parse_range_header(range)
1093+ length = 0
1094+ if len(ranges)==1:
1095+ begin, end = ranges[0]
1096+ if begin is None:
1097+ request.setResponseCode(
1098+ http.REQUESTED_RANGE_NOT_SATISFIABLE)
1099+ return ''
1100+ if not end:
1101+ end = size
1102+ elif end < size:
1103+ end += 1
1104+ if begin >= size:
1105+ request.setResponseCode(
1106+ http.REQUESTED_RANGE_NOT_SATISFIABLE)
1107+ request.setHeader(
1108+ 'content-range', 'bytes */%d' % size)
1109+ return ''
1110+ else:
1111+ request.setHeader(
1112+ 'content-range',
1113+ 'bytes %d-%d/%d' % (begin, end-1, size))
1114+ length = (end - begin)
1115+ request.setHeader("Content-Length", length)
1116+ request.setResponseCode(http.PARTIAL_CONTENT)
1117+ contents = self.contents[begin:end]
1118+ else:
1119+ # multiple ranges should be returned in a multipart response
1120+ request.setResponseCode(
1121+ http.REQUESTED_RANGE_NOT_SATISFIABLE)
1122+ return ''
1123+
1124+ else:
1125+ request.setHeader("Content-Length", str(size))
1126+ contents = self.contents
1127+
1128+ producer = ContentProducer(request, contents)
1129+ producer.startProducing()
1130+ return server.NOT_DONE_YET
1131+ render_GET = _render
1132+ render_HEAD = _render
1133+
1134+class UploadS3Object(resource.Resource):
1135+ """ Class for handling uploads
1136+
1137+ It handles the render_PUT method to update the bucket with the data
1138+ """
1139+ isLeaf = True
1140+ def __init__(self, bucket, name):
1141+ resource.Resource.__init__(self)
1142+ self.bucket = bucket
1143+ self.name = name
1144+
1145+ def render_PUT(self, request):
1146+ """accept the incoming data for a PUT request"""
1147+ data = request.content.read()
1148+ content_type = request.getHeader("Content-Type")
1149+ content_md5 = request.getHeader("Content-MD5")
1150+ if content_md5: # check if the data is good
1151+ header_md5 = base64.decodestring(content_md5)
1152+ data_md5 = hashlib.md5(data)
1153+ assert (data_md5.digest() == header_md5), "md5 check failed!"
1154+ content_md5 = data_md5
1155+ child = S3Object(self.name, data, content_type, content_md5)
1156+ date = request.getHeader("Date")
1157+ if not date:
1158+ date = time.ctime()
1159+ child.set_date(date)
1160+ for k, v in request.getAllHeaders().items():
1161+ if k.startswith(AMAZON_META_PREFIX):
1162+ child.set_meta(k[len(AMAZON_META_PREFIX):], v)
1163+ self.bucket.bucket_children[ self.name ] = child
1164+ request.setHeader("ETag", child.get_etag())
1165+ logger.debug("created object bucket=%s name=%s size=%d" % (
1166+ self.bucket, self.name, len(data)))
1167+ return ""
1168+
1169+
1170+class EmptyPage(resource.Resource):
1171+ """ return Ok/empty document """
1172+ isLeaf = True
1173+ def __init__(self, retcode=http.OK, headers=None, body=""):
1174+ resource.Resource.__init__(self)
1175+ self._retcode = retcode
1176+ self._headers = headers
1177+ self._body = body
1178+
1179+ def render(self, request):
1180+ """ override the render method to return an empty document """
1181+ request.setHeader("x-amz-id-2", str(request))
1182+ request.setHeader("x-amz-request-id", str(request))
1183+ request.setHeader("Content-Type", "text/html")
1184+ request.setHeader("Connection", "close")
1185+ if self._headers:
1186+ for h, v in self._headers.items():
1187+ request.setHeader(h, v)
1188+ request.setResponseCode(self._retcode)
1189+ return self._body
1190+
1191+def ErrorPage(http_code, code, message, path, with_body=True):
1192+ """ helper function that renders an Amazon error response xml page """
1193+ err = s4_xml.AmazonError(code, message, path)
1194+ body = s4_xml.to_XML(err)
1195+ body_size = str(len(body))
1196+ if not with_body:
1197+ body = ""
1198+ logger.info("returning error page %s [%s]%s for %s" % (
1199+ http_code, code, message, path))
1200+ return EmptyPage(http_code, headers={
1201+ "Content-Type": "text/xml",
1202+ "Content-Length": body_size,
1203+ }, body=body)
1204+
1205+# pylint: disable-msg=C0321
1206+class Bucket(resource.Resource):
1207+ """ Storage Bucket
1208+
1209+ Buckets hold objects with data and receive uploads in case of PUT
1210+ """
1211+ def __init__(self, name):
1212+ resource.Resource.__init__(self)
1213+ # can't use children, resource already has that name
1214+ # and it would work as a cache
1215+ self.bucket_children = {}
1216+ self._name = name
1217+ self._date = time.time()
1218+
1219+ def get_name(self):
1220+ """ returns this bucket's name """
1221+ return self._name
1222+ def __len__(self):
1223+ """ returns how many objects are in this bucket """
1224+ return len(self.bucket_children)
1225+ def iter_children(self):
1226+ """ iterator that returns each children objects """
1227+ for (key, val) in self.bucket_children.iteritems():
1228+ yield key, val
1229+ def delete(self):
1230+ """ clean up internal state to prepare bucket for deletion """
1231+ pass
1232+ def _get_state_file(self, rootdir, check=True):
1233+ """ builds the pathname of the state file """
1234+ state_file = os.path.join(rootdir, "%s%s" % (self._name, S4_STATE_FILE))
1235+ if check and not os.path.exists(state_file):
1236+ return None
1237+ return state_file
1238+ def _save(self, rootdir):
1239+ """ saves the state of a bucket """
1240+ state_file = self._get_state_file(rootdir, check=False)
1241+ data = dict(
1242+ name = self._name,
1243+ date = self._date,
1244+ objects = dict([ x for x in self.bucket_children.iteritems() ])
1245+ )
1246+ with open(state_file, "wb") as state_fd:
1247+ pickle.dump(data, state_fd)
1248+ logger.debug("saved bucket '%s' in file '%s'" % (
1249+ self._name, state_file))
1250+ return
1251+ def _load(self, rootdir):
1252+ """ loads a saved bucket state """
1253+ state_file = self._get_state_file(rootdir)
1254+ if not state_file:
1255+ return
1256+ with open(state_file, "rb") as state_fd:
1257+ data = pickle.load(state_fd)
1258+ assert (self._name == data["name"]), \
1259+ "can not load bucket with different name"
1260+ self._date = data["date"]
1261+ self.bucket_children = data["objects"]
1262+ return
1263+
1264+ def getChild(self, name, request):
1265+ """get the next object down the chain"""
1266+ # avoid recursion into the key names
1267+ # (which can contain / as a valid char!)
1268+ if name and request.postpath:
1269+ name = os.path.join(*((name,)+tuple(request.postpath)))
1270+ assert (name), "Wrong call stack for name='%s'" % (name,)
1271+ if request.method == "PUT":
1272+ child = UploadS3Object(self, name)
1273+ elif request.method in ("GET", "HEAD") :
1274+ child = self.bucket_children.get(name, None)
1275+ elif request.method == "DELETE":
1276+ child = self.bucket_children.get(name, None)
1277+ if child is None: # delete unknown object
1278+ return EmptyPage(http.NO_CONTENT)
1279+ child.delete()
1280+ del self.bucket_children[name]
1281+ return EmptyPage(http.NO_CONTENT)
1282+ else:
1283+ logger.error("UNHANDLED request method %s" % request.method)
1284+ return ErrorPage(http.BAD_REQUEST, "BadRequest",
1285+ "Your '%s' request is invalid" % request.method,
1286+ request.path)
1287+ if child is None:
1288+ return ErrorPage(http.NOT_FOUND, "NoSuchKey",
1289+ "The specified key does not exist.",
1290+ request.path, with_body=(request.method!="HEAD"))
1291+ return child
1292+
1293+class DiscardBucket(Bucket):
1294+ """A bucket that will just discard all data as it arrives."""
1295+
1296+ def getChild(self, name, request):
1297+ """accept uploads and discard them."""
1298+ if request.method == "PUT":
1299+ return self
1300+ else:
1301+ return ErrorPage(http.NOT_FOUND, "NoSuchKey",
1302+ "The specified key does not exist.",
1303+ request.path)
1304+
1305+ def render_PUT(self, request):
1306+ """accept the incoming data for a PUT request"""
1307+ # we need to compute a correct md5/etag to send back to the client
1308+ etag = hashlib.md5()
1309+ # this loop looks like it should deadlock with the client code that
1310+ # writes the data, but render_PUT doesn't get called until the streamer
1311+ # has put all the data. The python mem usage is constant. And it works.
1312+ while True:
1313+ data = request.content.read(BLOCK_SIZE)
1314+ if not data:
1315+ break
1316+ etag.update(data)
1317+ request.setHeader("ETag", '"%s"' % etag.hexdigest())
1318+ return ""
1319+
1320+class SizeBucket(Bucket):
1321+ """ SizeBucket
1322+
1323+ Fakes contents and always returns an object with size = int(objname)
1324+ """
1325+
1326+ def getChild(self, name, request):
1327+ """get the next object down the chain"""
1328+ try:
1329+ fake = FakeContent("0", int(name))
1330+ o = S3Object(name, fake, "text/plain", fake.hexdigest())
1331+ return o
1332+ except ValueError:
1333+ return "this buckets requires integer named objects"
1334+
1335+
1336+class Root(resource.Resource):
1337+ """ Site Root
1338+
1339+ handles all the requests.
1340+ on initialization it configures some default buckets
1341+ """
1342+ owner_id = 0
1343+ owner = "fakeuser"
1344+
1345+ def __init__(self, storagedir=None, allow_default_access=True):
1346+ resource.Resource.__init__(self)
1347+
1348+ self.auth = {}
1349+ if allow_default_access:
1350+ self.auth[ AWS_DEFAULT_ACCESS_KEY_ID ] = \
1351+ AWS_DEFAULT_SECRET_ACCESS_KEY
1352+ self.fail_next = {}
1353+ self.buckets = dict(
1354+ size = SizeBucket("size"),
1355+ discard = DiscardBucket("discard"))
1356+
1357+ self._rootdir = storagedir
1358+ if self._rootdir:
1359+ self._load()
1360+
1361+ def _add_bucket(self, name):
1362+ """ create a new bucket """
1363+ if self.buckets.has_key(name):
1364+ return self.buckets[name]
1365+ bucket = Bucket(name)
1366+ self.buckets[name] = bucket
1367+ if self._rootdir:
1368+ bucket._save(self._rootdir)
1369+ self._save()
1370+ return bucket
1371+
1372+ def _get_state_file(self, check=True):
1373+ """ locate the saved state file on disk """
1374+ assert self._rootdir, "S4 storage has not been initialized"
1375+ state_file = os.path.join(self._rootdir, S4_STATE_FILE)
1376+ if check and not os.path.exists(state_file):
1377+ return None
1378+ return state_file
1379+ def _load(self):
1380+ "load a saved bucket list state from disk "
1381+ state_file = self._get_state_file()
1382+ if not state_file:
1383+ return
1384+ data = dict(buckets=[])
1385+ with open(state_file, "rb") as state_fd:
1386+ data = pickle.load(state_fd)
1387+ self.owner_id = data["owner_id"]
1388+ self.owner = data["owner"]
1389+ for bucket_name in data["buckets"]:
1390+ bucket = Bucket(bucket_name)
1391+ bucket._load(self._rootdir)
1392+ self.buckets[bucket_name] = bucket
1393+ self._save(with_buckets=False)
1394+ return
1395+ def _save(self, with_buckets=True):
1396+ """ save current state to disk """
1397+ state_file = self._get_state_file(check=False)
1398+ data = dict(
1399+ owner = self.owner,
1400+ owner_id = self.owner_id,
1401+ buckets = [ x for x in self.buckets.keys()
1402+ if x not in ("size", "discard")],
1403+ )
1404+ with open(state_file, "wb") as state_fd:
1405+ pickle.dump(data, state_fd)
1406+        logger.debug("saved state file %s", state_file)
1407+ if not with_buckets:
1408+ return
1409+ for bucket_name in data["buckets"]:
1410+ bucket = self.buckets[bucket_name]
1411+ bucket._save(self._rootdir)
1412+ return
1413+ def fail_next_put(self, error=http.INTERNAL_SERVER_ERROR,
1414+ message="Internal Server Error"):
1415+ """
1416+ Force next PUT request to return an error
1417+ """
1418+ logger.debug("will fail next put with %d (%s)", error, message)
1419+ self.fail_next['PUT'] = error, message
1420+
1421+ def fail_next_get(self, error=http.INTERNAL_SERVER_ERROR,
1422+ message="Internal Server Error"):
1423+ """
1424+ Force next GET request to return an error
1425+ """
1426+ logger.debug("will fail next get with %d (%s)", error, message)
1427+ self.fail_next['GET'] = error, message
1428+
1429+ def getChild(self, name, request):
1430+ """get the next object down the resource path"""
1431+        if not self.check_auth(request):
1432+ return ErrorPage(http.FORBIDDEN, "InvalidSecurity",
1433+ "The provided security credentials are not valid.",
1434+ request.path)
1435+ if request.method in self.fail_next:
1436+ err, message = self.fail_next.pop(request.method)
1437+ return error.ErrorPage(err, message, message)
1438+ if request.path == "/" and request.method == "GET":
1439+ # this is a getallbuckets call
1440+ return ListAllMyBucketsResult(self.buckets.values())
1441+
1442+ # need to record when things change and save bucket state
1443+ if self._rootdir and name and request.method in ("PUT", "DELETE"):
1444+ def save_state(result, self, name, method):
1445+ """ callback for when rendering is finished """
1446+ bucket = self.buckets[name]
1447+ return bucket._save(self._rootdir)
1448+ _defer = request.notifyFinish()
1449+ _defer.addCallback(save_state, self, name, request.method)
1450+
1451+ bucket = self.buckets.get(name, None)
1452+ # if we operate on a key, pass control
1453+ if request.postpath and request.postpath[0]:
1454+ if bucket is None:
1455+ # bucket does not exist, yet we attempt operation on
1456+ # an object from that bucket
1457+ return ErrorPage(http.NOT_FOUND, "InvalidBucketName",
1458+ "The specified bucket is not valid",
1459+ request.path)
1460+ return bucket
1461+
1462+ # these are operations that are happening on a bucket and
1463+ # which are better handled from the root handler
1464+
1465+ # we're asked to list a bucket
1466+ if request.method in ("GET", "HEAD"):
1467+ if bucket is None:
1468+ return ErrorPage(http.NOT_FOUND, "NoSuchBucket",
1469+ "The specified bucket does not exist.",
1470+ request.path)
1471+ return ListBucketResult(bucket)
1472+ # bucket creation. if bucket already exists, noop
1473+ elif request.method == "PUT":
1474+ if bucket is None:
1475+ bucket = self._add_bucket(name)
1476+ return EmptyPage()
1477+ # we're asked to delete a bucket
1478+ elif request.method == "DELETE":
1479+            if len(bucket):  # non-empty buckets cannot be deleted
1480+ return ErrorPage(http.CONFLICT, "BucketNotEmpty",
1481+ "The bucket you tried to delete is not empty.",
1482+ request.path)
1483+ bucket.delete()
1484+ del self.buckets[name]
1485+ if self._rootdir:
1486+ self._save(with_buckets=False)
1487+ return EmptyPage(http.NO_CONTENT,
1488+ headers=dict(Location=request.path))
1489+ else:
1490+ return ErrorPage(http.BAD_REQUEST, "BadRequest",
1491+ "Your '%s' request is invalid" % request.method,
1492+ request.path)
1493+ return bucket
1494+
1495+ def check_auth(self, request):
1496+ """ Validates key/secret """
1497+ auth_str = request.getHeader('Authorization')
1498+        if not auth_str or not auth_str.startswith("AWS "):
1499+ return False
1500+ access_key, signature = auth_str[4:].split(":")
1501+        if access_key not in self.auth:
1502+            return False
1503+        secret_key = self.auth[access_key]
1504+ headers = request.getAllHeaders()
1505+ c_string = canonical_path_string(
1506+ request.method, request.path, headers)
1507+ if encode(secret_key, c_string) != signature:
1508+ return False
1509+ return True
1510+
1511+
1512+if __name__ == "__main__":
1513+ root = Root()
1514+ site = server.Site(root)
1515+ reactor.listenTCP(8808, site)
1516+ reactor.run()
1517
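Since the module carries its own __main__ block, it can also be driven standalone; below is a minimal sketch of starting an in-memory instance with a fault-injection hook armed, assuming the module ends up importable as txaws.s4.s4 per the layout in this diff:

    from twisted.internet import reactor
    from twisted.web import server
    from txaws.s4 import s4

    root = s4.Root()        # in-memory store with the default credentials
    root.fail_next_put()    # fault injection: the next PUT returns a 500
    site = server.Site(root)
    port = reactor.listenTCP(0, site)  # port 0 picks a free port
    print "S4 listening on port", port.getHost().port
    reactor.run()

Note that fail_next_put/fail_next_get arm a single request only: getChild pops the entry from fail_next, so tests that want repeated failures have to re-arm between requests.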
1518=== added file 'txaws/s4/s4_xml.py'
1519--- txaws/s4/s4_xml.py 1970-01-01 00:00:00 +0000
1520+++ txaws/s4/s4_xml.py 2009-08-19 14:36:56 +0000
1521@@ -0,0 +1,155 @@
1522+# Copyright 2008-2009 Canonical Ltd.
1523+#
1524+# Permission is hereby granted, free of charge, to any person obtaining
1525+# a copy of this software and associated documentation files (the
1526+# "Software"), to deal in the Software without restriction, including
1527+# without limitation the rights to use, copy, modify, merge, publish,
1528+# distribute, sublicense, and/or sell copies of the Software, and to
1529+# permit persons to whom the Software is furnished to do so, subject to
1530+# the following conditions:
1531+#
1532+# The above copyright notice and this permission notice shall be
1533+# included in all copies or substantial portions of the Software.
1534+#
1535+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
1536+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
1537+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
1538+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
1539+# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
1540+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
1541+# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
1542+
1543+""" build XML responses that mimic the behavior of the real S3 """
1544+
1545+from StringIO import StringIO
1546+from xml.etree.ElementTree import Element, ElementTree
1547+
1548+XMLNS = "http://s3.amazonaws.com/doc/2006-03-01"
1549+
1550+# <?xml version="1.0" encoding="UTF-8"?>
1551+def to_XML(elem):
1552+ """ renders an xml element to a text/xml page """
1553+ s = StringIO()
1554+ s.write("""<?xml version="1.0" encoding="UTF-8"?>\n""")
1555+ tree = ElementTree(elem)
1556+ tree.write(s)
1557+ return s.getvalue()
1558+
1559+def add_props(elem, **kw):
1560+    """ add subnodes to an XML node based on a dictionary """
1561+ for (key, val) in kw.iteritems():
1562+ prop = Element(key)
1563+ prop.tail = "\n"
1564+ if val is None:
1565+ val = ""
1566+ elif isinstance(val, bool):
1567+ val = str(val).lower()
1568+ elif not isinstance(val, str):
1569+ val = str(val)
1570+ prop.text = val
1571+ elem.append(prop)
1572+
1573+# <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01">
1574+# <Name>bucket</Name>
1575+# <Prefix>prefix</Prefix>
1576+# <Marker>marker</Marker>
1577+# <MaxKeys>max-keys</MaxKeys>
1578+# <IsTruncated>false</IsTruncated>
1579+# <Contents>
1580+# <Key>object</Key>
1581+# <LastModified>date</LastModified>
1582+# <ETag>etag</ETag>
1583+# <Size>size</Size>
1584+# <StorageClass>STANDARD</StorageClass>
1585+# <Owner>
1586+# <ID>owner_id</ID>
1587+# <DisplayName>owner_name</DisplayName>
1588+# </Owner>
1589+# </Contents>
1590+# ...
1591+# </ListBucketResult>
1592+def ListBucketResult(bucket, children):
1593+ """ builds the xml tree corresponding to a bucket listing """
1594+ root = Element("ListBucketResult", dict(xmlns=XMLNS))
1595+ root.tail = root.text = "\n"
1596+ add_props(root, **dict(
1597+ Name = bucket.get_name(),
1598+ IsTruncated = False,
1599+ Marker = 0,
1600+ ))
1601+ for (obname, ob) in children.iteritems():
1602+ contents = Element("Contents")
1603+ add_props(contents, **dict(
1604+ Key = obname,
1605+ LastModified = ob.get_date(),
1606+ ETag = ob.get_etag(),
1607+ Size = ob.get_size(),
1608+ StorageClass = "STANDARD",
1609+ ))
1610+ owner = Element("Owner")
1611+ add_props(owner, **dict(
1612+ ID = ob.get_owner_id(),
1613+ DisplayName = ob.get_owner(), ))
1614+ contents.append(owner)
1615+ root.append(contents)
1616+ return root
1617+
1618+# <Error>
1619+# <Code>NoSuchKey</Code>
1620+# <Message>The resource you requested does not exist</Message>
1621+# <Resource>/mybucket/myfoto.jpg</Resource>
1622+# <RequestId>4442587FB7D0A2F9</RequestId>
1623+# </Error>
1624+def AmazonError(code, message, resource, req_id=""):
1625+ """ builds xml tree corresponding to an Amazon error xml page """
1626+ root = Element("Error")
1627+ root.tail = root.text = "\n"
1628+ add_props(root, **dict(
1629+ Code = code,
1630+ Message = message,
1631+ Resource = resource,
1632+ RequestId = req_id))
1633+ return root
1634+
1635+# <ListAllMyBucketsResult xmlns="http://doc.s3.amazonaws.com/2006-03-01">
1636+# <Owner>
1637+# <ID>user_id</ID>
1638+# <DisplayName>display_name</DisplayName>
1639+# </Owner>
1640+# <Buckets>
1641+# <Bucket>
1642+# <Name>bucket_name</Name>
1643+# <CreationDate>date</CreationDate>
1644+# </Bucket>
1645+# ...
1646+# </Buckets>
1647+# </ListAllMyBucketsResult>
1648+def ListAllMyBucketsResult(owner, buckets):
1649+ """ builds xml tree corresponding to an Amazon list all buckets """
1650+ root = Element("ListAllMyBucketsResult", dict(xmlns=XMLNS))
1651+ root.tail = root.text = "\n"
1652+ xml_owner = Element("Owner")
1653+ add_props(xml_owner, **dict(
1654+ ID = owner["id"],
1655+ DisplayName = owner["name"] ))
1656+ root.append(xml_owner)
1657+ xml_buckets = Element("Buckets")
1658+ for bucket in buckets:
1659+ b = Element("Bucket")
1660+ add_props(b, **dict(
1661+ Name = bucket._name,
1662+ CreationDate = bucket._date))
1663+ xml_buckets.append(b)
1664+ root.append(xml_buckets)
1665+ return root
1666+
1667+if __name__ == '__main__':
1668+ # pylint: disable-msg=W0403
1669+ # pylint: disable-msg=E0611
1670+ from s4 import Bucket
1671+ bucket = Bucket("test-bucket")
1672+    lbr = ListBucketResult(bucket, {})  # no keys in the sample bucket
1673+ print to_XML(lbr)
1674+ print
1675+
1676+
1677
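Since these builders return ElementTree nodes rather than strings, the quickest way to eyeball their output is to round-trip one through to_XML; a small sketch, assuming the module lands as txaws.s4.s4_xml:

    from txaws.s4.s4_xml import AmazonError, to_XML

    err = AmazonError("NoSuchKey",
                      "The specified key does not exist.",
                      "/mybucket/missing", req_id="42")
    print to_XML(err)

This should print an <Error> document shaped like the template quoted in the comment above AmazonError.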
1678=== added directory 'txaws/s4/testing'
1679=== added file 'txaws/s4/testing/__init__.py'
1680=== added file 'txaws/s4/testing/testcase.py'
1681--- txaws/s4/testing/testcase.py 1970-01-01 00:00:00 +0000
1682+++ txaws/s4/testing/testcase.py 2009-08-19 14:36:56 +0000
1683@@ -0,0 +1,131 @@
1684+# Copyright 2008-2009 Canonical Ltd.
1685+#
1686+# Permission is hereby granted, free of charge, to any person obtaining
1687+# a copy of this software and associated documentation files (the
1688+# "Software"), to deal in the Software without restriction, including
1689+# without limitation the rights to use, copy, modify, merge, publish,
1690+# distribute, sublicense, and/or sell copies of the Software, and to
1691+# permit persons to whom the Software is furnished to do so, subject to
1692+# the following conditions:
1693+#
1694+# The above copyright notice and this permission notice shall be
1695+# included in all copies or substantial portions of the Software.
1696+#
1697+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
1698+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
1699+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
1700+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
1701+# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
1702+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
1703+# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
1704+
1705+"""Test case for S4 test server"""
1706+
1707+import os
1708+import tempfile
1709+import shutil
1710+
1711+from twisted.web import server
1712+from twisted.internet import reactor
1713+from twisted.trial.unittest import TestCase as TwistedTestCase
1714+
1715+from txaws.s4 import s4
1716+from boto.s3 import connection
1717+
1718+# pylint: disable-msg=W0201
1719+class S4TestCase(TwistedTestCase):
1720+ """ Base class for testing S4
1721+
1722+ This class takes care of starting a server instance for all S4 tests
1723+
1724+ As S4 is based on twisted, we inherit from TwistedTestCase.
1725+ As our tests are blocking, we decorate them with 'blocking_test' to
1726+ handle that.
1727+ """
1728+ s3 = None
1729+ logfile = None
1730+ storagedir = None
1731+ active = False
1732+ def setUp(self):
1733+ """Setup method."""
1734+ if not self.active:
1735+ self.start_server()
1736+
1737+ def tearDown(self):
1738+ """ tear down end testcase method """
1739+ # dirty hack to force closing all the cruft boto might be
1740+ # leaving around
1741+ if self.s3:
1742+            # iterating over a copy is intentional, to deal with
1743+ for key in [x for x in self.s3._cache]:
1744+ self.s3._cache[key].close()
1745+ self.s3._cache[key] = None
1746+ self.s3 = None
1747+ self.stop_server()
1748+
1749+ def connect_ok(self, access=s4.AWS_DEFAULT_ACCESS_KEY_ID,
1750+ secret=s4.AWS_DEFAULT_SECRET_ACCESS_KEY):
1751+ """ Get a valid connection to S3 (actually, to S4) """
1752+ if self.s3:
1753+ return self.s3
1754+ s3 = connection.S3Connection(access, secret, is_secure=False,
1755+ host="localhost", port=self.port,
1756+ calling_format=s4.CallingFormat())
1757+        # don't let boto do its braindead retrying for us
1758+ s3.num_retries = 0
1759+ # Need to keep track of this connection
1760+ self.s3 = s3
1761+ return s3
1762+
1763+ @property
1764+ def port(self):
1765+ """The port."""
1766+ return self.conn.getHost().port
1767+
1768+ def start_server(self, persistent=False):
1769+ """ start the S4 listening server """
1770+ if self.active:
1771+ return
1772+ if persistent:
1773+ if not self.storagedir:
1774+ self.storagedir = tempfile.mkdtemp(
1775+ prefix="test-s4-boto-", suffix="-cache")
1776+ root = s4.Root(storagedir=self.storagedir)
1777+ else:
1778+ root = s4.Root()
1779+ self.site = server.Site(root)
1780+ self.active = True
1781+ self.conn = reactor.listenTCP(0, self.site)
1782+
1783+ def stop_server(self):
1784+ """ stop the S4 listening server """
1785+ self.active = False
1786+ self.conn.stopListening()
1787+ if self.storagedir and os.path.exists(self.storagedir):
1788+ shutil.rmtree(self.storagedir, ignore_errors=True)
1789+ self.storagedir = None
1790+
1791+ def restart_server(self, persistent=False):
1792+ """ restarts the S4 listening server """
1793+ self.stop_server()
1794+ self.start_server(persistent=persistent)
1795+
1796+
1797+from twisted.internet import threads
1798+from twisted.python.util import mergeFunctionMetadata
1799+
1800+def defer_to_thread(function):
1801+ """Run in a thread and return a Deferred that fires when done."""
1802+ def decorated(*args, **kwargs):
1803+ """Run in a thread and return a Deferred that fires when done."""
1804+ return threads.deferToThread(function, *args, **kwargs)
1805+ return mergeFunctionMetadata(function, decorated)
1806+
1807+def skip_test(reason):
1808+ """ tag a testcase to be skipped by the test runner """
1809+    def deco(f):
1810+        f.skip = reason  # trial skips tests carrying a 'skip' attribute
1811+        return f
1812+    return deco
1813+
1814+
1815
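A new blocking test built on this base class stays short; here is a sketch (the class name, bucket name, and assertion are illustrative only):

    from txaws.s4.testing.testcase import S4TestCase, defer_to_thread

    class TestSmoke(S4TestCase):

        @defer_to_thread
        def test_create_bucket(self):
            """ create a bucket and read its name back """
            s3 = self.connect_ok()  # boto connection to the local S4
            bucket = s3.create_bucket("smoke-test")
            self.assertEquals(bucket.name, "smoke-test")

The defer_to_thread decorator is what keeps trial happy here: the boto calls block, so each test body runs in a worker thread and trial receives a Deferred that fires when it finishes.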
1816=== added directory 'txaws/s4/tests'
1817=== added file 'txaws/s4/tests/__init__.py'
1818=== added file 'txaws/s4/tests/test_S4.py'
1819--- txaws/s4/tests/test_S4.py 1970-01-01 00:00:00 +0000
1820+++ txaws/s4/tests/test_S4.py 2009-08-19 14:36:56 +0000
1821@@ -0,0 +1,194 @@
1822+# Copyright 2008 Canonical Ltd.
1823+#
1824+# Permission is hereby granted, free of charge, to any person obtaining
1825+# a copy of this software and associated documentation files (the
1826+# "Software"), to deal in the Software without restriction, including
1827+# without limitation the rights to use, copy, modify, merge, publish,
1828+# distribute, sublicense, and/or sell copies of the Software, and to
1829+# permit persons to whom the Software is furnished to do so, subject to
1830+# the following conditions:
1831+#
1832+# The above copyright notice and this permission notice shall be
1833+# included in all copies or substantial portions of the Software.
1834+#
1835+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
1836+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
1837+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
1838+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
1839+# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
1840+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
1841+# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
1842+
1843+"""Unit tests for S4 test server"""
1844+
1845+import time
1846+import unittest
1847+
1848+from txaws.s4.testing.testcase import S4TestCase, defer_to_thread
1849+from boto.exception import S3ResponseError, BotoServerError
1850+
1851+class TestBasicObjectManipulation(S4TestCase):
1852+ """Tests for basic object manipulation."""
1853+
1854+ def _get_sample_key(self, s3, content, content_type=None):
1855+        """ create a new bucket and return a sample key holding content """
1856+ bname = "test-%.2f" % time.time()
1857+ bucket = s3.create_bucket(bname)
1858+ key = bucket.new_key("sample")
1859+ if content_type:
1860+ key.content_type = content_type
1861+ key.set_contents_from_string(content)
1862+ return key
1863+
1864+ @defer_to_thread
1865+ def test_get(self):
1866+ """ Get one object """
1867+
1868+ s3 = self.connect_ok()
1869+ size = 30
1870+ b = s3.get_bucket("size")
1871+ m = b.get_key(str(size))
1872+
1873+ body = m.get_contents_as_string()
1874+ self.assertEquals(body, "0"*size)
1875+ self.assertEquals(m.size, size)
1876+ self.assertEquals(m.content_type, "text/plain")
1877+
1878+ @defer_to_thread
1879+ def test_get_range(self):
1880+ """Get part of one object"""
1881+
1882+ s3 = self.connect_ok()
1883+ content = '0123456789'
1884+ key = self._get_sample_key(s3, content)
1885+ size = len(content)
1886+
1887+ def _get_range(range_start, range_size=None):
1888+ """test range get for various ranges"""
1889+ if range_size:
1890+ range_header = {"Range" : "bytes=%s-%s" % (
1891+ range_start, range_start + range_size - 1 )}
1892+ else:
1893+ range_header = {"Range" : "bytes=%s-" % (range_start,)}
1894+ range_size = size - range_start
1895+ key.open_read(headers=range_header)
1896+ self.assertEquals(key.size, range_size)
1897+ self.assertEquals(key.resp.status, 206)
1898+ ret = key.read()
1899+ body = content[range_start:range_start+range_size]
1900+ self.assertEquals(ret, body)
1901+ key.close()
1902+ # get a test range
1903+ range_size = 5
1904+ range_start = 2
1905+ _get_range(range_start)
1906+ _get_range(range_start, range_size)
1907+
1908+ @defer_to_thread
1909+ def test_get_multiple_range(self):
1910+        """Multiple ranges in one request are rejected with 416"""
1911+
1912+ s3 = self.connect_ok()
1913+ content = '0123456789'
1914+ size = len(content)
1915+ key = self._get_sample_key(s3, content)
1916+ range_header = {"Range" : "bytes=0-1,5-6,9-" }
1917+ exc = self.assertRaises(S3ResponseError, key.open_read,
1918+ headers=range_header)
1919+ self.assertEquals(exc.status, 416)
1920+ key.close()
1921+
1922+ @defer_to_thread
1923+ def test_get_illegal_range(self):
1924+ """make sure first-byte-pos is present"""
1925+
1926+ s3 = self.connect_ok()
1927+ content = '0123456789'
1928+ size = len(content)
1929+ key = self._get_sample_key(s3, content)
1930+ range_header = {"Range" : "bytes=-1" }
1931+ exc = self.assertRaises(S3ResponseError, key.open_read,
1932+ headers=range_header)
1933+ self.assertEquals(exc.status, 416)
1934+ key.close()
1935+
1936+ @defer_to_thread
1937+ def test_get_404(self):
1938+        """ Try to get an object that's not there, expect 404 """
1939+
1940+ s3 = self.connect_ok()
1941+ bname = "test-%.2f" % time.time()
1942+ bucket = s3.create_bucket(bname)
1943+ # this does not create a key on the server side yet
1944+ key = bucket.new_key(bname)
1945+ # ... which is why we should get errors when attempting to read it
1946+ exc = self.assertRaises(S3ResponseError, key.open_read)
1947+ self.assertEquals(key.resp.status, 404)
1948+ self.assertEquals(exc.status, 404)
1949+
1950+ @defer_to_thread
1951+ def test_get_403(self):
1952+ """ Try to get an object with invalid credentials """
1953+ s3 = self.connect_ok(secret="bad secret")
1954+ exc = self.assertRaises(S3ResponseError, s3.get_bucket, "size")
1955+ self.assertEquals(exc.status, 403)
1956+
1957+
1958+ @defer_to_thread
1959+ def test_discarded(self):
1960+        """ put an object into the discard bucket, expect 404 on read """
1961+ s3 = self.connect_ok()
1962+ bucket = s3.get_bucket("discard")
1963+ key = bucket.new_key("sample")
1964+ message = "Hello World!"
1965+ key.content_type = "text/lame"
1966+ key.set_contents_from_string(message)
1967+ exc = self.assertRaises(S3ResponseError, key.read)
1968+ self.assertEquals(exc.status, 404)
1969+
1970+ @defer_to_thread
1971+ def test_put(self):
1972+ """ put an object, get it back """
1973+ s3 = self.connect_ok()
1974+
1975+ message = "Hello World!"
1976+ key = self._get_sample_key(s3, message, "text/lame")
1977+ for x in range(1, 10):
1978+ body = key.get_contents_as_string()
1979+ self.assertEquals(body, message*x)
1980+ key.set_contents_from_string(message*(x+1))
1981+ self.assertEquals(key.content_type, "text/lame")
1982+
1983+ @defer_to_thread
1984+ def test_fail_next(self):
1985+ """ Test whether fail_next_put works """
1986+ s3 = self.connect_ok()
1987+ message = "Hello World!"
1988+ key = self._get_sample_key(s3, message, "text/lamest")
1989+
1990+ # dirty poking at our own internals, but it works...
1991+ self.site.resource.fail_next_put()
1992+
1993+ exc = self.assertRaises(BotoServerError, key.set_contents_from_string,
1994+ message)
1995+ self.assertEquals(exc.status, 500)
1996+ # next one should work
1997+ key.set_contents_from_string(message*2)
1998+ body = key.get_contents_as_string()
1999+ self.assertEquals(body, message*2)
2000+
2001+ # now test the get fail
2002+ self.site.resource.fail_next_get()
2003+ key.set_contents_from_string(message*3)
2004+ exc = self.assertRaises(BotoServerError, key.read)
2005+ self.assertEquals(exc.status, 500)
2006+ # next get should work
2007+ body = key.get_contents_as_string()
2008+ self.assertEquals(body, message*3)
2009+
2010+def test_suite():
2011+    """Used by the test runner to find the tests in this module"""
2012+ return unittest.TestLoader().loadTestsFromName(__name__)
2013+
2014+if __name__ == "__main__":
2015+ unittest.main()
2016
2017=== added file 'txaws/s4/tests/test_boto.py'
2018--- txaws/s4/tests/test_boto.py 1970-01-01 00:00:00 +0000
2019+++ txaws/s4/tests/test_boto.py 2009-08-19 14:36:56 +0000
2020@@ -0,0 +1,275 @@
2021+#!/usr/bin/python
2022+#
2023+# Permission is hereby granted, free of charge, to any person obtaining
2024+# a copy of this software and associated documentation files (the
2025+# "Software"), to deal in the Software without restriction, including
2026+# without limitation the rights to use, copy, modify, merge, publish,
2027+# distribute, sublicense, and/or sell copies of the Software, and to
2028+# permit persons to whom the Software is furnished to do so, subject to
2029+# the following conditions:
2030+#
2031+# The above copyright notice and this permission notice shall be
2032+# included in all copies or substantial portions of the Software.
2033+#
2034+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
2035+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
2036+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
2037+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
2038+# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
2039+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
2040+# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
2041+
2042+#
2043+# test s4 implementation using the python-boto client
2044+
2045+"""
2046+imported (from boto) unit tests for the S3Connection
2047+"""
2048+import unittest
2049+
2050+import os
2051+import time
2052+import tempfile
2053+
2054+from StringIO import StringIO
2055+
2056+from txaws.s4.testing.testcase import S4TestCase, defer_to_thread, skip_test
2057+from boto.exception import S3PermissionsError
2058+
2059+# pylint: disable-msg=C0111
2060+class S3ConnectionTest(S4TestCase):
2061+ def _get_bucket(self, s3conn):
2062+ # create a new, empty bucket
2063+ bucket_name = 'test-%.3f' % time.time()
2064+ bucket = s3conn.create_bucket(bucket_name)
2065+ # now try a get_bucket call and see if it's really there
2066+ bucket = s3conn.get_bucket(bucket_name)
2067+ return bucket
2068+
2069+ @defer_to_thread
2070+ def test_basic(self):
2071+ T1 = 'This is a test of file upload and download'
2072+ s3conn = self.connect_ok()
2073+
2074+ all_buckets = s3conn.get_all_buckets()
2075+ bucket = self._get_bucket(s3conn)
2076+ all_buckets = s3conn.get_all_buckets()
2077+ self.failUnless(bucket.name in [x.name for x in all_buckets])
2078+ # bucket should be empty now
2079+ self.failUnlessEqual(bucket.get_key("missing"), None)
2080+ all = bucket.get_all_keys()
2081+ self.failUnlessEqual(len(all), 0)
2082+        # create a new key and store its content from a string
2083+ k = bucket.new_key()
2084+ k.name = 'foobar'
2085+ k.set_contents_from_string(T1)
2086+ fp = StringIO()
2087+ # now get the contents from s3 to a local file
2088+ k.get_contents_to_file(fp)
2089+ # check to make sure content read from s3 is identical to original
2090+ self.failUnlessEqual(T1, fp.getvalue())
2091+ bucket.delete_key(k)
2092+ self.failUnlessEqual(bucket.get_key(k.name), None)
2093+
2094+ @defer_to_thread
2095+ def test_lookup(self):
2096+ T1 = 'This is a test of file upload and download'
2097+ T2 = 'This is a second string to test file upload and download'
2098+ s3conn = self.connect_ok()
2099+ bucket = self._get_bucket(s3conn)
2100+        # create a new key and store its content from a string
2101+ k = bucket.new_key()
2102+ # test a few variations on get_all_keys - first load some data
2103+ # for the first one, let's override the content type
2104+ (fd, fname) = tempfile.mkstemp()
2105+ os.write(fd, T1)
2106+ os.close(fd)
2107+ phony_mimetype = 'application/x-boto-test'
2108+ headers = {'Content-Type': phony_mimetype}
2109+ k.name = 'foo/bar'
2110+ k.set_contents_from_string(T1, headers)
2111+ k.name = 'foo/bas'
2112+ k.set_contents_from_filename(fname)
2113+ k.name = 'foo/bat'
2114+ k.set_contents_from_string(T1)
2115+ k.name = 'fie/bar'
2116+ k.set_contents_from_string(T1)
2117+ k.name = 'fie/bas'
2118+ k.set_contents_from_string(T1)
2119+ k.name = 'fie/bat'
2120+ k.set_contents_from_string(T1)
2121+ # try resetting the contents to another value
2122+ md5 = k.md5
2123+ k.set_contents_from_string(T2)
2124+ self.failIfEqual(k.md5, md5)
2125+ os.unlink(fname)
2126+ all = bucket.get_all_keys()
2127+ self.failUnlessEqual(len(all), 6)
2128+ rs = bucket.get_all_keys(prefix='foo')
2129+ self.failUnlessEqual(len(rs), 3)
2130+ rs = bucket.get_all_keys(maxkeys=5)
2131+ self.failUnlessEqual(len(rs), 5)
2132+ # test the lookup method
2133+ k = bucket.lookup('foo/bar')
2134+ self.failUnless(isinstance(k, bucket.key_class))
2135+ self.failUnlessEqual(k.content_type, phony_mimetype)
2136+ k = bucket.lookup('notthere')
2137+ self.failUnlessEqual(k, None)
2138+
2139+ @defer_to_thread
2140+ def test_metadata(self):
2141+ T1 = 'This is a test of file upload and download'
2142+ s3conn = self.connect_ok()
2143+ bucket = self._get_bucket(s3conn)
2144+ # try some metadata stuff
2145+ k = bucket.new_key()
2146+ k.name = 'has_metadata'
2147+ mdkey1 = 'meta1'
2148+ mdval1 = 'This is the first metadata value'
2149+ k.set_metadata(mdkey1, mdval1)
2150+ mdkey2 = 'meta2'
2151+ mdval2 = 'This is the second metadata value'
2152+ k.set_metadata(mdkey2, mdval2)
2153+ k.set_contents_from_string(T1)
2154+ k = bucket.lookup('has_metadata')
2155+ self.failUnlessEqual(k.get_metadata(mdkey1), mdval1)
2156+ self.failUnlessEqual(k.get_metadata(mdkey2), mdval2)
2157+ k = bucket.new_key()
2158+ k.name = 'has_metadata'
2159+ k.get_contents_as_string()
2160+ self.failUnlessEqual(k.get_metadata(mdkey1), mdval1)
2161+ self.failUnlessEqual(k.get_metadata(mdkey2), mdval2)
2162+ bucket.delete_key(k)
2163+ # try a key with a funny character
2164+ rs = bucket.get_all_keys()
2165+ num_keys = len(rs)
2166+ k = bucket.new_key()
2167+ k.name = 'testnewline\n'
2168+ k.set_contents_from_string('This is a test')
2169+ rs = bucket.get_all_keys()
2170+ self.failUnlessEqual(len(rs), num_keys + 1)
2171+ bucket.delete_key(k)
2172+ rs = bucket.get_all_keys()
2173+ self.failUnlessEqual(len(rs), num_keys)
2174+
2175+ # tests removing objects from the store
2176+ @defer_to_thread
2177+ def test_cleanup(self):
2178+ s3conn = self.connect_ok()
2179+ bucket = self._get_bucket(s3conn)
2180+ for x in range(10):
2181+ k = bucket.new_key()
2182+ k.name = "foo%d" % x
2183+ k.set_contents_from_string("test %d" % x)
2184+ all = bucket.get_all_keys()
2185+ # now delete all keys in bucket
2186+ for k in all:
2187+ bucket.delete_key(k)
2188+ # now delete bucket
2189+ s3conn.delete_bucket(bucket)
2190+
2191+ @defer_to_thread
2192+ def test_connection(self):
2193+ s3conn = self.connect_ok()
2194+ bucket = self._get_bucket(s3conn)
2195+ all_buckets = s3conn.get_all_buckets()
2196+ size_bucket = s3conn.get_bucket("size")
2197+        discard_bucket = s3conn.get_bucket("discard")
2198+
2199+ @defer_to_thread
2200+ def test_persistence(self):
2201+ # pylint: disable-msg=W0631
2202+ # first, stop the server and restart it in persistent mode
2203+ self.restart_server(persistent=True)
2204+ s3conn = self.connect_ok()
2205+ for bcount in range(1, 5):
2206+ bucket = self._get_bucket(s3conn)
2207+ for kcount in range(1, 5):
2208+ k = bucket.new_key()
2209+ k.name = "bucket-%d-key-%d" % (bcount, kcount)
2210+ k.set_contents_from_string(
2211+ "This is key %d from bucket %d (%s)" %(
2212+ kcount, bcount, bucket.name))
2213+ k.set_metadata("bcount", bcount)
2214+ k.set_metadata("kcount", kcount)
2215+ # now get a list of all the buckets and objects in the store
2216+ all_buckets = s3conn.get_all_buckets()
2217+ all_objects = {}
2218+ for x in all_buckets:
2219+ if x.name in ["size", "discard"]:
2220+ continue
2221+ objset = all_objects.setdefault(x.name, set())
2222+ bucket = s3conn.get_bucket(x.name)
2223+ for obj in bucket.get_all_keys():
2224+ objset.add(obj)
2225+ # XXX: test metadata
2226+ # now stop the S4Server and restart it
2227+ self.restart_server(persistent=True)
2228+ new_buckets = s3conn.get_all_buckets()
2229+ self.failUnlessEqual(
2230+ set([x.name for x in all_buckets]),
2231+ set([x.name for x in new_buckets]) )
2232+ new_objects = {}
2233+ for x in new_buckets:
2234+ if x.name in ["size", "discard"]:
2235+ continue
2236+ objset = new_objects.setdefault(x.name, set())
2237+ bucket = s3conn.get_bucket(x.name)
2238+ for obj in bucket.get_all_keys():
2239+ objset.add(obj)
2240+ # XXX: test metadata
2241+ # test the newobjects
2242+ self.failUnlessEqual(
2243+ set(all_objects.keys()),
2244+ set(new_objects.keys()) )
2245+ for key in all_objects.keys():
2246+ self.failUnlessEqual(
2247+ set([x.name for x in all_objects[key]]),
2248+ set([x.name for x in new_objects[key]]) )
2249+
2250+ @defer_to_thread
2251+ def test_size_bucket(self):
2252+ s3conn = self.connect_ok()
2253+ bucket = s3conn.get_bucket("size")
2254+ all_keys = bucket.get_all_keys()
2255+ self.failUnlessEqual(all_keys, [])
2256+ for size in range(1, 10**7, 10000):
2257+ k = bucket.get_key(str(size))
2258+ self.failUnlessEqual(size, k.size)
2259+ # try to read in the last key (should be the biggest)
2260+ size = 0
2261+ k.open("r")
2262+ for chunk in k:
2263+ size += len(chunk)
2264+ self.failUnlessEqual(size, k.size)
2265+
2266+ @skip_test("S4 does not have this functionality yet")
2267+ @defer_to_thread
2268+ def test_acl(self):
2269+ s3conn = self.connect_ok()
2270+ bucket = self._get_bucket(s3conn)
2271+ # try some acl stuff
2272+ bucket.set_acl('public-read')
2273+ policy = bucket.get_acl()
2274+ assert len(policy.acl.grants) == 2
2275+ bucket.set_acl('private')
2276+ policy = bucket.get_acl()
2277+ assert len(policy.acl.grants) == 1
2278+ k = bucket.lookup('foo/bar')
2279+ k.set_acl('public-read')
2280+ policy = k.get_acl()
2281+ assert len(policy.acl.grants) == 2
2282+ k.set_acl('private')
2283+ policy = k.get_acl()
2284+ assert len(policy.acl.grants) == 1
2285+ # try the convenience methods for grants
2286+ bucket.add_user_grant(
2287+ 'FULL_CONTROL',
2288+ 'c1e724fbfa0979a4448393c59a8c055011f739b6d102fb37a65f26414653cd67')
2289+ self.failUnlessRaises(S3PermissionsError, bucket.add_email_grant,
2290+ 'foobar', 'foo@bar.com')
2291+
2292+if __name__ == '__main__':
2293+ suite = unittest.TestSuite()
2294+ suite.addTest(unittest.makeSuite(S3ConnectionTest))
2295+ unittest.TextTestRunner(verbosity=2).run(suite)
