Merge lp:~gholt/swift/consync into lp:~hudson-openstack/swift/trunk
Status: Superseded
Proposed branch: lp:~gholt/swift/consync
Merge into: lp:~hudson-openstack/swift/trunk
Diff against target: 8295 lines (+1918/-5294), 38 files modified:
bin/st (+47/-15)
bin/swauth-add-account (+0/-68)
bin/swauth-add-user (+0/-93)
bin/swauth-cleanup-tokens (+0/-118)
bin/swauth-delete-account (+0/-60)
bin/swauth-delete-user (+0/-60)
bin/swauth-list (+0/-86)
bin/swauth-prep (+0/-59)
bin/swauth-set-account-service (+0/-73)
bin/swift-container-sync (+23/-0)
doc/source/admin_guide.rst (+0/-16)
doc/source/container.rst (+7/-0)
doc/source/deployment_guide.rst (+33/-28)
doc/source/development_auth.rst (+7/-7)
doc/source/development_saio.rst (+19/-19)
doc/source/howto_installmultinode.rst (+11/-52)
doc/source/index.rst (+1/-0)
doc/source/misc.rst (+6/-6)
doc/source/overview_auth.rst (+8/-151)
doc/source/overview_container_sync.rst (+220/-0)
etc/container-server.conf-sample (+13/-0)
etc/proxy-server.conf-sample (+27/-17)
setup.py (+2/-6)
swift/common/client.py (+28/-10)
swift/common/db.py (+64/-9)
swift/common/manager.py (+4/-3)
swift/common/middleware/staticweb.py (+1/-1)
swift/common/middleware/swauth.py (+0/-1374)
swift/common/middleware/tempauth.py (+495/-0)
swift/common/utils.py (+26/-0)
swift/container/server.py (+21/-4)
swift/container/sync.py (+409/-0)
swift/obj/server.py (+20/-11)
swift/proxy/server.py (+65/-22)
test/probe/common.py (+19/-21)
test/unit/common/middleware/test_tempauth.py (+230/-2905)
test/unit/common/test_db.py (+92/-0)
test/unit/common/test_utils.py (+20/-0)
To merge this branch: bzr merge lp:~gholt/swift/consync
Related bugs: (none)
Related blueprints: Multiple Cluster Container Syncing (Undefined)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Swift Core security contacts | | | Pending
Review via email: mp+51081@code.launchpad.net
Commit message
Container synchronization feature
Description of the change
First, create containers that will sync to each other. The -t option specifies the x-container-sync-to value, which should be the full URL to the other container. The -k option specifies the x-container-sync-key value, the shared secret both containers use to authorize syncing.
$ st post -t 'http://
$ st post -t 'http://
Then, upload a file to one container and quickly note that it's not in the other:
$ st upload container README
$ st list container2
Now, run the synchronizer:
$ swift-init container-sync once
The file should have been synced over to the container missing it:
$ st list container2
README
To show it goes both ways, upload a file to the other container and note that it's not in the first:
$ st upload container2 AUTHORS
$ st list container
README
Run the synchronizer:
$ swift-init container-sync once
And see the file is in the first container now:
$ st list container
AUTHORS
README
Deletes work as well.
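The -t/-k options in the walkthrough above simply set two container headers via a POST. A minimal sketch of that mapping, mirroring the bin/st changes in this branch (the URL and key values here are hypothetical placeholders):

```python
def sync_headers(sync_to=None, sync_key=None):
    """Build the container POST headers that bin/st sends for -t/-k."""
    headers = {}
    if sync_to is not None:
        # Full URL to the peer container on the other cluster
        headers['X-Container-Sync-To'] = sync_to
    if sync_key is not None:
        # Shared secret both containers must agree on
        headers['X-Container-Sync-Key'] = sync_key
    return headers

# Hypothetical values, as in: st post -t '<url>' -k '<secret>' container
headers = sync_headers('http://cluster2/v1.0/AUTH_test/container2', 'secret')
assert headers == {
    'X-Container-Sync-To': 'http://cluster2/v1.0/AUTH_test/container2',
    'X-Container-Sync-Key': 'secret'}
```

Omitting both options yields an empty header dict, so a plain `st post` leaves the sync settings untouched.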
I still have a lot of work to do, but it's basically working.
Known To Do List
----------------
* Merge with DeSwauth
* Add support to TempAuth
* HTTP Proxy
* Change POSTs to in-place COPYs
* Write tests
* Update documentation
* Release Swauth 1.0.2
- 226. By gholt: Restrict hosts that can be targets/sources of container syncing
- 227. By gholt: Made x_container_sync_row its own column
- 228. By gholt: More docs
- 229. By gholt: Added [container-sync] to SAIO instructions
- 230. By gholt: Merged from trunk
- 231. By gholt: Double sync point code to divvy up work amongst main nodes
- 232. By gholt: Container sync doc updates
- 233. By gholt: Merged from trunk
- 234. By gholt: Require x-timestamp for container-sync requests
- 235. By gholt: Merged from trunk
- 236. By gholt: Merged from trunk
- 237. By gholt: Merged from trunk
- 238. By gholt: consync: Make send-all-keys be send-all-keys-we-didnt-already-send
- 239. By gholt: Bring st up to date with client.py
- 240. By gholt: consync: Make validate_sync_to explicitly return None on validation
- 241. By gholt: Merged from trunk
- 242. By gholt: Merge from trunk
- 243. By gholt: Merged from trunk
- 244. By gholt: Merged from trunk
- 245. By gholt: Merged with trunk
- 246. By gholt: Merge from trunk
- 247. By gholt: Merged with deswauth
- 248. By gholt: container-sync: Support HTTP proxy.
- 249. By gholt: st: resync with client.py changes
- 250. By gholt: Updated container-server.conf-sample
- 251. By gholt: Merged from trunk
- 252. By gholt: Merged from trunk
- 253. By gholt: Merged from trunk
- 254. By gholt: Merged from trunk
- 255. By gholt: Removing bin/st in prep for merge from trunk
- 256. By gholt: Merge from trunk
- 257. By gholt: Readded changes to bin/swift after merge from trunk
- 258. By gholt: consync: Some more tests and bugfixes.
- 259. By gholt: consync: More tests and slight refactor to be more testable
- 260. By gholt: consync: Now queries all primary nodes for a put and uses the newest object if it is newer or equal to the object to sync
- 261. By gholt: consync: Minor change to ignore 404 if there is some other error from another node
- 262. By gholt: Merge from trunk
- 263. By gholt: Doc updates
- 264. By gholt: consync: Updated class docs
- 265. By gholt: Merged from trunk
- 266. By gholt: consync: Updated client.py to better work with proxies. Had to use the private httplib._set_tunnel though. :/
- 267. By gholt: Updated swift util with client.py changes.
- 268. By gholt: consync: Fixes as per the code roast
- 269. By gholt: Comment on domain_remap regarding container sync
- 270. By gholt: Merged from trunk
- 271. By gholt: Added notes about container sync and large objects
- 272. By gholt: Reset container sync points when the sync-to changes
- 273. By gholt: Ensure paired alter table commands are in same transaction
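Revision 231 mentions "double sync point code to divvy up work amongst main nodes". A hypothetical illustration of the divvying idea, assuming rows are assigned to a container's primary nodes by hashing the row name; the function name and assignment rule here are illustrative, not the branch's actual code:

```python
import hashlib

def rows_for_node(rows, node_index, node_count):
    """Return the subset of rows this primary node is responsible for
    syncing, so the replicas split the work instead of triplicating it."""
    assigned = []
    for row in rows:
        digest = hashlib.md5(row['name'].encode('utf-8')).hexdigest()
        if int(digest, 16) % node_count == node_index:
            assigned.append(row)
    return assigned

rows = [{'name': 'README'}, {'name': 'AUTHORS'}, {'name': 'setup.py'}]
shares = [rows_for_node(rows, i, 3) for i in range(3)]
# Every row lands in exactly one node's share.
assert sum(len(s) for s in shares) == len(rows)
```

Two sync points (rather than one) let each node also re-send a trailing window of everyone's rows as a safety net, so a down peer node does not leave its share permanently unsynced.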
Unmerged revisions
Preview Diff
=== modified file 'bin/st'
--- bin/st 2011-05-19 14:48:15 +0000
+++ bin/st 2011-06-03 00:13:27 +0000
@@ -578,9 +578,9 @@
return resp_headers


-def put_object(url, token, container, name, contents, content_length=None,
- etag=None, chunk_size=65536, content_type=None, headers=None,
- http_conn=None):
+def put_object(url, token=None, container=None, name=None, contents=None,
+ content_length=None, etag=None, chunk_size=65536,
+ content_type=None, headers=None, http_conn=None):
"""
Put an object

@@ -604,10 +604,17 @@
parsed, conn = http_conn
else:
parsed, conn = http_connection(url)
- path = '%s/%s/%s' % (parsed.path, quote(container), quote(name))
- if not headers:
+ path = parsed.path
+ if container:
+ path = '%s/%s' % (path.rstrip('/'), quote(container))
+ if name:
+ path = '%s/%s' % (path.rstrip('/'), quote(name))
+ if headers:
+ headers = dict(headers)
+ else:
headers = {}
- headers['X-Auth-Token'] = token
+ if token:
+ headers['X-Auth-Token'] = token
if etag:
headers['ETag'] = etag.strip('"')
if content_length is not None:
@@ -646,7 +653,7 @@
raise ClientException('Object PUT failed', http_scheme=parsed.scheme,
http_host=conn.host, http_port=conn.port, http_path=path,
http_status=resp.status, http_reason=resp.reason)
- return resp.getheader('etag').strip('"')
+ return resp.getheader('etag', '').strip('"')


def post_object(url, token, container, name, headers, http_conn=None):
@@ -677,7 +684,8 @@
http_status=resp.status, http_reason=resp.reason)


-def delete_object(url, token, container, name, http_conn=None):
+def delete_object(url, token=None, container=None, name=None, http_conn=None,
+ headers=None):
"""
Delete object

@@ -693,8 +701,18 @@
parsed, conn = http_conn
else:
parsed, conn = http_connection(url)
- path = '%s/%s/%s' % (parsed.path, quote(container), quote(name))
- conn.request('DELETE', path, '', {'X-Auth-Token': token})
+ path = parsed.path
+ if container:
+ path = '%s/%s' % (path.rstrip('/'), quote(container))
+ if name:
+ path = '%s/%s' % (path.rstrip('/'), quote(name))
+ if headers:
+ headers = dict(headers)
+ else:
+ headers = {}
+ if token:
+ headers['X-Auth-Token'] = token
+ conn.request('DELETE', path, '', headers)
resp = conn.getresponse()
resp.read()
if resp.status < 200 or resp.status >= 300:
@@ -1363,10 +1381,14 @@
Objects: %d
Bytes: %d
Read ACL: %s
-Write ACL: %s'''.strip('\n') % (conn.url.rsplit('/', 1)[-1], args[0],
+Write ACL: %s
+ Sync To: %s
+ Sync Key: %s'''.strip('\n') % (conn.url.rsplit('/', 1)[-1], args[0],
object_count, bytes_used,
headers.get('x-container-read', ''),
- headers.get('x-container-write', '')))
+ headers.get('x-container-write', ''),
+ headers.get('x-container-sync-to', ''),
+ headers.get('x-container-sync-key', '')))
for key, value in headers.items():
if key.startswith('x-container-meta-'):
print_queue.put('%9s: %s' % ('Meta %s' %
@@ -1375,7 +1397,8 @@
if not key.startswith('x-container-meta-') and key not in (
'content-length', 'date', 'x-container-object-count',
'x-container-bytes-used', 'x-container-read',
- 'x-container-write'):
+ 'x-container-write', 'x-container-sync-to',
+ 'x-container-sync-key'):
print_queue.put(
'%9s: %s' % (key.title(), value))
except ClientException, err:
@@ -1440,13 +1463,18 @@
parser.add_option('-w', '--write-acl', dest='write_acl', help='Sets the '
'Write ACL for containers. Quick summary of ACL syntax: account1, '
'account2:user2')
+ parser.add_option('-t', '--sync-to', dest='sync_to', help='Sets the '
+ 'Sync To for containers, for multi-cluster replication.')
+ parser.add_option('-k', '--sync-key', dest='sync_key', help='Sets the '
+ 'Sync Key for containers, for multi-cluster replication.')
parser.add_option('-m', '--meta', action='append', dest='meta', default=[],
help='Sets a meta data item with the syntax name:value. This option '
'may be repeated. Example: -m Color:Blue -m Size:Large')
(options, args) = parse_args(parser, args)
args = args[1:]
- if (options.read_acl or options.write_acl) and not args:
- exit('-r and -w options only allowed for containers')
+ if (options.read_acl or options.write_acl or options.sync_to or
+ options.sync_key) and not args:
+ exit('-r, -w, -t, and -k options only allowed for containers')
conn = Connection(options.auth, options.user, options.key)
if not args:
headers = {}
@@ -1474,6 +1502,10 @@
headers['X-Container-Read'] = options.read_acl
if options.write_acl is not None:
headers['X-Container-Write'] = options.write_acl
+ if options.sync_to is not None:
+ headers['X-Container-Sync-To'] = options.sync_to
+ if options.sync_key is not None:
+ headers['X-Container-Sync-Key'] = options.sync_key
try:
conn.post_container(args[0], headers=headers)
except ClientException, err:

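The put_object/delete_object hunks above make container and name optional and build the request path piecewise instead of assuming all three components. A standalone sketch of that path logic (the original is Python 2 and uses urllib.quote; this sketch uses the Python 3 equivalent):

```python
from urllib.parse import quote  # Python 3 spelling of urllib.quote

def build_path(base, container=None, name=None):
    """Append optional container/name segments, URL-quoting each and
    avoiding doubled slashes, as the new client.py code does."""
    path = base
    if container:
        path = '%s/%s' % (path.rstrip('/'), quote(container))
    if name:
        path = '%s/%s' % (path.rstrip('/'), quote(name))
    return path

assert build_path('/v1/AUTH_test/', 'container', 'a b') == \
    '/v1/AUTH_test/container/a%20b'
# With neither segment, the account path is returned unchanged,
# which is what lets the same helpers serve account-level requests.
assert build_path('/v1/AUTH_test') == '/v1/AUTH_test'
```

This is also why the token header became optional: container-sync requests authenticate with the sync key and an x-timestamp rather than an auth token.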
=== removed file 'bin/swauth-add-account'
--- bin/swauth-add-account 2011-04-18 16:08:48 +0000
+++ bin/swauth-add-account 1970-01-01 00:00:00 +0000
@@ -1,68 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) 2010 OpenStack, LLC.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gettext
-from optparse import OptionParser
-from os.path import basename
-from sys import argv, exit
-
-from swift.common.bufferedhttp import http_connect_raw as http_connect
-from swift.common.utils import urlparse
-
-
-if __name__ == '__main__':
- gettext.install('swift', unicode=1)
- parser = OptionParser(usage='Usage: %prog [options] <account>')
- parser.add_option('-s', '--suffix', dest='suffix',
- default='', help='The suffix to use with the reseller prefix as the '
- 'storage account name (default: <randomly-generated-uuid4>) Note: If '
- 'the account already exists, this will have no effect on existing '
- 'service URLs. Those will need to be updated with '
- 'swauth-set-account-service')
- parser.add_option('-A', '--admin-url', dest='admin_url',
- default='http://127.0.0.1:8080/auth/', help='The URL to the auth '
- 'subsystem (default: http://127.0.0.1:8080/auth/)')
- parser.add_option('-U', '--admin-user', dest='admin_user',
- default='.super_admin', help='The user with admin rights to add users '
- '(default: .super_admin).')
- parser.add_option('-K', '--admin-key', dest='admin_key',
- help='The key for the user with admin rights to add users.')
- args = argv[1:]
- if not args:
- args.append('-h')
- (options, args) = parser.parse_args(args)
- if len(args) != 1:
- parser.parse_args(['-h'])
- account = args[0]
- parsed = urlparse(options.admin_url)
- if parsed.scheme not in ('http', 'https'):
- raise Exception('Cannot handle protocol scheme %s for url %s' %
- (parsed.scheme, repr(options.admin_url)))
- parsed_path = parsed.path
- if not parsed_path:
- parsed_path = '/'
- elif parsed_path[-1] != '/':
- parsed_path += '/'
- path = '%sv2/%s' % (parsed_path, account)
- headers = {'X-Auth-Admin-User': options.admin_user,
- 'X-Auth-Admin-Key': options.admin_key}
- if options.suffix:
- headers['X-Account-Suffix'] = options.suffix
- conn = http_connect(parsed.hostname, parsed.port, 'PUT', path, headers,
- ssl=(parsed.scheme == 'https'))
- resp = conn.getresponse()
- if resp.status // 100 != 2:
- exit('Account creation failed: %s %s' % (resp.status, resp.reason))

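All of the removed swauth scripts normalize the admin URL path the same way before appending the v2 account segment. That small shared pattern, extracted as a standalone sketch:

```python
def admin_path(parsed_path, account):
    """Normalize the parsed admin-URL path to end in '/' and append
    the v2 account segment, as each swauth script did inline."""
    if not parsed_path:
        parsed_path = '/'
    elif parsed_path[-1] != '/':
        parsed_path += '/'
    return '%sv2/%s' % (parsed_path, account)

assert admin_path('/auth', 'acct') == '/auth/v2/acct'
# An empty path (bare host URL) still yields a rooted path.
assert admin_path('', 'acct') == '/v2/acct'
```

Duplicating this in every script is part of what the branch sweeps away by deleting the swauth tools wholesale.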
=== removed file 'bin/swauth-add-user'
--- bin/swauth-add-user 2011-04-18 16:08:48 +0000
+++ bin/swauth-add-user 1970-01-01 00:00:00 +0000
@@ -1,93 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) 2010 OpenStack, LLC.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gettext
-from optparse import OptionParser
-from os.path import basename
-from sys import argv, exit
-
-from swift.common.bufferedhttp import http_connect_raw as http_connect
-from swift.common.utils import urlparse
-
-
-if __name__ == '__main__':
- gettext.install('swift', unicode=1)
- parser = OptionParser(
- usage='Usage: %prog [options] <account> <user> <password>')
- parser.add_option('-a', '--admin', dest='admin', action='store_true',
- default=False, help='Give the user administrator access; otherwise '
- 'the user will only have access to containers specifically allowed '
- 'with ACLs.')
- parser.add_option('-r', '--reseller-admin', dest='reseller_admin',
- action='store_true', default=False, help='Give the user full reseller '
- 'administrator access, giving them full access to all accounts within '
- 'the reseller, including the ability to create new accounts. Creating '
- 'a new reseller admin requires super_admin rights.')
- parser.add_option('-s', '--suffix', dest='suffix',
- default='', help='The suffix to use with the reseller prefix as the '
- 'storage account name (default: <randomly-generated-uuid4>) Note: If '
- 'the account already exists, this will have no effect on existing '
- 'service URLs. Those will need to be updated with '
- 'swauth-set-account-service')
- parser.add_option('-A', '--admin-url', dest='admin_url',
- default='http://127.0.0.1:8080/auth/', help='The URL to the auth '
- 'subsystem (default: http://127.0.0.1:8080/auth/')
- parser.add_option('-U', '--admin-user', dest='admin_user',
- default='.super_admin', help='The user with admin rights to add users '
- '(default: .super_admin).')
- parser.add_option('-K', '--admin-key', dest='admin_key',
- help='The key for the user with admin rights to add users.')
- args = argv[1:]
- if not args:
- args.append('-h')
- (options, args) = parser.parse_args(args)
- if len(args) != 3:
- parser.parse_args(['-h'])
- account, user, password = args
- parsed = urlparse(options.admin_url)
- if parsed.scheme not in ('http', 'https'):
- raise Exception('Cannot handle protocol scheme %s for url %s' %
- (parsed.scheme, repr(options.admin_url)))
- parsed_path = parsed.path
- if not parsed_path:
- parsed_path = '/'
- elif parsed_path[-1] != '/':
- parsed_path += '/'
- # Ensure the account exists
- path = '%sv2/%s' % (parsed_path, account)
- headers = {'X-Auth-Admin-User': options.admin_user,
- 'X-Auth-Admin-Key': options.admin_key}
- if options.suffix:
- headers['X-Account-Suffix'] = options.suffix
- conn = http_connect(parsed.hostname, parsed.port, 'PUT', path, headers,
- ssl=(parsed.scheme == 'https'))
- resp = conn.getresponse()
- if resp.status // 100 != 2:
- print 'Account creation failed: %s %s' % (resp.status, resp.reason)
- # Add the user
- path = '%sv2/%s/%s' % (parsed_path, account, user)
- headers = {'X-Auth-Admin-User': options.admin_user,
- 'X-Auth-Admin-Key': options.admin_key,
- 'X-Auth-User-Key': password}
- if options.admin:
- headers['X-Auth-User-Admin'] = 'true'
- if options.reseller_admin:
- headers['X-Auth-User-Reseller-Admin'] = 'true'
- conn = http_connect(parsed.hostname, parsed.port, 'PUT', path, headers,
- ssl=(parsed.scheme == 'https'))
- resp = conn.getresponse()
- if resp.status // 100 != 2:
- exit('User creation failed: %s %s' % (resp.status, resp.reason))

=== removed file 'bin/swauth-cleanup-tokens'
--- bin/swauth-cleanup-tokens 2011-04-18 16:08:48 +0000
+++ bin/swauth-cleanup-tokens 1970-01-01 00:00:00 +0000
@@ -1,118 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) 2010 OpenStack, LLC.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-try:
- import simplejson as json
-except ImportError:
- import json
-import gettext
-import re
-from datetime import datetime, timedelta
-from optparse import OptionParser
-from sys import argv, exit
-from time import sleep, time
-
-from swift.common.client import Connection, ClientException
-
-
-if __name__ == '__main__':
- gettext.install('swift', unicode=1)
- parser = OptionParser(usage='Usage: %prog [options]')
- parser.add_option('-t', '--token-life', dest='token_life',
- default='86400', help='The expected life of tokens; token objects '
- 'modified more than this number of seconds ago will be checked for '
- 'expiration (default: 86400).')
- parser.add_option('-s', '--sleep', dest='sleep',
- default='0.1', help='The number of seconds to sleep between token '
- 'checks (default: 0.1)')
- parser.add_option('-v', '--verbose', dest='verbose', action='store_true',
- default=False, help='Outputs everything done instead of just the '
- 'deletions.')
- parser.add_option('-A', '--admin-url', dest='admin_url',
- default='http://127.0.0.1:8080/auth/', help='The URL to the auth '
- 'subsystem (default: http://127.0.0.1:8080/auth/)')
- parser.add_option('-K', '--admin-key', dest='admin_key',
- help='The key for .super_admin.')
- args = argv[1:]
- if not args:
- args.append('-h')
- (options, args) = parser.parse_args(args)
- if len(args) != 0:
- parser.parse_args(['-h'])
- options.admin_url = options.admin_url.rstrip('/')
- if not options.admin_url.endswith('/v1.0'):
- options.admin_url += '/v1.0'
- options.admin_user = '.super_admin:.super_admin'
- options.token_life = timedelta(0, float(options.token_life))
- options.sleep = float(options.sleep)
- conn = Connection(options.admin_url, options.admin_user, options.admin_key)
- for x in xrange(16):
- container = '.token_%x' % x
- marker = None
- while True:
- if options.verbose:
- print 'GET %s?marker=%s' % (container, marker)
- try:
- objs = conn.get_container(container, marker=marker)[1]
- except ClientException, e:
- if e.http_status == 404:
- exit('Container %s not found. swauth-prep needs to be '
- 'rerun' % (container))
- else:
- exit('Object listing on container %s failed with status '
- 'code %d' % (container, e.http_status))
- if objs:
- marker = objs[-1]['name']
- else:
- if options.verbose:
- print 'No more objects in %s' % container
- break
- for obj in objs:
- last_modified = datetime(*map(int, re.split('[^\d]',
- obj['last_modified'])[:-1]))
- ago = datetime.utcnow() - last_modified
- if ago > options.token_life:
- if options.verbose:
- print '%s/%s last modified %ss ago; investigating' % \
- (container, obj['name'],
- ago.days * 86400 + ago.seconds)
- print 'GET %s/%s' % (container, obj['name'])
- detail = conn.get_object(container, obj['name'])[1]
- detail = json.loads(detail)
- if detail['expires'] < time():
- if options.verbose:
- print '%s/%s expired %ds ago; deleting' % \
- (container, obj['name'],
- time() - detail['expires'])
- print 'DELETE %s/%s' % (container, obj['name'])
- try:
- conn.delete_object(container, obj['name'])
- except ClientException, e:
- if e.http_status != 404:
- print 'DELETE of %s/%s failed with status ' \
- 'code %d' % (container, obj['name'],
- e.http_status)
- elif options.verbose:
- print "%s/%s won't expire for %ds; skipping" % \
- (container, obj['name'],
- detail['expires'] - time())
- elif options.verbose:
- print '%s/%s last modified %ss ago; skipping' % \
- (container, obj['name'],
- ago.days * 86400 + ago.seconds)
- sleep(options.sleep)
- if options.verbose:
- print 'Done.'

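The removed swauth-cleanup-tokens script only fetched a token object's detail once its last-modified age exceeded the expected token life, then deleted it if its 'expires' stamp had passed. The gist of that two-stage decision, with hypothetical timestamps:

```python
from datetime import datetime, timedelta

def should_check(last_modified, now, token_life):
    """Only inspect token objects modified longer ago than the
    expected token life; younger ones cannot have expired yet."""
    return (now - last_modified) > token_life

def is_expired(detail, now_ts):
    """A fetched token detail is stale once its expiry time passes."""
    return detail['expires'] < now_ts

now = datetime(2011, 6, 3)
day = timedelta(seconds=86400)
assert should_check(datetime(2011, 6, 1), now, day)
assert not should_check(datetime(2011, 6, 2, 23, 0), now, day)
assert is_expired({'expires': 100.0}, 200.0)
```

The age prefilter is what keeps the cleanup cheap: most token objects are skipped on the container listing alone, without a GET per object.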
=== removed file 'bin/swauth-delete-account'
--- bin/swauth-delete-account 2011-04-18 16:08:48 +0000
+++ bin/swauth-delete-account 1970-01-01 00:00:00 +0000
@@ -1,60 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) 2010 OpenStack, LLC.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gettext
-from optparse import OptionParser
-from os.path import basename
-from sys import argv, exit
-
-from swift.common.bufferedhttp import http_connect_raw as http_connect
-from swift.common.utils import urlparse
-
-
-if __name__ == '__main__':
- gettext.install('swift', unicode=1)
- parser = OptionParser(usage='Usage: %prog [options] <account>')
- parser.add_option('-A', '--admin-url', dest='admin_url',
- default='http://127.0.0.1:8080/auth/', help='The URL to the auth '
- 'subsystem (default: http://127.0.0.1:8080/auth/')
- parser.add_option('-U', '--admin-user', dest='admin_user',
- default='.super_admin', help='The user with admin rights to add users '
- '(default: .super_admin).')
- parser.add_option('-K', '--admin-key', dest='admin_key',
- help='The key for the user with admin rights to add users.')
- args = argv[1:]
- if not args:
- args.append('-h')
- (options, args) = parser.parse_args(args)
- if len(args) != 1:
- parser.parse_args(['-h'])
- account = args[0]
- parsed = urlparse(options.admin_url)
- if parsed.scheme not in ('http', 'https'):
- raise Exception('Cannot handle protocol scheme %s for url %s' %
- (parsed.scheme, repr(options.admin_url)))
- parsed_path = parsed.path
- if not parsed_path:
- parsed_path = '/'
- elif parsed_path[-1] != '/':
- parsed_path += '/'
- path = '%sv2/%s' % (parsed_path, account)
- headers = {'X-Auth-Admin-User': options.admin_user,
- 'X-Auth-Admin-Key': options.admin_key}
- conn = http_connect(parsed.hostname, parsed.port, 'DELETE', path, headers,
- ssl=(parsed.scheme == 'https'))
- resp = conn.getresponse()
- if resp.status // 100 != 2:
- exit('Account deletion failed: %s %s' % (resp.status, resp.reason))

=== removed file 'bin/swauth-delete-user'
--- bin/swauth-delete-user 2011-04-18 16:08:48 +0000
+++ bin/swauth-delete-user 1970-01-01 00:00:00 +0000
@@ -1,60 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) 2010 OpenStack, LLC.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gettext
-from optparse import OptionParser
-from os.path import basename
-from sys import argv, exit
-
-from swift.common.bufferedhttp import http_connect_raw as http_connect
-from swift.common.utils import urlparse
-
-
-if __name__ == '__main__':
- gettext.install('swift', unicode=1)
- parser = OptionParser(usage='Usage: %prog [options] <account> <user>')
- parser.add_option('-A', '--admin-url', dest='admin_url',
- default='http://127.0.0.1:8080/auth/', help='The URL to the auth '
- 'subsystem (default: http://127.0.0.1:8080/auth/')
- parser.add_option('-U', '--admin-user', dest='admin_user',
- default='.super_admin', help='The user with admin rights to add users '
- '(default: .super_admin).')
- parser.add_option('-K', '--admin-key', dest='admin_key',
- help='The key for the user with admin rights to add users.')
- args = argv[1:]
- if not args:
- args.append('-h')
- (options, args) = parser.parse_args(args)
- if len(args) != 2:
- parser.parse_args(['-h'])
- account, user = args
- parsed = urlparse(options.admin_url)
- if parsed.scheme not in ('http', 'https'):
- raise Exception('Cannot handle protocol scheme %s for url %s' %
- (parsed.scheme, repr(options.admin_url)))
- parsed_path = parsed.path
- if not parsed_path:
- parsed_path = '/'
- elif parsed_path[-1] != '/':
- parsed_path += '/'
- path = '%sv2/%s/%s' % (parsed_path, account, user)
- headers = {'X-Auth-Admin-User': options.admin_user,
- 'X-Auth-Admin-Key': options.admin_key}
- conn = http_connect(parsed.hostname, parsed.port, 'DELETE', path, headers,
- ssl=(parsed.scheme == 'https'))
- resp = conn.getresponse()
- if resp.status // 100 != 2:
- exit('User deletion failed: %s %s' % (resp.status, resp.reason))

=== removed file 'bin/swauth-list'
--- bin/swauth-list 2011-04-18 16:08:48 +0000
+++ bin/swauth-list 1970-01-01 00:00:00 +0000
@@ -1,86 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) 2010 OpenStack, LLC.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-try:
- import simplejson as json
-except ImportError:
- import json
-import gettext
-from optparse import OptionParser
-from os.path import basename
-from sys import argv, exit
-
-from swift.common.bufferedhttp import http_connect_raw as http_connect
-from swift.common.utils import urlparse
-
-
-if __name__ == '__main__':
- gettext.install('swift', unicode=1)
- parser = OptionParser(usage='''
-Usage: %prog [options] [account] [user]
-
-If [account] and [user] are omitted, a list of accounts will be output.
-
-If [account] is included but not [user], an account's information will be
-output, including a list of users within the account.
-
-If [account] and [user] are included, the user's information will be output,
-including a list of groups the user belongs to.
-
-If the [user] is '.groups', the active groups for the account will be listed.
-'''.strip())
- parser.add_option('-p', '--plain-text', dest='plain_text',
- action='store_true', default=False, help='Changes the output from '
- 'JSON to plain text. This will cause an account to list only the '
- 'users and a user to list only the groups.')
- parser.add_option('-A', '--admin-url', dest='admin_url',
- default='http://127.0.0.1:8080/auth/', help='The URL to the auth '
- 'subsystem (default: http://127.0.0.1:8080/auth/')
- parser.add_option('-U', '--admin-user', dest='admin_user',
- default='.super_admin', help='The user with admin rights to add users '
- '(default: .super_admin).')
- parser.add_option('-K', '--admin-key', dest='admin_key',
- help='The key for the user with admin rights to add users.')
- args = argv[1:]
- if not args:
- args.append('-h')
- (options, args) = parser.parse_args(args)
626 | - if len(args) > 2: |
627 | - parser.parse_args(['-h']) |
628 | - parsed = urlparse(options.admin_url) |
629 | - if parsed.scheme not in ('http', 'https'): |
630 | - raise Exception('Cannot handle protocol scheme %s for url %s' % |
631 | - (parsed.scheme, repr(options.admin_url))) |
632 | - parsed_path = parsed.path |
633 | - if not parsed_path: |
634 | - parsed_path = '/' |
635 | - elif parsed_path[-1] != '/': |
636 | - parsed_path += '/' |
637 | - path = '%sv2/%s' % (parsed_path, '/'.join(args)) |
638 | - headers = {'X-Auth-Admin-User': options.admin_user, |
639 | - 'X-Auth-Admin-Key': options.admin_key} |
640 | - conn = http_connect(parsed.hostname, parsed.port, 'GET', path, headers, |
641 | - ssl=(parsed.scheme == 'https')) |
642 | - resp = conn.getresponse() |
643 | - body = resp.read() |
644 | - if resp.status // 100 != 2: |
645 | - exit('List failed: %s %s' % (resp.status, resp.reason)) |
646 | - if options.plain_text: |
647 | - info = json.loads(body) |
648 | - for group in info[['accounts', 'users', 'groups'][len(args)]]: |
649 | - print group['name'] |
650 | - else: |
651 | - print body |
652 | |
653 | === removed file 'bin/swauth-prep' |
654 | --- bin/swauth-prep 2011-04-18 16:08:48 +0000 |
655 | +++ bin/swauth-prep 1970-01-01 00:00:00 +0000 |
656 | @@ -1,59 +0,0 @@ |
657 | -#!/usr/bin/env python |
658 | -# Copyright (c) 2010 OpenStack, LLC. |
659 | -# |
660 | -# Licensed under the Apache License, Version 2.0 (the "License"); |
661 | -# you may not use this file except in compliance with the License. |
662 | -# You may obtain a copy of the License at |
663 | -# |
664 | -# http://www.apache.org/licenses/LICENSE-2.0 |
665 | -# |
666 | -# Unless required by applicable law or agreed to in writing, software |
667 | -# distributed under the License is distributed on an "AS IS" BASIS, |
668 | -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or |
669 | -# implied. |
670 | -# See the License for the specific language governing permissions and |
671 | -# limitations under the License. |
672 | - |
673 | -import gettext |
674 | -from optparse import OptionParser |
675 | -from os.path import basename |
676 | -from sys import argv, exit |
677 | - |
678 | -from swift.common.bufferedhttp import http_connect_raw as http_connect |
679 | -from swift.common.utils import urlparse |
680 | - |
681 | - |
682 | -if __name__ == '__main__': |
683 | - gettext.install('swift', unicode=1) |
684 | - parser = OptionParser(usage='Usage: %prog [options]') |
685 | - parser.add_option('-A', '--admin-url', dest='admin_url', |
686 | - default='http://127.0.0.1:8080/auth/', help='The URL to the auth ' |
687 | - 'subsystem (default: http://127.0.0.1:8080/auth/') |
688 | - parser.add_option('-U', '--admin-user', dest='admin_user', |
689 | - default='.super_admin', help='The user with admin rights to add users ' |
690 | - '(default: .super_admin).') |
691 | - parser.add_option('-K', '--admin-key', dest='admin_key', |
692 | - help='The key for the user with admin rights to add users.') |
693 | - args = argv[1:] |
694 | - if not args: |
695 | - args.append('-h') |
696 | - (options, args) = parser.parse_args(args) |
697 | - if args: |
698 | - parser.parse_args(['-h']) |
699 | - parsed = urlparse(options.admin_url) |
700 | - if parsed.scheme not in ('http', 'https'): |
701 | - raise Exception('Cannot handle protocol scheme %s for url %s' % |
702 | - (parsed.scheme, repr(options.admin_url))) |
703 | - parsed_path = parsed.path |
704 | - if not parsed_path: |
705 | - parsed_path = '/' |
706 | - elif parsed_path[-1] != '/': |
707 | - parsed_path += '/' |
708 | - path = '%sv2/.prep' % parsed_path |
709 | - headers = {'X-Auth-Admin-User': options.admin_user, |
710 | - 'X-Auth-Admin-Key': options.admin_key} |
711 | - conn = http_connect(parsed.hostname, parsed.port, 'POST', path, headers, |
712 | - ssl=(parsed.scheme == 'https')) |
713 | - resp = conn.getresponse() |
714 | - if resp.status // 100 != 2: |
715 | - exit('Auth subsystem prep failed: %s %s' % (resp.status, resp.reason)) |
716 | |
717 | === removed file 'bin/swauth-set-account-service' |
718 | --- bin/swauth-set-account-service 2011-04-18 16:08:48 +0000 |
719 | +++ bin/swauth-set-account-service 1970-01-01 00:00:00 +0000 |
720 | @@ -1,73 +0,0 @@ |
721 | -#!/usr/bin/env python |
722 | -# Copyright (c) 2010 OpenStack, LLC. |
723 | -# |
724 | -# Licensed under the Apache License, Version 2.0 (the "License"); |
725 | -# you may not use this file except in compliance with the License. |
726 | -# You may obtain a copy of the License at |
727 | -# |
728 | -# http://www.apache.org/licenses/LICENSE-2.0 |
729 | -# |
730 | -# Unless required by applicable law or agreed to in writing, software |
731 | -# distributed under the License is distributed on an "AS IS" BASIS, |
732 | -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or |
733 | -# implied. |
734 | -# See the License for the specific language governing permissions and |
735 | -# limitations under the License. |
736 | - |
737 | -try: |
738 | - import simplejson as json |
739 | -except ImportError: |
740 | - import json |
741 | -import gettext |
742 | -from optparse import OptionParser |
743 | -from os.path import basename |
744 | -from sys import argv, exit |
745 | - |
746 | -from swift.common.bufferedhttp import http_connect_raw as http_connect |
747 | -from swift.common.utils import urlparse |
748 | - |
749 | - |
750 | -if __name__ == '__main__': |
751 | - gettext.install('swift', unicode=1) |
752 | - parser = OptionParser(usage=''' |
753 | -Usage: %prog [options] <account> <service> <name> <value> |
754 | - |
755 | -Sets a service URL for an account. Can only be set by a reseller admin. |
756 | - |
757 | -Example: %prog -K swauthkey test storage local http://127.0.0.1:8080/v1/AUTH_018c3946-23f8-4efb-a8fb-b67aae8e4162 |
758 | -'''.strip()) |
759 | - parser.add_option('-A', '--admin-url', dest='admin_url', |
760 | - default='http://127.0.0.1:8080/auth/', help='The URL to the auth ' |
761 | - 'subsystem (default: http://127.0.0.1:8080/auth/)') |
762 | - parser.add_option('-U', '--admin-user', dest='admin_user', |
763 | - default='.super_admin', help='The user with admin rights to add users ' |
764 | - '(default: .super_admin).') |
765 | - parser.add_option('-K', '--admin-key', dest='admin_key', |
766 | - help='The key for the user with admin rights to add users.') |
767 | - args = argv[1:] |
768 | - if not args: |
769 | - args.append('-h') |
770 | - (options, args) = parser.parse_args(args) |
771 | - if len(args) != 4: |
772 | - parser.parse_args(['-h']) |
773 | - account, service, name, url = args |
774 | - parsed = urlparse(options.admin_url) |
775 | - if parsed.scheme not in ('http', 'https'): |
776 | - raise Exception('Cannot handle protocol scheme %s for url %s' % |
777 | - (parsed.scheme, repr(options.admin_url))) |
778 | - parsed_path = parsed.path |
779 | - if not parsed_path: |
780 | - parsed_path = '/' |
781 | - elif parsed_path[-1] != '/': |
782 | - parsed_path += '/' |
783 | - path = '%sv2/%s/.services' % (parsed_path, account) |
784 | - body = json.dumps({service: {name: url}}) |
785 | - headers = {'Content-Length': str(len(body)), |
786 | - 'X-Auth-Admin-User': options.admin_user, |
787 | - 'X-Auth-Admin-Key': options.admin_key} |
788 | - conn = http_connect(parsed.hostname, parsed.port, 'POST', path, headers, |
789 | - ssl=(parsed.scheme == 'https')) |
790 | - conn.send(body) |
791 | - resp = conn.getresponse() |
792 | - if resp.status // 100 != 2: |
793 | - exit('Service set failed: %s %s' % (resp.status, resp.reason)) |
794 | |
795 | === added file 'bin/swift-container-sync' |
796 | --- bin/swift-container-sync 1970-01-01 00:00:00 +0000 |
797 | +++ bin/swift-container-sync 2011-06-03 00:13:27 +0000 |
798 | @@ -0,0 +1,23 @@ |
799 | +#!/usr/bin/python |
800 | +# Copyright (c) 2010-2011 OpenStack, LLC. |
801 | +# |
802 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
803 | +# you may not use this file except in compliance with the License. |
804 | +# You may obtain a copy of the License at |
805 | +# |
806 | +# http://www.apache.org/licenses/LICENSE-2.0 |
807 | +# |
808 | +# Unless required by applicable law or agreed to in writing, software |
809 | +# distributed under the License is distributed on an "AS IS" BASIS, |
810 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or |
811 | +# implied. |
812 | +# See the License for the specific language governing permissions and |
813 | +# limitations under the License. |
814 | + |
815 | +from swift.container.sync import ContainerSync |
816 | +from swift.common.utils import parse_options |
817 | +from swift.common.daemon import run_daemon |
818 | + |
819 | +if __name__ == '__main__': |
820 | + conf_file, options = parse_options(once=True) |
821 | + run_daemon(ContainerSync, conf_file, **options) |
822 | |
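The new `bin/swift-container-sync` script above follows Swift's standard daemon entry-point pattern: `parse_options` reads the conf file and flags such as `once`, and `run_daemon` instantiates the daemon class and dispatches to a single pass or a long-running loop. A minimal self-contained sketch of that dispatch (the `Daemon` class and `run_daemon` here are illustrative stand-ins, not Swift's actual `swift.common.daemon` code):

```python
class Daemon(object):
    """Minimal stand-in for a Swift daemon (illustrative only)."""

    def __init__(self, conf):
        self.conf = conf
        self.passes = 0

    def run_once(self):
        # One sync pass over the containers this node is responsible for.
        self.passes += 1

    def run_forever(self):
        # The real daemon sleeps between passes; loop shown for shape only.
        while True:
            self.run_once()


def run_daemon(klass, conf, once=False, **kwargs):
    """Dispatch to a single pass or the long-running loop, as swift-init does."""
    daemon = klass(conf)
    if once:
        daemon.run_once()
    else:
        daemon.run_forever()
    return daemon
```

Running with `once=True` (what `swift-container-sync once` would request) performs exactly one pass and returns.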
823 | === modified file 'doc/source/admin_guide.rst' |
824 | --- doc/source/admin_guide.rst 2011-03-31 22:32:41 +0000 |
825 | +++ doc/source/admin_guide.rst 2011-06-03 00:13:27 +0000 |
826 | @@ -222,22 +222,6 @@ |
827 | Sample represents 1.00% of the object partition space |
828 | |
829 | |
830 | ------------------------------------- |
831 | -Additional Cleanup Script for Swauth |
832 | ------------------------------------- |
833 | - |
834 | -With Swauth, you'll want to install a cronjob to clean up any |
835 | -orphaned expired tokens. These orphaned tokens can occur when a "stampede" |
836 | -occurs where a single user authenticates several times concurrently. Generally, |
837 | -these orphaned tokens don't pose much of an issue, but it's good to clean them |
838 | -up once a "token life" period (default: 1 day or 86400 seconds). |
839 | - |
840 | -This should be as simple as adding `swauth-cleanup-tokens -A |
841 | -https://<PROXY_HOSTNAME>:8080/auth/ -K swauthkey > /dev/null` to a crontab |
842 | -entry on one of the proxies that is running Swauth; but run |
843 | -`swauth-cleanup-tokens` with no arguments for detailed help on the options |
844 | -available. |
845 | - |
846 | ------------------------ |
847 | Debugging Tips and Tools |
848 | ------------------------ |
849 | |
850 | === modified file 'doc/source/container.rst' |
851 | --- doc/source/container.rst 2010-07-19 16:25:18 +0000 |
852 | +++ doc/source/container.rst 2011-06-03 00:13:27 +0000 |
853 | @@ -34,3 +34,10 @@ |
854 | :undoc-members: |
855 | :show-inheritance: |
856 | |
857 | +Container Sync |
858 | +============== |
859 | + |
860 | +.. automodule:: swift.container.sync |
861 | + :members: |
862 | + :undoc-members: |
863 | + :show-inheritance: |
864 | |
865 | === modified file 'doc/source/deployment_guide.rst' |
866 | --- doc/source/deployment_guide.rst 2011-01-25 00:28:22 +0000 |
867 | +++ doc/source/deployment_guide.rst 2011-06-03 00:13:27 +0000 |
868 | @@ -549,35 +549,17 @@ |
869 | are even callable |
870 | ============================ =============== ============================= |
871 | |
872 | -[auth] |
873 | - |
874 | -============ =================================== ======================== |
875 | -Option Default Description |
876 | ------------- ----------------------------------- ------------------------ |
877 | -use Entry point for paste.deploy |
878 | - to use for auth. To |
879 | - use the swift dev auth, |
880 | - set to: |
881 | - `egg:swift#auth` |
882 | -ip 127.0.0.1 IP address of auth |
883 | - server |
884 | -port 11000 Port of auth server |
885 | -ssl False If True, use SSL to |
886 | - connect to auth |
887 | -node_timeout 10 Request timeout |
888 | -============ =================================== ======================== |
889 | - |
890 | -[swauth] |
891 | +[tempauth] |
892 | |
893 | ===================== =============================== ======================= |
894 | Option Default Description |
895 | --------------------- ------------------------------- ----------------------- |
896 | use Entry point for |
897 | paste.deploy to use for |
898 | - auth. To use the swauth |
899 | + auth. To use tempauth |
900 | set to: |
901 | - `egg:swift#swauth` |
902 | -set log_name auth-server Label used when logging |
903 | + `egg:swift#tempauth` |
904 | +set log_name tempauth Label used when logging |
905 | set log_facility LOG_LOCAL0 Syslog log facility |
906 | set log_level INFO Log level |
907 | set log_headers True If True, log headers in |
908 | @@ -593,16 +575,39 @@ |
909 | reserves anything |
910 | beginning with the |
911 | letter `v`. |
912 | -default_swift_cluster local#http://127.0.0.1:8080/v1 The default Swift |
913 | - cluster to place newly |
914 | - created accounts on. |
915 | token_life 86400 The number of seconds a |
916 | token is valid. |
917 | -node_timeout 10 Request timeout |
918 | -super_admin_key None The key for the |
919 | - .super_admin account. |
920 | ===================== =============================== ======================= |
921 | |
922 | +Additionally, you need to list all the accounts/users you want here. The format |
923 | +is:: |
924 | + |
925 | + user_<account>_<user> = <key> [group] [group] [...] [storage_url] |
926 | + |
927 | +There are special groups of:: |
928 | + |
929 | + .reseller_admin = can do anything to any account for this auth |
930 | + .admin = can do anything within the account |
931 | + |
932 | +If neither of these groups is specified, the user can only access containers |
933 | +that have been explicitly allowed for them by a .admin or .reseller_admin. |
934 | + |
935 | +The trailing optional storage_url allows you to specify an alternate url to |
936 | +hand back to the user upon authentication. If not specified, this defaults to:: |
937 | + |
938 | + http[s]://<ip>:<port>/v1/<reseller_prefix>_<account> |
939 | + |
940 | +Where http or https depends on whether cert_file is specified in the [DEFAULT] |
941 | +section, <ip> and <port> are based on the [DEFAULT] section's bind_ip and |
942 | +bind_port (falling back to 127.0.0.1 and 8080), <reseller_prefix> is from this |
943 | +section, and <account> is from the user_<account>_<user> name. |
944 | + |
945 | +Here are example entries, required for running the tests:: |
946 | + |
947 | + user_admin_admin = admin .admin .reseller_admin |
948 | + user_test_tester = testing .admin |
949 | + user_test2_tester2 = testing2 .admin |
950 | + user_test_tester3 = testing3 |
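The `user_<account>_<user> = <key> [group] [...] [storage_url]` format documented above can be parsed along the following lines. This is an illustrative sketch of the documented format, not TempAuth's actual parsing code, and `parse_user_entry` is a hypothetical helper name:

```python
def parse_user_entry(option, value):
    """Parse a ``user_<account>_<user> = <key> [group]... [url]`` entry.

    Sketch only: mirrors the format described in the deployment guide.
    """
    assert option.startswith('user_')
    # Account and user are taken from the option name itself.
    account, user = option[len('user_'):].split('_', 1)
    parts = value.split()
    key, rest = parts[0], parts[1:]
    url = None
    # The trailing optional storage URL, if present, follows any groups.
    if rest and rest[-1].lower().startswith(('http://', 'https://')):
        url = rest.pop()
    return {'account': account, 'user': user, 'key': key,
            'groups': rest, 'url': url}
```

For example, `user_system_root = testpass .admin https://host:8080/v1/AUTH_system` yields account `system`, user `root`, key `testpass`, the `.admin` group, and the explicit storage URL.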
951 | |
952 | ------------------------ |
953 | Memcached Considerations |
954 | |
955 | === modified file 'doc/source/development_auth.rst' |
956 | --- doc/source/development_auth.rst 2011-03-14 02:56:37 +0000 |
957 | +++ doc/source/development_auth.rst 2011-06-03 00:13:27 +0000 |
958 | @@ -6,7 +6,7 @@ |
959 | Creating Your Own Auth Server and Middleware |
960 | -------------------------------------------- |
961 | |
962 | -The included swift/common/middleware/swauth.py is a good example of how to |
963 | +The included swift/common/middleware/tempauth.py is a good example of how to |
964 | create an auth subsystem with proxy server auth middleware. The main points are |
965 | that the auth middleware can reject requests up front, before they ever get to |
966 | the Swift Proxy application, and afterwards when the proxy issues callbacks to |
967 | @@ -27,7 +27,7 @@ |
968 | environ['REMOTE_USER'] set to the authenticated user string but often more |
969 | information is needed than just that. |
970 | |
971 | -The included Swauth will set the REMOTE_USER to a comma separated list of |
972 | +The included TempAuth will set the REMOTE_USER to a comma separated list of |
973 | groups the user belongs to. The first group will be the "user's group", a group |
974 | that only the user belongs to. The second group will be the "account's group", |
975 | a group that includes all users for that auth account (different than the |
976 | @@ -37,7 +37,7 @@ |
977 | |
978 | It is highly recommended that authentication server implementers prefix their |
979 | tokens and Swift storage accounts they create with a configurable reseller |
980 | -prefix (`AUTH_` by default with the included Swauth). This prefix will avoid |
981 | +prefix (`AUTH_` by default with the included TempAuth). This prefix will avoid |
982 | conflicts with other authentication servers that might be using the same |
983 | Swift cluster. Otherwise, the Swift cluster will have to try all the resellers |
984 | until one validates a token or all fail. |
985 | @@ -46,14 +46,14 @@ |
986 | '.' as that is reserved for internal Swift use (such as the .r for referrer |
987 | designations as you'll see later). |
988 | |
989 | -Example Authentication with Swauth: |
990 | +Example Authentication with TempAuth: |
991 | |
992 | - * Token AUTH_tkabcd is given to the Swauth middleware in a request's |
993 | + * Token AUTH_tkabcd is given to the TempAuth middleware in a request's |
994 | X-Auth-Token header. |
995 | - * The Swauth middleware validates the token AUTH_tkabcd and discovers |
996 | + * The TempAuth middleware validates the token AUTH_tkabcd and discovers |
997 | it matches the "tester" user within the "test" account for the storage |
998 | account "AUTH_storage_xyz". |
999 | - * The Swauth server sets the REMOTE_USER to |
1000 | + * The TempAuth middleware sets the REMOTE_USER to |
1001 | "test:tester,test,AUTH_storage_xyz" |
1002 | * Now this user will have full access (via authorization procedures later) |
1003 | to the AUTH_storage_xyz Swift storage account and access to containers in |
1004 | |
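The REMOTE_USER convention described in the development_auth changes above (a comma-separated list of groups, checked against the account's group and container ACLs) can be sketched as follows. This is a simplified illustration of the group-membership check, not Swift's actual authorization code:

```python
def authorize(environ, account_group, container_read_acl):
    """Allow a request if the user is in the account's group or an ACL group.

    Sketch of group-based authorization: REMOTE_USER carries a
    comma-separated group list set by the auth middleware.
    """
    groups = (environ.get('REMOTE_USER') or '').split(',')
    if account_group in groups:
        # Member of the account's group: full access within the account.
        return True
    # Otherwise, access only via the container's explicit ACL entries.
    acl_groups = container_read_acl.split(',') if container_read_acl else []
    return any(g in groups for g in acl_groups)
```

With REMOTE_USER set to `"test:tester,test,AUTH_storage_xyz"`, a request against the `AUTH_storage_xyz` account is allowed; a different user only gets through if a container ACL names one of their groups.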
1005 | === modified file 'doc/source/development_saio.rst' |
1006 | --- doc/source/development_saio.rst 2011-05-18 15:26:52 +0000 |
1007 | +++ doc/source/development_saio.rst 2011-06-03 00:13:27 +0000 |
1008 | @@ -265,16 +265,18 @@ |
1009 | log_facility = LOG_LOCAL1 |
1010 | |
1011 | [pipeline:main] |
1012 | - pipeline = healthcheck cache swauth proxy-server |
1013 | + pipeline = healthcheck cache tempauth proxy-server |
1014 | |
1015 | [app:proxy-server] |
1016 | use = egg:swift#proxy |
1017 | allow_account_management = true |
1018 | |
1019 | - [filter:swauth] |
1020 | - use = egg:swift#swauth |
1021 | - # Highly recommended to change this. |
1022 | - super_admin_key = swauthkey |
1023 | + [filter:tempauth] |
1024 | + use = egg:swift#tempauth |
1025 | + user_admin_admin = admin .admin .reseller_admin |
1026 | + user_test_tester = testing .admin |
1027 | + user_test2_tester2 = testing2 .admin |
1028 | + user_test_tester3 = testing3 |
1029 | |
1030 | [filter:healthcheck] |
1031 | use = egg:swift#healthcheck |
1032 | @@ -398,6 +400,8 @@ |
1033 | |
1034 | [container-auditor] |
1035 | |
1036 | + [container-sync] |
1037 | + |
1038 | #. Create `/etc/swift/container-server/2.conf`:: |
1039 | |
1040 | [DEFAULT] |
1041 | @@ -420,6 +424,8 @@ |
1042 | |
1043 | [container-auditor] |
1044 | |
1045 | + [container-sync] |
1046 | + |
1047 | #. Create `/etc/swift/container-server/3.conf`:: |
1048 | |
1049 | [DEFAULT] |
1050 | @@ -442,6 +448,8 @@ |
1051 | |
1052 | [container-auditor] |
1053 | |
1054 | + [container-sync] |
1055 | + |
1056 | #. Create `/etc/swift/container-server/4.conf`:: |
1057 | |
1058 | [DEFAULT] |
1059 | @@ -464,6 +472,8 @@ |
1060 | |
1061 | [container-auditor] |
1062 | |
1063 | + [container-sync] |
1064 | + |
1065 | |
1066 | #. Create `/etc/swift/object-server/1.conf`:: |
1067 | |
1068 | @@ -558,8 +568,10 @@ |
1069 | ------------------------------------ |
1070 | |
1071 | #. Create `~/bin/resetswift.` |
1072 | - If you are using a loopback device substitute `/dev/sdb1` with `/srv/swift-disk`. |
1073 | - If you did not set up rsyslog for individual logging, remove the `find /var/log/swift...` line:: |
1074 | + |
1075 | + If you are using a loopback device substitute `/dev/sdb1` with `/srv/swift-disk`. |
1076 | + |
1077 | + If you did not set up rsyslog for individual logging, remove the `find /var/log/swift...` line:: |
1078 | |
1079 | #!/bin/bash |
1080 | |
1081 | @@ -608,18 +620,6 @@ |
1082 | |
1083 | swift-init main start |
1084 | |
1085 | - #. Create `~/bin/recreateaccounts`:: |
1086 | - |
1087 | - #!/bin/bash |
1088 | - |
1089 | - # Replace swauthkey with whatever your super_admin key is (recorded in |
1090 | - # /etc/swift/proxy-server.conf). |
1091 | - swauth-prep -K swauthkey |
1092 | - swauth-add-user -K swauthkey -a test tester testing |
1093 | - swauth-add-user -K swauthkey -a test2 tester2 testing2 |
1094 | - swauth-add-user -K swauthkey test tester3 testing3 |
1095 | - swauth-add-user -K swauthkey -a -r reseller reseller reseller |
1096 | - |
1097 | #. Create `~/bin/startrest`:: |
1098 | |
1099 | #!/bin/bash |
1100 | |
1101 | === modified file 'doc/source/howto_installmultinode.rst' |
1102 | --- doc/source/howto_installmultinode.rst 2011-05-17 03:59:57 +0000 |
1103 | +++ doc/source/howto_installmultinode.rst 2011-06-03 00:13:27 +0000 |
1104 | @@ -13,7 +13,7 @@ |
1105 | Basic architecture and terms |
1106 | ---------------------------- |
1107 | - *node* - a host machine running one or more Swift services |
1108 | -- *Proxy node* - node that runs Proxy services; also runs Swauth |
1109 | +- *Proxy node* - node that runs Proxy services; also runs TempAuth |
1110 | - *Storage node* - node that runs Account, Container, and Object services |
1111 | - *ring* - a set of mappings of Swift data to physical devices |
1112 | |
1113 | @@ -23,7 +23,7 @@ |
1114 | |
1115 | - Runs the swift-proxy-server processes which proxy requests to the |
1116 | appropriate Storage nodes. The proxy server will also contain |
1117 | - the Swauth service as WSGI middleware. |
1118 | + the TempAuth service as WSGI middleware. |
1119 | |
1120 | - five Storage nodes |
1121 | |
1122 | @@ -130,17 +130,15 @@ |
1123 | user = swift |
1124 | |
1125 | [pipeline:main] |
1126 | - pipeline = healthcheck cache swauth proxy-server |
1127 | + pipeline = healthcheck cache tempauth proxy-server |
1128 | |
1129 | [app:proxy-server] |
1130 | use = egg:swift#proxy |
1131 | allow_account_management = true |
1132 | |
1133 | - [filter:swauth] |
1134 | - use = egg:swift#swauth |
1135 | - default_swift_cluster = local#https://$PROXY_LOCAL_NET_IP:8080/v1 |
1136 | - # Highly recommended to change this key to something else! |
1137 | - super_admin_key = swauthkey |
1138 | + [filter:tempauth] |
1139 | + use = egg:swift#tempauth |
1140 | + user_system_root = testpass .admin https://$PROXY_LOCAL_NET_IP:8080/v1/AUTH_system |
1141 | |
1142 | [filter:healthcheck] |
1143 | use = egg:swift#healthcheck |
1144 | @@ -366,16 +364,6 @@ |
1145 | |
1146 | You run these commands from the Proxy node. |
1147 | |
1148 | -#. Create a user with administrative privileges (account = system, |
1149 | - username = root, password = testpass). Make sure to replace |
1150 | - ``swauthkey`` with whatever super_admin key you assigned in |
1151 | - the proxy-server.conf file |
1152 | - above. *Note: None of the values of |
1153 | - account, username, or password are special - they can be anything.*:: |
1154 | - |
1155 | - swauth-prep -A https://$PROXY_LOCAL_NET_IP:8080/auth/ -K swauthkey |
1156 | - swauth-add-user -A https://$PROXY_LOCAL_NET_IP:8080/auth/ -K swauthkey -a system root testpass |
1157 | - |
1158 | #. Get an X-Storage-Url and X-Auth-Token:: |
1159 | |
1160 | curl -k -v -H 'X-Storage-User: system:root' -H 'X-Storage-Pass: testpass' https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 |
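A client consuming the curl response above pulls the storage URL and token out of the response headers. A small sketch (the header names are the real ones returned by the auth endpoint; the helper itself is illustrative):

```python
def extract_auth(resp_headers):
    """Return (storage_url, auth_token) from v1.0 auth response headers.

    Sketch of client-side handling; header lookup is case-insensitive
    because HTTP header casing is not guaranteed.
    """
    headers = dict((k.lower(), v) for k, v in resp_headers.items())
    return headers['x-storage-url'], headers['x-auth-token']
```

The returned URL is then used as the base for storage requests, with the token sent as `X-Auth-Token`.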
1161 | @@ -430,45 +418,16 @@ |
1162 | use = egg:swift#memcache |
1163 | memcache_servers = $PROXY_LOCAL_NET_IP:11211 |
1164 | |
1165 | -#. Change the default_cluster_url to point to the load balanced url, rather than the first proxy server you created in /etc/swift/proxy-server.conf:: |
1166 | - |
1167 | - [filter:swauth] |
1168 | - use = egg:swift#swauth |
1169 | - default_swift_cluster = local#http://<LOAD_BALANCER_HOSTNAME>/v1 |
1170 | - # Highly recommended to change this key to something else! |
1171 | - super_admin_key = swauthkey |
1172 | - |
1173 | -#. The above will make new accounts with the new default_swift_cluster URL, however it won't change any existing accounts. You can change a service URL for existing accounts with:: |
1174 | - |
1175 | - First retreve what the URL was:: |
1176 | - |
1177 | - swauth-list -A https://$PROXY_LOCAL_NET_IP:8080/auth/ -K swauthkey <account> |
1178 | - |
1179 | - And then update it with:: |
1180 | - |
1181 | - swauth-set-account-service -A https://$PROXY_LOCAL_NET_IP:8080/auth/ -K swauthkey <account> storage local <new_url_for_the_account> |
1182 | - |
1183 | - Make the <new_url_for_the_account> look just like it's original URL but with the host:port update you want. |
1184 | +#. Change the storage url for any users to point to the load balanced url, rather than the first proxy server you created in /etc/swift/proxy-server.conf:: |
1185 | + |
1186 | + [filter:tempauth] |
1187 | + use = egg:swift#tempauth |
1188 | + user_system_root = testpass .admin http[s]://<LOAD_BALANCER_HOSTNAME>:<PORT>/v1/AUTH_system |
1189 | |
1190 | #. Next, copy all the ring information to all the nodes, including your new proxy nodes, and ensure the ring info gets to all the storage nodes as well. |
1191 | |
1192 | #. After you sync all the nodes, make sure the admin has the keys in /etc/swift and the ownership for the ring file is correct. |
1193 | |
1194 | -Additional Cleanup Script for Swauth |
1195 | ------------------------------------- |
1196 | - |
1197 | -With Swauth, you'll want to install a cronjob to clean up any |
1198 | -orphaned expired tokens. These orphaned tokens can occur when a "stampede" |
1199 | -occurs where a single user authenticates several times concurrently. Generally, |
1200 | -these orphaned tokens don't pose much of an issue, but it's good to clean them |
1201 | -up once a "token life" period (default: 1 day or 86400 seconds). |
1202 | - |
1203 | -This should be as simple as adding `swauth-cleanup-tokens -A |
1204 | -https://<PROXY_HOSTNAME>:8080/auth/ -K swauthkey > /dev/null` to a crontab |
1205 | -entry on one of the proxies that is running Swauth; but run |
1206 | -`swauth-cleanup-tokens` with no arguments for detailed help on the options |
1207 | -available. |
1208 | - |
1209 | Troubleshooting Notes |
1210 | --------------------- |
1211 | If you see problems, look in var/log/syslog (or messages on some distros). |
1212 | |
1213 | === modified file 'doc/source/index.rst' |
1214 | --- doc/source/index.rst 2011-03-14 02:56:37 +0000 |
1215 | +++ doc/source/index.rst 2011-06-03 00:13:27 +0000 |
1216 | @@ -45,6 +45,7 @@ |
1217 | overview_stats |
1218 | ratelimit |
1219 | overview_large_objects |
1220 | + overview_container_sync |
1221 | |
1222 | Developer Documentation |
1223 | ======================= |
1224 | |
1225 | === modified file 'doc/source/misc.rst' |
1226 | --- doc/source/misc.rst 2011-03-24 03:37:07 +0000 |
1227 | +++ doc/source/misc.rst 2011-06-03 00:13:27 +0000 |
1228 | @@ -33,12 +33,12 @@ |
1229 | :members: |
1230 | :show-inheritance: |
1231 | |
1232 | -.. _common_swauth: |
1233 | - |
1234 | -Swauth |
1235 | -====== |
1236 | - |
1237 | -.. automodule:: swift.common.middleware.swauth |
1238 | +.. _common_tempauth: |
1239 | + |
1240 | +TempAuth |
1241 | +======== |
1242 | + |
1243 | +.. automodule:: swift.common.middleware.tempauth |
1244 | :members: |
1245 | :show-inheritance: |
1246 | |
1247 | |
1248 | === modified file 'doc/source/overview_auth.rst' |
1249 | --- doc/source/overview_auth.rst 2011-03-14 02:56:37 +0000 |
1250 | +++ doc/source/overview_auth.rst 2011-06-03 00:13:27 +0000 |
1251 | @@ -2,9 +2,9 @@ |
1252 | The Auth System |
1253 | =============== |
1254 | |
1255 | ------- |
1256 | -Swauth |
1257 | ------- |
1258 | +-------- |
1259 | +TempAuth |
1260 | +-------- |
1261 | |
1262 | The auth system for Swift is loosely based on the auth system from the existing |
1263 | Rackspace architecture -- actually from a few existing auth systems -- and is |
1264 | @@ -27,7 +27,7 @@ |
1265 | Swift will make calls to the auth system, giving the auth token to be |
1266 | validated. For a valid token, the auth system responds with an overall |
1267 | expiration in seconds from now. Swift will cache the token up to the expiration |
1268 | -time. The included Swauth also has the concept of admin and non-admin users |
1269 | +time. The included TempAuth also has the concept of admin and non-admin users |
1270 | within an account. Admin users can do anything within the account. Non-admin |
1271 | users can only perform operations per container based on the container's |
1272 | X-Container-Read and X-Container-Write ACLs. For more information on ACLs, see |
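The token caching behavior described above (Swift caches a validated token up to the expiration the auth system reports) can be sketched with a small in-memory cache. This is an illustrative stand-in for the memcache-backed caching Swift actually uses:

```python
import time


class TokenCache(object):
    """Cache validated tokens until the auth system's reported expiry."""

    def __init__(self):
        self._cache = {}

    def set(self, token, groups, expires_in):
        # Store the groups alongside an absolute expiration timestamp.
        self._cache[token] = (groups, time.time() + expires_in)

    def get(self, token):
        entry = self._cache.get(token)
        if entry is None:
            return None
        groups, expires = entry
        if time.time() >= expires:
            # Expired: drop it so the next request revalidates with auth.
            del self._cache[token]
            return None
        return groups
```

On a cache miss or expiry, the middleware goes back to the auth system to revalidate the token before serving the request.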
1273 | @@ -40,152 +40,9 @@ |
1274 | Extending Auth |
1275 | -------------- |
1276 | |
1277 | -Swauth is written as wsgi middleware, so implementing your own auth is as easy |
1278 | -as writing new wsgi middleware, and plugging it in to the proxy server. |
1279 | +TempAuth is written as wsgi middleware, so implementing your own auth is as |
1280 | +easy as writing new wsgi middleware, and plugging it in to the proxy server. |
1281 | +The Keystone project and the Swauth project are examples of additional auth |
1282 | +services. |
1283 | |
1284 | Also, see :doc:`development_auth`. |
1285 | - |
1286 | - |
1287 | --------------- |
1288 | -Swauth Details |
1289 | --------------- |
1290 | - |
1291 | -The Swauth system is included at swift/common/middleware/swauth.py; a scalable |
1292 | -authentication and authorization system that uses Swift itself as its backing |
1293 | -store. This section will describe how it stores its data. |
1294 | - |
1295 | -At the topmost level, the auth system has its own Swift account it stores its |
1296 | -own account information within. This Swift account is known as |
1297 | -self.auth_account in the code and its name is in the format |
1298 | -self.reseller_prefix + ".auth". In this text, we'll refer to this account as |
1299 | -<auth_account>. |
1300 | - |
1301 | -The containers whose names do not begin with a period represent the accounts |
1302 | -within the auth service. For example, the <auth_account>/test container would |
1303 | -represent the "test" account. |
1304 | - |
1305 | -The objects within each container represent the users for that auth service |
1306 | -account. For example, the <auth_account>/test/bob object would represent the |
1307 | -user "bob" within the auth service account of "test". Each of these user |
1308 | -objects contain a JSON dictionary of the format:: |
1309 | - |
1310 | - {"auth": "<auth_type>:<auth_value>", "groups": <groups_array>} |
1311 | - |
1312 | -The `<auth_type>` can only be `plaintext` at this time, and the `<auth_value>` |
1313 | -is the plain text password itself. |
1314 | - |
1315 | -The `<groups_array>` contains at least two groups. The first is a unique group |
1316 | -identifying that user and it's name is of the format `<user>:<account>`. The |
1317 | -second group is the `<account>` itself. Additional groups of `.admin` for |
1318 | -account administrators and `.reseller_admin` for reseller administrators may |
1319 | -exist. Here's an example user JSON dictionary:: |
1320 | - |
1321 | - {"auth": "plaintext:testing", |
1322 | - "groups": ["name": "test:tester", "name": "test", "name": ".admin"]} |
1323 | - |
1324 | -To map an auth service account to a Swift storage account, the Service Account |
1325 | -Id string is stored in the `X-Container-Meta-Account-Id` header for the |
1326 | -<auth_account>/<account> container. To map back the other way, an |
1327 | -<auth_account>/.account_id/<account_id> object is created with the contents of |
1328 | -the corresponding auth service's account name. |
1329 | - |
1330 | -Also, to support a future where the auth service will support multiple Swift |
1331 | -clusters or even multiple services for the same auth service account, an |
1332 | -<auth_account>/<account>/.services object is created with its contents having a |
1333 | -JSON dictionary of the format:: |
1334 | - |
1335 | - {"storage": {"default": "local", "local": <url>}} |
1336 | - |
1337 | -The "default" is always "local" right now, and "local" is always the single |
1338 | -Swift cluster URL; but in the future there can be more than one cluster with |
1339 | -various names instead of just "local", and the "default" key's value will |
1340 | -contain the primary cluster to use for that account. Also, there may be more |
1341 | -services in addition to the current "storage" service right now. |
1342 | - |
1343 | -Here's an example .services dictionary at the moment:: |
1344 | - |
1345 | - {"storage": |
1346 | - {"default": "local", |
1347 | - "local": "http://127.0.0.1:8080/v1/AUTH_8980f74b1cda41e483cbe0a925f448a9"}} |
1348 | - |
1349 | -But, here's an example of what the dictionary may look like in the future:: |
1350 | - |
1351 | - {"storage": |
1352 | - {"default": "dfw", |
1353 | - "dfw": "http://dfw.storage.com:8080/v1/AUTH_8980f74b1cda41e483cbe0a925f448a9", |
1354 | - "ord": "http://ord.storage.com:8080/v1/AUTH_8980f74b1cda41e483cbe0a925f448a9", |
1355 | - "sat": "http://ord.storage.com:8080/v1/AUTH_8980f74b1cda41e483cbe0a925f448a9"}, |
1356 | - "servers": |
1357 | - {"default": "dfw", |
1358 | - "dfw": "http://dfw.servers.com:8080/v1/AUTH_8980f74b1cda41e483cbe0a925f448a9", |
1359 | - "ord": "http://ord.servers.com:8080/v1/AUTH_8980f74b1cda41e483cbe0a925f448a9", |
1360 | - "sat": "http://ord.servers.com:8080/v1/AUTH_8980f74b1cda41e483cbe0a925f448a9"}} |
1361 | - |
1362 | -Lastly, the tokens themselves are stored as objects in the |
1363 | -`<auth_account>/.token_[0-f]` containers. The names of the objects are the |
1364 | -token strings themselves, such as `AUTH_tked86bbd01864458aa2bd746879438d5a`. |
1365 | -The exact `.token_[0-f]` container chosen is based on the final digit of the |
1366 | -token name, such as `.token_a` for the token |
1367 | -`AUTH_tked86bbd01864458aa2bd746879438d5a`. The contents of the token objects |
1368 | -are JSON dictionaries of the format:: |
1369 | - |
1370 | - {"account": <account>, |
1371 | - "user": <user>, |
1372 | - "account_id": <account_id>, |
1373 | - "groups": <groups_array>, |
1374 | - "expires": <time.time() value>} |
1375 | - |
1376 | -The `<account>` is the auth service account's name for that token. The `<user>` |
1377 | -is the user within the account for that token. The `<account_id>` is the |
1378 | -same as the `X-Container-Meta-Account-Id` for the auth service's account, |
1379 | -as described above. The `<groups_array>` is the user's groups, as described |
1380 | -above with the user object. The "expires" value indicates when the token is no |
1381 | -longer valid, as compared to Python's time.time() value. |
1382 | - |
1383 | -Here's an example token object's JSON dictionary:: |
1384 | - |
1385 | - {"account": "test", |
1386 | - "user": "tester", |
1387 | - "account_id": "AUTH_8980f74b1cda41e483cbe0a925f448a9", |
1388 | - "groups": ["name": "test:tester", "name": "test", "name": ".admin"], |
1389 | - "expires": 1291273147.1624689} |
1390 | - |
1391 | -To easily map a user to an already issued token, the token name is stored in |
1392 | -the user object's `X-Object-Meta-Auth-Token` header. |
1393 | - |
1394 | -Here is an example full listing of an <auth_account>:: |
1395 | - |
1396 | - .account_id |
1397 | - AUTH_2282f516-559f-4966-b239-b5c88829e927 |
1398 | - AUTH_f6f57a3c-33b5-4e85-95a5-a801e67505c8 |
1399 | - AUTH_fea96a36-c177-4ca4-8c7e-b8c715d9d37b |
1400 | - .token_0 |
1401 | - .token_1 |
1402 | - .token_2 |
1403 | - .token_3 |
1404 | - .token_4 |
1405 | - .token_5 |
1406 | - .token_6 |
1407 | - AUTH_tk9d2941b13d524b268367116ef956dee6 |
1408 | - .token_7 |
1409 | - .token_8 |
1410 | - AUTH_tk93627c6324c64f78be746f1e6a4e3f98 |
1411 | - .token_9 |
1412 | - .token_a |
1413 | - .token_b |
1414 | - .token_c |
1415 | - .token_d |
1416 | - .token_e |
1417 | - AUTH_tk0d37d286af2c43ffad06e99112b3ec4e |
1418 | - .token_f |
1419 | - AUTH_tk766bbde93771489982d8dc76979d11cf |
1420 | - reseller |
1421 | - .services |
1422 | - reseller |
1423 | - test |
1424 | - .services |
1425 | - tester |
1426 | - tester3 |
1427 | - test2 |
1428 | - .services |
1429 | - tester2 |
1430 | |
1431 | === added file 'doc/source/overview_container_sync.rst' |
1432 | --- doc/source/overview_container_sync.rst 1970-01-01 00:00:00 +0000 |
1433 | +++ doc/source/overview_container_sync.rst 2011-06-03 00:13:27 +0000 |
1434 | @@ -0,0 +1,220 @@ |
1435 | +====================================== |
1436 | +Container to Container Synchronization |
1437 | +====================================== |
1438 | + |
1439 | +-------- |
1440 | +Overview |
1441 | +-------- |
1442 | + |
1443 | +Swift has a feature where all the contents of a container can be mirrored to |
1444 | +another container through background synchronization. Swift cluster operators |
1445 | +configure their cluster to allow/accept sync requests to/from other clusters, |
1446 | +and the user specifies where to sync their container to along with a secret |
1447 | +synchronization key. |
1448 | + |
1449 | +.. note:: |
1450 | + |
1451 | + This does not sync standard object POSTs, as those do not cause container |
1452 | + updates. A workaround is to do X-Copy-From POSTs. We're considering |
1453 | + solutions to this limitation but leaving it as is for now since POSTs are |
1454 | + fairly uncommon. |
1455 | + |
1456 | +-------------------------------------------- |
1457 | +Configuring a Cluster's Allowable Sync Hosts |
1458 | +-------------------------------------------- |
1459 | + |
1460 | +The Swift cluster operator must allow synchronization with a set of hosts |
1461 | +before the user can enable container synchronization. First, the backend |
1462 | +container server needs to be given this list of hosts in the |
1463 | +container-server.conf file:: |
1464 | + |
1465 | + [DEFAULT] |
1466 | + # This is a comma separated list of hosts allowed in the |
1467 | + # X-Container-Sync-To field for containers. |
1468 | + # allowed_sync_hosts = 127.0.0.1 |
1469 | + allowed_sync_hosts = host1,host2,etc. |
1470 | + ... |
1471 | + |
1472 | + [container-sync] |
1473 | + # You can override the default log routing for this app here (don't |
1474 | + # use set!): |
1475 | + # log_name = container-sync |
1476 | + # log_facility = LOG_LOCAL0 |
1477 | + # log_level = INFO |
1478 | + # Will sync, at most, each container once per interval |
1479 | + # interval = 300 |
1480 | + # Maximum amount of time to spend syncing each container |
1481 | + # container_time = 60 |
1482 | + |
1483 | +The authentication system also needs to be configured to allow synchronization |
1484 | +requests. Here are examples with TempAuth and Swauth::
1485 | +
1486 | + [filter:tempauth]
1487 | + # This is a comma separated list of hosts allowed to send |
1488 | + # X-Container-Sync-Key requests. |
1489 | + # allowed_sync_hosts = 127.0.0.1 |
1490 | + allowed_sync_hosts = host1,host2,etc. |
1491 | + |
1492 | + [filter:swauth] |
1493 | + # This is a comma separated list of hosts allowed to send |
1494 | + # X-Container-Sync-Key requests. |
1495 | + # allowed_sync_hosts = 127.0.0.1 |
1496 | + allowed_sync_hosts = host1,host2,etc. |
1497 | + |
1498 | +The default of 127.0.0.1 is there so that no extra configuration is required
1499 | +for SAIO setups used for testing.
1500 | + |
1501 | +---------------------------------------------- |
1502 | +Using ``st`` to set up synchronized containers |
1503 | +---------------------------------------------- |
1504 | + |
1505 | +.. note:: |
1506 | + |
1507 | + You must be the account admin on the account to set synchronization targets |
1508 | + and keys. |
1509 | + |
1510 | +You simply tell each container where to sync to and give it a secret |
1511 | +synchronization key. First, let's get the account details for our two cluster |
1512 | +accounts:: |
1513 | + |
1514 | + $ st -A http://cluster1/auth/v1.0 -U test:tester -K testing stat -v |
1515 | + StorageURL: http://cluster1/v1/AUTH_208d1854-e475-4500-b315-81de645d060e |
1516 | + Auth Token: AUTH_tkd5359e46ff9e419fa193dbd367f3cd19 |
1517 | + Account: AUTH_208d1854-e475-4500-b315-81de645d060e |
1518 | + Containers: 0 |
1519 | + Objects: 0 |
1520 | + Bytes: 0 |
1521 | + |
1522 | + $ st -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 stat -v |
1523 | + StorageURL: http://cluster2/v1/AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c |
1524 | + Auth Token: AUTH_tk816a1aaf403c49adb92ecfca2f88e430 |
1525 | + Account: AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c |
1526 | + Containers: 0 |
1527 | + Objects: 0 |
1528 | + Bytes: 0 |
1529 | + |
1530 | +Now, let's make our first container and tell it to synchronize to a second |
1531 | +we'll make next:: |
1532 | + |
1533 | + $ st -A http://cluster1/auth/v1.0 -U test:tester -K testing post \ |
1534 | + -t 'http://cluster2/v1/AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c/container2' \ |
1535 | + -k 'secret' container1 |
1536 | + |
1537 | +The ``-t`` indicates the URL to sync to, which is the ``StorageURL`` from |
1538 | +cluster2 we retrieved above plus the container name. The ``-k`` specifies the |
1539 | +secret key the two containers will share for synchronization. Now, we'll do |
1540 | +something similar for the second cluster's container:: |
1541 | + |
1542 | + $ st -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 post \ |
1543 | + -t 'http://cluster1/v1/AUTH_208d1854-e475-4500-b315-81de645d060e/container1' \ |
1544 | + -k 'secret' container2 |
1545 | + |
1546 | +That's it. Now we can upload a bunch of stuff to the first container and watch |
1547 | +as it gets synchronized over to the second:: |
1548 | + |
1549 | + $ st -A http://cluster1/auth/v1.0 -U test:tester -K testing \ |
1550 | + upload container1 . |
1551 | + photo002.png |
1552 | + photo004.png |
1553 | + photo001.png |
1554 | + photo003.png |
1555 | + |
1556 | + $ st -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 \ |
1557 | + list container2 |
1558 | + |
1559 | + [Nothing there yet, so we wait a bit...] |
1560 | + [If you're an operator running SAIO and just testing, you may need to |
1561 | + run 'swift-init container-sync once' to perform a sync scan.] |
1562 | + |
1563 | + $ st -A http://cluster2/auth/v1.0 -U test2:tester2 -K testing2 \ |
1564 | + list container2 |
1565 | + photo001.png |
1566 | + photo002.png |
1567 | + photo003.png |
1568 | + photo004.png |
1569 | + |
1570 | +You can also set up a chain of synced containers if you want more than two. |
1571 | +You'd point 1 -> 2, then 2 -> 3, and finally 3 -> 1 for three containers. |
1572 | +They'd all need to share the same secret synchronization key. |
1573 | + |
1574 | +----------------------------------- |
1575 | +Using curl (or other tools) instead |
1576 | +----------------------------------- |
1577 | + |
1578 | +So what's ``st`` doing behind the scenes? Nothing overly complicated. It |
1579 | +translates the ``-t <value>`` option into an ``X-Container-Sync-To: <value>`` |
1580 | +header and the ``-k <value>`` option into an ``X-Container-Sync-Key: <value>`` |
1581 | +header. |
1582 | + |
1583 | +For instance, when we created the first container above and told it to |
1584 | +synchronize to the second, we could have used this curl command:: |
1585 | + |
1586 | + $ curl -i -X POST -H 'X-Auth-Token: AUTH_tkd5359e46ff9e419fa193dbd367f3cd19' \ |
1587 | + -H 'X-Container-Sync-To: http://cluster2/v1/AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c/container2' \ |
1588 | + -H 'X-Container-Sync-Key: secret' \ |
1589 | + 'http://cluster1/v1/AUTH_208d1854-e475-4500-b315-81de645d060e/container1' |
1590 | + HTTP/1.1 204 No Content |
1591 | + Content-Length: 0 |
1592 | + Content-Type: text/plain; charset=UTF-8 |
1593 | + Date: Thu, 24 Feb 2011 22:39:14 GMT |
1594 | + |
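The header translation described above is easy to reproduce in code. A minimal sketch (the helper name is illustrative, not part of Swift's client API):

```python
def sync_post_headers(auth_token, sync_to, sync_key):
    # Mirrors what ``st post -t <url> -k <key>`` sends: the -t value
    # becomes X-Container-Sync-To and the -k value X-Container-Sync-Key.
    return {
        'X-Auth-Token': auth_token,
        'X-Container-Sync-To': sync_to,
        'X-Container-Sync-Key': sync_key,
    }

headers = sync_post_headers(
    'AUTH_tkd5359e46ff9e419fa193dbd367f3cd19',
    'http://cluster2/v1/AUTH_33cdcad8-09fb-4940-90da-0f00cbf21c7c/container2',
    'secret')
```

Any HTTP client that can POST these headers to the container URL can manage sync settings; ``st`` and the curl command above are just two front ends for the same request.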
1595 | +-------------------------------------------------- |
1596 | +What's going on behind the scenes, in the cluster? |
1597 | +-------------------------------------------------- |
1598 | + |
1599 | +The swift-container-sync daemon does the job of sending updates to the
1600 | +remote container.
1601 | + |
1602 | +This is done by scanning the local devices for container databases and |
1603 | +checking for x-container-sync-to and x-container-sync-key metadata values. |
1604 | +If they exist, newer rows since the last sync will trigger PUTs or DELETEs |
1605 | +to the other container. |
1606 | + |
1607 | +.. note:: |
1608 | + |
1609 | + This does not sync standard object POSTs, as those do not cause |
1610 | + container row updates. A workaround is to do X-Copy-From POSTs. We're |
1611 | + considering solutions to this limitation but leaving it as is for now |
1612 | + since POSTs are fairly uncommon. |
1613 | + |
1614 | +The actual syncing is slightly more complicated so that the three (or
1615 | +however many replicas) primary nodes for a container can share the work
1616 | +without each doing the exact same work, and without missing updates if
1617 | +one node happens to be down.
1618 | + |
1619 | +Two sync points are kept per container database. All rows between the two |
1620 | +sync points trigger updates. Any rows newer than both sync points cause |
1621 | +updates depending on the node's position for the container (primary nodes |
1622 | +do one third, etc. depending on the replica count of course). After a sync |
1623 | +run, the first sync point is set to the newest ROWID known and the second |
1624 | +sync point is set to newest ROWID for which all updates have been sent. |
1625 | + |
1626 | +An example may help. Assume replica count is 3 and perfectly matching |
1627 | +ROWIDs starting at 1. |
1628 | + |
1629 | + First sync run, database has 6 rows: |
1630 | + |
1631 | + * SyncPoint1 starts as -1. |
1632 | + * SyncPoint2 starts as -1. |
1633 | + * No rows between points, so no "all updates" rows. |
1634 | + * Six rows newer than SyncPoint1, so a third of the rows are sent |
1635 | + by node 1, another third by node 2, remaining third by node 3. |
1636 | + * SyncPoint1 is set as 6 (the newest ROWID known). |
1637 | + * SyncPoint2 is left as -1 since no "all updates" rows were synced. |
1638 | + |
1639 | + Next sync run, database has 12 rows: |
1640 | + |
1641 | + * SyncPoint1 starts as 6. |
1642 | + * SyncPoint2 starts as -1. |
1643 | + * The rows between -1 and 6 all trigger updates (most of which |
1644 | + should short-circuit on the remote end as having already been |
1645 | + done). |
1646 | + * Six more rows newer than SyncPoint1, so a third of the rows are |
1647 | + sent by node 1, another third by node 2, remaining third by node |
1648 | + 3. |
1649 | + * SyncPoint1 is set as 12 (the newest ROWID known). |
1650 | + * SyncPoint2 is set as 6 (the newest "all updates" ROWID). |
1651 | + |
1652 | +In this way, under normal circumstances each node sends its share of |
1653 | +updates each run and just sends a batch of older updates to ensure nothing |
1654 | +was missed. |
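The two-run walkthrough above can be sketched as a small function. This is illustrative only: it splits the newest rows by ROWID modulo the replica count, whereas the real daemon decides each node's share by hashing names; the function name is made up.

```python
def rows_to_send(row_ids, sync_point1, sync_point2, node_index, replicas=3):
    """Which rows this node sends, per the two-sync-point scheme above."""
    send = []
    for row_id in row_ids:
        if sync_point2 < row_id <= sync_point1:
            # Between the two points: every node re-sends these "all
            # updates" rows; the remote end short-circuits duplicates.
            send.append(row_id)
        elif row_id > sync_point1 and row_id % replicas == node_index:
            # Newer than both points: only this node's share.
            send.append(row_id)
    return send

# First run from the example: 6 rows, both sync points at -1.
first = [rows_to_send(range(1, 7), -1, -1, n) for n in range(3)]
# Second run: 12 rows, SyncPoint1 = 6, SyncPoint2 = -1.
second = [rows_to_send(range(1, 13), 6, -1, n) for n in range(3)]
```

In the first run each node sends two of the six new rows; in the second run each node re-sends rows 1-6 as its catch-up batch plus its third of rows 7-12, matching the walkthrough above.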
1655 | |
1656 | === modified file 'etc/container-server.conf-sample' |
1657 | --- etc/container-server.conf-sample 2011-01-25 00:28:22 +0000 |
1658 | +++ etc/container-server.conf-sample 2011-06-03 00:13:27 +0000 |
1659 | @@ -7,6 +7,9 @@ |
1660 | # swift_dir = /etc/swift |
1661 | # devices = /srv/node |
1662 | # mount_check = true |
1663 | +# This is a comma separated list of hosts allowed in the X-Container-Sync-To |
1664 | +# field for containers. |
1665 | +# allowed_sync_hosts = 127.0.0.1 |
1666 | # You can specify default log routing here if you want: |
1667 | # log_name = swift |
1668 | # log_facility = LOG_LOCAL0 |
1669 | @@ -60,3 +63,13 @@ |
1670 | # log_level = INFO |
1671 | # Will audit, at most, 1 container per device per interval |
1672 | # interval = 1800 |
1673 | + |
1674 | +[container-sync] |
1675 | +# You can override the default log routing for this app here (don't use set!): |
1676 | +# log_name = container-sync |
1677 | +# log_facility = LOG_LOCAL0 |
1678 | +# log_level = INFO |
1679 | +# Will sync, at most, each container once per interval |
1680 | +# interval = 300 |
1681 | +# Maximum amount of time to spend syncing each container |
1682 | +# container_time = 60 |
1683 | |
1684 | === modified file 'etc/proxy-server.conf-sample' |
1685 | --- etc/proxy-server.conf-sample 2011-03-25 08:33:46 +0000 |
1686 | +++ etc/proxy-server.conf-sample 2011-06-03 00:13:27 +0000 |
1687 | @@ -13,7 +13,7 @@ |
1688 | # log_level = INFO |
1689 | |
1690 | [pipeline:main] |
1691 | -pipeline = catch_errors healthcheck cache ratelimit swauth proxy-server |
1692 | +pipeline = catch_errors healthcheck cache ratelimit tempauth proxy-server |
1693 | |
1694 | [app:proxy-server] |
1695 | use = egg:swift#proxy |
1696 | @@ -41,10 +41,10 @@ |
1697 | # 'false' no one, even authorized, can. |
1698 | # allow_account_management = false |
1699 | |
1700 | -[filter:swauth] |
1701 | -use = egg:swift#swauth |
1702 | +[filter:tempauth] |
1703 | +use = egg:swift#tempauth |
1704 | # You can override the default log routing for this filter here: |
1705 | -# set log_name = auth-server |
1706 | +# set log_name = tempauth |
1707 | # set log_facility = LOG_LOCAL0 |
1708 | # set log_level = INFO |
1709 | # set log_headers = False |
1710 | @@ -54,21 +54,31 @@ |
1711 | # multiple auth systems are in use for one Swift cluster. |
1712 | # reseller_prefix = AUTH |
1713 | # The auth prefix will cause requests beginning with this prefix to be routed |
1714 | -# to the auth subsystem, for granting tokens, creating accounts, users, etc. |
1715 | +# to the auth subsystem, for granting tokens, etc. |
1716 | # auth_prefix = /auth/ |
1717 | -# Cluster strings are of the format name#url where name is a short name for the |
1718 | -# Swift cluster and url is the url to the proxy server(s) for the cluster. |
1719 | -# default_swift_cluster = local#http://127.0.0.1:8080/v1 |
1720 | -# You may also use the format name#url#url where the first url is the one |
1721 | -# given to users to access their account (public url) and the second is the one |
1722 | -# used by swauth itself to create and delete accounts (private url). This is |
1723 | -# useful when a load balancer url should be used by users, but swauth itself is |
1724 | -# behind the load balancer. Example: |
1725 | -# default_swift_cluster = local#https://public.com:8080/v1#http://private.com:8080/v1 |
1726 | # token_life = 86400 |
1727 | -# node_timeout = 10 |
1728 | -# Highly recommended to change this. |
1729 | -super_admin_key = swauthkey |
1730 | +# This is a comma separated list of hosts allowed to send X-Container-Sync-Key |
1731 | +# requests. |
1732 | +# allowed_sync_hosts = 127.0.0.1 |
1733 | +# Lastly, you need to list all the accounts/users you want here. The format is: |
1734 | +# user_<account>_<user> = <key> [group] [group] [...] [storage_url] |
1735 | +# There are special groups of: |
1736 | +# .reseller_admin = can do anything to any account for this auth |
1737 | +# .admin = can do anything within the account |
1738 | +# If neither of these groups is specified, the user can only access containers
1739 | +# that have been explicitly allowed for them by a .admin or .reseller_admin. |
1740 | +# The trailing optional storage_url allows you to specify an alternate url to |
1741 | +# hand back to the user upon authentication. If not specified, this defaults to |
1742 | +# http[s]://<ip>:<port>/v1/<reseller_prefix>_<account> where http or https |
1743 | +# depends on whether cert_file is specified in the [DEFAULT] section, <ip> and |
1744 | +# <port> are based on the [DEFAULT] section's bind_ip and bind_port (falling |
1745 | +# back to 127.0.0.1 and 8080), <reseller_prefix> is from this section, and |
1746 | +# <account> is from the user_<account>_<user> name. |
1747 | +# Here are example entries, required for running the tests: |
1748 | +user_admin_admin = admin .admin .reseller_admin |
1749 | +user_test_tester = testing .admin |
1750 | +user_test2_tester2 = testing2 .admin |
1751 | +user_test_tester3 = testing3 |
1752 | |
1753 | [filter:healthcheck] |
1754 | use = egg:swift#healthcheck |
1755 | |
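The ``user_<account>_<user>`` entry format described in the sample above can be illustrated with a small parser. This is a sketch of the documented format only (the helper name is made up, and it assumes account and user names contain no underscores; tempauth's real parsing lives in the middleware):

```python
def parse_user_entry(option, value):
    # 'user_test_tester' -> account 'test', user 'tester'
    _, account, user = option.split('_', 2)
    parts = value.split()
    key, groups = parts[0], parts[1:]
    # A trailing http(s) URL, if present, is the optional storage_url.
    url = None
    if groups and groups[-1].startswith(('http://', 'https://')):
        url = groups.pop()
    return {'account': account, 'user': user, 'key': key,
            'groups': groups, 'url': url}
```

For example, ``user_test_tester = testing .admin`` yields user ``tester`` in account ``test`` with key ``testing`` and the ``.admin`` group, and no alternate storage URL.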
1756 | === modified file 'setup.py' |
1757 | --- setup.py 2011-05-26 09:25:39 +0000 |
1758 | +++ setup.py 2011-06-03 00:13:27 +0000 |
1759 | @@ -80,7 +80,7 @@ |
1760 | 'bin/swift-account-audit', 'bin/swift-account-reaper', |
1761 | 'bin/swift-account-replicator', 'bin/swift-account-server', |
1762 | 'bin/swift-container-auditor', |
1763 | - 'bin/swift-container-replicator', |
1764 | + 'bin/swift-container-replicator', 'bin/swift-container-sync', |
1765 | 'bin/swift-container-server', 'bin/swift-container-updater', |
1766 | 'bin/swift-drive-audit', 'bin/swift-get-nodes', |
1767 | 'bin/swift-init', 'bin/swift-object-auditor', |
1768 | @@ -96,10 +96,6 @@ |
1769 | 'bin/swift-log-stats-collector', |
1770 | 'bin/swift-account-stats-logger', |
1771 | 'bin/swift-container-stats-logger', |
1772 | - 'bin/swauth-add-account', 'bin/swauth-add-user', |
1773 | - 'bin/swauth-cleanup-tokens', 'bin/swauth-delete-account', |
1774 | - 'bin/swauth-delete-user', 'bin/swauth-list', 'bin/swauth-prep', |
1775 | - 'bin/swauth-set-account-service', |
1776 | ], |
1777 | entry_points={ |
1778 | 'paste.app_factory': [ |
1779 | @@ -109,7 +105,6 @@ |
1780 | 'account=swift.account.server:app_factory', |
1781 | ], |
1782 | 'paste.filter_factory': [ |
1783 | - 'swauth=swift.common.middleware.swauth:filter_factory', |
1784 | 'healthcheck=swift.common.middleware.healthcheck:filter_factory', |
1785 | 'memcache=swift.common.middleware.memcache:filter_factory', |
1786 | 'ratelimit=swift.common.middleware.ratelimit:filter_factory', |
1787 | @@ -118,6 +113,7 @@ |
1788 | 'domain_remap=swift.common.middleware.domain_remap:filter_factory', |
1789 | 'swift3=swift.common.middleware.swift3:filter_factory', |
1790 | 'staticweb=swift.common.middleware.staticweb:filter_factory', |
1791 | + 'tempauth=swift.common.middleware.tempauth:filter_factory', |
1792 | ], |
1793 | }, |
1794 | ) |
1795 | |
1796 | === modified file 'swift/common/client.py' |
1797 | --- swift/common/client.py 2011-05-14 02:31:47 +0000 |
1798 | +++ swift/common/client.py 2011-06-03 00:13:27 +0000 |
1799 | @@ -565,9 +565,9 @@ |
1800 | return resp_headers |
1801 | |
1802 | |
1803 | -def put_object(url, token, container, name, contents, content_length=None, |
1804 | - etag=None, chunk_size=65536, content_type=None, headers=None, |
1805 | - http_conn=None): |
1806 | +def put_object(url, token=None, container=None, name=None, contents=None, |
1807 | + content_length=None, etag=None, chunk_size=65536, |
1808 | + content_type=None, headers=None, http_conn=None): |
1809 | """ |
1810 | Put an object |
1811 | |
1812 | @@ -591,10 +591,17 @@ |
1813 | parsed, conn = http_conn |
1814 | else: |
1815 | parsed, conn = http_connection(url) |
1816 | - path = '%s/%s/%s' % (parsed.path, quote(container), quote(name)) |
1817 | - if not headers: |
1818 | + path = parsed.path |
1819 | + if container: |
1820 | + path = '%s/%s' % (path.rstrip('/'), quote(container)) |
1821 | + if name: |
1822 | + path = '%s/%s' % (path.rstrip('/'), quote(name)) |
1823 | + if headers: |
1824 | + headers = dict(headers) |
1825 | + else: |
1826 | headers = {} |
1827 | - headers['X-Auth-Token'] = token |
1828 | + if token: |
1829 | + headers['X-Auth-Token'] = token |
1830 | if etag: |
1831 | headers['ETag'] = etag.strip('"') |
1832 | if content_length is not None: |
1833 | @@ -633,7 +640,7 @@ |
1834 | raise ClientException('Object PUT failed', http_scheme=parsed.scheme, |
1835 | http_host=conn.host, http_port=conn.port, http_path=path, |
1836 | http_status=resp.status, http_reason=resp.reason) |
1837 | - return resp.getheader('etag').strip('"') |
1838 | + return resp.getheader('etag', '').strip('"') |
1839 | |
1840 | |
1841 | def post_object(url, token, container, name, headers, http_conn=None): |
1842 | @@ -664,7 +671,8 @@ |
1843 | http_status=resp.status, http_reason=resp.reason) |
1844 | |
1845 | |
1846 | -def delete_object(url, token, container, name, http_conn=None): |
1847 | +def delete_object(url, token=None, container=None, name=None, http_conn=None, |
1848 | + headers=None): |
1849 | """ |
1850 | Delete object |
1851 | |
1852 | @@ -680,8 +688,18 @@ |
1853 | parsed, conn = http_conn |
1854 | else: |
1855 | parsed, conn = http_connection(url) |
1856 | - path = '%s/%s/%s' % (parsed.path, quote(container), quote(name)) |
1857 | - conn.request('DELETE', path, '', {'X-Auth-Token': token}) |
1858 | + path = parsed.path |
1859 | + if container: |
1860 | + path = '%s/%s' % (path.rstrip('/'), quote(container)) |
1861 | + if name: |
1862 | + path = '%s/%s' % (path.rstrip('/'), quote(name)) |
1863 | + if headers: |
1864 | + headers = dict(headers) |
1865 | + else: |
1866 | + headers = {} |
1867 | + if token: |
1868 | + headers['X-Auth-Token'] = token |
1869 | + conn.request('DELETE', path, '', headers) |
1870 | resp = conn.getresponse() |
1871 | resp.read() |
1872 | if resp.status < 200 or resp.status >= 300: |
1873 | |
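The change above makes ``container`` and ``name`` optional so callers (such as the container-sync daemon) can pass a complete object URL plus explicit headers like ``X-Container-Sync-Key`` instead of an auth token. The path handling can be exercised in isolation; a sketch under the same logic (helper name is ours, and the Python 3 import is shown even though the module itself is Python 2 era):

```python
from urllib.parse import quote

def request_path(parsed_path, container=None, name=None):
    # Same logic as the patched put_object/delete_object: append only
    # the components that were given, so a path that already names the
    # object passes through unchanged.
    path = parsed_path
    if container:
        path = '%s/%s' % (path.rstrip('/'), quote(container))
    if name:
        path = '%s/%s' % (path.rstrip('/'), quote(name))
    return path
```

Either calling style produces the same request path, which is what lets the sync daemon address a remote container directly by URL.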
1874 | === modified file 'swift/common/db.py' |
1875 | --- swift/common/db.py 2011-05-05 01:47:56 +0000 |
1876 | +++ swift/common/db.py 2011-06-03 00:13:27 +0000 |
1877 | @@ -666,7 +666,9 @@ |
1878 | id TEXT, |
1879 | status TEXT DEFAULT '', |
1880 | status_changed_at TEXT DEFAULT '0', |
1881 | - metadata TEXT DEFAULT '' |
1882 | + metadata TEXT DEFAULT '', |
1883 | + x_container_sync_point1 INTEGER DEFAULT -1, |
1884 | + x_container_sync_point2 INTEGER DEFAULT -1 |
1885 | ); |
1886 | |
1887 | INSERT INTO container_stat (object_count, bytes_used) |
1888 | @@ -886,7 +888,8 @@ |
1889 | :returns: sqlite.row of (account, container, created_at, put_timestamp, |
1890 | delete_timestamp, object_count, bytes_used, |
1891 | reported_put_timestamp, reported_delete_timestamp, |
1892 | - reported_object_count, reported_bytes_used, hash, id) |
1893 | + reported_object_count, reported_bytes_used, hash, id, |
1894 | + x_container_sync_point1, x_container_sync_point2) |
1895 | """ |
1896 | try: |
1897 | self._commit_puts() |
1898 | @@ -894,13 +897,65 @@ |
1899 | if not self.stale_reads_ok: |
1900 | raise |
1901 | with self.get() as conn: |
1902 | - return conn.execute(''' |
1903 | - SELECT account, container, created_at, put_timestamp, |
1904 | - delete_timestamp, object_count, bytes_used, |
1905 | - reported_put_timestamp, reported_delete_timestamp, |
1906 | - reported_object_count, reported_bytes_used, hash, id |
1907 | - FROM container_stat |
1908 | - ''').fetchone() |
1909 | + try: |
1910 | + return conn.execute(''' |
1911 | + SELECT account, container, created_at, put_timestamp, |
1912 | + delete_timestamp, object_count, bytes_used, |
1913 | + reported_put_timestamp, reported_delete_timestamp, |
1914 | + reported_object_count, reported_bytes_used, hash, id, |
1915 | + x_container_sync_point1, x_container_sync_point2 |
1916 | + FROM container_stat |
1917 | + ''').fetchone() |
1918 | + except sqlite3.OperationalError, err: |
1919 | + if 'no such column: x_container_sync_point' not in str(err): |
1920 | + raise |
1921 | + return conn.execute(''' |
1922 | + SELECT account, container, created_at, put_timestamp, |
1923 | + delete_timestamp, object_count, bytes_used, |
1924 | + reported_put_timestamp, reported_delete_timestamp, |
1925 | + reported_object_count, reported_bytes_used, hash, id, |
1926 | + -1 AS x_container_sync_point1, |
1927 | + -1 AS x_container_sync_point2 |
1928 | + FROM container_stat |
1929 | + ''').fetchone() |
1930 | + |
1931 | + def set_x_container_sync_points(self, sync_point1, sync_point2): |
1932 | + with self.get() as conn: |
1933 | + try: |
1934 | + self._set_x_container_sync_points(conn, sync_point1, |
1935 | + sync_point2) |
1936 | + except sqlite3.OperationalError, err: |
1937 | + if 'no such column: x_container_sync_point' not in str(err): |
1938 | + raise |
1939 | + conn.execute(''' |
1940 | + ALTER TABLE container_stat |
1941 | + ADD COLUMN x_container_sync_point1 INTEGER DEFAULT -1 |
1942 | + ''') |
1943 | + conn.execute(''' |
1944 | + ALTER TABLE container_stat |
1945 | + ADD COLUMN x_container_sync_point2 INTEGER DEFAULT -1 |
1946 | + ''') |
1947 | + self._set_x_container_sync_points(conn, sync_point1, |
1948 | + sync_point2) |
1949 | + conn.commit() |
1950 | + |
1951 | + def _set_x_container_sync_points(self, conn, sync_point1, sync_point2): |
1952 | + if sync_point1 is not None and sync_point2 is not None: |
1953 | + conn.execute(''' |
1954 | + UPDATE container_stat |
1955 | + SET x_container_sync_point1 = ?, |
1956 | + x_container_sync_point2 = ? |
1957 | + ''', (sync_point1, sync_point2)) |
1958 | + elif sync_point1 is not None: |
1959 | + conn.execute(''' |
1960 | + UPDATE container_stat |
1961 | + SET x_container_sync_point1 = ? |
1962 | + ''', (sync_point1,)) |
1963 | + elif sync_point2 is not None: |
1964 | + conn.execute(''' |
1965 | + UPDATE container_stat |
1966 | + SET x_container_sync_point2 = ? |
1967 | + ''', (sync_point2,)) |
1968 | |
1969 | def reported(self, put_timestamp, delete_timestamp, object_count, |
1970 | bytes_used): |
1971 | |
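The try/except pattern above is a lazy schema migration: container databases created before this change lack the new columns until a write needs them, so reads fall back to defaults and ``set_x_container_sync_points`` adds the columns on demand. A self-contained sqlite3 sketch of the read side (table trimmed to the relevant columns):

```python
import sqlite3

def get_sync_points(conn):
    # Try the new columns first; on an old schema, return the defaults
    # (-1, -1) instead of failing -- mirroring the db.py change above.
    try:
        return conn.execute(
            'SELECT x_container_sync_point1, x_container_sync_point2'
            ' FROM container_stat').fetchone()
    except sqlite3.OperationalError as err:
        if 'no such column: x_container_sync_point' not in str(err):
            raise
        return (-1, -1)

# An "old" database without the sync-point columns.
old = sqlite3.connect(':memory:')
old.execute("CREATE TABLE container_stat (id TEXT)")
old.execute("INSERT INTO container_stat VALUES ('abc')")
```

Checking the error message before falling back matters: any other ``OperationalError`` (a locked or corrupt database, say) should still propagate rather than be silently treated as an old schema.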
1972 | === modified file 'swift/common/manager.py' |
1973 | --- swift/common/manager.py 2011-03-30 20:04:15 +0000 |
1974 | +++ swift/common/manager.py 2011-06-03 00:13:27 +0000 |
1975 | @@ -31,9 +31,10 @@ |
1976 | |
1977 | # auth-server has been removed from ALL_SERVERS, start it explicitly |
1978 | ALL_SERVERS = ['account-auditor', 'account-server', 'container-auditor', |
1979 | - 'container-replicator', 'container-server', 'container-updater', |
1980 | - 'object-auditor', 'object-server', 'object-replicator', 'object-updater', |
1981 | - 'proxy-server', 'account-replicator', 'account-reaper'] |
1982 | + 'container-replicator', 'container-server', 'container-sync', |
1983 | + 'container-updater', 'object-auditor', 'object-server', |
1984 | + 'object-replicator', 'object-updater', 'proxy-server', |
1985 | + 'account-replicator', 'account-reaper'] |
1986 | MAIN_SERVERS = ['proxy-server', 'account-server', 'container-server', |
1987 | 'object-server'] |
1988 | REST_SERVERS = [s for s in ALL_SERVERS if s not in MAIN_SERVERS] |
1989 | |
1990 | === modified file 'swift/common/middleware/staticweb.py' |
1991 | --- swift/common/middleware/staticweb.py 2011-03-25 19:21:35 +0000 |
1992 | +++ swift/common/middleware/staticweb.py 2011-06-03 00:13:27 +0000 |
1993 | @@ -28,7 +28,7 @@ |
1994 | ... |
1995 | |
1996 | [pipeline:main] |
1997 | - pipeline = healthcheck cache swauth staticweb proxy-server |
1998 | + pipeline = healthcheck cache tempauth staticweb proxy-server |
1999 | |
2000 | ... |
2001 | |
2002 | |
2003 | === removed file 'swift/common/middleware/swauth.py' |
2004 | --- swift/common/middleware/swauth.py 2011-05-09 20:21:34 +0000 |
2005 | +++ swift/common/middleware/swauth.py 1970-01-01 00:00:00 +0000 |
2006 | @@ -1,1374 +0,0 @@ |
2007 | -# Copyright (c) 2010 OpenStack, LLC. |
2008 | -# |
2009 | -# Licensed under the Apache License, Version 2.0 (the "License"); |
2010 | -# you may not use this file except in compliance with the License. |
2011 | -# You may obtain a copy of the License at |
2012 | -# |
2013 | -# http://www.apache.org/licenses/LICENSE-2.0 |
2014 | -# |
2015 | -# Unless required by applicable law or agreed to in writing, software |
2016 | -# distributed under the License is distributed on an "AS IS" BASIS, |
2017 | -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or |
2018 | -# implied. |
2019 | -# See the License for the specific language governing permissions and |
2020 | -# limitations under the License. |
2021 | - |
2022 | -try: |
2023 | - import simplejson as json |
2024 | -except ImportError: |
2025 | - import json |
2026 | -from httplib import HTTPConnection, HTTPSConnection |
2027 | -from time import gmtime, strftime, time |
2028 | -from traceback import format_exc |
2029 | -from urllib import quote, unquote |
2030 | -from uuid import uuid4 |
2031 | -from hashlib import md5, sha1 |
2032 | -import hmac |
2033 | -import base64 |
2034 | - |
2035 | -from eventlet.timeout import Timeout |
2036 | -from eventlet import TimeoutError |
2037 | -from webob import Response, Request |
2038 | -from webob.exc import HTTPAccepted, HTTPBadRequest, HTTPConflict, \ |
2039 | - HTTPCreated, HTTPForbidden, HTTPNoContent, HTTPNotFound, \ |
2040 | - HTTPServiceUnavailable, HTTPUnauthorized |
2041 | - |
2042 | -from swift.common.bufferedhttp import http_connect_raw as http_connect |
2043 | -from swift.common.middleware.acl import clean_acl, parse_acl, referrer_allowed |
2044 | -from swift.common.utils import cache_from_env, get_logger, split_path, urlparse |
2045 | - |
2046 | - |
2047 | -class Swauth(object): |
2048 | - """ |
2049 | - Scalable authentication and authorization system that uses Swift as its |
2050 | - backing store. |
2051 | - |
2052 | - :param app: The next WSGI app in the pipeline |
2053 | - :param conf: The dict of configuration values |
2054 | - """ |
2055 | - |
2056 | - def __init__(self, app, conf): |
2057 | - self.app = app |
2058 | - self.conf = conf |
2059 | - self.logger = get_logger(conf, log_route='swauth') |
2060 | - self.log_headers = conf.get('log_headers') == 'True' |
2061 | - self.reseller_prefix = conf.get('reseller_prefix', 'AUTH').strip() |
2062 | - if self.reseller_prefix and self.reseller_prefix[-1] != '_': |
2063 | - self.reseller_prefix += '_' |
2064 | - self.auth_prefix = conf.get('auth_prefix', '/auth/') |
2065 | - if not self.auth_prefix: |
2066 | - self.auth_prefix = '/auth/' |
2067 | - if self.auth_prefix[0] != '/': |
2068 | - self.auth_prefix = '/' + self.auth_prefix |
2069 | - if self.auth_prefix[-1] != '/': |
2070 | - self.auth_prefix += '/' |
2071 | - self.auth_account = '%s.auth' % self.reseller_prefix |
2072 | - self.default_swift_cluster = conf.get('default_swift_cluster', |
2073 | - 'local#http://127.0.0.1:8080/v1') |
2074 | - # This setting is a little messy because of the options it has to |
2075 | - # provide. The basic format is cluster_name#url, such as the default |
2076 | - # value of local#http://127.0.0.1:8080/v1. |
2077 | - # If the URL given to the user needs to differ from the url used by |
2078 | - # Swauth to create/delete accounts, there's a more complex format: |
2079 | - # cluster_name#url#url, such as |
2080 | - # local#https://public.com:8080/v1#http://private.com:8080/v1. |
2081 | - cluster_parts = self.default_swift_cluster.split('#', 2) |
2082 | - self.dsc_name = cluster_parts[0] |
2083 | - if len(cluster_parts) == 3: |
2084 | - self.dsc_url = cluster_parts[1].rstrip('/') |
2085 | - self.dsc_url2 = cluster_parts[2].rstrip('/') |
2086 | - elif len(cluster_parts) == 2: |
2087 | - self.dsc_url = self.dsc_url2 = cluster_parts[1].rstrip('/') |
2088 | - else: |
2089 | - raise Exception('Invalid cluster format') |
2090 | - self.dsc_parsed = urlparse(self.dsc_url) |
2091 | - if self.dsc_parsed.scheme not in ('http', 'https'): |
2092 | - raise Exception('Cannot handle protocol scheme %s for url %s' % |
2093 | - (self.dsc_parsed.scheme, repr(self.dsc_url))) |
2094 | - self.dsc_parsed2 = urlparse(self.dsc_url2) |
2095 | - if self.dsc_parsed2.scheme not in ('http', 'https'): |
2096 | - raise Exception('Cannot handle protocol scheme %s for url %s' % |
2097 | - (self.dsc_parsed2.scheme, repr(self.dsc_url2))) |
2098 | - self.super_admin_key = conf.get('super_admin_key') |
2099 | - if not self.super_admin_key: |
2100 | - msg = _('No super_admin_key set in conf file! Exiting.') |
2101 | - try: |
2102 | - self.logger.critical(msg) |
2103 | - except Exception: |
2104 | - pass |
2105 | - raise ValueError(msg) |
2106 | - self.token_life = int(conf.get('token_life', 86400)) |
2107 | - self.timeout = int(conf.get('node_timeout', 10)) |
2108 | - self.itoken = None |
2109 | - self.itoken_expires = None |
2110 | - |
2111 | - def __call__(self, env, start_response): |
2112 | - """ |
2113 | - Accepts a standard WSGI application call, authenticating the request |
2114 | - and installing callback hooks for authorization and ACL header |
2115 | - validation. For an authenticated request, REMOTE_USER will be set to a |
2116 | - comma separated list of the user's groups. |
2117 | - |
2118 | - With a non-empty reseller prefix, acts as the definitive auth service |
2119 | - for just tokens and accounts that begin with that prefix, but will deny |
2120 | - requests outside this prefix if no other auth middleware overrides it. |
2121 | - |
2122 | - With an empty reseller prefix, acts as the definitive auth service only |
2123 | - for tokens that validate to a non-empty set of groups. For all other |
2124 | - requests, acts as the fallback auth service when no other auth |
2125 | - middleware overrides it. |
2126 | - |
2127 | - Alternatively, if the request matches the self.auth_prefix, the request |
2128 | - will be routed through the internal auth request handler (self.handle). |
2129 | - This is to handle creating users, accounts, granting tokens, etc. |
2130 | - """ |
2131 | - if 'HTTP_X_CF_TRANS_ID' not in env: |
2132 | - env['HTTP_X_CF_TRANS_ID'] = 'tx' + str(uuid4()) |
2133 | - if env.get('PATH_INFO', '').startswith(self.auth_prefix): |
2134 | - return self.handle(env, start_response) |
2135 | - s3 = env.get('HTTP_AUTHORIZATION') |
2136 | - token = env.get('HTTP_X_AUTH_TOKEN', env.get('HTTP_X_STORAGE_TOKEN')) |
2137 | - if s3 or (token and token.startswith(self.reseller_prefix)): |
2138 | - # Note: Empty reseller_prefix will match all tokens. |
2139 | - groups = self.get_groups(env, token) |
2140 | - if groups: |
2141 | - env['REMOTE_USER'] = groups |
2142 | - user = groups and groups.split(',', 1)[0] or '' |
2143 | - # We know the proxy logs the token, so we augment it just a bit |
2144 | - # to also log the authenticated user. |
2145 | - env['HTTP_X_AUTH_TOKEN'] = \ |
2146 | - '%s,%s' % (user, 's3' if s3 else token) |
2147 | - env['swift.authorize'] = self.authorize |
2148 | - env['swift.clean_acl'] = clean_acl |
2149 | - else: |
2150 | - # Unauthorized token |
2151 | - if self.reseller_prefix: |
2152 | - # Because I know I'm the definitive auth for this token, I |
2153 | - # can deny it outright. |
2154 | - return HTTPUnauthorized()(env, start_response) |
2155 | - # Because I'm not certain if I'm the definitive auth for empty |
2156 | - # reseller_prefixed tokens, I won't overwrite swift.authorize. |
2157 | - elif 'swift.authorize' not in env: |
2158 | - env['swift.authorize'] = self.denied_response |
2159 | - else: |
2160 | - if self.reseller_prefix: |
2161 | - # With a non-empty reseller_prefix, I would like to be called |
2162 | - # back for anonymous access to accounts I know I'm the |
2163 | - # definitive auth for. |
2164 | - try: |
2165 | - version, rest = split_path(env.get('PATH_INFO', ''), |
2166 | - 1, 2, True) |
2167 | - except ValueError: |
2168 | - return HTTPNotFound()(env, start_response) |
2169 | - if rest and rest.startswith(self.reseller_prefix): |
2170 | - # Handle anonymous access to accounts I'm the definitive |
2171 | - # auth for. |
2172 | - env['swift.authorize'] = self.authorize |
2173 | - env['swift.clean_acl'] = clean_acl |
2174 | - # Not my token, not my account, I can't authorize this request, |
2175 | - # deny all is a good idea if not already set... |
2176 | - elif 'swift.authorize' not in env: |
2177 | - env['swift.authorize'] = self.denied_response |
2178 | - # Because I'm not certain if I'm the definitive auth for empty |
2179 | - # reseller_prefixed accounts, I won't overwrite swift.authorize. |
2180 | - elif 'swift.authorize' not in env: |
2181 | - env['swift.authorize'] = self.authorize |
2182 | - env['swift.clean_acl'] = clean_acl |
2183 | - return self.app(env, start_response) |
2184 | - |
2185 | - def get_groups(self, env, token): |
2186 | - """ |
2187 | - Get groups for the given token. |
2188 | - |
2189 | - :param env: The current WSGI environment dictionary. |
2190 | - :param token: Token to validate and return a group string for. |
2191 | - |
2192 | - :returns: None if the token is invalid or a string containing a comma |
2193 | - separated list of groups the authenticated user is a member |
2194 | - of. The first group in the list is also considered a unique |
2195 | - identifier for that user. |
2196 | - """ |
2197 | - groups = None |
2198 | - memcache_client = cache_from_env(env) |
2199 | - if memcache_client: |
2200 | - memcache_key = '%s/auth/%s' % (self.reseller_prefix, token) |
2201 | - cached_auth_data = memcache_client.get(memcache_key) |
2202 | - if cached_auth_data: |
2203 | - expires, groups = cached_auth_data |
2204 | - if expires < time(): |
2205 | - groups = None |
2206 | - |
2207 | - if env.get('HTTP_AUTHORIZATION'): |
2208 | - account = env['HTTP_AUTHORIZATION'].split(' ')[1] |
2209 | - account, user, sign = account.split(':') |
2210 | - path = quote('/v1/%s/%s/%s' % (self.auth_account, account, user)) |
2211 | - resp = self.make_request(env, 'GET', path).get_response(self.app) |
2212 | - if resp.status_int // 100 != 2: |
2213 | - return None |
2214 | - |
2215 | - if 'x-object-meta-account-id' in resp.headers: |
2216 | - account_id = resp.headers['x-object-meta-account-id'] |
2217 | - else: |
2218 | - path = quote('/v1/%s/%s' % (self.auth_account, account)) |
2219 | - resp2 = self.make_request(env, 'HEAD', |
2220 | - path).get_response(self.app) |
2221 | - if resp2.status_int // 100 != 2: |
2222 | - return None |
2223 | - account_id = resp2.headers['x-container-meta-account-id'] |
2224 | - |
2225 | - path = env['PATH_INFO'] |
2226 | - env['PATH_INFO'] = path.replace("%s:%s" % (account, user), |
2227 | - account_id, 1) |
2228 | - detail = json.loads(resp.body) |
2229 | - |
2230 | - password = detail['auth'].split(':')[-1] |
2231 | - msg = base64.urlsafe_b64decode(unquote(token)) |
2232 | - s = base64.encodestring(hmac.new(detail['auth'].split(':')[-1], |
2233 | - msg, sha1).digest()).strip() |
2234 | - if s != sign: |
2235 | - return None |
2236 | - groups = [g['name'] for g in detail['groups']] |
2237 | - if '.admin' in groups: |
2238 | - groups.remove('.admin') |
2239 | - groups.append(account_id) |
2240 | - groups = ','.join(groups) |
2241 | - return groups |
2242 | - |
2243 | - if not groups: |
2244 | - path = quote('/v1/%s/.token_%s/%s' % |
2245 | - (self.auth_account, token[-1], token)) |
2246 | - resp = self.make_request(env, 'GET', path).get_response(self.app) |
2247 | - if resp.status_int // 100 != 2: |
2248 | - return None |
2249 | - detail = json.loads(resp.body) |
2250 | - if detail['expires'] < time(): |
2251 | - self.make_request(env, 'DELETE', path).get_response(self.app) |
2252 | - return None |
2253 | - groups = [g['name'] for g in detail['groups']] |
2254 | - if '.admin' in groups: |
2255 | - groups.remove('.admin') |
2256 | - groups.append(detail['account_id']) |
2257 | - groups = ','.join(groups) |
2258 | - if memcache_client: |
2259 | - memcache_client.set(memcache_key, (detail['expires'], groups), |
2260 | - timeout=float(detail['expires'] - time())) |
2261 | - return groups |
2262 | - |
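The S3-style branch of `get_groups` above validates the request by recomputing a base64-encoded HMAC-SHA1 over the decoded token and comparing it to the signature from the `Authorization` header. A Python 3 sketch of that check (the credentials and string-to-sign below are hypothetical; `base64.encodebytes` replaces the Python 2 `encodestring`):

```python
import base64
import hashlib
import hmac

def check_signature(secret, message, claimed_sig):
    """Recompute the base64 HMAC-SHA1 as get_groups() does and compare."""
    digest = hmac.new(secret.encode(), message.encode(),
                      hashlib.sha1).digest()
    expected = base64.encodebytes(digest).strip().decode()
    # Constant-time comparison is preferable to the plain != used above.
    return hmac.compare_digest(expected, claimed_sig)

# Hypothetical credentials and S3-style string-to-sign, for illustration.
secret = 'testing'
msg = 'GET\n\n\n1234567890\n/v1/AUTH_test/c/o'
sig = base64.encodebytes(hmac.new(secret.encode(), msg.encode(),
                                  hashlib.sha1).digest()).strip().decode()
print(check_signature(secret, msg, sig))  # True
```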
2263 | - def authorize(self, req): |
2264 | - """ |
2265 | - Returns None if the request is authorized to continue or a standard |
2266 | - WSGI response callable if not. |
2267 | - """ |
2268 | - try: |
2269 | - version, account, container, obj = split_path(req.path, 1, 4, True) |
2270 | - except ValueError: |
2271 | - return HTTPNotFound(request=req) |
2272 | - if not account or not account.startswith(self.reseller_prefix): |
2273 | - return self.denied_response(req) |
2274 | - user_groups = (req.remote_user or '').split(',') |
2275 | - if '.reseller_admin' in user_groups and \ |
2276 | - account != self.reseller_prefix and \ |
2277 | - account[len(self.reseller_prefix)] != '.': |
2278 | - return None |
2279 | - if account in user_groups and \ |
2280 | - (req.method not in ('DELETE', 'PUT') or container): |
2281 | - # If the user is admin for the account and is not trying to do an |
2282 | - # account DELETE or PUT... |
2283 | - return None |
2284 | - referrers, groups = parse_acl(getattr(req, 'acl', None)) |
2285 | - if referrer_allowed(req.referer, referrers): |
2286 | - if obj or '.rlistings' in groups: |
2287 | - return None |
2288 | - return self.denied_response(req) |
2289 | - if not req.remote_user: |
2290 | - return self.denied_response(req) |
2291 | - for user_group in user_groups: |
2292 | - if user_group in groups: |
2293 | - return None |
2294 | - return self.denied_response(req) |
2295 | - |
2296 | - def denied_response(self, req): |
2297 | - """ |
2298 | - Returns a standard WSGI response callable with the status of 403 or 401 |
2299 | - depending on whether the REMOTE_USER is set or not. |
2300 | - """ |
2301 | - if req.remote_user: |
2302 | - return HTTPForbidden(request=req) |
2303 | - else: |
2304 | - return HTTPUnauthorized(request=req) |
2305 | - |
2306 | - def handle(self, env, start_response): |
2307 | - """ |
2308 | - WSGI entry point for auth requests (ones that match the |
2309 | - self.auth_prefix). |
2310 | - Wraps env in webob.Request object and passes it down. |
2311 | - |
2312 | - :param env: WSGI environment dictionary |
2313 | - :param start_response: WSGI callable |
2314 | - """ |
2315 | - try: |
2316 | - req = Request(env) |
2317 | - if self.auth_prefix: |
2318 | - req.path_info_pop() |
2319 | - req.bytes_transferred = '-' |
2320 | - req.client_disconnect = False |
2321 | - if 'x-storage-token' in req.headers and \ |
2322 | - 'x-auth-token' not in req.headers: |
2323 | - req.headers['x-auth-token'] = req.headers['x-storage-token'] |
2324 | - if 'eventlet.posthooks' in env: |
2325 | - env['eventlet.posthooks'].append( |
2326 | - (self.posthooklogger, (req,), {})) |
2327 | - return self.handle_request(req)(env, start_response) |
2328 | - else: |
2329 | - # Lack of posthook support means that we have to log on the |
2330 | - # start of the response, rather than after all the data has |
2331 | - # been sent. This prevents logging client disconnects |
2332 | - # differently than full transmissions. |
2333 | - response = self.handle_request(req)(env, start_response) |
2334 | - self.posthooklogger(env, req) |
2335 | - return response |
2336 | - except (Exception, TimeoutError): |
2337 | - print "EXCEPTION IN handle: %s: %s" % (format_exc(), env) |
2338 | - start_response('500 Server Error', |
2339 | - [('Content-Type', 'text/plain')]) |
2340 | - return ['Internal server error.\n'] |
2341 | - |
2342 | - def handle_request(self, req): |
2343 | - """ |
2344 | - Entry point for auth requests (ones that match the self.auth_prefix). |
2345 | - Should return a WSGI-style callable (such as webob.Response). |
2346 | - |
2347 | - :param req: webob.Request object |
2348 | - """ |
2349 | - req.start_time = time() |
2350 | - handler = None |
2351 | - try: |
2352 | - version, account, user, _junk = split_path(req.path_info, |
2353 | - minsegs=1, maxsegs=4, rest_with_last=True) |
2354 | - except ValueError: |
2355 | - return HTTPNotFound(request=req) |
2356 | - if version in ('v1', 'v1.0', 'auth'): |
2357 | - if req.method == 'GET': |
2358 | - handler = self.handle_get_token |
2359 | - elif version == 'v2': |
2360 | - req.path_info_pop() |
2361 | - if req.method == 'GET': |
2362 | - if not account and not user: |
2363 | - handler = self.handle_get_reseller |
2364 | - elif account: |
2365 | - if not user: |
2366 | - handler = self.handle_get_account |
2367 | - elif account == '.token': |
2368 | - req.path_info_pop() |
2369 | - handler = self.handle_validate_token |
2370 | - else: |
2371 | - handler = self.handle_get_user |
2372 | - elif req.method == 'PUT': |
2373 | - if not user: |
2374 | - handler = self.handle_put_account |
2375 | - else: |
2376 | - handler = self.handle_put_user |
2377 | - elif req.method == 'DELETE': |
2378 | - if not user: |
2379 | - handler = self.handle_delete_account |
2380 | - else: |
2381 | - handler = self.handle_delete_user |
2382 | - elif req.method == 'POST': |
2383 | - if account == '.prep': |
2384 | - handler = self.handle_prep |
2385 | - elif user == '.services': |
2386 | - handler = self.handle_set_services |
2387 | - if not handler: |
2388 | - req.response = HTTPBadRequest(request=req) |
2389 | - else: |
2390 | - req.response = handler(req) |
2391 | - return req.response |
2392 | - |
2393 | - def handle_prep(self, req): |
2394 | - """ |
2395 | - Handles the POST v2/.prep call for preparing the backing store Swift |
2396 | - cluster for use with the auth subsystem. Can only be called by |
2397 | - .super_admin. |
2398 | - |
2399 | - :param req: The webob.Request to process. |
2400 | - :returns: webob.Response, 204 on success |
2401 | - """ |
2402 | - if not self.is_super_admin(req): |
2403 | - return HTTPForbidden(request=req) |
2404 | - path = quote('/v1/%s' % self.auth_account) |
2405 | - resp = self.make_request(req.environ, 'PUT', |
2406 | - path).get_response(self.app) |
2407 | - if resp.status_int // 100 != 2: |
2408 | - raise Exception('Could not create the main auth account: %s %s' % |
2409 | - (path, resp.status)) |
2410 | - path = quote('/v1/%s/.account_id' % self.auth_account) |
2411 | - resp = self.make_request(req.environ, 'PUT', |
2412 | - path).get_response(self.app) |
2413 | - if resp.status_int // 100 != 2: |
2414 | - raise Exception('Could not create container: %s %s' % |
2415 | - (path, resp.status)) |
2416 | - for container in xrange(16): |
2417 | - path = quote('/v1/%s/.token_%x' % (self.auth_account, container)) |
2418 | - resp = self.make_request(req.environ, 'PUT', |
2419 | - path).get_response(self.app) |
2420 | - if resp.status_int // 100 != 2: |
2421 | - raise Exception('Could not create container: %s %s' % |
2422 | - (path, resp.status)) |
2423 | - return HTTPNoContent(request=req) |
2424 | - |
2425 | - def handle_get_reseller(self, req): |
2426 | - """ |
2427 | - Handles the GET v2 call for getting general reseller information |
2428 | - (currently just a list of accounts). Can only be called by a |
2429 | - .reseller_admin. |
2430 | - |
2431 | - On success, a JSON dictionary will be returned with a single `accounts` |
2432 | - key whose value is list of dicts. Each dict represents an account and |
2433 | - currently only contains the single key `name`. For example:: |
2434 | - |
2435 | - {"accounts": [{"name": "reseller"}, {"name": "test"}, |
2436 | - {"name": "test2"}]} |
2437 | - |
2438 | - :param req: The webob.Request to process. |
2439 | - :returns: webob.Response, 2xx on success with a JSON dictionary as |
2440 | - explained above. |
2441 | - """ |
2442 | - if not self.is_reseller_admin(req): |
2443 | - return HTTPForbidden(request=req) |
2444 | - listing = [] |
2445 | - marker = '' |
2446 | - while True: |
2447 | - path = '/v1/%s?format=json&marker=%s' % (quote(self.auth_account), |
2448 | - quote(marker)) |
2449 | - resp = self.make_request(req.environ, 'GET', |
2450 | - path).get_response(self.app) |
2451 | - if resp.status_int // 100 != 2: |
2452 | - raise Exception('Could not list main auth account: %s %s' % |
2453 | - (path, resp.status)) |
2454 | - sublisting = json.loads(resp.body) |
2455 | - if not sublisting: |
2456 | - break |
2457 | - for container in sublisting: |
2458 | - if container['name'][0] != '.': |
2459 | - listing.append({'name': container['name']}) |
2460 | - marker = sublisting[-1]['name'] |
2461 | - return Response(body=json.dumps({'accounts': listing})) |
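`handle_get_reseller` above (and several other handlers in this file) walk a container listing with marker-based pagination: request a page, advance the marker to the last name seen, stop on an empty page, and skip dot-prefixed system entries. A sketch of that loop with a fake two-page backend standing in for the `GET ?format=json&marker=...` request:

```python
def list_all(get_page):
    """Marker-based pagination as in handle_get_reseller: keep fetching
    with marker = last name seen until an empty page comes back."""
    listing = []
    marker = ''
    while True:
        sublisting = get_page(marker)
        if not sublisting:
            break
        for item in sublisting:
            if item['name'][0] != '.':    # skip hidden/system entries
                listing.append({'name': item['name']})
        marker = sublisting[-1]['name']
    return listing

# Fake paginated backend, keyed by marker, for illustration only.
pages = {'': [{'name': '.auth'}, {'name': 'reseller'}],
         'reseller': [{'name': 'test'}],
         'test': []}
print(list_all(pages.get))  # [{'name': 'reseller'}, {'name': 'test'}]
```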
2462 | - |
2463 | - def handle_get_account(self, req): |
2464 | - """ |
2465 | - Handles the GET v2/<account> call for getting account information. |
2466 | - Can only be called by an account .admin. |
2467 | - |
2468 | - On success, a JSON dictionary will be returned containing the keys |
2469 | - `account_id`, `services`, and `users`. The `account_id` is the value |
2470 | - used when creating service accounts. The `services` value is a dict as |
2471 | - described in the :func:`handle_get_token` call. The `users` value is a |
2472 | - list of dicts, each dict representing a user and currently only |
2473 | - containing the single key `name`. For example:: |
2474 | - |
2475 | - {"account_id": "AUTH_018c3946-23f8-4efb-a8fb-b67aae8e4162", |
2476 | - "services": {"storage": {"default": "local", |
2477 | - "local": "http://127.0.0.1:8080/v1/AUTH_018c3946"}}, |
2478 | - "users": [{"name": "tester"}, {"name": "tester3"}]} |
2479 | - |
2480 | - :param req: The webob.Request to process. |
2481 | - :returns: webob.Response, 2xx on success with a JSON dictionary as |
2482 | - explained above. |
2483 | - """ |
2484 | - account = req.path_info_pop() |
2485 | - if req.path_info or not account or account[0] == '.': |
2486 | - return HTTPBadRequest(request=req) |
2487 | - if not self.is_account_admin(req, account): |
2488 | - return HTTPForbidden(request=req) |
2489 | - path = quote('/v1/%s/%s/.services' % (self.auth_account, account)) |
2490 | - resp = self.make_request(req.environ, 'GET', |
2491 | - path).get_response(self.app) |
2492 | - if resp.status_int == 404: |
2493 | - return HTTPNotFound(request=req) |
2494 | - if resp.status_int // 100 != 2: |
2495 | - raise Exception('Could not obtain the .services object: %s %s' % |
2496 | - (path, resp.status)) |
2497 | - services = json.loads(resp.body) |
2498 | - listing = [] |
2499 | - marker = '' |
2500 | - while True: |
2501 | - path = '/v1/%s?format=json&marker=%s' % (quote('%s/%s' % |
2502 | - (self.auth_account, account)), quote(marker)) |
2503 | - resp = self.make_request(req.environ, 'GET', |
2504 | - path).get_response(self.app) |
2505 | - if resp.status_int == 404: |
2506 | - return HTTPNotFound(request=req) |
2507 | - if resp.status_int // 100 != 2: |
2508 | - raise Exception('Could not list in main auth account: %s %s' % |
2509 | - (path, resp.status)) |
2510 | - account_id = resp.headers['X-Container-Meta-Account-Id'] |
2511 | - sublisting = json.loads(resp.body) |
2512 | - if not sublisting: |
2513 | - break |
2514 | - for obj in sublisting: |
2515 | - if obj['name'][0] != '.': |
2516 | - listing.append({'name': obj['name']}) |
2517 | - marker = sublisting[-1]['name'] |
2518 | - return Response(body=json.dumps({'account_id': account_id, |
2519 | - 'services': services, 'users': listing})) |
2520 | - |
2521 | - def handle_set_services(self, req): |
2522 | - """ |
2523 | - Handles the POST v2/<account>/.services call for setting services |
2524 | - information. Can only be called by a reseller .admin. |
2525 | - |
2526 | - In the :func:`handle_get_account` (GET v2/<account>) call, a section of |
2527 | - the returned JSON dict is `services`. This section looks something like |
2528 | - this:: |
2529 | - |
2530 | - "services": {"storage": {"default": "local", |
2531 | - "local": "http://127.0.0.1:8080/v1/AUTH_018c3946"}} |
2532 | - |
2533 | - Making use of this section is described in :func:`handle_get_token`. |
2534 | - |
2535 | - This function allows setting values within this section for the |
2536 | - <account>, allowing the addition of new service end points or updating |
2537 | - existing ones. |
2538 | - |
2539 | - The body of the POST request should contain a JSON dict with the |
2540 | - following format:: |
2541 | - |
2542 | - {"service_name": {"end_point_name": "end_point_value"}} |
2543 | - |
2544 | - There can be multiple services and multiple end points in the same |
2545 | - call. |
2546 | - |
2547 | - Any new services or end points will be added to the existing set of |
2548 | - services and end points. Any existing services with the same service |
2549 | - name will be merged with the new end points. Any existing end points |
2550 | - with the same end point name will have their values updated. |
2551 | - |
2552 | - The updated services dictionary will be returned on success. |
2553 | - |
2554 | - :param req: The webob.Request to process. |
2555 | - :returns: webob.Response, 2xx on success with the updated services JSON |

2556 | - dict as described above |
2557 | - """ |
2558 | - if not self.is_reseller_admin(req): |
2559 | - return HTTPForbidden(request=req) |
2560 | - account = req.path_info_pop() |
2561 | - if req.path_info != '/.services' or not account or account[0] == '.': |
2562 | - return HTTPBadRequest(request=req) |
2563 | - try: |
2564 | - new_services = json.loads(req.body) |
2565 | - except ValueError, err: |
2566 | - return HTTPBadRequest(body=str(err)) |
2567 | - # Get the current services information |
2568 | - path = quote('/v1/%s/%s/.services' % (self.auth_account, account)) |
2569 | - resp = self.make_request(req.environ, 'GET', |
2570 | - path).get_response(self.app) |
2571 | - if resp.status_int == 404: |
2572 | - return HTTPNotFound(request=req) |
2573 | - if resp.status_int // 100 != 2: |
2574 | - raise Exception('Could not obtain services info: %s %s' % |
2575 | - (path, resp.status)) |
2576 | - services = json.loads(resp.body) |
2577 | - for new_service, value in new_services.iteritems(): |
2578 | - if new_service in services: |
2579 | - services[new_service].update(value) |
2580 | - else: |
2581 | - services[new_service] = value |
2582 | - # Save the new services information |
2583 | - services = json.dumps(services) |
2584 | - resp = self.make_request(req.environ, 'PUT', path, |
2585 | - services).get_response(self.app) |
2586 | - if resp.status_int // 100 != 2: |
2587 | - raise Exception('Could not save .services object: %s %s' % |
2588 | - (path, resp.status)) |
2589 | - return Response(request=req, body=services) |
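The merge loop in `handle_set_services` above adds any new service names outright and, for an existing service, updates its end points in place (so same-named end points are overwritten and others are kept). A small Python 3 sketch of those merge semantics, using the example `services` shape from the docstring (`dict.items()` replaces the Python 2 `iteritems()`):

```python
def merge_services(services, new_services):
    """Merge semantics from handle_set_services: new service names are
    added; existing services have their end points updated in place."""
    for name, end_points in new_services.items():
        if name in services:
            services[name].update(end_points)
        else:
            services[name] = end_points
    return services

services = {'storage': {'default': 'local',
                        'local': 'http://127.0.0.1:8080/v1/AUTH_018c3946'}}
# 'backup' end point and 'cdn' service here are hypothetical examples.
merge_services(services, {'storage': {'backup': 'http://backup:8080/v1/x'},
                          'cdn': {'default': 'edge'}})
print(sorted(services['storage']))  # ['backup', 'default', 'local']
```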
2590 | - |
2591 | - def handle_put_account(self, req): |
2592 | - """ |
2593 | - Handles the PUT v2/<account> call for adding an account to the auth |
2594 | - system. Can only be called by a .reseller_admin. |
2595 | - |
2596 | - By default, a newly created UUID4 will be used with the reseller prefix |
2597 | - as the account id used when creating corresponding service accounts. |
2598 | - However, you can provide an X-Account-Suffix header to replace the |
2599 | - UUID4 part. |
2600 | - |
2601 | - :param req: The webob.Request to process. |
2602 | - :returns: webob.Response, 2xx on success. |
2603 | - """ |
2604 | - if not self.is_reseller_admin(req): |
2605 | - return HTTPForbidden(request=req) |
2606 | - account = req.path_info_pop() |
2607 | - if req.path_info or not account or account[0] == '.': |
2608 | - return HTTPBadRequest(request=req) |
2609 | - # Ensure the container in the main auth account exists (this |
2610 | - # container represents the new account) |
2611 | - path = quote('/v1/%s/%s' % (self.auth_account, account)) |
2612 | - resp = self.make_request(req.environ, 'HEAD', |
2613 | - path).get_response(self.app) |
2614 | - if resp.status_int == 404: |
2615 | - resp = self.make_request(req.environ, 'PUT', |
2616 | - path).get_response(self.app) |
2617 | - if resp.status_int // 100 != 2: |
2618 | - raise Exception('Could not create account within main auth ' |
2619 | - 'account: %s %s' % (path, resp.status)) |
2620 | - elif resp.status_int // 100 == 2: |
2621 | - if 'x-container-meta-account-id' in resp.headers: |
2622 | - # Account was already created |
2623 | - return HTTPAccepted(request=req) |
2624 | - else: |
2625 | - raise Exception('Could not verify account within main auth ' |
2626 | - 'account: %s %s' % (path, resp.status)) |
2627 | - account_suffix = req.headers.get('x-account-suffix') |
2628 | - if not account_suffix: |
2629 | - account_suffix = str(uuid4()) |
2630 | - # Create the new account in the Swift cluster |
2631 | - path = quote('%s/%s%s' % (self.dsc_parsed2.path, |
2632 | - self.reseller_prefix, account_suffix)) |
2633 | - try: |
2634 | - conn = self.get_conn() |
2635 | - conn.request('PUT', path, |
2636 | - headers={'X-Auth-Token': self.get_itoken(req.environ)}) |
2637 | - resp = conn.getresponse() |
2638 | - resp.read() |
2639 | - if resp.status // 100 != 2: |
2640 | - raise Exception('Could not create account on the Swift ' |
2641 | - 'cluster: %s %s %s' % (path, resp.status, resp.reason)) |
2642 | - except (Exception, TimeoutError): |
2643 | - self.logger.error(_('ERROR: Exception while trying to communicate ' |
2644 | - 'with %(scheme)s://%(host)s:%(port)s/%(path)s'), |
2645 | - {'scheme': self.dsc_parsed2.scheme, |
2646 | - 'host': self.dsc_parsed2.hostname, |
2647 | - 'port': self.dsc_parsed2.port, 'path': path}) |
2648 | - raise |
2649 | - # Record the mapping from account id back to account name |
2650 | - path = quote('/v1/%s/.account_id/%s%s' % |
2651 | - (self.auth_account, self.reseller_prefix, account_suffix)) |
2652 | - resp = self.make_request(req.environ, 'PUT', path, |
2653 | - account).get_response(self.app) |
2654 | - if resp.status_int // 100 != 2: |
2655 | - raise Exception('Could not create account id mapping: %s %s' % |
2656 | - (path, resp.status)) |
2657 | - # Record the cluster url(s) for the account |
2658 | - path = quote('/v1/%s/%s/.services' % (self.auth_account, account)) |
2659 | - services = {'storage': {}} |
2660 | - services['storage'][self.dsc_name] = '%s/%s%s' % (self.dsc_url, |
2661 | - self.reseller_prefix, account_suffix) |
2662 | - services['storage']['default'] = self.dsc_name |
2663 | - resp = self.make_request(req.environ, 'PUT', path, |
2664 | - json.dumps(services)).get_response(self.app) |
2665 | - if resp.status_int // 100 != 2: |
2666 | - raise Exception('Could not create .services object: %s %s' % |
2667 | - (path, resp.status)) |
2668 | - # Record the mapping from account name to the account id |
2669 | - path = quote('/v1/%s/%s' % (self.auth_account, account)) |
2670 | - resp = self.make_request(req.environ, 'POST', path, |
2671 | - headers={'X-Container-Meta-Account-Id': '%s%s' % |
2672 | - (self.reseller_prefix, account_suffix)}).get_response(self.app) |
2673 | - if resp.status_int // 100 != 2: |
2674 | - raise Exception('Could not record the account id on the account: ' |
2675 | - '%s %s' % (path, resp.status)) |
2676 | - return HTTPCreated(request=req) |
2677 | - |
2678 | - def handle_delete_account(self, req): |
2679 | - """ |
2680 | - Handles the DELETE v2/<account> call for removing an account from the |
2681 | - auth system. Can only be called by a .reseller_admin. |
2682 | - |
2683 | - :param req: The webob.Request to process. |
2684 | - :returns: webob.Response, 2xx on success. |
2685 | - """ |
2686 | - if not self.is_reseller_admin(req): |
2687 | - return HTTPForbidden(request=req) |
2688 | - account = req.path_info_pop() |
2689 | - if req.path_info or not account or account[0] == '.': |
2690 | - return HTTPBadRequest(request=req) |
2691 | - # Make sure the account has no users and get the account_id |
2692 | - marker = '' |
2693 | - while True: |
2694 | - path = '/v1/%s?format=json&marker=%s' % (quote('%s/%s' % |
2695 | - (self.auth_account, account)), quote(marker)) |
2696 | - resp = self.make_request(req.environ, 'GET', |
2697 | - path).get_response(self.app) |
2698 | - if resp.status_int == 404: |
2699 | - return HTTPNotFound(request=req) |
2700 | - if resp.status_int // 100 != 2: |
2701 | - raise Exception('Could not list in main auth account: %s %s' % |
2702 | - (path, resp.status)) |
2703 | - account_id = resp.headers['x-container-meta-account-id'] |
2704 | - sublisting = json.loads(resp.body) |
2705 | - if not sublisting: |
2706 | - break |
2707 | - for obj in sublisting: |
2708 | - if obj['name'][0] != '.': |
2709 | - return HTTPConflict(request=req) |
2710 | - marker = sublisting[-1]['name'] |
2711 | - # Obtain the listing of services the account is on. |
2712 | - path = quote('/v1/%s/%s/.services' % (self.auth_account, account)) |
2713 | - resp = self.make_request(req.environ, 'GET', |
2714 | - path).get_response(self.app) |
2715 | - if resp.status_int // 100 != 2 and resp.status_int != 404: |
2716 | - raise Exception('Could not obtain .services object: %s %s' % |
2717 | - (path, resp.status)) |
2718 | - if resp.status_int // 100 == 2: |
2719 | - services = json.loads(resp.body) |
2720 | - # Delete the account on each cluster it is on. |
2721 | - deleted_any = False |
2722 | - for name, url in services['storage'].iteritems(): |
2723 | - if name != 'default': |
2724 | - parsed = urlparse(url) |
2725 | - conn = self.get_conn(parsed) |
2726 | - conn.request('DELETE', parsed.path, |
2727 | - headers={'X-Auth-Token': self.get_itoken(req.environ)}) |
2728 | - resp = conn.getresponse() |
2729 | - resp.read() |
2730 | - if resp.status == 409: |
2731 | - if deleted_any: |
2732 | - raise Exception('Managed to delete one or more ' |
2733 | - 'service end points, but failed with: ' |
2734 | - '%s %s %s' % (url, resp.status, resp.reason)) |
2735 | - else: |
2736 | - return HTTPConflict(request=req) |
2737 | - if resp.status // 100 != 2 and resp.status != 404: |
2738 | - raise Exception('Could not delete account on the ' |
2739 | - 'Swift cluster: %s %s %s' % |
2740 | - (url, resp.status, resp.reason)) |
2741 | - deleted_any = True |
2742 | - # Delete the .services object itself. |
2743 | - path = quote('/v1/%s/%s/.services' % |
2744 | - (self.auth_account, account)) |
2745 | - resp = self.make_request(req.environ, 'DELETE', |
2746 | - path).get_response(self.app) |
2747 | - if resp.status_int // 100 != 2 and resp.status_int != 404: |
2748 | - raise Exception('Could not delete .services object: %s %s' % |
2749 | - (path, resp.status)) |
2750 | - # Delete the account id mapping for the account. |
2751 | - path = quote('/v1/%s/.account_id/%s' % |
2752 | - (self.auth_account, account_id)) |
2753 | - resp = self.make_request(req.environ, 'DELETE', |
2754 | - path).get_response(self.app) |
2755 | - if resp.status_int // 100 != 2 and resp.status_int != 404: |
2756 | - raise Exception('Could not delete account id mapping: %s %s' % |
2757 | - (path, resp.status)) |
2758 | - # Delete the account marker itself. |
2759 | - path = quote('/v1/%s/%s' % (self.auth_account, account)) |
2760 | - resp = self.make_request(req.environ, 'DELETE', |
2761 | - path).get_response(self.app) |
2762 | - if resp.status_int // 100 != 2 and resp.status_int != 404: |
2763 | - raise Exception('Could not delete account marked: %s %s' % |
2764 | - (path, resp.status)) |
2765 | - return HTTPNoContent(request=req) |
2766 | - |
2767 | - def handle_get_user(self, req): |
2768 | - """ |
2769 | - Handles the GET v2/<account>/<user> call for getting user information. |
2770 | - Can only be called by an account .admin. |
2771 | - |
2772 | - On success, a JSON dict will be returned as described:: |
2773 | - |
2774 | - {"groups": [ # List of groups the user is a member of |
2775 | - {"name": "<act>:<usr>"}, |
2776 | - # The first group is a unique user identifier |
2777 | - {"name": "<account>"}, |
2778 | - # The second group is the auth account name |
2779 | - {"name": "<additional-group>"} |
2780 | - # There may be additional groups, .admin being a special |
2781 | - # group indicating an account admin and .reseller_admin |
2782 | - # indicating a reseller admin. |
2783 | - ], |
2784 | - "auth": "plaintext:<key>" |
2785 | - # The auth-type and key for the user; currently only plaintext is |
2786 | - # implemented. |
2787 | - } |
2788 | - |
2789 | - For example:: |
2790 | - |
2791 | - {"groups": [{"name": "test:tester"}, {"name": "test"}, |
2792 | - {"name": ".admin"}], |
2793 | - "auth": "plaintext:testing"} |
2794 | - |
2795 | - If the <user> in the request is the special user `.groups`, the JSON |
2796 | - dict will contain a single key of `groups` whose value is a list of |
2797 | - dicts representing the active groups within the account. Each dict |
2798 | - currently has the single key `name`. For example:: |
2799 | - |
2800 | - {"groups": [{"name": ".admin"}, {"name": "test"}, |
2801 | - {"name": "test:tester"}, {"name": "test:tester3"}]} |
2802 | - |
2803 | - :param req: The webob.Request to process. |
2804 | - :returns: webob.Response, 2xx on success with a JSON dictionary as |
2805 | - explained above. |
2806 | - """ |
2807 | - account = req.path_info_pop() |
2808 | - user = req.path_info_pop() |
2809 | - if req.path_info or not account or account[0] == '.' or not user or \ |
2810 | - (user[0] == '.' and user != '.groups'): |
2811 | - return HTTPBadRequest(request=req) |
2812 | - if not self.is_account_admin(req, account): |
2813 | - return HTTPForbidden(request=req) |
2814 | - if user == '.groups': |
2815 | - # TODO: This could be very slow for accounts with a really large |
2816 | - # number of users. Speed could be improved by concurrently |
2817 | - # requesting user group information. Then again, I don't *know* |
2818 | - # it's slow for `normal` use cases, so testing should be done. |
2819 | - groups = set() |
2820 | - marker = '' |
2821 | - while True: |
2822 | - path = '/v1/%s?format=json&marker=%s' % (quote('%s/%s' % |
2823 | - (self.auth_account, account)), quote(marker)) |
2824 | - resp = self.make_request(req.environ, 'GET', |
2825 | - path).get_response(self.app) |
2826 | - if resp.status_int == 404: |
2827 | - return HTTPNotFound(request=req) |
2828 | - if resp.status_int // 100 != 2: |
2829 | - raise Exception('Could not list in main auth account: ' |
2830 | - '%s %s' % (path, resp.status)) |
2831 | - sublisting = json.loads(resp.body) |
2832 | - if not sublisting: |
2833 | - break |
2834 | - for obj in sublisting: |
2835 | - if obj['name'][0] != '.': |
2836 | - path = quote('/v1/%s/%s/%s' % (self.auth_account, |
2837 | - account, obj['name'])) |
2838 | - resp = self.make_request(req.environ, 'GET', |
2839 | - path).get_response(self.app) |
2840 | - if resp.status_int // 100 != 2: |
2841 | - raise Exception('Could not retrieve user object: ' |
2842 | - '%s %s' % (path, resp.status)) |
2843 | - groups.update(g['name'] |
2844 | - for g in json.loads(resp.body)['groups']) |
2845 | - marker = sublisting[-1]['name'] |
2846 | - body = json.dumps({'groups': |
2847 | - [{'name': g} for g in sorted(groups)]}) |
2848 | - else: |
2849 | - path = quote('/v1/%s/%s/%s' % (self.auth_account, account, user)) |
2850 | - resp = self.make_request(req.environ, 'GET', |
2851 | - path).get_response(self.app) |
2852 | - if resp.status_int == 404: |
2853 | - return HTTPNotFound(request=req) |
2854 | - if resp.status_int // 100 != 2: |
2855 | - raise Exception('Could not retrieve user object: %s %s' % |
2856 | - (path, resp.status)) |
2857 | - body = resp.body |
2858 | - display_groups = [g['name'] for g in json.loads(body)['groups']] |
2859 | - if ('.admin' in display_groups and |
2860 | - not self.is_reseller_admin(req)) or \ |
2861 | - ('.reseller_admin' in display_groups and |
2862 | - not self.is_super_admin(req)): |
2863 | - return HTTPForbidden(request=req) |
2864 | - return Response(body=body) |
2865 | - |
2866 | - def handle_put_user(self, req): |
2867 | - """ |
2868 | - Handles the PUT v2/<account>/<user> call for adding a user to an |
2869 | - account. |
2870 | - |
2871 | - X-Auth-User-Key represents the user's key, X-Auth-User-Admin may be set |
2872 | - to `true` to create an account .admin, and X-Auth-User-Reseller-Admin |
2873 | - may be set to `true` to create a .reseller_admin. |
2874 | - |
2875 | - Can only be called by an account .admin unless the user is to be a |
2876 | - .reseller_admin, in which case the request must be by .super_admin. |
2877 | - |
2878 | - :param req: The webob.Request to process. |
2879 | - :returns: webob.Response, 2xx on success. |
2880 | - """ |
2881 | - # Validate path info |
2882 | - account = req.path_info_pop() |
2883 | - user = req.path_info_pop() |
2884 | - key = req.headers.get('x-auth-user-key') |
2885 | - admin = req.headers.get('x-auth-user-admin') == 'true' |
2886 | - reseller_admin = \ |
2887 | - req.headers.get('x-auth-user-reseller-admin') == 'true' |
2888 | - if reseller_admin: |
2889 | - admin = True |
2890 | - if req.path_info or not account or account[0] == '.' or not user or \ |
2891 | - user[0] == '.' or not key: |
2892 | - return HTTPBadRequest(request=req) |
2893 | - if reseller_admin: |
2894 | - if not self.is_super_admin(req): |
2895 | - return HTTPForbidden(request=req) |
2896 | - elif not self.is_account_admin(req, account): |
2897 | - return HTTPForbidden(request=req) |
2898 | - |
2899 | - path = quote('/v1/%s/%s' % (self.auth_account, account)) |
2900 | - resp = self.make_request(req.environ, 'HEAD', |
2901 | - path).get_response(self.app) |
2902 | - if resp.status_int // 100 != 2: |
2903 | - raise Exception('Could not retrieve account id value: %s %s' % |
2904 | - (path, resp.status)) |
2905 | - headers = {'X-Object-Meta-Account-Id': |
2906 | - resp.headers['x-container-meta-account-id']} |
2907 | - # Create the object in the main auth account (this object represents |
2908 | - # the user) |
2909 | - path = quote('/v1/%s/%s/%s' % (self.auth_account, account, user)) |
2910 | - groups = ['%s:%s' % (account, user), account] |
2911 | - if admin: |
2912 | - groups.append('.admin') |
2913 | - if reseller_admin: |
2914 | - groups.append('.reseller_admin') |
2915 | - resp = self.make_request(req.environ, 'PUT', path, |
2916 | - json.dumps({'auth': 'plaintext:%s' % key, |
2917 | - 'groups': [{'name': g} for g in groups]}), |
2918 | - headers=headers).get_response(self.app) |
2919 | - if resp.status_int == 404: |
2920 | - return HTTPNotFound(request=req) |
2921 | - if resp.status_int // 100 != 2: |
2922 | - raise Exception('Could not create user object: %s %s' % |
2923 | - (path, resp.status)) |
2924 | - return HTTPCreated(request=req) |
2925 | - |
2926 | - def handle_delete_user(self, req): |
2927 | - """ |
2928 | - Handles the DELETE v2/<account>/<user> call for deleting a user from an |
2929 | - account. |
2930 | - |
2931 | - Can only be called by an account .admin. |
2932 | - |
2933 | - :param req: The webob.Request to process. |
2934 | - :returns: webob.Response, 2xx on success. |
2935 | - """ |
2936 | - # Validate path info |
2937 | - account = req.path_info_pop() |
2938 | - user = req.path_info_pop() |
2939 | - if req.path_info or not account or account[0] == '.' or not user or \ |
2940 | - user[0] == '.': |
2941 | - return HTTPBadRequest(request=req) |
2942 | - if not self.is_account_admin(req, account): |
2943 | - return HTTPForbidden(request=req) |
2944 | - # Delete the user's existing token, if any. |
2945 | - path = quote('/v1/%s/%s/%s' % (self.auth_account, account, user)) |
2946 | - resp = self.make_request(req.environ, 'HEAD', |
2947 | - path).get_response(self.app) |
2948 | - if resp.status_int == 404: |
2949 | - return HTTPNotFound(request=req) |
2950 | - elif resp.status_int // 100 != 2: |
2951 | - raise Exception('Could not obtain user details: %s %s' % |
2952 | - (path, resp.status)) |
2953 | - candidate_token = resp.headers.get('x-object-meta-auth-token') |
2954 | - if candidate_token: |
2955 | - path = quote('/v1/%s/.token_%s/%s' % |
2956 | - (self.auth_account, candidate_token[-1], candidate_token)) |
2957 | - resp = self.make_request(req.environ, 'DELETE', |
2958 | - path).get_response(self.app) |
2959 | - if resp.status_int // 100 != 2 and resp.status_int != 404: |
2960 | - raise Exception('Could not delete possibly existing token: ' |
2961 | - '%s %s' % (path, resp.status)) |
2962 | - # Delete the user entry itself. |
2963 | - path = quote('/v1/%s/%s/%s' % (self.auth_account, account, user)) |
2964 | - resp = self.make_request(req.environ, 'DELETE', |
2965 | - path).get_response(self.app) |
2966 | - if resp.status_int // 100 != 2 and resp.status_int != 404: |
2967 | - raise Exception('Could not delete the user object: %s %s' % |
2968 | - (path, resp.status)) |
2969 | - return HTTPNoContent(request=req) |
2970 | - |
2971 | - def handle_get_token(self, req): |
2972 | - """ |
2973 | - Handles the various `request for token and service end point(s)` calls. |
2974 | - There are various formats to support the various auth servers in the |
2975 | - past. Examples:: |
2976 | - |
2977 | - GET <auth-prefix>/v1/<act>/auth |
2978 | - X-Auth-User: <act>:<usr> or X-Storage-User: <usr> |
2979 | - X-Auth-Key: <key> or X-Storage-Pass: <key> |
2980 | - GET <auth-prefix>/auth |
2981 | - X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr> |
2982 | - X-Auth-Key: <key> or X-Storage-Pass: <key> |
2983 | - GET <auth-prefix>/v1.0 |
2984 | - X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr> |
2985 | - X-Auth-Key: <key> or X-Storage-Pass: <key> |
2986 | - |
2987 | - On successful authentication, the response will have X-Auth-Token and |
2988 | - X-Storage-Token set to the token to use with Swift and X-Storage-URL |
2989 | - set to the URL to the default Swift cluster to use. |
2990 | - |
2991 | - The response body will be set to the account's services JSON object as |
2992 | - described here:: |
2993 | - |
2994 | - {"storage": { # Represents the Swift storage service end points |
2995 | - "default": "cluster1", # Indicates which cluster is the default |
2996 | - "cluster1": "<URL to use with Swift>", |
2997 | - # A Swift cluster that can be used with this account, |
2998 | - # "cluster1" is the name of the cluster which is usually a |
2999 | - # location indicator (like "dfw" for a datacenter region). |
3000 | - "cluster2": "<URL to use with Swift>" |
3001 | - # Another Swift cluster that can be used with this account, |
3002 | - # there will always be at least one Swift cluster to use or |
3003 | - # this whole "storage" dict won't be included at all. |
3004 | - }, |
3005 | - "servers": { # Represents the Nova server service end points |
3006 | - # Expected to be similar to the "storage" dict, but not |
3007 | - # implemented yet. |
3008 | - }, |
3009 | - # Possibly other service dicts, not implemented yet. |
3010 | - } |
3011 | - |
3012 | - :param req: The webob.Request to process. |
3013 | - :returns: webob.Response, 2xx on success with data set as explained |
3014 | - above. |
3015 | - """ |
3016 | - # Validate the request info |
3017 | - try: |
3018 | - pathsegs = split_path(req.path_info, minsegs=1, maxsegs=3, |
3019 | - rest_with_last=True) |
3020 | - except ValueError: |
3021 | - return HTTPNotFound(request=req) |
3022 | - if pathsegs[0] == 'v1' and pathsegs[2] == 'auth': |
3023 | - account = pathsegs[1] |
3024 | - user = req.headers.get('x-storage-user') |
3025 | - if not user: |
3026 | - user = req.headers.get('x-auth-user') |
3027 | - if not user or ':' not in user: |
3028 | - return HTTPUnauthorized(request=req) |
3029 | - account2, user = user.split(':', 1) |
3030 | - if account != account2: |
3031 | - return HTTPUnauthorized(request=req) |
3032 | - key = req.headers.get('x-storage-pass') |
3033 | - if not key: |
3034 | - key = req.headers.get('x-auth-key') |
3035 | - elif pathsegs[0] in ('auth', 'v1.0'): |
3036 | - user = req.headers.get('x-auth-user') |
3037 | - if not user: |
3038 | - user = req.headers.get('x-storage-user') |
3039 | - if not user or ':' not in user: |
3040 | - return HTTPUnauthorized(request=req) |
3041 | - account, user = user.split(':', 1) |
3042 | - key = req.headers.get('x-auth-key') |
3043 | - if not key: |
3044 | - key = req.headers.get('x-storage-pass') |
3045 | - else: |
3046 | - return HTTPBadRequest(request=req) |
3047 | - if not all((account, user, key)): |
3048 | - return HTTPUnauthorized(request=req) |
3049 | - if user == '.super_admin' and key == self.super_admin_key: |
3050 | - token = self.get_itoken(req.environ) |
3051 | - url = '%s/%s.auth' % (self.dsc_url, self.reseller_prefix) |
3052 | - return Response(request=req, |
3053 | - body=json.dumps({'storage': {'default': 'local', 'local': url}}), |
3054 | - headers={'x-auth-token': token, 'x-storage-token': token, |
3055 | - 'x-storage-url': url}) |
3056 | - # Authenticate user |
3057 | - path = quote('/v1/%s/%s/%s' % (self.auth_account, account, user)) |
3058 | - resp = self.make_request(req.environ, 'GET', |
3059 | - path).get_response(self.app) |
3060 | - if resp.status_int == 404: |
3061 | - return HTTPUnauthorized(request=req) |
3062 | - if resp.status_int // 100 != 2: |
3063 | - raise Exception('Could not obtain user details: %s %s' % |
3064 | - (path, resp.status)) |
3065 | - user_detail = json.loads(resp.body) |
3066 | - if not self.credentials_match(user_detail, key): |
3067 | - return HTTPUnauthorized(request=req) |
3068 | - # See if a token already exists and hasn't expired |
3069 | - token = None |
3070 | - candidate_token = resp.headers.get('x-object-meta-auth-token') |
3071 | - if candidate_token: |
3072 | - path = quote('/v1/%s/.token_%s/%s' % |
3073 | - (self.auth_account, candidate_token[-1], candidate_token)) |
3074 | - resp = self.make_request(req.environ, 'GET', |
3075 | - path).get_response(self.app) |
3076 | - if resp.status_int // 100 == 2: |
3077 | - token_detail = json.loads(resp.body) |
3078 | - if token_detail['expires'] > time(): |
3079 | - token = candidate_token |
3080 | - else: |
3081 | - self.make_request(req.environ, 'DELETE', |
3082 | - path).get_response(self.app) |
3083 | - elif resp.status_int != 404: |
3084 | - raise Exception('Could not detect whether a token already ' |
3085 | - 'exists: %s %s' % (path, resp.status)) |
3086 | - # Create a new token if one didn't exist |
3087 | - if not token: |
3088 | - # Retrieve account id, we'll save this in the token |
3089 | - path = quote('/v1/%s/%s' % (self.auth_account, account)) |
3090 | - resp = self.make_request(req.environ, 'HEAD', |
3091 | - path).get_response(self.app) |
3092 | - if resp.status_int // 100 != 2: |
3093 | - raise Exception('Could not retrieve account id value: ' |
3094 | - '%s %s' % (path, resp.status)) |
3095 | - account_id = \ |
3096 | - resp.headers['x-container-meta-account-id'] |
3097 | - # Generate new token |
3098 | - token = '%stk%s' % (self.reseller_prefix, uuid4().hex) |
3099 | - # Save token info |
3100 | - path = quote('/v1/%s/.token_%s/%s' % |
3101 | - (self.auth_account, token[-1], token)) |
3102 | - resp = self.make_request(req.environ, 'PUT', path, |
3103 | - json.dumps({'account': account, 'user': user, |
3104 | - 'account_id': account_id, |
3105 | - 'groups': user_detail['groups'], |
3106 | - 'expires': time() + self.token_life})).get_response(self.app) |
3107 | - if resp.status_int // 100 != 2: |
3108 | - raise Exception('Could not create new token: %s %s' % |
3109 | - (path, resp.status)) |
3110 | - # Record the token with the user info for future use. |
3111 | - path = quote('/v1/%s/%s/%s' % (self.auth_account, account, user)) |
3112 | - resp = self.make_request(req.environ, 'POST', path, |
3113 | - headers={'X-Object-Meta-Auth-Token': token} |
3114 | - ).get_response(self.app) |
3115 | - if resp.status_int // 100 != 2: |
3116 | - raise Exception('Could not save new token: %s %s' % |
3117 | - (path, resp.status)) |
3118 | - # Get the services information |
3119 | - path = quote('/v1/%s/%s/.services' % (self.auth_account, account)) |
3120 | - resp = self.make_request(req.environ, 'GET', |
3121 | - path).get_response(self.app) |
3122 | - if resp.status_int // 100 != 2: |
3123 | - raise Exception('Could not obtain services info: %s %s' % |
3124 | - (path, resp.status)) |
3125 | - detail = json.loads(resp.body) |
3126 | - url = detail['storage'][detail['storage']['default']] |
3127 | - return Response(request=req, body=resp.body, |
3128 | - headers={'x-auth-token': token, 'x-storage-token': token, |
3129 | - 'x-storage-url': url}) |
3130 | - |
3131 | - def handle_validate_token(self, req): |
3132 | - """ |
3133 | - Handles the GET v2/.token/<token> call for validating a token, usually |
3134 | - called by a service like Swift. |
3135 | - |
3136 | - On a successful validation, X-Auth-TTL will be set for how much longer |
3137 | - this token is valid and X-Auth-Groups will contain a comma separated |
3138 | - list of groups the user belongs to. |
3139 | - |
3140 | - The first group listed will be a unique identifier for the user the |
3141 | - token represents. |
3142 | - |
3143 | - .reseller_admin is a special group that indicates the user should be |
3144 | - allowed to do anything on any account. |
3145 | - |
3146 | - :param req: The webob.Request to process. |
3147 | - :returns: webob.Response, 2xx on success with data set as explained |
3148 | - above. |
3149 | - """ |
3150 | - token = req.path_info_pop() |
3151 | - if req.path_info or not token.startswith(self.reseller_prefix): |
3152 | - return HTTPBadRequest(request=req) |
3153 | - expires = groups = None |
3154 | - memcache_client = cache_from_env(req.environ) |
3155 | - if memcache_client: |
3156 | - memcache_key = '%s/auth/%s' % (self.reseller_prefix, token) |
3157 | - cached_auth_data = memcache_client.get(memcache_key) |
3158 | - if cached_auth_data: |
3159 | - expires, groups = cached_auth_data |
3160 | - if expires < time(): |
3161 | - groups = None |
3162 | - if not groups: |
3163 | - path = quote('/v1/%s/.token_%s/%s' % |
3164 | - (self.auth_account, token[-1], token)) |
3165 | - resp = self.make_request(req.environ, 'GET', |
3166 | - path).get_response(self.app) |
3167 | - if resp.status_int // 100 != 2: |
3168 | - return HTTPNotFound(request=req) |
3169 | - detail = json.loads(resp.body) |
3170 | - expires = detail['expires'] |
3171 | - if expires < time(): |
3172 | - self.make_request(req.environ, 'DELETE', |
3173 | - path).get_response(self.app) |
3174 | - return HTTPNotFound(request=req) |
3175 | - groups = [g['name'] for g in detail['groups']] |
3176 | - if '.admin' in groups: |
3177 | - groups.remove('.admin') |
3178 | - groups.append(detail['account_id']) |
3179 | - groups = ','.join(groups) |
3180 | - return HTTPNoContent(headers={'X-Auth-TTL': expires - time(), |
3181 | - 'X-Auth-Groups': groups}) |
3182 | - |
3183 | - def make_request(self, env, method, path, body=None, headers=None): |
3184 | - """ |
3185 | - Makes a new webob.Request based on the current env but with the |
3186 | - parameters specified. |
3187 | - |
3188 | - :param env: Current WSGI environment dictionary |
3189 | - :param method: HTTP method of new request |
3190 | - :param path: HTTP path of new request |
3191 | - :param body: HTTP body of new request; None by default |
3192 | - :param headers: Extra HTTP headers of new request; None by default |
3193 | - |
3194 | - :returns: webob.Request object |
3195 | - """ |
3196 | - newenv = {'REQUEST_METHOD': method, 'HTTP_USER_AGENT': 'Swauth'} |
3197 | - for name in ('swift.cache', 'HTTP_X_CF_TRANS_ID'): |
3198 | - if name in env: |
3199 | - newenv[name] = env[name] |
3200 | - if not headers: |
3201 | - headers = {} |
3202 | - if body: |
3203 | - return Request.blank(path, environ=newenv, body=body, |
3204 | - headers=headers) |
3205 | - else: |
3206 | - return Request.blank(path, environ=newenv, headers=headers) |
3207 | - |
3208 | - def get_conn(self, urlparsed=None): |
3209 | - """ |
3210 | - Returns an HTTPConnection based on the urlparse result given or the |
3211 | - default Swift cluster (internal url) urlparse result. |
3212 | - |
3213 | - :param urlparsed: The result from urlparse.urlparse or None to use the |
3214 | - default Swift cluster's value |
3215 | - """ |
3216 | - if not urlparsed: |
3217 | - urlparsed = self.dsc_parsed2 |
3218 | - if urlparsed.scheme == 'http': |
3219 | - return HTTPConnection(urlparsed.netloc) |
3220 | - else: |
3221 | - return HTTPSConnection(urlparsed.netloc) |
3222 | - |
3223 | - def get_itoken(self, env): |
3224 | - """ |
3225 | - Returns the current internal token to use for the auth system's own |
3226 | - actions with other services. Each process will create its own |
3227 | - itoken and the token will be deleted and recreated based on the |
3228 | - token_life configuration value. The itoken information is stored in |
3229 | - memcache because the auth process that is asked by Swift to validate |
3230 | - the token may not be the same as the auth process that created the |
3231 | - token. |
3232 | - """ |
3233 | - if not self.itoken or self.itoken_expires < time(): |
3234 | - self.itoken = '%sitk%s' % (self.reseller_prefix, uuid4().hex) |
3235 | - memcache_key = '%s/auth/%s' % (self.reseller_prefix, self.itoken) |
3236 | - self.itoken_expires = time() + self.token_life - 60 |
3237 | - memcache_client = cache_from_env(env) |
3238 | - if not memcache_client: |
3239 | - raise Exception( |
3240 | - 'No memcache set up; required for Swauth middleware') |
3241 | - memcache_client.set(memcache_key, (self.itoken_expires, |
3242 | - '.auth,.reseller_admin,%s.auth' % self.reseller_prefix), |
3243 | - timeout=self.token_life) |
3244 | - return self.itoken |
3245 | - |
3246 | - def get_admin_detail(self, req): |
3247 | - """ |
3248 | - Returns the dict for the user specified as the admin in the request |
3249 | - with the addition of an `account` key set to the admin user's account. |
3250 | - |
3251 | - :param req: The webob request to retrieve X-Auth-Admin-User and |
3252 | - X-Auth-Admin-Key from. |
3253 | - :returns: The dict for the admin user with the addition of the |
3254 | - `account` key. |
3255 | - """ |
3256 | - if ':' not in req.headers.get('x-auth-admin-user', ''): |
3257 | - return None |
3258 | - admin_account, admin_user = \ |
3259 | - req.headers.get('x-auth-admin-user').split(':', 1) |
3260 | - path = quote('/v1/%s/%s/%s' % (self.auth_account, admin_account, |
3261 | - admin_user)) |
3262 | - resp = self.make_request(req.environ, 'GET', |
3263 | - path).get_response(self.app) |
3264 | - if resp.status_int == 404: |
3265 | - return None |
3266 | - if resp.status_int // 100 != 2: |
3267 | - raise Exception('Could not get admin user object: %s %s' % |
3268 | - (path, resp.status)) |
3269 | - admin_detail = json.loads(resp.body) |
3270 | - admin_detail['account'] = admin_account |
3271 | - return admin_detail |
3272 | - |
3273 | - def credentials_match(self, user_detail, key): |
3274 | - """ |
3275 | - Returns True if the key is valid for the user_detail. Currently, this |
3276 | - only supports plaintext key matching. |
3277 | - |
3278 | - :param user_detail: The dict for the user. |
3279 | - :param key: The key to validate for the user. |
3280 | - :returns: True if the key is valid for the user, False if not. |
3281 | - """ |
3282 | - return user_detail and user_detail.get('auth') == 'plaintext:%s' % key |
3283 | - |
3284 | - def is_super_admin(self, req): |
3285 | - """ |
3286 | - Returns True if the admin specified in the request represents the |
3287 | - .super_admin. |
3288 | - |
3289 | - :param req: The webob.Request to check. |
3290 | - :param returns: True if .super_admin. |
3291 | - """ |
3292 | - return req.headers.get('x-auth-admin-user') == '.super_admin' and \ |
3293 | - req.headers.get('x-auth-admin-key') == self.super_admin_key |
3294 | - |
3295 | - def is_reseller_admin(self, req, admin_detail=None): |
3296 | - """ |
3297 | - Returns True if the admin specified in the request represents a |
3298 | - .reseller_admin. |
3299 | - |
3300 | - :param req: The webob.Request to check. |
3301 | - :param admin_detail: The previously retrieved dict from |
3302 | - :func:`get_admin_detail` or None for this function |
3303 | - to retrieve the admin_detail itself. |
3304 | - :param returns: True if .reseller_admin. |
3305 | - """ |
3306 | - if self.is_super_admin(req): |
3307 | - return True |
3308 | - if not admin_detail: |
3309 | - admin_detail = self.get_admin_detail(req) |
3310 | - if not self.credentials_match(admin_detail, |
3311 | - req.headers.get('x-auth-admin-key')): |
3312 | - return False |
3313 | - return '.reseller_admin' in (g['name'] for g in admin_detail['groups']) |
3314 | - |
3315 | - def is_account_admin(self, req, account): |
3316 | - """ |
3317 | - Returns True if the admin specified in the request represents a .admin |
3318 | - for the account specified. |
3319 | - |
3320 | - :param req: The webob.Request to check. |
3321 | - :param account: The account to check for .admin against. |
3322 | - :param returns: True if .admin. |
3323 | - """ |
3324 | - if self.is_super_admin(req): |
3325 | - return True |
3326 | - admin_detail = self.get_admin_detail(req) |
3327 | - if admin_detail: |
3328 | - if self.is_reseller_admin(req, admin_detail=admin_detail): |
3329 | - return True |
3330 | - if not self.credentials_match(admin_detail, |
3331 | - req.headers.get('x-auth-admin-key')): |
3332 | - return False |
3333 | - return admin_detail and admin_detail['account'] == account and \ |
3334 | - '.admin' in (g['name'] for g in admin_detail['groups']) |
3335 | - return False |
3336 | - |
3337 | - def posthooklogger(self, env, req): |
3338 | - if not req.path.startswith(self.auth_prefix): |
3339 | - return |
3340 | - response = getattr(req, 'response', None) |
3341 | - if not response: |
3342 | - return |
3343 | - trans_time = '%.4f' % (time() - req.start_time) |
3344 | - the_request = quote(unquote(req.path)) |
3345 | - if req.query_string: |
3346 | - the_request = the_request + '?' + req.query_string |
3347 | - # remote user for zeus |
3348 | - client = req.headers.get('x-cluster-client-ip') |
3349 | - if not client and 'x-forwarded-for' in req.headers: |
3350 | - # remote user for other lbs |
3351 | - client = req.headers['x-forwarded-for'].split(',')[0].strip() |
3352 | - logged_headers = None |
3353 | - if self.log_headers: |
3354 | - logged_headers = '\n'.join('%s: %s' % (k, v) |
3355 | - for k, v in req.headers.items()) |
3356 | - status_int = response.status_int |
3357 | - if getattr(req, 'client_disconnect', False) or \ |
3358 | - getattr(response, 'client_disconnect', False): |
3359 | - status_int = 499 |
3360 | - self.logger.info(' '.join(quote(str(x)) for x in (client or '-', |
3361 | - req.remote_addr or '-', strftime('%d/%b/%Y/%H/%M/%S', gmtime()), |
3362 | - req.method, the_request, req.environ['SERVER_PROTOCOL'], |
3363 | - status_int, req.referer or '-', req.user_agent or '-', |
3364 | - req.headers.get('x-auth-token', |
3365 | - req.headers.get('x-auth-admin-user', '-')), |
3366 | - getattr(req, 'bytes_transferred', 0) or '-', |
3367 | - getattr(response, 'bytes_transferred', 0) or '-', |
3368 | - req.headers.get('etag', '-'), |
3369 | - req.headers.get('x-trans-id', '-'), logged_headers or '-', |
3370 | - trans_time))) |
3371 | - |
3372 | - |
3373 | -def filter_factory(global_conf, **local_conf): |
3374 | - """Returns a WSGI filter app for use with paste.deploy.""" |
3375 | - conf = global_conf.copy() |
3376 | - conf.update(local_conf) |
3377 | - |
3378 | - def auth_filter(app): |
3379 | - return Swauth(app, conf) |
3380 | - return auth_filter |
3381 | |
3382 | === added file 'swift/common/middleware/tempauth.py' |
3383 | --- swift/common/middleware/tempauth.py 1970-01-01 00:00:00 +0000 |
3384 | +++ swift/common/middleware/tempauth.py 2011-06-03 00:13:27 +0000 |
3385 | @@ -0,0 +1,495 @@ |
3386 | +# Copyright (c) 2011 OpenStack, LLC. |
3387 | +# |
3388 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
3389 | +# you may not use this file except in compliance with the License. |
3390 | +# You may obtain a copy of the License at |
3391 | +# |
3392 | +# http://www.apache.org/licenses/LICENSE-2.0 |
3393 | +# |
3394 | +# Unless required by applicable law or agreed to in writing, software |
3395 | +# distributed under the License is distributed on an "AS IS" BASIS, |
3396 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or |
3397 | +# implied. |
3398 | +# See the License for the specific language governing permissions and |
3399 | +# limitations under the License. |
3400 | + |
3401 | +from time import gmtime, strftime, time |
3402 | +from traceback import format_exc |
3403 | +from urllib import quote, unquote |
3404 | +from uuid import uuid4 |
3405 | +from hashlib import sha1 |
3406 | +import hmac |
3407 | +import base64 |
3408 | + |
3409 | +from eventlet import TimeoutError |
3410 | +from webob import Response, Request |
3411 | +from webob.exc import HTTPBadRequest, HTTPForbidden, HTTPNotFound, \ |
3412 | + HTTPUnauthorized |
3413 | + |
3414 | +from swift.common.middleware.acl import clean_acl, parse_acl, referrer_allowed |
3415 | +from swift.common.utils import cache_from_env, get_logger, get_remote_client, \ |
3416 | + split_path |
3417 | + |
3418 | + |
3419 | +class TempAuth(object): |
3420 | + """ |
3421 | + Test authentication and authorization system. |
3422 | + |
3423 | + Add to your pipeline in proxy-server.conf, such as:: |
3424 | + |
3425 | + [pipeline:main] |
3426 | + pipeline = catch_errors cache tempauth proxy-server |
3427 | + |
3428 | + And add a tempauth filter section, such as:: |
3429 | + |
3430 | + [filter:tempauth] |
3431 | + use = egg:swift#tempauth |
3432 | + user_admin_admin = admin .admin .reseller_admin |
3433 | + user_test_tester = testing .admin |
3434 | + user_test2_tester2 = testing2 .admin |
3435 | + user_test_tester3 = testing3 |
3436 | + |
3437 | + See the proxy-server.conf-sample for more information. |
3438 | + |
3439 | + :param app: The next WSGI app in the pipeline |
3440 | + :param conf: The dict of configuration values |
3441 | + """ |
3442 | + |
3443 | + def __init__(self, app, conf): |
3444 | + self.app = app |
3445 | + self.conf = conf |
3446 | + self.logger = get_logger(conf, log_route='tempauth') |
3447 | + self.log_headers = conf.get('log_headers') == 'True' |
3448 | + self.reseller_prefix = conf.get('reseller_prefix', 'AUTH').strip() |
3449 | + if self.reseller_prefix and self.reseller_prefix[-1] != '_': |
3450 | + self.reseller_prefix += '_' |
3451 | + self.auth_prefix = conf.get('auth_prefix', '/auth/') |
3452 | + if not self.auth_prefix: |
3453 | + self.auth_prefix = '/auth/' |
3454 | + if self.auth_prefix[0] != '/': |
3455 | + self.auth_prefix = '/' + self.auth_prefix |
3456 | + if self.auth_prefix[-1] != '/': |
3457 | + self.auth_prefix += '/' |
3458 | + self.token_life = int(conf.get('token_life', 86400)) |
3459 | + self.allowed_sync_hosts = [h.strip() |
3460 | + for h in conf.get('allowed_sync_hosts', '127.0.0.1').split(',') |
3461 | + if h.strip()] |
3462 | + self.users = {} |
3463 | + for conf_key in conf: |
3464 | + if conf_key.startswith('user_'): |
3465 | + values = conf[conf_key].split() |
3466 | + if not values: |
3467 | + raise ValueError('%s has no key set' % conf_key) |
3468 | + key = values.pop(0) |
3469 | + if values and '://' in values[-1]: |
3470 | + url = values.pop() |
3471 | + else: |
3472 | + url = 'https://' if 'cert_file' in conf else 'http://' |
3473 | + ip = conf.get('bind_ip', '127.0.0.1') |
3474 | + if ip == '0.0.0.0': |
3475 | + ip = '127.0.0.1' |
3476 | + url += ip |
3477 | + url += ':' + conf.get('bind_port', 80) + '/v1/' + \ |
3478 | + self.reseller_prefix + conf_key.split('_')[1] |
3479 | + groups = values |
3480 | + self.users[conf_key.split('_', 1)[1].replace('_', ':')] = { |
3481 | + 'key': key, 'url': url, 'groups': values} |
3482 | + self.created_accounts = False |
3483 | + |
3484 | + def __call__(self, env, start_response): |
3485 | + """ |
3486 | + Accepts a standard WSGI application call, authenticating the request |
3487 | + and installing callback hooks for authorization and ACL header |
3488 | + validation. For an authenticated request, REMOTE_USER will be set to a |
3489 | + comma separated list of the user's groups. |
3490 | + |
3491 | + With a non-empty reseller prefix, acts as the definitive auth service |
3492 | + for just tokens and accounts that begin with that prefix, but will deny |
3493 | + requests outside this prefix if no other auth middleware overrides it. |
3494 | + |
3495 | + With an empty reseller prefix, acts as the definitive auth service only |
3496 | + for tokens that validate to a non-empty set of groups. For all other |
3497 | + requests, acts as the fallback auth service when no other auth |
3498 | + middleware overrides it. |
3499 | + |
3500 | + Alternatively, if the request matches the self.auth_prefix, the request |
3501 | + will be routed through the internal auth request handler (self.handle). |
3502 | + This is to handle granting tokens, etc. |
3503 | + """ |
3504 | + # Ensure the accounts we handle have been created |
3505 | + if not self.created_accounts and self.users: |
3506 | + newenv = {'REQUEST_METHOD': 'GET', 'HTTP_USER_AGENT': 'TempAuth'} |
3507 | + for name in ('swift.cache', 'HTTP_X_TRANS_ID'): |
3508 | + if name in env: |
3509 | + newenv[name] = env[name] |
3510 | + account_id = self.users.values()[0]['url'].rsplit('/', 1)[-1] |
3511 | + resp = Request.blank('/v1/' + account_id, |
3512 | + environ=newenv).get_response(self.app) |
3513 | + if resp.status_int // 100 != 2: |
3514 | + newenv['REQUEST_METHOD'] = 'PUT' |
3515 | + for key, value in self.users.iteritems(): |
3516 | + account_id = value['url'].rsplit('/', 1)[-1] |
3517 | + resp = Request.blank('/v1/' + account_id, |
3518 | + environ=newenv).get_response(self.app) |
3519 | + if resp.status_int // 100 != 2: |
3520 | + raise Exception('Could not create account %s for user ' |
3521 | + '%s' % (account_id, key)) |
3522 | + self.created_accounts = True |
3523 | + |
3524 | + if env.get('PATH_INFO', '').startswith(self.auth_prefix): |
3525 | + return self.handle(env, start_response) |
3526 | + s3 = env.get('HTTP_AUTHORIZATION') |
3527 | + token = env.get('HTTP_X_AUTH_TOKEN', env.get('HTTP_X_STORAGE_TOKEN')) |
3528 | + if s3 or (token and token.startswith(self.reseller_prefix)): |
3529 | + # Note: Empty reseller_prefix will match all tokens. |
3530 | + groups = self.get_groups(env, token) |
3531 | + if groups: |
3532 | + env['REMOTE_USER'] = groups |
3533 | + user = groups and groups.split(',', 1)[0] or '' |
3534 | + # We know the proxy logs the token, so we augment it just a bit |
3535 | + # to also log the authenticated user. |
3536 | + env['HTTP_X_AUTH_TOKEN'] = \ |
3537 | + '%s,%s' % (user, 's3' if s3 else token) |
3538 | + env['swift.authorize'] = self.authorize |
3539 | + env['swift.clean_acl'] = clean_acl |
3540 | + else: |
3541 | + # Unauthorized token |
3542 | + if self.reseller_prefix: |
3543 | + # Because I know I'm the definitive auth for this token, I |
3544 | + # can deny it outright. |
3545 | + return HTTPUnauthorized()(env, start_response) |
3546 | + # Because I'm not certain if I'm the definitive auth for empty |
3547 | + # reseller_prefixed tokens, I won't overwrite swift.authorize. |
3548 | + elif 'swift.authorize' not in env: |
3549 | + env['swift.authorize'] = self.denied_response |
3550 | + else: |
3551 | + if self.reseller_prefix: |
3552 | + # With a non-empty reseller_prefix, I would like to be called |
3553 | + # back for anonymous access to accounts I know I'm the |
3554 | + # definitive auth for. |
3555 | + try: |
3556 | + version, rest = split_path(env.get('PATH_INFO', ''), |
3557 | + 1, 2, True) |
3558 | + except ValueError: |
3559 | + return HTTPNotFound()(env, start_response) |
3560 | + if rest and rest.startswith(self.reseller_prefix): |
3561 | + # Handle anonymous access to accounts I'm the definitive |
3562 | + # auth for. |
3563 | + env['swift.authorize'] = self.authorize |
3564 | + env['swift.clean_acl'] = clean_acl |
3565 | + # Not my token, not my account, I can't authorize this request, |
3566 | + # deny all is a good idea if not already set... |
3567 | + elif 'swift.authorize' not in env: |
3568 | + env['swift.authorize'] = self.denied_response |
3569 | + # Because I'm not certain if I'm the definitive auth for empty |
3570 | + # reseller_prefixed accounts, I won't overwrite swift.authorize. |
3571 | + elif 'swift.authorize' not in env: |
3572 | + env['swift.authorize'] = self.authorize |
3573 | + env['swift.clean_acl'] = clean_acl |
3574 | + return self.app(env, start_response) |
3575 | + |
3576 | + def get_groups(self, env, token): |
3577 | + """ |
3578 | + Get groups for the given token. |
3579 | + |
3580 | + :param env: The current WSGI environment dictionary. |
3581 | + :param token: Token to validate and return a group string for. |
3582 | + |
3583 | + :returns: None if the token is invalid or a string containing a comma |
3584 | + separated list of groups the authenticated user is a member |
3585 | + of. The first group in the list is also considered a unique |
3586 | + identifier for that user. |
3587 | + """ |
3588 | + groups = None |
3589 | + memcache_client = cache_from_env(env) |
3590 | + if not memcache_client: |
3591 | + raise Exception('Memcache required') |
3592 | + memcache_token_key = '%s/token/%s' % (self.reseller_prefix, token) |
3593 | + cached_auth_data = memcache_client.get(memcache_token_key) |
3594 | + if cached_auth_data: |
3595 | + expires, groups = cached_auth_data |
3596 | + if expires < time(): |
3597 | + groups = None |
3598 | + |
3599 | + if env.get('HTTP_AUTHORIZATION'): |
3600 | + account_user, sign = \ |
3601 | + env['HTTP_AUTHORIZATION'].split(' ')[1].rsplit(':', 1) |
3602 | + if account_user not in self.users: |
3603 | + return None |
3604 | + account, user = account_user.split(':', 1) |
3605 | + account_id = self.users[account_user]['url'].rsplit('/', 1)[-1] |
3606 | + path = env['PATH_INFO'] |
3607 | + env['PATH_INFO'] = path.replace(account_user, account_id, 1) |
3608 | + msg = base64.urlsafe_b64decode(unquote(token)) |
3609 | + key = self.users[account_user]['key'] |
3610 | + s = base64.encodestring(hmac.new(key, msg, sha1).digest()).strip() |
3611 | + if s != sign: |
3612 | + return None |
3613 | + groups = [account, account_user] |
3614 | + groups.extend(self.users[account_user]['groups']) |
3615 | + if '.admin' in groups: |
3616 | + groups.remove('.admin') |
3617 | + groups.append(account_id) |
3618 | + groups = ','.join(groups) |
3619 | + |
3620 | + return groups |
3621 | + |
3622 | + def authorize(self, req): |
3623 | + """ |
3624 | + Returns None if the request is authorized to continue or a standard |
3625 | + WSGI response callable if not. |
3626 | + """ |
3627 | + try: |
3628 | + version, account, container, obj = split_path(req.path, 1, 4, True) |
3629 | + except ValueError: |
3630 | + return HTTPNotFound(request=req) |
3631 | + if not account or not account.startswith(self.reseller_prefix): |
3632 | + return self.denied_response(req) |
3633 | + user_groups = (req.remote_user or '').split(',') |
3634 | + if '.reseller_admin' in user_groups and \ |
3635 | + account != self.reseller_prefix and \ |
3636 | + account[len(self.reseller_prefix)] != '.': |
3637 | + req.environ['swift_owner'] = True |
3638 | + return None |
3639 | + if account in user_groups and \ |
3640 | + (req.method not in ('DELETE', 'PUT') or container): |
3641 | + # If the user is admin for the account and is not trying to do an |
3642 | + # account DELETE or PUT... |
3643 | + req.environ['swift_owner'] = True |
3644 | + return None |
3645 | + if (req.environ.get('swift_sync_key') and |
3646 | + req.environ['swift_sync_key'] == |
3647 | + req.headers.get('x-container-sync-key', None) and |
3648 | + 'x-timestamp' in req.headers and |
3649 | + (req.remote_addr in self.allowed_sync_hosts or |
3650 | + get_remote_client(req) in self.allowed_sync_hosts)): |
3651 | + return None |
3652 | + referrers, groups = parse_acl(getattr(req, 'acl', None)) |
3653 | + if referrer_allowed(req.referer, referrers): |
3654 | + if obj or '.rlistings' in groups: |
3655 | + return None |
3656 | + return self.denied_response(req) |
3657 | + if not req.remote_user: |
3658 | + return self.denied_response(req) |
3659 | + for user_group in user_groups: |
3660 | + if user_group in groups: |
3661 | + return None |
3662 | + return self.denied_response(req) |
3663 | + |
3664 | + def denied_response(self, req): |
3665 | + """ |
3666 | + Returns a standard WSGI response callable with the status of 403 or 401 |
3667 | + depending on whether the REMOTE_USER is set or not. |
3668 | + """ |
3669 | + if req.remote_user: |
3670 | + return HTTPForbidden(request=req) |
3671 | + else: |
3672 | + return HTTPUnauthorized(request=req) |
3673 | + |
3674 | + def handle(self, env, start_response): |
3675 | + """ |
3676 | + WSGI entry point for auth requests (ones that match the |
3677 | + self.auth_prefix). |
3678 | + Wraps env in webob.Request object and passes it down. |
3679 | + |
3680 | + :param env: WSGI environment dictionary |
3681 | + :param start_response: WSGI callable |
3682 | + """ |
3683 | + try: |
3684 | + req = Request(env) |
3685 | + if self.auth_prefix: |
3686 | + req.path_info_pop() |
3687 | + req.bytes_transferred = '-' |
3688 | + req.client_disconnect = False |
3689 | + if 'x-storage-token' in req.headers and \ |
3690 | + 'x-auth-token' not in req.headers: |
3691 | + req.headers['x-auth-token'] = req.headers['x-storage-token'] |
3692 | + if 'eventlet.posthooks' in env: |
3693 | + env['eventlet.posthooks'].append( |
3694 | + (self.posthooklogger, (req,), {})) |
3695 | + return self.handle_request(req)(env, start_response) |
3696 | + else: |
3697 | + # Lack of posthook support means that we have to log on the |
3698 | + # start of the response, rather than after all the data has |
3699 | + # been sent. This prevents logging client disconnects |
3700 | + # differently than full transmissions. |
3701 | + response = self.handle_request(req)(env, start_response) |
3702 | + self.posthooklogger(env, req) |
3703 | + return response |
3704 | + except (Exception, TimeoutError): |
3705 | + print "EXCEPTION IN handle: %s: %s" % (format_exc(), env) |
3706 | + start_response('500 Server Error', |
3707 | + [('Content-Type', 'text/plain')]) |
3708 | + return ['Internal server error.\n'] |
3709 | + |
3710 | + def handle_request(self, req): |
3711 | + """ |
3712 | + Entry point for auth requests (ones that match the self.auth_prefix). |
3713 | + Should return a WSGI-style callable (such as webob.Response). |
3714 | + |
3715 | + :param req: webob.Request object |
3716 | + """ |
3717 | + req.start_time = time() |
3718 | + handler = None |
3719 | + try: |
3720 | + version, account, user, _junk = split_path(req.path_info, |
3721 | + minsegs=1, maxsegs=4, rest_with_last=True) |
3722 | + except ValueError: |
3723 | + return HTTPNotFound(request=req) |
3724 | + if version in ('v1', 'v1.0', 'auth'): |
3725 | + if req.method == 'GET': |
3726 | + handler = self.handle_get_token |
3727 | + if not handler: |
3728 | + req.response = HTTPBadRequest(request=req) |
3729 | + else: |
3730 | + req.response = handler(req) |
3731 | + return req.response |
3732 | + |
3733 | + def handle_get_token(self, req): |
3734 | + """ |
3735 | + Handles the various `request for token and service end point(s)` calls. |
3736 | + There are various formats to support the various auth servers in the |
3737 | + past. Examples:: |
3738 | + |
3739 | + GET <auth-prefix>/v1/<act>/auth |
3740 | + X-Auth-User: <act>:<usr> or X-Storage-User: <usr> |
3741 | + X-Auth-Key: <key> or X-Storage-Pass: <key> |
3742 | + GET <auth-prefix>/auth |
3743 | + X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr> |
3744 | + X-Auth-Key: <key> or X-Storage-Pass: <key> |
3745 | + GET <auth-prefix>/v1.0 |
3746 | + X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr> |
3747 | + X-Auth-Key: <key> or X-Storage-Pass: <key> |
3748 | + |
3749 | + On successful authentication, the response will have X-Auth-Token and |
3750 | + X-Storage-Token set to the token to use with Swift and X-Storage-URL |
3751 | + set to the URL to the default Swift cluster to use. |
3752 | + |
3753 | + :param req: The webob.Request to process. |
3754 | + :returns: webob.Response, 2xx on success with data set as explained |
3755 | + above. |
3756 | + """ |
3757 | + # Validate the request info |
3758 | + try: |
3759 | + pathsegs = split_path(req.path_info, minsegs=1, maxsegs=3, |
3760 | + rest_with_last=True) |
3761 | + except ValueError: |
3762 | + return HTTPNotFound(request=req) |
3763 | + if pathsegs[0] == 'v1' and pathsegs[2] == 'auth': |
3764 | + account = pathsegs[1] |
3765 | + user = req.headers.get('x-storage-user') |
3766 | + if not user: |
3767 | + user = req.headers.get('x-auth-user') |
3768 | + if not user or ':' not in user: |
3769 | + return HTTPUnauthorized(request=req) |
3770 | + account2, user = user.split(':', 1) |
3771 | + if account != account2: |
3772 | + return HTTPUnauthorized(request=req) |
3773 | + key = req.headers.get('x-storage-pass') |
3774 | + if not key: |
3775 | + key = req.headers.get('x-auth-key') |
3776 | + elif pathsegs[0] in ('auth', 'v1.0'): |
3777 | + user = req.headers.get('x-auth-user') |
3778 | + if not user: |
3779 | + user = req.headers.get('x-storage-user') |
3780 | + if not user or ':' not in user: |
3781 | + return HTTPUnauthorized(request=req) |
3782 | + account, user = user.split(':', 1) |
3783 | + key = req.headers.get('x-auth-key') |
3784 | + if not key: |
3785 | + key = req.headers.get('x-storage-pass') |
3786 | + else: |
3787 | + return HTTPBadRequest(request=req) |
3788 | + if not all((account, user, key)): |
3789 | + return HTTPUnauthorized(request=req) |
3790 | + # Authenticate user |
3791 | + account_user = account + ':' + user |
3792 | + if account_user not in self.users: |
3793 | + return HTTPUnauthorized(request=req) |
3794 | + if self.users[account_user]['key'] != key: |
3795 | + return HTTPUnauthorized(request=req) |
3796 | + # Get memcache client |
3797 | + memcache_client = cache_from_env(req.environ) |
3798 | + if not memcache_client: |
3799 | + raise Exception('Memcache required') |
3800 | + # See if a token already exists and hasn't expired |
3801 | + token = None |
3802 | + memcache_user_key = '%s/user/%s' % (self.reseller_prefix, account_user) |
3803 | + candidate_token = memcache_client.get(memcache_user_key) |
3804 | + if candidate_token: |
3805 | + memcache_token_key = \ |
3806 | + '%s/token/%s' % (self.reseller_prefix, candidate_token) |
3807 | + cached_auth_data = memcache_client.get(memcache_token_key) |
3808 | + if cached_auth_data: |
3809 | + expires, groups = cached_auth_data |
3810 | + if expires > time(): |
3811 | + token = candidate_token |
3812 | + # Create a new token if one didn't exist |
3813 | + if not token: |
3814 | + # Generate new token |
3815 | + token = '%stk%s' % (self.reseller_prefix, uuid4().hex) |
3816 | + expires = time() + self.token_life |
3817 | + groups = [account, account_user] |
3818 | + groups.extend(self.users[account_user]['groups']) |
3819 | + if '.admin' in groups: |
3820 | + groups.remove('.admin') |
3821 | + account_id = self.users[account_user]['url'].rsplit('/', 1)[-1] |
3822 | + groups.append(account_id) |
3823 | + groups = ','.join(groups) |
3824 | + # Save token |
3825 | + memcache_token_key = '%s/token/%s' % (self.reseller_prefix, token) |
3826 | + memcache_client.set(memcache_token_key, (expires, groups), |
3827 | + timeout=float(expires - time())) |
3828 | + # Record the token with the user info for future use. |
3829 | + memcache_user_key = \ |
3830 | + '%s/user/%s' % (self.reseller_prefix, account_user) |
3831 | + memcache_client.set(memcache_user_key, token, |
3832 | + timeout=float(expires - time())) |
3833 | + return Response(request=req, |
3834 | + headers={'x-auth-token': token, 'x-storage-token': token, |
3835 | + 'x-storage-url': self.users[account_user]['url']}) |
3836 | + |
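The token-granting path above keeps two memcache entries per user: a `user/<account:user>` key mapping to the most recently issued token, and a `token/<token>` key mapping to `(expires, groups)`. A minimal sketch of that reuse-or-mint logic, in Python 3 with a plain dict standing in for memcache (the `TokenCache` name and dict store are illustrative, not part of the patch):

```python
import time
from uuid import uuid4

class TokenCache:
    """Sketch of handle_get_token's caching scheme with a dict standing
    in for memcache.  Two keys are kept per user: user -> token (so
    repeated auth calls reuse one token) and token -> (expires, groups)
    (so get_groups can validate a token without rechecking the key)."""

    def __init__(self, reseller_prefix='AUTH_', token_life=86400):
        self.reseller_prefix = reseller_prefix
        self.token_life = token_life
        self.store = {}

    def get_token(self, account_user, groups):
        # Reuse an existing, unexpired token if one is cached.
        candidate = self.store.get('user/%s' % account_user)
        if candidate:
            cached = self.store.get('token/%s' % candidate)
            if cached and cached[0] > time.time():
                return candidate
        # Otherwise mint a new one and record both mappings.
        token = '%stk%s' % (self.reseller_prefix, uuid4().hex)
        expires = time.time() + self.token_life
        self.store['token/%s' % token] = (expires, groups)
        self.store['user/%s' % account_user] = token
        return token
```

The double mapping is what lets a second `GET /auth/v1.0` return the same token instead of flooding the cache with one token per request.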
3837 | + def posthooklogger(self, env, req): |
3838 | + if not req.path.startswith(self.auth_prefix): |
3839 | + return |
3840 | + response = getattr(req, 'response', None) |
3841 | + if not response: |
3842 | + return |
3843 | + trans_time = '%.4f' % (time() - req.start_time) |
3844 | + the_request = quote(unquote(req.path)) |
3845 | + if req.query_string: |
3846 | + the_request = the_request + '?' + req.query_string |
3847 | + # remote user for zeus |
3848 | + client = req.headers.get('x-cluster-client-ip') |
3849 | + if not client and 'x-forwarded-for' in req.headers: |
3850 | + # remote user for other lbs |
3851 | + client = req.headers['x-forwarded-for'].split(',')[0].strip() |
3852 | + logged_headers = None |
3853 | + if self.log_headers: |
3854 | + logged_headers = '\n'.join('%s: %s' % (k, v) |
3855 | + for k, v in req.headers.items()) |
3856 | + status_int = response.status_int |
3857 | + if getattr(req, 'client_disconnect', False) or \ |
3858 | + getattr(response, 'client_disconnect', False): |
3859 | + status_int = 499 |
3860 | + self.logger.info(' '.join(quote(str(x)) for x in (client or '-', |
3861 | + req.remote_addr or '-', strftime('%d/%b/%Y/%H/%M/%S', gmtime()), |
3862 | + req.method, the_request, req.environ['SERVER_PROTOCOL'], |
3863 | + status_int, req.referer or '-', req.user_agent or '-', |
3864 | + req.headers.get('x-auth-token', |
3865 | + req.headers.get('x-auth-admin-user', '-')), |
3866 | + getattr(req, 'bytes_transferred', 0) or '-', |
3867 | + getattr(response, 'bytes_transferred', 0) or '-', |
3868 | + req.headers.get('etag', '-'), |
3869 | + req.headers.get('x-trans-id', '-'), logged_headers or '-', |
3870 | + trans_time))) |
3871 | + |
3872 | + |
3873 | +def filter_factory(global_conf, **local_conf): |
3874 | + """Returns a WSGI filter app for use with paste.deploy.""" |
3875 | + conf = global_conf.copy() |
3876 | + conf.update(local_conf) |
3877 | + |
3878 | + def auth_filter(app): |
3879 | + return TempAuth(app, conf) |
3880 | + return auth_filter |
3881 | |
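The S3-style `Authorization` branch in `get_groups` above boils down to recomputing an HMAC-SHA1 over the decoded request string and comparing it to the signature the client supplied after the colon. A Python 3 sketch of that check (Python 2's `base64.encodestring` becomes `encodebytes`; `hmac.compare_digest` is a constant-time hardening not present in the original):

```python
import base64
import hmac
from hashlib import sha1

def sign_request(key: bytes, string_to_sign: bytes) -> str:
    """Produce the base64 HMAC-SHA1 signature that get_groups compares
    against the client-supplied value in the Authorization header."""
    digest = hmac.new(key, string_to_sign, sha1).digest()
    return base64.encodebytes(digest).strip().decode()

def verify(key: bytes, string_to_sign: bytes, signature: str) -> bool:
    # The middleware recomputes the digest from the user's stored key;
    # a mismatch means the token yields no groups and auth fails.
    return hmac.compare_digest(sign_request(key, string_to_sign), signature)
```

In the middleware, `string_to_sign` arrives urlsafe-base64-encoded as the token itself and is decoded before signing, so both sides hash identical bytes.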
3882 | === modified file 'swift/common/utils.py' |
3883 | --- swift/common/utils.py 2011-04-20 19:54:28 +0000 |
3884 | +++ swift/common/utils.py 2011-06-03 00:13:27 +0000 |
3885 | @@ -972,6 +972,32 @@ |
3886 | return ModifiedParseResult(*stdlib_urlparse(url)) |
3887 | |
3888 | |
3889 | +def validate_sync_to(value, allowed_sync_hosts): |
3890 | + p = urlparse(value) |
3891 | + if p.scheme not in ('http', 'https'): |
3892 | + return _('Invalid scheme %r in X-Container-Sync-To, must be "http" ' |
3893 | + 'or "https".') % p.scheme |
3894 | + if not p.path: |
3895 | + return _('Path required in X-Container-Sync-To') |
3896 | + if p.params or p.query or p.fragment: |
3897 | + return _('Params, queries, and fragments not allowed in ' |
3898 | + 'X-Container-Sync-To') |
3899 | + if p.hostname not in allowed_sync_hosts: |
3900 | + return _('Invalid host %r in X-Container-Sync-To') % p.hostname |
3901 | + return None |
3902 | + |
3903 | + |
3904 | +def get_remote_client(req): |
3905 | + # remote host for zeus |
3906 | + client = req.headers.get('x-cluster-client-ip') |
3907 | + if not client and 'x-forwarded-for' in req.headers: |
3908 | + # remote host for other lbs |
3909 | + client = req.headers['x-forwarded-for'].split(',')[0].strip() |
3910 | + if not client: |
3911 | + client = req.remote_addr |
3912 | + return client |
3913 | + |
3914 | + |
3915 | def human_readable(value): |
3916 | """ |
3917 | Returns the number in a human readable format; for example 1048576 = "1Mi". |
3918 | |
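The new `validate_sync_to` helper rejects an `X-Container-Sync-To` value unless it is a plain http(s) URL with a path, no params/query/fragment, and a host on the operator's allow list. A Python 3 rendering of the same checks (error strings abbreviated; the patch wraps them in `_()` for i18n):

```python
from urllib.parse import urlparse

def validate_sync_to(value, allowed_sync_hosts):
    """Return an error string when the X-Container-Sync-To URL is
    unacceptable, or None when it passes all checks."""
    p = urlparse(value)
    if p.scheme not in ('http', 'https'):
        return 'Invalid scheme %r in X-Container-Sync-To' % p.scheme
    if not p.path:
        return 'Path required in X-Container-Sync-To'
    if p.params or p.query or p.fragment:
        return 'Params, queries, and fragments not allowed in X-Container-Sync-To'
    if p.hostname not in allowed_sync_hosts:
        return 'Invalid host %r in X-Container-Sync-To' % p.hostname
    return None

# e.g. a well-formed target on an allowed host passes:
assert validate_sync_to('http://127.0.0.1/v1/AUTH_a/c', ['127.0.0.1']) is None
```

The container server calls this on both PUT and POST, so a bad sync target is refused with a 400 before any metadata is stored.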
3919 | === modified file 'swift/container/server.py' |
3920 | --- swift/container/server.py 2011-05-27 23:31:58 +0000 |
3921 | +++ swift/container/server.py 2011-06-03 00:13:27 +0000 |
3922 | @@ -32,7 +32,8 @@ |
3923 | |
3924 | from swift.common.db import ContainerBroker |
3925 | from swift.common.utils import get_logger, get_param, hash_path, \ |
3926 | - normalize_timestamp, storage_directory, split_path |
3927 | + normalize_timestamp, storage_directory, split_path, urlparse, \ |
3928 | + validate_sync_to |
3929 | from swift.common.constraints import CONTAINER_LISTING_LIMIT, \ |
3930 | check_mount, check_float, check_utf8 |
3931 | from swift.common.bufferedhttp import http_connect |
3932 | @@ -46,7 +47,8 @@ |
3933 | """WSGI Controller for the container server.""" |
3934 | |
3935 | # Ensure these are all lowercase |
3936 | - save_headers = ['x-container-read', 'x-container-write'] |
3937 | + save_headers = ['x-container-read', 'x-container-write', |
3938 | + 'x-container-sync-key', 'x-container-sync-to'] |
3939 | |
3940 | def __init__(self, conf): |
3941 | self.logger = get_logger(conf, log_route='container-server') |
3942 | @@ -55,6 +57,9 @@ |
3943 | ('true', 't', '1', 'on', 'yes', 'y') |
3944 | self.node_timeout = int(conf.get('node_timeout', 3)) |
3945 | self.conn_timeout = float(conf.get('conn_timeout', 0.5)) |
3946 | + self.allowed_sync_hosts = [h.strip() |
3947 | + for h in conf.get('allowed_sync_hosts', '127.0.0.1').split(',') |
3948 | + if h.strip()] |
3949 | self.replicator_rpc = ReplicatorRpc(self.root, DATADIR, |
3950 | ContainerBroker, self.mount_check, logger=self.logger) |
3951 | |
3952 | @@ -174,6 +179,11 @@ |
3953 | not check_float(req.headers['x-timestamp']): |
3954 | return HTTPBadRequest(body='Missing timestamp', request=req, |
3955 | content_type='text/plain') |
3956 | + if 'x-container-sync-to' in req.headers: |
3957 | + err = validate_sync_to(req.headers['x-container-sync-to'], |
3958 | + self.allowed_sync_hosts) |
3959 | + if err: |
3960 | + return HTTPBadRequest(err) |
3961 | if self.mount_check and not check_mount(self.root, drive): |
3962 | return Response(status='507 %s is not mounted' % drive) |
3963 | timestamp = normalize_timestamp(req.headers['x-timestamp']) |
3964 | @@ -232,7 +242,8 @@ |
3965 | } |
3966 | headers.update((key, value) |
3967 | for key, (value, timestamp) in broker.metadata.iteritems() |
3968 | - if value != '') |
3969 | + if value != '' and (key.lower() in self.save_headers or |
3970 | + key.lower().startswith('x-container-meta-'))) |
3971 | return HTTPNoContent(request=req, headers=headers) |
3972 | |
3973 | def GET(self, req): |
3974 | @@ -259,7 +270,8 @@ |
3975 | } |
3976 | resp_headers.update((key, value) |
3977 | for key, (value, timestamp) in broker.metadata.iteritems() |
3978 | - if value != '') |
3979 | + if value != '' and (key.lower() in self.save_headers or |
3980 | + key.lower().startswith('x-container-meta-'))) |
3981 | try: |
3982 | path = get_param(req, 'path') |
3983 | prefix = get_param(req, 'prefix') |
3984 | @@ -368,6 +380,11 @@ |
3985 | not check_float(req.headers['x-timestamp']): |
3986 | return HTTPBadRequest(body='Missing or bad timestamp', |
3987 | request=req, content_type='text/plain') |
3988 | + if 'x-container-sync-to' in req.headers: |
3989 | + err = validate_sync_to(req.headers['x-container-sync-to'], |
3990 | + self.allowed_sync_hosts) |
3991 | + if err: |
3992 | + return HTTPBadRequest(err) |
3993 | if self.mount_check and not check_mount(self.root, drive): |
3994 | return Response(status='507 %s is not mounted' % drive) |
3995 | broker = self._get_container_broker(drive, part, account, container) |
3996 | |
3997 | === added file 'swift/container/sync.py' |
3998 | --- swift/container/sync.py 1970-01-01 00:00:00 +0000 |
3999 | +++ swift/container/sync.py 2011-06-03 00:13:27 +0000 |
4000 | @@ -0,0 +1,409 @@ |
4001 | +# Copyright (c) 2010-2011 OpenStack, LLC. |
4002 | +# |
4003 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
4004 | +# you may not use this file except in compliance with the License. |
4005 | +# You may obtain a copy of the License at |
4006 | +# |
4007 | +# http://www.apache.org/licenses/LICENSE-2.0 |
4008 | +# |
4009 | +# Unless required by applicable law or agreed to in writing, software |
4010 | +# distributed under the License is distributed on an "AS IS" BASIS, |
4011 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or |
4012 | +# implied. |
4013 | +# See the License for the specific language governing permissions and |
4014 | +# limitations under the License. |
4015 | + |
4016 | +import os |
4017 | +import time |
4018 | +import random |
4019 | +from struct import unpack_from |
4020 | + |
4021 | +from swift.container import server as container_server |
4022 | +from swift.common import client, direct_client |
4023 | +from swift.common.ring import Ring |
4024 | +from swift.common.db import ContainerBroker |
4025 | +from swift.common.utils import audit_location_generator, get_logger, \ |
4026 | + hash_path, normalize_timestamp, TRUE_VALUES, validate_sync_to, whataremyips |
4027 | +from swift.common.daemon import Daemon |
4028 | + |
4029 | + |
4030 | +class _Iter2FileLikeObject(object): |
4031 | + """ |
4032 | + Returns an iterator's contents via :func:`read`, making it look like a file |
4033 | + object. |
4034 | + """ |
4035 | + |
4036 | + def __init__(self, iterator): |
4037 | + self.iterator = iterator |
4038 | + self._chunk = '' |
4039 | + |
4040 | + def read(self, size=-1): |
4041 | + """ |
4042 | + read([size]) -> read at most size bytes, returned as a string. |
4043 | + |
4044 | + If the size argument is negative or omitted, read until EOF is reached. |
4045 | + Notice that when in non-blocking mode, less data than what was |
4046 | + requested may be returned, even if no size parameter was given. |
4047 | + """ |
4048 | + if size < 0: |
4049 | + chunk = self._chunk |
4050 | + self._chunk = '' |
4051 | + return chunk + ''.join(self.iterator) |
4052 | + chunk = '' |
4053 | + try: |
4054 | + chunk = self.iterator.next() |
4055 | + except StopIteration: |
4056 | + pass |
4057 | + if len(chunk) <= size: |
4058 | + return chunk |
4059 | + self._chunk = chunk[size:] |
4060 | + return chunk[:size] |
4061 | + |
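`_Iter2FileLikeObject` adapts an iterator of chunks to the `.read()` interface that `client.put_object` expects. A Python 3 sketch of the same adapter (`iterator.next()` becomes `next(iterator)`, and this version also serves any leftover buffered chunk on sized reads before pulling the next one, which the diff's sized-read path does not do):

```python
class Iter2FileLikeObject:
    """Wrap an iterator of byte chunks so callers expecting a file-like
    .read() can stream it.  Like the daemon's version, a sized read may
    return fewer than `size` bytes: at most one chunk per call."""

    def __init__(self, iterator):
        self.iterator = iter(iterator)
        self._chunk = b''

    def read(self, size=-1):
        if size < 0:
            # Read-to-EOF: drain the buffer plus the whole iterator.
            chunk, self._chunk = self._chunk, b''
            return chunk + b''.join(self.iterator)
        if self._chunk:
            # Serve leftover bytes from a previous oversized chunk first.
            chunk, self._chunk = self._chunk[:size], self._chunk[size:]
            return chunk
        chunk = next(self.iterator, b'')
        if len(chunk) <= size:
            return chunk
        self._chunk = chunk[size:]
        return chunk[:size]
```

This is what lets the sync daemon feed an object GET response (an iterator) straight into a PUT to the remote cluster without buffering the whole object.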
4062 | + |
4063 | +class ContainerSync(Daemon): |
4064 | + """ |
4065 | + Daemon to sync syncable containers. |
4066 | + |
4067 | + This is done by scanning the local devices for container databases and |
4068 | + checking for x-container-sync-to and x-container-sync-key metadata values. |
4069 | + If they exist, newer rows since the last sync will trigger PUTs or DELETEs |
4070 | + to the other container. |
4071 | + |
4072 | + .. note:: |
4073 | + |
4074 | + This does not sync standard object POSTs, as those do not cause |
4075 | + container row updates. A workaround is to do X-Copy-From POSTs. We're |
4076 | + considering solutions to this limitation but leaving it as is for now |
4077 | + since POSTs are fairly uncommon. |
4078 | + |
4079 | + The actual syncing is slightly more complicated to make use of the three |
4080 | + (or number-of-replicas) main nodes for a container without each trying to |
4081 | + do the exact same work but also without missing work if one node happens to |
4082 | + be down. |
4083 | + |
4084 | + Two sync points are kept per container database. All rows between the two |
4085 | + sync points trigger updates. Any rows newer than both sync points cause |
4086 | + updates depending on the node's position for the container (primary nodes |
4087 | + do one third, etc. depending on the replica count of course). After a sync |
4088 | + run, the first sync point is set to the newest ROWID known and the second |
4089 | + sync point is set to newest ROWID for which all updates have been sent. |
4090 | + |
4091 | + An example may help. Assume replica count is 3 and perfectly matching |
4092 | + ROWIDs starting at 1. |
4093 | + |
4094 | + First sync run, database has 6 rows: |
4095 | + |
4096 | + * SyncPoint1 starts as -1. |
4097 | + * SyncPoint2 starts as -1. |
4098 | + * No rows between points, so no "all updates" rows. |
4099 | + * Six rows newer than SyncPoint1, so a third of the rows are sent |
4100 | + by node 1, another third by node 2, remaining third by node 3. |
4101 | + * SyncPoint1 is set as 6 (the newest ROWID known). |
4102 | + * SyncPoint2 is left as -1 since no "all updates" rows were synced. |
4103 | + |
4104 | + Next sync run, database has 12 rows: |
4105 | + |
4106 | + * SyncPoint1 starts as 6. |
4107 | + * SyncPoint2 starts as -1. |
4108 | + * The rows between -1 and 6 all trigger updates (most of which |
4109 | + should short-circuit on the remote end as having already been |
4110 | + done). |
4111 | + * Six more rows newer than SyncPoint1, so a third of the rows are |
4112 | + sent by node 1, another third by node 2, remaining third by node |
4113 | + 3. |
4114 | + * SyncPoint1 is set as 12 (the newest ROWID known). |
4115 | + * SyncPoint2 is set as 6 (the newest "all updates" ROWID). |
4116 | + |
4117 | + In this way, under normal circumstances each node sends its share of |
4118 | + updates each run and just sends a batch of older updates to ensure nothing |
4119 | + was missed. |
4120 | + |
4121 | + :param conf: The dict of configuration values from the [container-sync] |
4122 | + section of the container-server.conf |
4123 | + :param container_ring: If None, the <swift_dir>/container.ring.gz will be |
4124 | + loaded. This is overridden by unit tests. |
4125 | + :param object_ring: If None, the <swift_dir>/object.ring.gz will be loaded. |
4126 | + This is overridden by unit tests. |
4127 | + """ |
4128 | + |
4129 | + def __init__(self, conf, container_ring=None, object_ring=None): |
4130 | + #: The dict of configuration values from the [container-sync] section |
4131 | + #: of the container-server.conf. |
4132 | + self.conf = conf |
4133 | + #: Logger to use for container-sync log lines. |
4134 | + self.logger = get_logger(conf, log_route='container-sync') |
4135 | + #: Path to the local device mount points. |
4136 | + self.devices = conf.get('devices', '/srv/node') |
4137 | + #: Indicates whether mount points should be verified as actual mount |
4138 | + #: points (normally true, false for tests and SAIO). |
4139 | + self.mount_check = \ |
4140 | + conf.get('mount_check', 'true').lower() in TRUE_VALUES |
4141 | + #: Minimum time between full scans. This is to keep the daemon from |
4142 | + #: running wild on near empty systems. |
4143 | + self.interval = int(conf.get('interval', 300)) |
4144 | + #: Maximum amount of time to spend syncing a container before moving on |
4145 | + #: to the next one. If a container sync hasn't finished in this time, |
4146 | + #: it'll just be resumed next scan. |
4147 | + self.container_time = int(conf.get('container_time', 60)) |
4148 | + #: The list of hosts we're allowed to send syncs to. |
4149 | + self.allowed_sync_hosts = [h.strip() |
4150 | + for h in conf.get('allowed_sync_hosts', '127.0.0.1').split(',') |
4151 | + if h.strip()] |
4152 | + #: Number of containers with sync turned on that were successfully |
4153 | + #: synced. |
4154 | + self.container_syncs = 0 |
4155 | + #: Number of successful DELETEs triggered. |
4156 | + self.container_deletes = 0 |
4157 | + #: Number of successful PUTs triggered. |
4158 | + self.container_puts = 0 |
4159 | + #: Number of containers that didn't have sync turned on. |
4160 | + self.container_skips = 0 |
4161 | + #: Number of containers that had a failure of some type. |
4162 | + self.container_failures = 0 |
4163 | + #: Time of last stats report. |
4164 | + self.reported = time.time() |
4165 | + swift_dir = conf.get('swift_dir', '/etc/swift') |
4166 | + #: swift.common.ring.Ring for locating containers. |
4167 | + self.container_ring = container_ring or \ |
4168 | + Ring(os.path.join(swift_dir, 'container.ring.gz')) |
4169 | + #: swift.common.ring.Ring for locating objects. |
4170 | + self.object_ring = object_ring or \ |
4171 | + Ring(os.path.join(swift_dir, 'object.ring.gz')) |
4172 | + self._myips = whataremyips() |
4173 | + self._myport = int(conf.get('bind_port', 6001)) |
4174 | + |
4175 | + def run_forever(self): |
4176 | + """ |
4177 | + Runs container sync scans until stopped. |
4178 | + """ |
4179 | + time.sleep(random.random() * self.interval) |
4180 | + while True: |
4181 | + begin = time.time() |
4182 | + all_locs = audit_location_generator(self.devices, |
4183 | + container_server.DATADIR, |
4184 | + mount_check=self.mount_check, |
4185 | + logger=self.logger) |
4186 | + for path, device, partition in all_locs: |
4187 | + self.container_sync(path) |
4188 | + if time.time() - self.reported >= 3600: # once an hour |
4189 | + self.report() |
4190 | + elapsed = time.time() - begin |
4191 | + if elapsed < self.interval: |
4192 | + time.sleep(self.interval - elapsed) |
4193 | + |
4194 | + def run_once(self): |
4195 | + """ |
4196 | + Runs a single container sync scan. |
4197 | + """ |
4198 | + self.logger.info(_('Begin container sync "once" mode')) |
4199 | + begin = time.time() |
4200 | + all_locs = audit_location_generator(self.devices, |
4201 | + container_server.DATADIR, |
4202 | + mount_check=self.mount_check, |
4203 | + logger=self.logger) |
4204 | + for path, device, partition in all_locs: |
4205 | + self.container_sync(path) |
4206 | + if time.time() - self.reported >= 3600: # once an hour |
4207 | + self.report() |
4208 | + self.report() |
4209 | + elapsed = time.time() - begin |
4210 | + self.logger.info( |
4211 | + _('Container sync "once" mode completed: %.02fs'), elapsed) |
4212 | + |
4213 | + def report(self): |
4214 | + """ |
4215 | + Writes a report of the stats to the logger and resets the stats for the |
4216 | + next report. |
4217 | + """ |
4218 | + self.logger.info( |
4219 | + _('Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s ' |
4220 | + 'puts], %(skip)s skipped, %(fail)s failed'), |
4221 | + {'time': time.ctime(self.reported), |
4222 | + 'sync': self.container_syncs, |
4223 | + 'delete': self.container_deletes, |
4224 | + 'put': self.container_puts, |
4225 | + 'skip': self.container_skips, |
4226 | + 'fail': self.container_failures}) |
4227 | + self.reported = time.time() |
4228 | + self.container_syncs = 0 |
4229 | + self.container_deletes = 0 |
4230 | + self.container_puts = 0 |
4231 | + self.container_skips = 0 |
4232 | + self.container_failures = 0 |
4233 | + |
4234 | + def container_sync(self, path): |
4235 | + """ |
4236 | + Checks the given path for a container database, determines if syncing |
4237 | + is turned on for that database and, if so, sends any updates to the |
4238 | + other container. |
4239 | + |
4240 | + :param path: the path to a container db |
4241 | + """ |
4242 | + try: |
4243 | + if not path.endswith('.db'): |
4244 | + return |
4245 | + broker = ContainerBroker(path) |
4246 | + info = broker.get_info() |
4247 | + x, nodes = self.container_ring.get_nodes(info['account'], |
4248 | + info['container']) |
4249 | + for ordinal, node in enumerate(nodes): |
4250 | + if node['ip'] in self._myips and node['port'] == self._myport: |
4251 | + break |
4252 | + else: |
4253 | + return |
4254 | + if not broker.is_deleted(): |
4255 | + sync_to = None |
4256 | + sync_key = None |
4257 | + sync_point1 = info['x_container_sync_point1'] |
4258 | + sync_point2 = info['x_container_sync_point2'] |
4259 | + for key, (value, timestamp) in broker.metadata.iteritems(): |
4260 | + if key.lower() == 'x-container-sync-to': |
4261 | + sync_to = value |
4262 | + elif key.lower() == 'x-container-sync-key': |
4263 | + sync_key = value |
4264 | + if not sync_to or not sync_key: |
4265 | + self.container_skips += 1 |
4266 | + return |
4267 | + sync_to = sync_to.rstrip('/') |
4268 | + err = validate_sync_to(sync_to, self.allowed_sync_hosts) |
4269 | + if err: |
4270 | + self.logger.info( |
4271 | + _('ERROR %(db_file)s: %(validate_sync_to_err)s'), |
4272 | + {'db_file': broker.db_file, |
4273 | + 'validate_sync_to_err': err}) |
4274 | + self.container_failures += 1 |
4275 | + return |
4276 | + stop_at = time.time() + self.container_time |
4277 | + while time.time() < stop_at and sync_point2 < sync_point1: |
4278 | + rows = broker.get_items_since(sync_point2, 1) |
4279 | + if not rows: |
4280 | + break |
4281 | + row = rows[0] |
4282 | + if row['ROWID'] >= sync_point1: |
4283 | + break |
4284 | + key = hash_path(info['account'], info['container'], |
4285 | + row['name'], raw_digest=True) |
4286 | + # This node will only initially sync out one third of the |
4287 | + # objects (if 3 replicas, 1/4 if 4, etc.). This section |
4288 | + # will attempt to sync previously skipped rows in case the |
4289 | + # other nodes didn't succeed. |
4290 | + if unpack_from('>I', key)[0] % \ |
4291 | + self.container_ring.replica_count != ordinal: |
4292 | + if not self.container_sync_row(row, sync_to, sync_key, |
4293 | + broker, info): |
4294 | + return |
4295 | + sync_point2 = row['ROWID'] |
4296 | + broker.set_x_container_sync_points(None, sync_point2) |
4297 | + while time.time() < stop_at: |
4298 | + rows = broker.get_items_since(sync_point1, 1) |
4299 | + if not rows: |
4300 | + break |
4301 | + row = rows[0] |
4302 | + key = hash_path(info['account'], info['container'], |
4303 | + row['name'], raw_digest=True) |
4304 | + # This node will only initially sync out one third of the |
4305 | + # objects (if 3 replicas, 1/4 if 4, etc.). It'll come back |
4306 | + # around to the section above and attempt to sync |
4307 | + # previously skipped rows in case the other nodes didn't |
4308 | + # succeed. |
4309 | + if unpack_from('>I', key)[0] % \ |
4310 | + self.container_ring.replica_count == ordinal: |
4311 | + if not self.container_sync_row(row, sync_to, sync_key, |
4312 | + broker, info): |
4313 | + return |
4314 | + sync_point1 = row['ROWID'] |
4315 | + broker.set_x_container_sync_points(sync_point1, None) |
4316 | + self.container_syncs += 1 |
4317 | + except Exception: |
4318 | + self.container_failures += 1 |
4319 | + self.logger.exception(_('ERROR Syncing %s'), (broker.db_file)) |
4320 | + |
4321 | + def container_sync_row(self, row, sync_to, sync_key, broker, info): |
4322 | + """ |
4323 | + Sends the update that the row indicates to the sync_to container. |
4324 | + |
4325 | + :param row: The updated row in the local database triggering the sync |
4326 | + update. |
4327 | + :param sync_to: The URL to the remote container. |
4328 | + :param sync_key: The X-Container-Sync-Key to use when sending requests |
4329 | + to the other container. |
4330 | + :param broker: The local container database broker. |
4331 | + :param info: The get_info result from the local container database |
4332 | + broker. |
4333 | + :returns: True on success |
4334 | + """ |
4335 | + try: |
4336 | + if row['deleted']: |
4337 | + try: |
4338 | + client.delete_object(sync_to, name=row['name'], |
4339 | + headers={'X-Timestamp': row['created_at'], |
4340 | + 'X-Container-Sync-Key': sync_key}) |
4341 | + except client.ClientException, err: |
4342 | + if err.http_status != 404: |
4343 | + raise |
4344 | + self.container_deletes += 1 |
4345 | + else: |
4346 | + part, nodes = self.object_ring.get_nodes( |
4347 | + info['account'], info['container'], |
4348 | + row['name']) |
4349 | + random.shuffle(nodes) |
4350 | + exc = None |
4351 | + for node in nodes: |
4352 | + try: |
4353 | + headers, body = \ |
4354 | + direct_client.direct_get_object(node, part, |
4355 | + info['account'], info['container'], |
4356 | + row['name'], resp_chunk_size=65536) |
4357 | + break |
4358 | + except client.ClientException, err: |
4359 | + exc = err |
4360 | + else: |
4361 | + if exc: |
4362 | + raise exc |
4363 | + raise Exception(_('Unknown exception trying to GET: ' |
4364 | + '%(node)r %(account)r %(container)r %(object)r') % |
4365 | + {'node': node, 'part': part, |
4366 | + 'account': info['account'], |
4367 | + 'container': info['container'], |
4368 | + 'object': row['name']}) |
4369 | + for key in ('date', 'last-modified'): |
4370 | + if key in headers: |
4371 | + del headers[key] |
4372 | + if 'etag' in headers: |
4373 | + headers['etag'] = headers['etag'].strip('"') |
4374 | + headers['X-Timestamp'] = row['created_at'] |
4375 | + headers['X-Container-Sync-Key'] = sync_key |
4376 | + client.put_object(sync_to, name=row['name'], |
4377 | + headers=headers, |
4378 | + contents=_Iter2FileLikeObject(body)) |
4379 | + self.container_puts += 1 |
4380 | + except client.ClientException, err: |
4381 | + if err.http_status == 401: |
4382 | + self.logger.info(_('Unauth %(sync_from)r ' |
4383 | + '=> %(sync_to)r key: %(sync_key)r'), |
4384 | + {'sync_from': '%s/%s' % |
4385 | + (client.quote(info['account']), |
4386 | + client.quote(info['container'])), |
4387 | + 'sync_to': sync_to, |
4388 | + 'sync_key': sync_key}) |
4389 | + elif err.http_status == 404: |
4390 | + self.logger.info(_('Not found %(sync_from)r ' |
4391 | + '=> %(sync_to)r key: %(sync_key)r'), |
4392 | + {'sync_from': '%s/%s' % |
4393 | + (client.quote(info['account']), |
4394 | + client.quote(info['container'])), |
4395 | + 'sync_to': sync_to, |
4396 | + 'sync_key': sync_key}) |
4397 | + else: |
4398 | + self.logger.exception( |
4399 | + _('ERROR Syncing %(db_file)s %(row)s'), |
4400 | + {'db_file': broker.db_file, 'row': row}) |
4401 | + self.container_failures += 1 |
4402 | + return False |
4403 | + except Exception: |
4404 | + self.logger.exception( |
4405 | + _('ERROR Syncing %(db_file)s %(row)s'), |
4406 | + {'db_file': broker.db_file, 'row': row}) |
4407 | + self.container_failures += 1 |
4408 | + return False |
4409 | + return True |
4410 | |
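The two-sync-point scheme described in sync.py's docstring above can be sketched as a small simulation. This is illustrative only: real rows are assigned to nodes by an MD5 hash of the object path (via `hash_path`), not by ROWID, and `run_sync` is a hypothetical helper, not part of Swift.

```python
def run_sync(rowids, sync_point1, sync_point2, ordinal, replica_count=3):
    """One scan of a container DB by the node at position `ordinal`.

    Simplified: rows are assigned to nodes by ROWID % replica_count
    instead of a hash of the object name. Returns the updated sync
    points and the list of ROWIDs this node sent.
    """
    sent = []
    # Catch-up pass: rows between the two sync points were another
    # node's responsibility last run; resend them in case it failed.
    for rowid in rowids:
        if sync_point2 < rowid <= sync_point1:
            if rowid % replica_count != ordinal:
                sent.append(rowid)
            sync_point2 = rowid
    # Fresh pass: rows newer than sync_point1 are split so each node
    # sends roughly 1/replica_count of them.
    for rowid in rowids:
        if rowid > sync_point1:
            if rowid % replica_count == ordinal:
                sent.append(rowid)
            sync_point1 = rowid
    return sync_point1, sync_point2, sent

# Replaying the docstring's example for node 1 (ordinal 0):
sp1, sp2, sent = run_sync(range(1, 7), -1, -1, ordinal=0)       # 6 rows
sp1, sp2, resent = run_sync(range(1, 13), sp1, sp2, ordinal=0)  # 12 rows
```

After the second run `sp1` is 12 and `sp2` is 6, matching the worked example, and across the three ordinals every fresh row is sent by exactly one node.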
4411 | === modified file 'swift/obj/server.py' |
4412 | --- swift/obj/server.py 2011-05-09 20:21:34 +0000 |
4413 | +++ swift/obj/server.py 2011-06-03 00:13:27 +0000 |
4414 | @@ -500,6 +500,7 @@ |
4415 | return error_response |
4416 | file = DiskFile(self.devices, device, partition, account, container, |
4417 | obj, self.logger, disk_chunk_size=self.disk_chunk_size) |
4418 | + orig_timestamp = file.metadata.get('X-Timestamp') |
4419 | upload_expiration = time.time() + self.max_upload_time |
4420 | etag = md5() |
4421 | upload_size = 0 |
4422 | @@ -544,13 +545,16 @@ |
4423 | metadata[header_caps] = request.headers[header_key] |
4424 | file.put(fd, tmppath, metadata) |
4425 | file.unlinkold(metadata['X-Timestamp']) |
4426 | - self.container_update('PUT', account, container, obj, request.headers, |
4427 | - {'x-size': file.metadata['Content-Length'], |
4428 | - 'x-content-type': file.metadata['Content-Type'], |
4429 | - 'x-timestamp': file.metadata['X-Timestamp'], |
4430 | - 'x-etag': file.metadata['ETag'], |
4431 | - 'x-trans-id': request.headers.get('x-trans-id', '-')}, |
4432 | - device) |
4433 | + if not orig_timestamp or \ |
4434 | + orig_timestamp < request.headers['x-timestamp']: |
4435 | + self.container_update('PUT', account, container, obj, |
4436 | + request.headers, |
4437 | + {'x-size': file.metadata['Content-Length'], |
4438 | + 'x-content-type': file.metadata['Content-Type'], |
4439 | + 'x-timestamp': file.metadata['X-Timestamp'], |
4440 | + 'x-etag': file.metadata['ETag'], |
4441 | + 'x-trans-id': request.headers.get('x-trans-id', '-')}, |
4442 | + device) |
4443 | resp = HTTPCreated(request=request, etag=etag) |
4444 | return resp |
4445 | |
4446 | @@ -654,6 +658,8 @@ |
4447 | response.headers[key] = value |
4448 | response.etag = file.metadata['ETag'] |
4449 | response.last_modified = float(file.metadata['X-Timestamp']) |
4450 | + # Needed for container sync feature |
4451 | + response.headers['X-Timestamp'] = file.metadata['X-Timestamp'] |
4452 | response.content_length = file_size |
4453 | if 'Content-Encoding' in file.metadata: |
4454 | response.content_encoding = file.metadata['Content-Encoding'] |
4455 | @@ -676,6 +682,7 @@ |
4456 | response_class = HTTPNoContent |
4457 | file = DiskFile(self.devices, device, partition, account, container, |
4458 | obj, self.logger, disk_chunk_size=self.disk_chunk_size) |
4459 | + orig_timestamp = file.metadata.get('X-Timestamp') |
4460 | if file.is_deleted(): |
4461 | response_class = HTTPNotFound |
4462 | metadata = { |
4463 | @@ -684,10 +691,12 @@ |
4464 | with file.mkstemp() as (fd, tmppath): |
4465 | file.put(fd, tmppath, metadata, extension='.ts') |
4466 | file.unlinkold(metadata['X-Timestamp']) |
4467 | - self.container_update('DELETE', account, container, obj, |
4468 | - request.headers, {'x-timestamp': metadata['X-Timestamp'], |
4469 | - 'x-trans-id': request.headers.get('x-trans-id', '-')}, |
4470 | - device) |
4471 | + if not orig_timestamp or \ |
4472 | + orig_timestamp < request.headers['x-timestamp']: |
4473 | + self.container_update('DELETE', account, container, obj, |
4474 | + request.headers, {'x-timestamp': metadata['X-Timestamp'], |
4475 | + 'x-trans-id': request.headers.get('x-trans-id', '-')}, |
4476 | + device) |
4477 | resp = response_class(request=request) |
4478 | return resp |
4479 | |
4480 | |
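The guard added in both obj/server.py hunks above follows one rule: only notify the container server about a PUT or DELETE whose timestamp is newer than what the object server already had on disk, so container-sync replays of older operations don't churn the container listing. A minimal sketch of that comparison (`should_update_container` is a hypothetical helper name; the format string mirrors Swift's fixed-width `normalize_timestamp`):

```python
def normalize_timestamp(timestamp):
    # Zero-padded to a fixed 16-character width, so lexicographic
    # string comparison agrees with numeric comparison.
    return '%016.5f' % float(timestamp)

def should_update_container(orig_timestamp, new_timestamp):
    # Fire the container update only when there was no prior file or
    # the incoming operation is strictly newer.
    return not orig_timestamp or orig_timestamp < new_timestamp
```

Because the timestamps are normalized strings, the `<` comparison in the diff above is safe without parsing them back to floats.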
4481 | === modified file 'swift/proxy/server.py' |
4482 | --- swift/proxy/server.py 2011-05-12 15:57:35 +0000 |
4483 | +++ swift/proxy/server.py 2011-06-03 00:13:27 +0000 |
4484 | @@ -33,7 +33,7 @@ |
4485 | |
4486 | from eventlet import sleep, GreenPile, Queue, TimeoutError |
4487 | from eventlet.timeout import Timeout |
4488 | -from webob.exc import HTTPBadRequest, HTTPMethodNotAllowed, \ |
4489 | +from webob.exc import HTTPAccepted, HTTPBadRequest, HTTPMethodNotAllowed, \ |
4490 | HTTPNotFound, HTTPPreconditionFailed, \ |
4491 | HTTPRequestTimeout, HTTPServiceUnavailable, \ |
4492 | HTTPUnprocessableEntity, HTTPRequestEntityTooLarge, HTTPServerError, \ |
4493 | @@ -42,7 +42,7 @@ |
4494 | |
4495 | from swift.common.ring import Ring |
4496 | from swift.common.utils import get_logger, normalize_timestamp, split_path, \ |
4497 | - cache_from_env, ContextPool |
4498 | + cache_from_env, ContextPool, get_remote_client |
4499 | from swift.common.bufferedhttp import http_connect |
4500 | from swift.common.constraints import check_metadata, check_object_creation, \ |
4501 | check_utf8, CONTAINER_LISTING_LIMIT, MAX_ACCOUNT_NAME_LENGTH, \ |
4502 | @@ -406,8 +406,8 @@ |
4503 | :param account: account name for the container |
4504 | :param container: container name to look up |
4505 | :returns: tuple of (container partition, container nodes, container |
4506 | - read acl, container write acl) or (None, None, None, None) if |
4507 | - the container does not exist |
4508 | + read acl, container write acl, container sync key) or (None, |
4509 | + None, None, None, None) if the container does not exist |
4510 | """ |
4511 | partition, nodes = self.app.container_ring.get_nodes( |
4512 | account, container) |
4513 | @@ -419,15 +419,17 @@ |
4514 | status = cache_value['status'] |
4515 | read_acl = cache_value['read_acl'] |
4516 | write_acl = cache_value['write_acl'] |
4517 | + sync_key = cache_value.get('sync_key') |
4518 | if status == 200: |
4519 | - return partition, nodes, read_acl, write_acl |
4520 | + return partition, nodes, read_acl, write_acl, sync_key |
4521 | elif status == 404: |
4522 | - return None, None, None, None |
4523 | + return None, None, None, None, None |
4524 | if not self.account_info(account)[1]: |
4525 | - return None, None, None, None |
4526 | + return None, None, None, None, None |
4527 | result_code = 0 |
4528 | read_acl = None |
4529 | write_acl = None |
4530 | + sync_key = None |
4531 | container_size = None |
4532 | attempts_left = self.app.container_ring.replica_count |
4533 | headers = {'x-trans-id': self.trans_id} |
4534 | @@ -443,6 +445,7 @@ |
4535 | result_code = 200 |
4536 | read_acl = resp.getheader('x-container-read') |
4537 | write_acl = resp.getheader('x-container-write') |
4538 | + sync_key = resp.getheader('x-container-sync-key') |
4539 | container_size = \ |
4540 | resp.getheader('X-Container-Object-Count') |
4541 | break |
4542 | @@ -471,11 +474,12 @@ |
4543 | {'status': result_code, |
4544 | 'read_acl': read_acl, |
4545 | 'write_acl': write_acl, |
4546 | + 'sync_key': sync_key, |
4547 | 'container_size': container_size}, |
4548 | timeout=cache_timeout) |
4549 | if result_code == 200: |
4550 | - return partition, nodes, read_acl, write_acl |
4551 | - return None, None, None, None |
4552 | + return partition, nodes, read_acl, write_acl, sync_key |
4553 | + return None, None, None, None, None |
4554 | |
4555 | def iter_nodes(self, partition, nodes, ring): |
4556 | """ |
4557 | @@ -645,6 +649,9 @@ |
4558 | raise |
4559 | res.app_iter = file_iter() |
4560 | update_headers(res, source.getheaders()) |
4561 | + # Used by container sync feature |
4562 | + res.environ['swift_x_timestamp'] = \ |
4563 | + source.getheader('x-timestamp') |
4564 | update_headers(res, {'accept-ranges': 'bytes'}) |
4565 | res.status = source.status |
4566 | res.content_length = source.getheader('Content-Length') |
4567 | @@ -655,6 +662,9 @@ |
4568 | elif 200 <= source.status <= 399: |
4569 | res = status_map[source.status](request=req) |
4570 | update_headers(res, source.getheaders()) |
4571 | + # Used by container sync feature |
4572 | + res.environ['swift_x_timestamp'] = \ |
4573 | + source.getheader('x-timestamp') |
4574 | update_headers(res, {'accept-ranges': 'bytes'}) |
4575 | if req.method == 'HEAD': |
4576 | res.content_length = source.getheader('Content-Length') |
4577 | @@ -853,7 +863,7 @@ |
4578 | error_response = check_metadata(req, 'object') |
4579 | if error_response: |
4580 | return error_response |
4581 | - container_partition, containers, _junk, req.acl = \ |
4582 | + container_partition, containers, _junk, req.acl, _junk = \ |
4583 | self.container_info(self.account_name, self.container_name) |
4584 | if 'swift.authorize' in req.environ: |
4585 | aresp = req.environ['swift.authorize'](req) |
4586 | @@ -910,7 +920,8 @@ |
4587 | @delay_denial |
4588 | def PUT(self, req): |
4589 | """HTTP PUT request handler.""" |
4590 | - container_partition, containers, _junk, req.acl = \ |
4591 | + (container_partition, containers, _junk, req.acl, |
4592 | + req.environ['swift_sync_key']) = \ |
4593 | self.container_info(self.account_name, self.container_name) |
4594 | if 'swift.authorize' in req.environ: |
4595 | aresp = req.environ['swift.authorize'](req) |
4596 | @@ -920,7 +931,27 @@ |
4597 | return HTTPNotFound(request=req) |
4598 | partition, nodes = self.app.object_ring.get_nodes( |
4599 | self.account_name, self.container_name, self.object_name) |
4600 | - req.headers['X-Timestamp'] = normalize_timestamp(time.time()) |
4601 | + # Used by container sync feature |
4602 | + if 'x-timestamp' in req.headers: |
4603 | + try: |
4604 | + req.headers['X-Timestamp'] = \ |
4605 | + normalize_timestamp(float(req.headers['x-timestamp'])) |
4606 | + # For container sync PUTs, do a HEAD to see if we can |
4607 | + # short-circuit |
4608 | + hreq = Request.blank(req.path_info, |
4609 | + environ={'REQUEST_METHOD': 'HEAD'}) |
4610 | + self.GETorHEAD_base(hreq, _('Object'), partition, nodes, |
4611 | + hreq.path_info, self.app.object_ring.replica_count) |
4612 | + if 'swift_x_timestamp' in hreq.environ and \ |
4613 | + float(hreq.environ['swift_x_timestamp']) >= \ |
4614 | + float(req.headers['x-timestamp']): |
4615 | + return HTTPAccepted(request=req) |
4616 | + except ValueError: |
4617 | + return HTTPBadRequest(request=req, content_type='text/plain', |
4618 | + body='X-Timestamp should be a UNIX timestamp float value; ' |
4619 | + 'was %r' % req.headers['x-timestamp']) |
4620 | + else: |
4621 | + req.headers['X-Timestamp'] = normalize_timestamp(time.time()) |
4622 | # Sometimes the 'content-type' header exists, but is set to None. |
4623 | content_type_manually_set = True |
4624 | if not req.headers.get('content-type'): |
4625 | @@ -1093,7 +1124,8 @@ |
4626 | @delay_denial |
4627 | def DELETE(self, req): |
4628 | """HTTP DELETE request handler.""" |
4629 | - container_partition, containers, _junk, req.acl = \ |
4630 | + (container_partition, containers, _junk, req.acl, |
4631 | + req.environ['swift_sync_key']) = \ |
4632 | self.container_info(self.account_name, self.container_name) |
4633 | if 'swift.authorize' in req.environ: |
4634 | aresp = req.environ['swift.authorize'](req) |
4635 | @@ -1103,7 +1135,17 @@ |
4636 | return HTTPNotFound(request=req) |
4637 | partition, nodes = self.app.object_ring.get_nodes( |
4638 | self.account_name, self.container_name, self.object_name) |
4639 | - req.headers['X-Timestamp'] = normalize_timestamp(time.time()) |
4640 | + # Used by container sync feature |
4641 | + if 'x-timestamp' in req.headers: |
4642 | + try: |
4643 | + req.headers['X-Timestamp'] = \ |
4644 | + normalize_timestamp(float(req.headers['x-timestamp'])) |
4645 | + except ValueError: |
4646 | + return HTTPBadRequest(request=req, content_type='text/plain', |
4647 | + body='X-Timestamp should be a UNIX timestamp float value; ' |
4648 | + 'was %r' % req.headers['x-timestamp']) |
4649 | + else: |
4650 | + req.headers['X-Timestamp'] = normalize_timestamp(time.time()) |
4651 | headers = [] |
4652 | for container in containers: |
4653 | nheaders = dict(req.headers.iteritems()) |
4654 | @@ -1149,7 +1191,8 @@ |
4655 | server_type = _('Container') |
4656 | |
4657 | # Ensure these are all lowercase |
4658 | - pass_through_headers = ['x-container-read', 'x-container-write'] |
4659 | + pass_through_headers = ['x-container-read', 'x-container-write', |
4660 | + 'x-container-sync-key', 'x-container-sync-to'] |
4661 | |
4662 | def __init__(self, app, account_name, container_name, **kwargs): |
4663 | Controller.__init__(self, app) |
4664 | @@ -1185,6 +1228,7 @@ |
4665 | {'status': resp.status_int, |
4666 | 'read_acl': resp.headers.get('x-container-read'), |
4667 | 'write_acl': resp.headers.get('x-container-write'), |
4668 | + 'sync_key': resp.headers.get('x-container-sync-key'), |
4669 | 'container_size': resp.headers.get('x-container-object-count')}, |
4670 | timeout=self.app.recheck_container_existence) |
4671 | |
4672 | @@ -1193,6 +1237,11 @@ |
4673 | aresp = req.environ['swift.authorize'](req) |
4674 | if aresp: |
4675 | return aresp |
4676 | + if not req.environ.get('swift_owner', False): |
4677 | + for key in ('x-container-read', 'x-container-write', |
4678 | + 'x-container-sync-key', 'x-container-sync-to'): |
4679 | + if key in resp.headers: |
4680 | + del resp.headers[key] |
4681 | return resp |
4682 | |
4683 | @public |
4684 | @@ -1548,13 +1597,7 @@ |
4685 | the_request = quote(unquote(req.path)) |
4686 | if req.query_string: |
4687 | the_request = the_request + '?' + req.query_string |
4688 | - # remote user for zeus |
4689 | - client = req.headers.get('x-cluster-client-ip') |
4690 | - if not client and 'x-forwarded-for' in req.headers: |
4691 | - # remote user for other lbs |
4692 | - client = req.headers['x-forwarded-for'].split(',')[0].strip() |
4693 | - if not client: |
4694 | - client = req.remote_addr |
4695 | + client = get_remote_client(req) |
4696 | logged_headers = None |
4697 | if self.log_headers: |
4698 | logged_headers = '\n'.join('%s: %s' % (k, v) |
4699 | |
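The proxy PUT and DELETE changes above share one pattern: honor a client-supplied `X-Timestamp` (container sync forwards the original row's timestamp so replicas converge on the same ordering) and fall back to the current time otherwise, rejecting unparseable values. A rough sketch of that selection, assuming Swift's fixed-width timestamp format; `effective_put_timestamp` is an illustrative name, and the real handler returns an `HTTPBadRequest` where this raises `ValueError`:

```python
import time

def effective_put_timestamp(headers):
    """Pick the X-Timestamp the proxy will forward to storage nodes."""
    value = headers.get('x-timestamp')
    if value is None:
        # Normal client request: stamp it with the current time.
        return '%016.5f' % time.time()
    # Container-sync request: reuse the supplied timestamp. float()
    # raises ValueError for junk, which the proxy maps to a 400.
    return '%016.5f' % float(value)
```

The PUT path additionally does a HEAD first so a replayed object with an equal-or-newer timestamp already on the cluster can short-circuit with `202 Accepted`.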
4700 | === modified file 'test/probe/common.py' |
4701 | --- test/probe/common.py 2011-03-14 02:56:37 +0000 |
4702 | +++ test/probe/common.py 2011-06-03 00:13:27 +0000 |
4703 | @@ -13,29 +13,16 @@ |
4704 | # See the License for the specific language governing permissions and |
4705 | # limitations under the License. |
4706 | |
4707 | -from os import environ, kill |
4708 | +from os import kill |
4709 | from signal import SIGTERM |
4710 | from subprocess import call, Popen |
4711 | from time import sleep |
4712 | -from ConfigParser import ConfigParser |
4713 | |
4714 | from swift.common.bufferedhttp import http_connect_raw as http_connect |
4715 | from swift.common.client import get_auth |
4716 | from swift.common.ring import Ring |
4717 | |
4718 | |
4719 | -SUPER_ADMIN_KEY = None |
4720 | - |
4721 | -c = ConfigParser() |
4722 | -PROXY_SERVER_CONF_FILE = environ.get('SWIFT_PROXY_SERVER_CONF_FILE', |
4723 | - '/etc/swift/proxy-server.conf') |
4724 | -if c.read(PROXY_SERVER_CONF_FILE): |
4725 | - conf = dict(c.items('filter:swauth')) |
4726 | - SUPER_ADMIN_KEY = conf.get('super_admin_key', 'swauthkey') |
4727 | -else: |
4728 | - exit('Unable to read config file: %s' % PROXY_SERVER_CONF_FILE) |
4729 | - |
4730 | - |
4731 | def kill_pids(pids): |
4732 | for pid in pids.values(): |
4733 | try: |
4734 | @@ -48,8 +35,6 @@ |
4735 | call(['resetswift']) |
4736 | pids = {} |
4737 | try: |
4738 | - pids['proxy'] = Popen(['swift-proxy-server', |
4739 | - '/etc/swift/proxy-server.conf']).pid |
4740 | port2server = {} |
4741 | for s, p in (('account', 6002), ('container', 6001), ('object', 6000)): |
4742 | for n in xrange(1, 5): |
4743 | @@ -57,14 +42,27 @@ |
4744 | Popen(['swift-%s-server' % s, |
4745 | '/etc/swift/%s-server/%d.conf' % (s, n)]).pid |
4746 | port2server[p + (n * 10)] = '%s%d' % (s, n) |
4747 | + pids['proxy'] = Popen(['swift-proxy-server', |
4748 | + '/etc/swift/proxy-server.conf']).pid |
4749 | account_ring = Ring('/etc/swift/account.ring.gz') |
4750 | container_ring = Ring('/etc/swift/container.ring.gz') |
4751 | object_ring = Ring('/etc/swift/object.ring.gz') |
4752 | - sleep(5) |
4753 | - call(['recreateaccounts']) |
4754 | - url, token = get_auth('http://127.0.0.1:8080/auth/v1.0', |
4755 | - 'test:tester', 'testing') |
4756 | - account = url.split('/')[-1] |
4757 | + attempt = 0 |
4758 | + while True: |
4759 | + attempt += 1 |
4760 | + try: |
4761 | + url, token = get_auth('http://127.0.0.1:8080/auth/v1.0', |
4762 | + 'test:tester', 'testing') |
4763 | + account = url.split('/')[-1] |
4764 | + break |
4765 | + except Exception, err: |
4766 | + if attempt > 9: |
4767 | + print err |
4768 | + print 'Giving up after %s retries.' % attempt |
4769 | + raise err |
4770 | + print err |
4771 | + print 'Retrying in 2 seconds...' |
4772 | + sleep(2) |
4773 | except BaseException, err: |
4774 | kill_pids(pids) |
4775 | raise err |
4776 | |
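The loop added to the probe setup above replaces a fixed `sleep(5)` with polling: keep attempting auth until the proxy is actually up, giving up after a bounded number of tries. The generic shape of that pattern, as a sketch (the `retry` helper is hypothetical, not part of Swift):

```python
import time

def retry(func, attempts=10, delay=2):
    """Call func until it succeeds, sleeping `delay` seconds between
    tries and re-raising the last error after `attempts` failures."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception:
            if attempt >= attempts:
                raise
            time.sleep(delay)

# Example: a call that fails twice before succeeding.
state = {'calls': 0}

def flaky():
    state['calls'] += 1
    if state['calls'] < 3:
        raise IOError('not ready yet')
    return 'ready'

result = retry(flaky, attempts=5, delay=0)
```

Starting the proxy after the storage servers (as the reordered `Popen` calls above do) plus this polling makes the probe setup robust to slow service startup.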
4777 | === renamed file 'test/unit/common/middleware/test_swauth.py' => 'test/unit/common/middleware/test_tempauth.py' |
4778 | --- test/unit/common/middleware/test_swauth.py 2011-04-01 21:17:47 +0000 |
4779 | +++ test/unit/common/middleware/test_tempauth.py 2011-06-03 00:13:27 +0000 |
4780 | @@ -1,4 +1,4 @@ |
4781 | -# Copyright (c) 2010 OpenStack, LLC. |
4782 | +# Copyright (c) 2011 OpenStack, LLC. |
4783 | # |
4784 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4785 | # you may not use this file except in compliance with the License. |
4786 | @@ -23,7 +23,7 @@ |
4787 | |
4788 | from webob import Request, Response |
4789 | |
4790 | -from swift.common.middleware import swauth as auth |
4791 | +from swift.common.middleware import tempauth as auth |
4792 | |
4793 | |
4794 | class FakeMemcache(object): |
4795 | @@ -56,15 +56,21 @@ |
4796 | |
4797 | class FakeApp(object): |
4798 | |
4799 | - def __init__(self, status_headers_body_iter=None): |
4800 | + def __init__(self, status_headers_body_iter=None, acl=None, sync_key=None): |
4801 | self.calls = 0 |
4802 | self.status_headers_body_iter = status_headers_body_iter |
4803 | if not self.status_headers_body_iter: |
4804 | self.status_headers_body_iter = iter([('404 Not Found', {}, '')]) |
4805 | + self.acl = acl |
4806 | + self.sync_key = sync_key |
4807 | |
4808 | def __call__(self, env, start_response): |
4809 | self.calls += 1 |
4810 | self.request = Request.blank('', environ=env) |
4811 | + if self.acl: |
4812 | + self.request.acl = self.acl |
4813 | + if self.sync_key: |
4814 | + self.request.environ['swift_sync_key'] = self.sync_key |
4815 | if 'swift.authorize' in env: |
4816 | resp = env['swift.authorize'](self.request) |
4817 | if resp: |
4818 | @@ -102,85 +108,50 @@ |
4819 | class TestAuth(unittest.TestCase): |
4820 | |
4821 | def setUp(self): |
4822 | - self.test_auth = \ |
4823 | - auth.filter_factory({'super_admin_key': 'supertest'})(FakeApp()) |
4824 | + self.test_auth = auth.filter_factory({})(FakeApp()) |
4825 | |
4826 | - def test_super_admin_key_required(self): |
4827 | - app = FakeApp() |
4828 | - exc = None |
4829 | - try: |
4830 | - auth.filter_factory({})(app) |
4831 | - except ValueError, err: |
4832 | - exc = err |
4833 | - self.assertEquals(str(exc), |
4834 | - 'No super_admin_key set in conf file! Exiting.') |
4835 | - auth.filter_factory({'super_admin_key': 'supertest'})(app) |
4836 | + def _make_request(self, path, **kwargs): |
4837 | + req = Request.blank(path, **kwargs) |
4838 | + req.environ['swift.cache'] = FakeMemcache() |
4839 | + return req |
4840 | |
4841 | def test_reseller_prefix_init(self): |
4842 | app = FakeApp() |
4843 | - ath = auth.filter_factory({'super_admin_key': 'supertest'})(app) |
4844 | + ath = auth.filter_factory({})(app) |
4845 | self.assertEquals(ath.reseller_prefix, 'AUTH_') |
4846 | - ath = auth.filter_factory({'super_admin_key': 'supertest', |
4847 | - 'reseller_prefix': 'TEST'})(app) |
4848 | + ath = auth.filter_factory({'reseller_prefix': 'TEST'})(app) |
4849 | self.assertEquals(ath.reseller_prefix, 'TEST_') |
4850 | - ath = auth.filter_factory({'super_admin_key': 'supertest', |
4851 | - 'reseller_prefix': 'TEST_'})(app) |
4852 | + ath = auth.filter_factory({'reseller_prefix': 'TEST_'})(app) |
4853 | self.assertEquals(ath.reseller_prefix, 'TEST_') |
4854 | |
4855 | def test_auth_prefix_init(self): |
4856 | app = FakeApp() |
4857 | - ath = auth.filter_factory({'super_admin_key': 'supertest'})(app) |
4858 | - self.assertEquals(ath.auth_prefix, '/auth/') |
4859 | - ath = auth.filter_factory({'super_admin_key': 'supertest', |
4860 | - 'auth_prefix': ''})(app) |
4861 | - self.assertEquals(ath.auth_prefix, '/auth/') |
4862 | - ath = auth.filter_factory({'super_admin_key': 'supertest', |
4863 | - 'auth_prefix': '/test/'})(app) |
4864 | - self.assertEquals(ath.auth_prefix, '/test/') |
4865 | - ath = auth.filter_factory({'super_admin_key': 'supertest', |
4866 | - 'auth_prefix': '/test'})(app) |
4867 | - self.assertEquals(ath.auth_prefix, '/test/') |
4868 | - ath = auth.filter_factory({'super_admin_key': 'supertest', |
4869 | - 'auth_prefix': 'test/'})(app) |
4870 | - self.assertEquals(ath.auth_prefix, '/test/') |
4871 | - ath = auth.filter_factory({'super_admin_key': 'supertest', |
4872 | - 'auth_prefix': 'test'})(app) |
4873 | - self.assertEquals(ath.auth_prefix, '/test/') |
4874 | - |
4875 | - def test_default_swift_cluster_init(self): |
4876 | - app = FakeApp() |
4877 | - self.assertRaises(Exception, auth.filter_factory({ |
4878 | - 'super_admin_key': 'supertest', |
4879 | - 'default_swift_cluster': 'local#badscheme://host/path'}), app) |
4880 | - ath = auth.filter_factory({'super_admin_key': 'supertest'})(app) |
4881 | - self.assertEquals(ath.default_swift_cluster, |
4882 | - 'local#http://127.0.0.1:8080/v1') |
4883 | - ath = auth.filter_factory({'super_admin_key': 'supertest', |
4884 | - 'default_swift_cluster': 'local#http://host/path'})(app) |
4885 | - self.assertEquals(ath.default_swift_cluster, |
4886 | - 'local#http://host/path') |
4887 | - ath = auth.filter_factory({'super_admin_key': 'supertest', |
4888 | - 'default_swift_cluster': 'local#https://host/path/'})(app) |
4889 | - self.assertEquals(ath.dsc_url, 'https://host/path') |
4890 | - self.assertEquals(ath.dsc_url2, 'https://host/path') |
4891 | - ath = auth.filter_factory({'super_admin_key': 'supertest', |
4892 | - 'default_swift_cluster': |
4893 | - 'local#https://host/path/#http://host2/path2/'})(app) |
4894 | - self.assertEquals(ath.dsc_url, 'https://host/path') |
4895 | - self.assertEquals(ath.dsc_url2, 'http://host2/path2') |
4896 | + ath = auth.filter_factory({})(app) |
4897 | + self.assertEquals(ath.auth_prefix, '/auth/') |
4898 | + ath = auth.filter_factory({'auth_prefix': ''})(app) |
4899 | + self.assertEquals(ath.auth_prefix, '/auth/') |
4900 | + ath = auth.filter_factory({'auth_prefix': '/test/'})(app) |
4901 | + self.assertEquals(ath.auth_prefix, '/test/') |
4902 | + ath = auth.filter_factory({'auth_prefix': '/test'})(app) |
4903 | + self.assertEquals(ath.auth_prefix, '/test/') |
4904 | + ath = auth.filter_factory({'auth_prefix': 'test/'})(app) |
4905 | + self.assertEquals(ath.auth_prefix, '/test/') |
4906 | + ath = auth.filter_factory({'auth_prefix': 'test'})(app) |
4907 | + self.assertEquals(ath.auth_prefix, '/test/') |
4908 | |
4909 | def test_top_level_ignore(self): |
4910 | - resp = Request.blank('/').get_response(self.test_auth) |
4911 | + resp = self._make_request('/').get_response(self.test_auth) |
4912 | self.assertEquals(resp.status_int, 404) |
4913 | |
4914 | def test_anon(self): |
4915 | - resp = Request.blank('/v1/AUTH_account').get_response(self.test_auth) |
4916 | + resp = \ |
4917 | + self._make_request('/v1/AUTH_account').get_response(self.test_auth) |
4918 | self.assertEquals(resp.status_int, 401) |
4919 | self.assertEquals(resp.environ['swift.authorize'], |
4920 | self.test_auth.authorize) |
4921 | |
4922 | def test_auth_deny_non_reseller_prefix(self): |
4923 | - resp = Request.blank('/v1/BLAH_account', |
4924 | + resp = self._make_request('/v1/BLAH_account', |
4925 | headers={'X-Auth-Token': 'BLAH_t'}).get_response(self.test_auth) |
4926 | self.assertEquals(resp.status_int, 401) |
4927 | self.assertEquals(resp.environ['swift.authorize'], |
4928 | @@ -188,7 +159,7 @@ |
4929 | |
4930 | def test_auth_deny_non_reseller_prefix_no_override(self): |
4931 | fake_authorize = lambda x: Response(status='500 Fake') |
4932 | - resp = Request.blank('/v1/BLAH_account', |
4933 | + resp = self._make_request('/v1/BLAH_account', |
4934 | headers={'X-Auth-Token': 'BLAH_t'}, |
4935 | environ={'swift.authorize': fake_authorize} |
4936 | ).get_response(self.test_auth) |
4937 | @@ -200,192 +171,74 @@ |
4938 | # outright but set up a denial swift.authorize and pass the request on |
4939 | # down the chain. |
4940 | local_app = FakeApp() |
4941 | - local_auth = auth.filter_factory({'super_admin_key': 'supertest', |
4942 | - 'reseller_prefix': ''})(local_app) |
4943 | - resp = Request.blank('/v1/account', |
4944 | + local_auth = auth.filter_factory({'reseller_prefix': ''})(local_app) |
4945 | + resp = self._make_request('/v1/account', |
4946 | headers={'X-Auth-Token': 't'}).get_response(local_auth) |
4947 | self.assertEquals(resp.status_int, 401) |
4948 | - # one for checking auth, two for request passed along |
4949 | - self.assertEquals(local_app.calls, 2) |
4950 | + self.assertEquals(local_app.calls, 1) |
4951 | self.assertEquals(resp.environ['swift.authorize'], |
4952 | local_auth.denied_response) |
4953 | |
4954 | - def test_auth_no_reseller_prefix_allow(self): |
4955 | - # Ensures that when we have no reseller prefix, we can still allow |
4956 | - # access if our auth server accepts requests |
4957 | - local_app = FakeApp(iter([ |
4958 | - ('200 Ok', {}, |
4959 | - json.dumps({'account': 'act', 'user': 'act:usr', |
4960 | - 'account_id': 'AUTH_cfa', |
4961 | - 'groups': [{'name': 'act:usr'}, {'name': 'act'}, |
4962 | - {'name': '.admin'}], |
4963 | - 'expires': time() + 60})), |
4964 | - ('204 No Content', {}, '')])) |
4965 | - local_auth = auth.filter_factory({'super_admin_key': 'supertest', |
4966 | - 'reseller_prefix': ''})(local_app) |
4967 | - resp = Request.blank('/v1/act', |
4968 | - headers={'X-Auth-Token': 't'}).get_response(local_auth) |
4969 | - self.assertEquals(resp.status_int, 204) |
4970 | - self.assertEquals(local_app.calls, 2) |
4971 | - self.assertEquals(resp.environ['swift.authorize'], |
4972 | - local_auth.authorize) |
4973 | - |
4974 | def test_auth_no_reseller_prefix_no_token(self): |
4975 | # Check that normally we set up a call back to our authorize. |
4976 | local_auth = \ |
4977 | - auth.filter_factory({'super_admin_key': 'supertest', |
4978 | - 'reseller_prefix': ''})(FakeApp(iter([]))) |
4979 | - resp = Request.blank('/v1/account').get_response(local_auth) |
4980 | + auth.filter_factory({'reseller_prefix': ''})(FakeApp(iter([]))) |
4981 | + resp = self._make_request('/v1/account').get_response(local_auth) |
4982 | self.assertEquals(resp.status_int, 401) |
4983 | self.assertEquals(resp.environ['swift.authorize'], |
4984 | local_auth.authorize) |
4985 | # Now make sure we don't override an existing swift.authorize when we |
4986 | # have no reseller prefix. |
4987 | local_auth = \ |
4988 | - auth.filter_factory({'super_admin_key': 'supertest', |
4989 | - 'reseller_prefix': ''})(FakeApp()) |
4990 | + auth.filter_factory({'reseller_prefix': ''})(FakeApp()) |
4991 | local_authorize = lambda req: Response('test') |
4992 | - resp = Request.blank('/v1/account', environ={'swift.authorize': |
4993 | + resp = self._make_request('/v1/account', environ={'swift.authorize': |
4994 | local_authorize}).get_response(local_auth) |
4995 | self.assertEquals(resp.status_int, 200) |
4996 | self.assertEquals(resp.environ['swift.authorize'], local_authorize) |
4997 | |
4998 | def test_auth_fail(self): |
4999 | - resp = Request.blank('/v1/AUTH_cfa', |
5000 | - headers={'X-Auth-Token': 'AUTH_t'}).get_response(self.test_auth) |