Merge lp:~mterry/duplicity/require-2.6 into lp:duplicity/0.6
Status: Merged
Merged at revision: 971
Proposed branch: lp:~mterry/duplicity/require-2.6
Merge into: lp:duplicity/0.6
Prerequisite: lp:~mterry/duplicity/modern-testing
Diff against target: 4735 lines (+235/-3624), 24 files modified:
README (+1/-2), bin/duplicity.1 (+1/-1), dist/duplicity.spec.template (+2/-2), duplicity/__init__.py (+1/-8), duplicity/_librsyncmodule.c (+0/-9), duplicity/backend.py (+40/-50), duplicity/backends/botobackend.py (+0/-4), duplicity/backends/webdavbackend.py (+2/-2), duplicity/commandline.py (+1/-3), duplicity/log.py (+5/-24), duplicity/tarfile.py (+33/-2592), duplicity/urlparse_2_5.py (+0/-385), po/POTFILES.in (+0/-1), po/duplicity.pot (+125/-123), setup.py (+2/-4), tarfile-CHANGES (+0/-3), tarfile-LICENSE (+0/-92), testing/__init__.py (+0/-3), testing/run-tests (+1/-1), testing/run-tests-ve (+1/-1), testing/tests/__init__.py (+0/-9), testing/tests/test_parsedurl.py (+10/-1), testing/tests/test_tarfile.py (+8/-300), testing/tests/test_unicode.py (+2/-4)
To merge this branch: bzr merge lp:~mterry/duplicity/require-2.6
Related bugs: (none)

Reviewer | Review Type | Date Requested | Status
---|---|---|---
duplicity-team | | | Pending

Review via email: mp+216210@code.launchpad.net
Commit message
Description of the change
Require at least Python 2.6.
Our code base already requires 2.6 in practice, because 2.6-isms have crept in, usually when we or a contributor didn't think to test with 2.4. And frankly, I'm not even sure how to test with 2.4 on a modern system [1].
I've been pushing for this change for a while, and at this point it's really just a matter of moving from de facto to de jure.
Benefits of this:
- We can start using newer syntax and features
- We can drop a bunch of code (notably our internal copies of urlparse and tarfile)
Most of this branch is just removing code that we kept around only for 2.4. I didn't start using any new 2.6-isms. Those can be separate branches if this is accepted.
[1] https:/
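For context, here are a couple of constructs that a 2.6 floor makes safe to use. This is an illustration only; as noted above, this branch deliberately does not introduce any new 2.6-isms yet.

```python
# Two small examples of features that require Python >= 2.6
# (both still valid in later Pythons).

# str.format() was added in 2.6:
banner = "{0} requires Python {1}+".format("duplicity", "2.6")

# The 'with' statement no longer needs a __future__ import in 2.6:
import io
with io.StringIO() as buf:
    buf.write(banner)
    text = buf.getvalue()

print(text)  # duplicity requires Python 2.6+
```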
Michael Terry (mterry) wrote:
Preview Diff
1 | === modified file 'README' |
2 | --- README 2014-02-21 17:35:24 +0000 |
3 | +++ README 2014-04-16 20:51:42 +0000 |
4 | @@ -19,7 +19,7 @@ |
5 | |
6 | REQUIREMENTS: |
7 | |
8 | - * Python v2.4 or later |
9 | + * Python v2.6 or later |
10 | * librsync v0.9.6 or later |
11 | * GnuPG v1.x for encryption |
12 | * python-lockfile for concurrency locking |
13 | @@ -28,7 +28,6 @@ |
14 | * for ftp over SSL -- lftp version 3.7.15 or later |
15 | * Boto 2.0 or later for single-processing S3 or GCS access (default) |
16 | * Boto 2.1.1 or later for multi-processing S3 access |
17 | - * Python v2.6 or later for multi-processing S3 access |
18 | * Boto 2.7.0 or later for Glacier S3 access |
19 | |
20 | If you install from the source package, you will also need: |
21 | |
22 | === modified file 'bin/duplicity.1' |
23 | --- bin/duplicity.1 2014-03-09 20:37:24 +0000 |
24 | +++ bin/duplicity.1 2014-04-16 20:51:42 +0000 |
25 | @@ -51,7 +51,7 @@ |
26 | .SH REQUIREMENTS |
27 | Duplicity requires a POSIX-like operating system with a |
28 | .B python |
29 | -interpreter version 2.4+ installed. |
30 | +interpreter version 2.6+ installed. |
31 | It is best used under GNU/Linux. |
32 | |
33 | Some backends also require additional components (probably available as packages for your specific platform): |
34 | |
35 | === modified file 'dist/duplicity.spec.template' |
36 | --- dist/duplicity.spec.template 2011-11-25 17:47:57 +0000 |
37 | +++ dist/duplicity.spec.template 2014-04-16 20:51:42 +0000 |
38 | @@ -10,8 +10,8 @@ |
39 | License: GPL |
40 | Group: Applications/Archiving |
41 | BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n) |
42 | -requires: librsync >= 0.9.6, %{PYTHON_NAME} >= 2.4, gnupg >= 1.0.6 |
43 | -BuildPrereq: %{PYTHON_NAME}-devel >= 2.4, librsync-devel >= 0.9.6 |
44 | +requires: librsync >= 0.9.6, %{PYTHON_NAME} >= 2.6, gnupg >= 1.0.6 |
45 | +BuildPrereq: %{PYTHON_NAME}-devel >= 2.6, librsync-devel >= 0.9.6 |
46 | |
47 | %description |
48 | Duplicity incrementally backs up files and directory by encrypting |
49 | |
50 | === modified file 'duplicity/__init__.py' |
51 | --- duplicity/__init__.py 2013-12-27 06:39:00 +0000 |
52 | +++ duplicity/__init__.py 2014-04-16 20:51:42 +0000 |
53 | @@ -19,12 +19,5 @@ |
54 | # along with duplicity; if not, write to the Free Software Foundation, |
55 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
56 | |
57 | -import __builtin__ |
58 | import gettext |
59 | - |
60 | -t = gettext.translation('duplicity', fallback=True) |
61 | -t.install(unicode=True) |
62 | - |
63 | -# Once we can depend on python >=2.5, we can just use names='ngettext' above. |
64 | -# But for now, do the install manually. |
65 | -__builtin__.__dict__['ngettext'] = t.ungettext |
66 | +gettext.install('duplicity', unicode=True, names=['ngettext']) |
67 | |
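The `duplicity/__init__.py` hunk above collapses the manual `ungettext` workaround into `gettext.install()`'s `names` argument (available since Python 2.5), which binds `ngettext` into builtins alongside `_`. A minimal sketch of the behavior, in Python 3 spelling (the `unicode=True` keyword in the diff is Python-2-only):

```python
import gettext

# With no translation catalog on disk, install() falls back to
# identity functions (the underlying lookup uses fallback=True).
gettext.install('duplicity', names=['ngettext'])

# install() places both names into builtins:
print(_('backup'))                   # backup
print(ngettext('file', 'files', 2))  # files
```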
68 | === modified file 'duplicity/_librsyncmodule.c' |
69 | --- duplicity/_librsyncmodule.c 2013-01-17 16:17:42 +0000 |
70 | +++ duplicity/_librsyncmodule.c 2014-04-16 20:51:42 +0000 |
71 | @@ -26,15 +26,6 @@ |
72 | #include <librsync.h> |
73 | #define RS_JOB_BLOCKSIZE 65536 |
74 | |
75 | -/* Support Python 2.4 and 2.5 */ |
76 | -#ifndef PyVarObject_HEAD_INIT |
77 | - #define PyVarObject_HEAD_INIT(type, size) \ |
78 | - PyObject_HEAD_INIT(type) size, |
79 | -#endif |
80 | -#ifndef Py_TYPE |
81 | - #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) |
82 | -#endif |
83 | - |
84 | static PyObject *librsyncError; |
85 | |
86 | /* Sets python error string from result */ |
87 | |
88 | === modified file 'duplicity/backend.py' |
89 | --- duplicity/backend.py 2014-02-07 19:04:51 +0000 |
90 | +++ duplicity/backend.py 2014-04-16 20:51:42 +0000 |
91 | @@ -32,13 +32,12 @@ |
92 | import getpass |
93 | import gettext |
94 | import urllib |
95 | +import urlparse |
96 | |
97 | from duplicity import dup_temp |
98 | -from duplicity import dup_threading |
99 | from duplicity import file_naming |
100 | from duplicity import globals |
101 | from duplicity import log |
102 | -from duplicity import urlparse_2_5 as urlparser |
103 | from duplicity import progress |
104 | |
105 | from duplicity.util import exception_traceback |
106 | @@ -58,6 +57,28 @@ |
107 | _forced_backend = None |
108 | _backends = {} |
109 | |
110 | +# These URL schemes have a backend with a notion of an RFC "network location". |
111 | +# The 'file' and 's3+http' schemes should not be in this list. |
112 | +# 'http' and 'https' are not actually used for duplicity backend urls, but are needed |
113 | +# in order to properly support urls returned from some webdav servers. adding them here |
114 | +# is a hack. we should instead not stomp on the url parsing module to begin with. |
115 | +# |
116 | +# This looks similar to urlparse's 'uses_netloc' list, but urlparse doesn't use |
117 | +# that list for parsing, only creating urls. And doesn't include our custom |
118 | +# schemes anyway. So we keep our own here for our own use. |
119 | +uses_netloc = ['ftp', |
120 | + 'ftps', |
121 | + 'hsi', |
122 | + 'rsync', |
123 | + 's3', |
124 | + 'u1', |
125 | + 'scp', 'ssh', 'sftp', |
126 | + 'webdav', 'webdavs', |
127 | + 'gdocs', |
128 | + 'http', 'https', |
129 | + 'imap', 'imaps', |
130 | + 'mega'] |
131 | + |
132 | |
133 | def import_backends(): |
134 | """ |
135 | @@ -165,47 +186,6 @@ |
136 | raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1])) |
137 | |
138 | |
139 | -_urlparser_initialized = False |
140 | -_urlparser_initialized_lock = dup_threading.threading_module().Lock() |
141 | - |
142 | -def _ensure_urlparser_initialized(): |
143 | - """ |
144 | - Ensure that the appropriate clobbering of variables in the |
145 | - urlparser module has been done. In the future, the need for this |
146 | - clobbering to begin with should preferably be eliminated. |
147 | - """ |
148 | - def init(): |
149 | - global _urlparser_initialized |
150 | - |
151 | - if not _urlparser_initialized: |
152 | - # These URL schemes have a backend with a notion of an RFC "network location". |
153 | - # The 'file' and 's3+http' schemes should not be in this list. |
154 | - # 'http' and 'https' are not actually used for duplicity backend urls, but are needed |
155 | - # in order to properly support urls returned from some webdav servers. adding them here |
156 | - # is a hack. we should instead not stomp on the url parsing module to begin with. |
157 | - # |
158 | - # todo: eliminate the need for backend specific hacking here completely. |
159 | - urlparser.uses_netloc = ['ftp', |
160 | - 'ftps', |
161 | - 'hsi', |
162 | - 'rsync', |
163 | - 's3', |
164 | - 'u1', |
165 | - 'scp', 'ssh', 'sftp', |
166 | - 'webdav', 'webdavs', |
167 | - 'gdocs', |
168 | - 'http', 'https', |
169 | - 'imap', 'imaps', |
170 | - 'mega'] |
171 | - |
172 | - # Do not transform or otherwise parse the URL path component. |
173 | - urlparser.uses_query = [] |
174 | - urlparser.uses_fragm = [] |
175 | - |
176 | - _urlparser_initialized = True |
177 | - |
178 | - dup_threading.with_lock(_urlparser_initialized_lock, init) |
179 | - |
180 | class ParsedUrl: |
181 | """ |
182 | Parse the given URL as a duplicity backend URL. |
183 | @@ -219,7 +199,6 @@ |
184 | """ |
185 | def __init__(self, url_string): |
186 | self.url_string = url_string |
187 | - _ensure_urlparser_initialized() |
188 | |
189 | # While useful in some cases, the fact is that the urlparser makes |
190 | # all the properties in the URL deferred or lazy. This means that |
191 | @@ -227,7 +206,7 @@ |
192 | # problems here, so they will be caught early. |
193 | |
194 | try: |
195 | - pu = urlparser.urlparse(url_string) |
196 | + pu = urlparse.urlparse(url_string) |
197 | except Exception: |
198 | raise InvalidBackendURL("Syntax error in: %s" % url_string) |
199 | |
200 | @@ -273,26 +252,37 @@ |
201 | self.port = None |
202 | try: |
203 | self.port = pu.port |
204 | - except Exception: |
205 | + except Exception: # not raised in python2.7+, just returns None |
206 | # old style rsync://host::[/]dest, are still valid, though they contain no port |
207 | if not ( self.scheme in ['rsync'] and re.search('::[^:]*$', self.url_string)): |
208 | raise InvalidBackendURL("Syntax error (port) in: %s A%s B%s C%s" % (url_string, (self.scheme in ['rsync']), re.search('::[^:]+$', self.netloc), self.netloc ) ) |
209 | |
210 | + # Our URL system uses two slashes more than urlparse's does when using |
211 | + # non-netloc URLs. And we want to make sure that if urlparse assuming |
212 | + # a netloc where we don't want one, that we correct it. |
213 | + if self.scheme not in uses_netloc: |
214 | + if self.netloc: |
215 | + self.path = '//' + self.netloc + self.path |
216 | + self.netloc = '' |
217 | + self.hostname = None |
218 | + elif self.path.startswith('/'): |
219 | + self.path = '//' + self.path |
220 | + |
221 | # This happens for implicit local paths. |
222 | - if not pu.scheme: |
223 | + if not self.scheme: |
224 | return |
225 | |
226 | # Our backends do not handle implicit hosts. |
227 | - if pu.scheme in urlparser.uses_netloc and not pu.hostname: |
228 | + if self.scheme in uses_netloc and not self.hostname: |
229 | raise InvalidBackendURL("Missing hostname in a backend URL which " |
230 | "requires an explicit hostname: %s" |
231 | "" % (url_string)) |
232 | |
233 | # Our backends do not handle implicit relative paths. |
234 | - if pu.scheme not in urlparser.uses_netloc and not pu.path.startswith('//'): |
235 | + if self.scheme not in uses_netloc and not self.path.startswith('//'): |
236 | raise InvalidBackendURL("missing // - relative paths not supported " |
237 | "for scheme %s: %s" |
238 | - "" % (pu.scheme, url_string)) |
239 | + "" % (self.scheme, url_string)) |
240 | |
241 | def geturl(self): |
242 | return self.url_string |
243 | |
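The `ParsedUrl` change above amounts to a normalization pass over the stdlib parser's result: instead of clobbering the parser's `uses_netloc` table, the backend keeps its own list and fixes up the parse afterwards. A standalone sketch with hypothetical names (using Python 3's `urllib.parse` in place of the Python 2 `urlparse` module that the diff imports, and an abridged scheme list):

```python
from urllib.parse import urlparse  # Python 2: "import urlparse"

# Schemes whose backends use an RFC "network location" (abridged).
USES_NETLOC = {'ftp', 'ftps', 'rsync', 's3', 'scp', 'ssh', 'sftp',
               'webdav', 'webdavs', 'imap', 'imaps'}

def normalize(url):
    """Re-apply the two-extra-slashes convention for non-netloc
    schemes after a plain stdlib parse."""
    pu = urlparse(url)
    scheme, netloc, path = pu.scheme, pu.netloc, pu.path
    if scheme not in USES_NETLOC:
        if netloc:
            # stdlib assumed a netloc we don't want; fold it into the path
            path = '//' + netloc + path
            netloc = ''
        elif path.startswith('/'):
            path = '//' + path
    return scheme, netloc, path

print(normalize('ftp://host/dir'))   # ('ftp', 'host', '/dir')
print(normalize('file:///tmp/dir'))  # ('file', '', '///tmp/dir')
```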
244 | === modified file 'duplicity/backends/botobackend.py' |
245 | --- duplicity/backends/botobackend.py 2014-02-21 17:14:37 +0000 |
246 | +++ duplicity/backends/botobackend.py 2014-04-16 20:51:42 +0000 |
247 | @@ -22,14 +22,10 @@ |
248 | |
249 | import duplicity.backend |
250 | from duplicity import globals |
251 | -import sys |
252 | from _boto_multi import BotoBackend as BotoMultiUploadBackend |
253 | from _boto_single import BotoBackend as BotoSingleUploadBackend |
254 | |
255 | if globals.s3_use_multiprocessing: |
256 | - if sys.version_info[:2] < (2, 6): |
257 | - print "Sorry, S3 multiprocessing requires version 2.6 or later of python" |
258 | - sys.exit(1) |
259 | duplicity.backend.register_backend("gs", BotoMultiUploadBackend) |
260 | duplicity.backend.register_backend("s3", BotoMultiUploadBackend) |
261 | duplicity.backend.register_backend("s3+http", BotoMultiUploadBackend) |
262 | |
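The guard deleted from `botobackend.py` relied on comparing a `sys.version_info` slice against a tuple; with 2.6 now required project-wide, per-backend checks of this shape become dead code. The old pattern, for reference:

```python
import sys

# Compare the (major, minor) prefix as a tuple; tuples compare
# element-wise, so (2, 5) < (2, 6) < (3, 0).
if sys.version_info[:2] < (2, 6):
    sys.exit("Sorry, S3 multiprocessing requires Python 2.6 or later")

print("version check passed")
```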
263 | === modified file 'duplicity/backends/webdavbackend.py' |
264 | --- duplicity/backends/webdavbackend.py 2013-12-30 16:01:49 +0000 |
265 | +++ duplicity/backends/webdavbackend.py 2014-04-16 20:51:42 +0000 |
266 | @@ -26,13 +26,13 @@ |
267 | import re |
268 | import urllib |
269 | import urllib2 |
270 | +import urlparse |
271 | import xml.dom.minidom |
272 | |
273 | import duplicity.backend |
274 | from duplicity import globals |
275 | from duplicity import log |
276 | from duplicity.errors import * #@UnusedWildImport |
277 | -from duplicity import urlparse_2_5 as urlparser |
278 | from duplicity.backend import retry_fatal |
279 | |
280 | class CustomMethodRequest(urllib2.Request): |
281 | @@ -332,7 +332,7 @@ |
282 | @return: A matching filename, or None if the href did not match. |
283 | """ |
284 | raw_filename = self._getText(href.childNodes).strip() |
285 | - parsed_url = urlparser.urlparse(urllib.unquote(raw_filename)) |
286 | + parsed_url = urlparse.urlparse(urllib.unquote(raw_filename)) |
287 | filename = parsed_url.path |
288 | log.Debug("webdav path decoding and translation: " |
289 | "%s -> %s" % (raw_filename, filename)) |
290 | |
291 | === modified file 'duplicity/commandline.py' |
292 | --- duplicity/commandline.py 2014-03-09 20:37:24 +0000 |
293 | +++ duplicity/commandline.py 2014-04-16 20:51:42 +0000 |
294 | @@ -507,9 +507,7 @@ |
295 | parser.add_option("--s3_multipart_max_timeout", type="int", metavar=_("number")) |
296 | |
297 | # Option to allow the s3/boto backend use the multiprocessing version. |
298 | - # By default it is off since it does not work for Python 2.4 or 2.5. |
299 | - if sys.version_info[:2] >= (2, 6): |
300 | - parser.add_option("--s3-use-multiprocessing", action = "store_true") |
301 | + parser.add_option("--s3-use-multiprocessing", action = "store_true") |
302 | |
303 | # scp command to use (ssh pexpect backend) |
304 | parser.add_option("--scp-command", metavar = _("command")) |
305 | |
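The now-unconditional `--s3-use-multiprocessing` option follows optparse's usual pattern: with `action="store_true"` the destination attribute (derived from the option name, dashes becoming underscores) is None until the flag is passed. A small sketch:

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--s3-use-multiprocessing", action="store_true")

opts, args = parser.parse_args(["--s3-use-multiprocessing"])
print(opts.s3_use_multiprocessing)  # True

opts, args = parser.parse_args([])
print(opts.s3_use_multiprocessing)  # None (no default was set)
```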
306 | === modified file 'duplicity/log.py' |
307 | --- duplicity/log.py 2013-12-27 06:39:00 +0000 |
308 | +++ duplicity/log.py 2014-04-16 20:51:42 +0000 |
309 | @@ -49,7 +49,6 @@ |
310 | return DupToLoggerLevel(verb) |
311 | |
312 | def LevelName(level): |
313 | - level = LoggerToDupLevel(level) |
314 | if level >= 9: return "DEBUG" |
315 | elif level >= 5: return "INFO" |
316 | elif level >= 3: return "NOTICE" |
317 | @@ -59,12 +58,10 @@ |
318 | def Log(s, verb_level, code=1, extra=None, force_print=False): |
319 | """Write s to stderr if verbosity level low enough""" |
320 | global _logger |
321 | - # controlLine is a terrible hack until duplicity depends on Python 2.5 |
322 | - # and its logging 'extra' keyword that allows a custom record dictionary. |
323 | if extra: |
324 | - _logger.controlLine = '%d %s' % (code, extra) |
325 | + controlLine = '%d %s' % (code, extra) |
326 | else: |
327 | - _logger.controlLine = '%d' % (code) |
328 | + controlLine = '%d' % (code) |
329 | if not s: |
330 | s = '' # If None is passed, standard logging would render it as 'None' |
331 | |
332 | @@ -79,8 +76,9 @@ |
333 | if not isinstance(s, unicode): |
334 | s = s.decode("utf8", "replace") |
335 | |
336 | - _logger.log(DupToLoggerLevel(verb_level), s) |
337 | - _logger.controlLine = None |
338 | + _logger.log(DupToLoggerLevel(verb_level), s, |
339 | + extra={'levelName': LevelName(verb_level), |
340 | + 'controlLine': controlLine}) |
341 | |
342 | if force_print: |
343 | _logger.setLevel(initial_level) |
344 | @@ -305,22 +303,6 @@ |
345 | shutdown() |
346 | sys.exit(code) |
347 | |
348 | -class DupLogRecord(logging.LogRecord): |
349 | - """Custom log record that holds a message code""" |
350 | - def __init__(self, controlLine, *args, **kwargs): |
351 | - global _logger |
352 | - logging.LogRecord.__init__(self, *args, **kwargs) |
353 | - self.controlLine = controlLine |
354 | - self.levelName = LevelName(self.levelno) |
355 | - |
356 | -class DupLogger(logging.Logger): |
357 | - """Custom logger that creates special code-bearing records""" |
358 | - # controlLine is a terrible hack until duplicity depends on Python 2.5 |
359 | - # and its logging 'extra' keyword that allows a custom record dictionary. |
360 | - controlLine = None |
361 | - def makeRecord(self, name, lvl, fn, lno, msg, args, exc_info, *argv, **kwargs): |
362 | - return DupLogRecord(self.controlLine, name, lvl, fn, lno, msg, args, exc_info) |
363 | - |
364 | class OutFilter(logging.Filter): |
365 | """Filter that only allows warning or less important messages""" |
366 | def filter(self, record): |
367 | @@ -337,7 +319,6 @@ |
368 | if _logger: |
369 | return |
370 | |
371 | - logging.setLoggerClass(DupLogger) |
372 | _logger = logging.getLogger("duplicity") |
373 | |
374 | # Default verbosity allows notices and above |
375 | |
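The `log.py` rewrite replaces the custom `DupLogger`/`DupLogRecord` pair with the `extra` keyword added to `logging` in Python 2.5: keys passed via `extra` become attributes on the log record and can be referenced from format strings. A minimal sketch with made-up names:

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
# Format strings may reference attributes supplied through 'extra'.
handler.setFormatter(logging.Formatter('%(controlLine)s %(message)s'))

logger = logging.getLogger('extra-demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

logger.info('backup started', extra={'controlLine': '1'})
print(stream.getvalue())  # 1 backup started
```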
376 | === modified file 'duplicity/tarfile.py' |
377 | --- duplicity/tarfile.py 2013-10-05 15:11:55 +0000 |
378 | +++ duplicity/tarfile.py 2014-04-16 20:51:42 +0000 |
379 | @@ -1,2594 +1,35 @@ |
380 | -#! /usr/bin/python2.7 |
381 | -# -*- coding: iso-8859-1 -*- |
382 | -#------------------------------------------------------------------- |
383 | -# tarfile.py |
384 | -#------------------------------------------------------------------- |
385 | -# Copyright (C) 2002 Lars Gustäbel <lars@gustaebel.de> |
386 | -# All rights reserved. |
387 | -# |
388 | -# Permission is hereby granted, free of charge, to any person |
389 | -# obtaining a copy of this software and associated documentation |
390 | -# files (the "Software"), to deal in the Software without |
391 | -# restriction, including without limitation the rights to use, |
392 | -# copy, modify, merge, publish, distribute, sublicense, and/or sell |
393 | -# copies of the Software, and to permit persons to whom the |
394 | -# Software is furnished to do so, subject to the following |
395 | -# conditions: |
396 | -# |
397 | -# The above copyright notice and this permission notice shall be |
398 | -# included in all copies or substantial portions of the Software. |
399 | -# |
400 | -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, |
401 | -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES |
402 | -# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND |
403 | -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT |
404 | -# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, |
405 | -# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING |
406 | -# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR |
407 | -# OTHER DEALINGS IN THE SOFTWARE. |
408 | -# |
409 | -"""Read from and write to tar format archives. |
410 | -""" |
411 | - |
412 | -__version__ = "$Revision: 85213 $" |
413 | -# $Source$ |
414 | - |
415 | -version = "0.9.0" |
416 | -__author__ = "Lars Gustäbel (lars@gustaebel.de)" |
417 | -__date__ = "$Date: 2010-10-04 10:37:53 -0500 (Mon, 04 Oct 2010) $" |
418 | -__cvsid__ = "$Id: tarfile.py 85213 2010-10-04 15:37:53Z lars.gustaebel $" |
419 | -__credits__ = "Gustavo Niemeyer, Niels Gustäbel, Richard Townsend." |
420 | - |
421 | -#--------- |
422 | -# Imports |
423 | -#--------- |
424 | -import sys |
425 | -import os |
426 | -import shutil |
427 | -import stat |
428 | -import errno |
429 | -import time |
430 | -import struct |
431 | -import copy |
432 | -import re |
433 | -import operator |
434 | - |
435 | +# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*- |
436 | +# |
437 | +# Copyright 2013 Michael Terry <mike@mterry.name> |
438 | +# |
439 | +# This file is part of duplicity. |
440 | +# |
441 | +# Duplicity is free software; you can redistribute it and/or modify it |
442 | +# under the terms of the GNU General Public License as published by the |
443 | +# Free Software Foundation; either version 2 of the License, or (at your |
444 | +# option) any later version. |
445 | +# |
446 | +# Duplicity is distributed in the hope that it will be useful, but |
447 | +# WITHOUT ANY WARRANTY; without even the implied warranty of |
448 | +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU |
449 | +# General Public License for more details. |
450 | +# |
451 | +# You should have received a copy of the GNU General Public License |
452 | +# along with duplicity; if not, write to the Free Software Foundation, |
453 | +# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
454 | + |
455 | +"""Like system tarfile but with caching.""" |
456 | + |
457 | +from __future__ import absolute_import |
458 | + |
459 | +import tarfile |
460 | + |
461 | +# Grab all symbols in tarfile, to try to reproduce its API exactly. |
462 | +# from <> import * wouldn't get everything we want, since tarfile defines |
463 | +# __all__. So we do it ourselves. |
464 | +for sym in dir(tarfile): |
465 | + globals()[sym] = getattr(tarfile, sym) |
466 | + |
467 | +# Now make sure that we cache the grp/pwd ops |
468 | from duplicity import cached_ops |
469 | grp = pwd = cached_ops |
470 | - |
471 | -# from tarfile import * |
472 | -__all__ = ["TarFile", "TarInfo", "is_tarfile", "TarError"] |
473 | - |
474 | -#--------------------------------------------------------- |
475 | -# tar constants |
476 | -#--------------------------------------------------------- |
477 | -NUL = "\0" # the null character |
478 | -BLOCKSIZE = 512 # length of processing blocks |
479 | -RECORDSIZE = BLOCKSIZE * 20 # length of records |
480 | -GNU_MAGIC = "ustar \0" # magic gnu tar string |
481 | -POSIX_MAGIC = "ustar\x0000" # magic posix tar string |
482 | - |
483 | -LENGTH_NAME = 100 # maximum length of a filename |
484 | -LENGTH_LINK = 100 # maximum length of a linkname |
485 | -LENGTH_PREFIX = 155 # maximum length of the prefix field |
486 | - |
487 | -REGTYPE = "0" # regular file |
488 | -AREGTYPE = "\0" # regular file |
489 | -LNKTYPE = "1" # link (inside tarfile) |
490 | -SYMTYPE = "2" # symbolic link |
491 | -CHRTYPE = "3" # character special device |
492 | -BLKTYPE = "4" # block special device |
493 | -DIRTYPE = "5" # directory |
494 | -FIFOTYPE = "6" # fifo special device |
495 | -CONTTYPE = "7" # contiguous file |
496 | - |
497 | -GNUTYPE_LONGNAME = "L" # GNU tar longname |
498 | -GNUTYPE_LONGLINK = "K" # GNU tar longlink |
499 | -GNUTYPE_SPARSE = "S" # GNU tar sparse file |
500 | - |
501 | -XHDTYPE = "x" # POSIX.1-2001 extended header |
502 | -XGLTYPE = "g" # POSIX.1-2001 global header |
503 | -SOLARIS_XHDTYPE = "X" # Solaris extended header |
504 | - |
505 | -USTAR_FORMAT = 0 # POSIX.1-1988 (ustar) format |
506 | -GNU_FORMAT = 1 # GNU tar format |
507 | -PAX_FORMAT = 2 # POSIX.1-2001 (pax) format |
508 | -DEFAULT_FORMAT = GNU_FORMAT |
509 | - |
510 | -#--------------------------------------------------------- |
511 | -# tarfile constants |
512 | -#--------------------------------------------------------- |
513 | -# File types that tarfile supports: |
514 | -SUPPORTED_TYPES = (REGTYPE, AREGTYPE, LNKTYPE, |
515 | - SYMTYPE, DIRTYPE, FIFOTYPE, |
516 | - CONTTYPE, CHRTYPE, BLKTYPE, |
517 | - GNUTYPE_LONGNAME, GNUTYPE_LONGLINK, |
518 | - GNUTYPE_SPARSE) |
519 | - |
520 | -# File types that will be treated as a regular file. |
521 | -REGULAR_TYPES = (REGTYPE, AREGTYPE, |
522 | - CONTTYPE, GNUTYPE_SPARSE) |
523 | - |
524 | -# File types that are part of the GNU tar format. |
525 | -GNU_TYPES = (GNUTYPE_LONGNAME, GNUTYPE_LONGLINK, |
526 | - GNUTYPE_SPARSE) |
527 | - |
528 | -# Fields from a pax header that override a TarInfo attribute. |
529 | -PAX_FIELDS = ("path", "linkpath", "size", "mtime", |
530 | - "uid", "gid", "uname", "gname") |
531 | - |
532 | -# Fields in a pax header that are numbers, all other fields |
533 | -# are treated as strings. |
534 | -PAX_NUMBER_FIELDS = { |
535 | - "atime": float, |
536 | - "ctime": float, |
537 | - "mtime": float, |
538 | - "uid": int, |
539 | - "gid": int, |
540 | - "size": int |
541 | -} |
542 | - |
543 | -#--------------------------------------------------------- |
544 | -# Bits used in the mode field, values in octal. |
545 | -#--------------------------------------------------------- |
546 | -S_IFLNK = 0120000 # symbolic link |
547 | -S_IFREG = 0100000 # regular file |
548 | -S_IFBLK = 0060000 # block device |
549 | -S_IFDIR = 0040000 # directory |
550 | -S_IFCHR = 0020000 # character device |
551 | -S_IFIFO = 0010000 # fifo |
552 | - |
553 | -TSUID = 04000 # set UID on execution |
554 | -TSGID = 02000 # set GID on execution |
555 | -TSVTX = 01000 # reserved |
556 | - |
557 | -TUREAD = 0400 # read by owner |
558 | -TUWRITE = 0200 # write by owner |
559 | -TUEXEC = 0100 # execute/search by owner |
560 | -TGREAD = 0040 # read by group |
561 | -TGWRITE = 0020 # write by group |
562 | -TGEXEC = 0010 # execute/search by group |
563 | -TOREAD = 0004 # read by other |
564 | -TOWRITE = 0002 # write by other |
565 | -TOEXEC = 0001 # execute/search by other |
566 | - |
567 | -#--------------------------------------------------------- |
568 | -# initialization |
569 | -#--------------------------------------------------------- |
570 | -ENCODING = sys.getfilesystemencoding() |
571 | -if ENCODING is None: |
572 | - ENCODING = sys.getdefaultencoding() |
573 | - |
574 | -#--------------------------------------------------------- |
575 | -# Some useful functions |
576 | -#--------------------------------------------------------- |
577 | - |
578 | -def stn(s, length): |
579 | - """Convert a python string to a null-terminated string buffer. |
580 | - """ |
581 | - return s[:length] + (length - len(s)) * NUL |
582 | - |
583 | -def nts(s): |
584 | - """Convert a null-terminated string field to a python string. |
585 | - """ |
586 | - # Use the string up to the first null char. |
587 | - p = s.find("\0") |
588 | - if p == -1: |
589 | - return s |
590 | - return s[:p] |
591 | - |
592 | -def nti(s): |
593 | - """Convert a number field to a python number. |
594 | - """ |
595 | - # There are two possible encodings for a number field, see |
596 | - # itn() below. |
597 | - if s[0] != chr(0200): |
598 | - try: |
599 | - n = int(nts(s) or "0", 8) |
600 | - except ValueError: |
601 | - raise InvalidHeaderError("invalid header") |
602 | - else: |
603 | - n = 0L |
604 | - for i in xrange(len(s) - 1): |
605 | - n <<= 8 |
606 | - n += ord(s[i + 1]) |
607 | - return n |
608 | - |
609 | -def itn(n, digits=8, format=DEFAULT_FORMAT): |
610 | - """Convert a python number to a number field. |
611 | - """ |
612 | - # POSIX 1003.1-1988 requires numbers to be encoded as a string of |
613 | - # octal digits followed by a null-byte, this allows values up to |
614 | - # (8**(digits-1))-1. GNU tar allows storing numbers greater than |
615 | - # that if necessary. A leading 0200 byte indicates this particular |
616 | - # encoding, the following digits-1 bytes are a big-endian |
617 | - # representation. This allows values up to (256**(digits-1))-1. |
618 | - if 0 <= n < 8 ** (digits - 1): |
619 | - s = "%0*o" % (digits - 1, n) + NUL |
620 | - else: |
621 | - if format != GNU_FORMAT or n >= 256 ** (digits - 1): |
622 | - raise ValueError("overflow in number field") |
623 | - |
624 | - if n < 0: |
625 | - # XXX We mimic GNU tar's behaviour with negative numbers, |
626 | - # this could raise OverflowError. |
627 | - n = struct.unpack("L", struct.pack("l", n))[0] |
628 | - |
629 | - s = "" |
630 | - for i in xrange(digits - 1): |
631 | - s = chr(n & 0377) + s |
632 | - n >>= 8 |
633 | - s = chr(0200) + s |
634 | - return s |
635 | - |
636 | -def uts(s, encoding, errors): |
637 | - """Convert a unicode object to a string. |
638 | - """ |
639 | - if errors == "utf-8": |
640 | - # An extra error handler similar to the -o invalid=UTF-8 option |
641 | - # in POSIX.1-2001. Replace untranslatable characters with their |
642 | - # UTF-8 representation. |
643 | - try: |
644 | - return s.encode(encoding, "strict") |
645 | - except UnicodeEncodeError: |
646 | - x = [] |
647 | - for c in s: |
648 | - try: |
649 | - x.append(c.encode(encoding, "strict")) |
650 | - except UnicodeEncodeError: |
651 | - x.append(c.encode("utf8")) |
652 | - return "".join(x) |
653 | - else: |
654 | - return s.encode(encoding, errors) |
655 | - |
656 | -def calc_chksums(buf): |
657 | - """Calculate the checksum for a member's header by summing up all |
658 | - characters except for the chksum field which is treated as if |
659 | - it was filled with spaces. According to the GNU tar sources, |
660 | - some tars (Sun and NeXT) calculate chksum with signed char, |
661 | - which will be different if there are chars in the buffer with |
662 | - the high bit set. So we calculate two checksums, unsigned and |
663 | - signed. |
664 | - """ |
665 | - unsigned_chksum = 256 + sum(struct.unpack("148B", buf[:148]) + struct.unpack("356B", buf[156:512])) |
666 | - signed_chksum = 256 + sum(struct.unpack("148b", buf[:148]) + struct.unpack("356b", buf[156:512])) |
667 | - return unsigned_chksum, signed_chksum |
668 | - |
669 | -def copyfileobj(src, dst, length=None): |
670 | - """Copy length bytes from fileobj src to fileobj dst. |
671 | - If length is None, copy the entire content. |
672 | - """ |
673 | - if length == 0: |
674 | - return |
675 | - if length is None: |
676 | - shutil.copyfileobj(src, dst) |
677 | - return |
678 | - |
679 | - BUFSIZE = 16 * 1024 |
680 | - blocks, remainder = divmod(length, BUFSIZE) |
681 | - for b in xrange(blocks): |
682 | - buf = src.read(BUFSIZE) |
683 | - if len(buf) < BUFSIZE: |
684 | - raise IOError("end of file reached") |
685 | - dst.write(buf) |
686 | - |
687 | - if remainder != 0: |
688 | - buf = src.read(remainder) |
689 | - if len(buf) < remainder: |
690 | - raise IOError("end of file reached") |
691 | - dst.write(buf) |
692 | - return |
693 | - |
694 | -filemode_table = ( |
695 | - ((S_IFLNK, "l"), |
696 | - (S_IFREG, "-"), |
697 | - (S_IFBLK, "b"), |
698 | - (S_IFDIR, "d"), |
699 | - (S_IFCHR, "c"), |
700 | - (S_IFIFO, "p")), |
701 | - |
702 | - ((TUREAD, "r"),), |
703 | - ((TUWRITE, "w"),), |
704 | - ((TUEXEC|TSUID, "s"), |
705 | - (TSUID, "S"), |
706 | - (TUEXEC, "x")), |
707 | - |
708 | - ((TGREAD, "r"),), |
709 | - ((TGWRITE, "w"),), |
710 | - ((TGEXEC|TSGID, "s"), |
711 | - (TSGID, "S"), |
712 | - (TGEXEC, "x")), |
713 | - |
714 | - ((TOREAD, "r"),), |
715 | - ((TOWRITE, "w"),), |
716 | - ((TOEXEC|TSVTX, "t"), |
717 | - (TSVTX, "T"), |
718 | - (TOEXEC, "x")) |
719 | -) |
720 | - |
721 | -def filemode(mode): |
722 | - """Convert a file's mode to a string of the form |
723 | - -rwxrwxrwx. |
724 | - Used by TarFile.list() |
725 | - """ |
726 | - perm = [] |
727 | - for table in filemode_table: |
728 | - for bit, char in table: |
729 | - if mode & bit == bit: |
730 | - perm.append(char) |
731 | - break |
732 | - else: |
733 | - perm.append("-") |
734 | - return "".join(perm) |
735 | - |
736 | -class TarError(Exception): |
737 | - """Base exception.""" |
738 | - pass |
739 | -class ExtractError(TarError): |
740 | - """General exception for extract errors.""" |
741 | - pass |
742 | -class ReadError(TarError): |
743 | - """Exception for unreadble tar archives.""" |
744 | - pass |
745 | -class CompressionError(TarError): |
746 | - """Exception for unavailable compression methods.""" |
747 | - pass |
748 | -class StreamError(TarError): |
749 | - """Exception for unsupported operations on stream-like TarFiles.""" |
750 | - pass |
751 | -class HeaderError(TarError): |
752 | - """Base exception for header errors.""" |
753 | - pass |
754 | -class EmptyHeaderError(HeaderError): |
755 | - """Exception for empty headers.""" |
756 | - pass |
757 | -class TruncatedHeaderError(HeaderError): |
758 | - """Exception for truncated headers.""" |
759 | - pass |
760 | -class EOFHeaderError(HeaderError): |
761 | - """Exception for end of file headers.""" |
762 | - pass |
763 | -class InvalidHeaderError(HeaderError): |
764 | - """Exception for invalid headers.""" |
765 | - pass |
766 | -class SubsequentHeaderError(HeaderError): |
767 | - """Exception for missing and invalid extended headers.""" |
768 | - pass |
769 | - |
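The exception hierarchy removed here is mirrored by the stdlib `tarfile` module, so call sites can switch imports without changing their `except` clauses; for example:

```python
import io
import tarfile

# The stdlib defines the same base/derived relationship the
# vendored copy carried around.
assert issubclass(tarfile.ReadError, tarfile.TarError)
assert issubclass(tarfile.CompressionError, tarfile.TarError)

try:
    # Opening garbage raises ReadError, catchable via the base class.
    tarfile.open(fileobj=io.BytesIO(b"not a tar archive"))
except tarfile.TarError as e:
    print(type(e).__name__)
```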
770 | -#--------------------------- |
771 | -# internal stream interface |
772 | -#--------------------------- |
773 | -class _LowLevelFile: |
774 | - """Low-level file object. Supports reading and writing. |
775 | - It is used instead of a regular file object for streaming |
776 | - access. |
777 | - """ |
778 | - |
779 | - def __init__(self, name, mode): |
780 | - mode = { |
781 | - "r": os.O_RDONLY, |
782 | - "w": os.O_WRONLY | os.O_CREAT | os.O_TRUNC, |
783 | - }[mode] |
784 | - if hasattr(os, "O_BINARY"): |
785 | - mode |= os.O_BINARY |
786 | - self.fd = os.open(name, mode, 0666) |
787 | - |
788 | - def close(self): |
789 | - os.close(self.fd) |
790 | - |
791 | - def read(self, size): |
792 | - return os.read(self.fd, size) |
793 | - |
794 | - def write(self, s): |
795 | - os.write(self.fd, s) |
796 | - |
797 | -class _Stream: |
798 | - """Class that serves as an adapter between TarFile and |
799 | - a stream-like object. The stream-like object only |
800 | - needs to have a read() or write() method and is accessed |
801 | - blockwise. Use of gzip or bzip2 compression is possible. |
802 | - A stream-like object could be for example: sys.stdin, |
803 | - sys.stdout, a socket, a tape device etc. |
804 | - |
805 | - _Stream is intended to be used only internally. |
806 | - """ |
807 | - |
808 | - def __init__(self, name, mode, comptype, fileobj, bufsize): |
809 | - """Construct a _Stream object. |
810 | - """ |
811 | - self._extfileobj = True |
812 | - if fileobj is None: |
813 | - fileobj = _LowLevelFile(name, mode) |
814 | - self._extfileobj = False |
815 | - |
816 | - if comptype == '*': |
817 | - # Enable transparent compression detection for the |
818 | - # stream interface |
819 | - fileobj = _StreamProxy(fileobj) |
820 | - comptype = fileobj.getcomptype() |
821 | - |
822 | - self.name = name or "" |
823 | - self.mode = mode |
824 | - self.comptype = comptype |
825 | - self.fileobj = fileobj |
826 | - self.bufsize = bufsize |
827 | - self.buf = "" |
828 | - self.pos = 0L |
829 | - self.closed = False |
830 | - |
831 | - if comptype == "gz": |
832 | - try: |
833 | - import zlib |
834 | - except ImportError: |
835 | - raise CompressionError("zlib module is not available") |
836 | - self.zlib = zlib |
837 | - self.crc = zlib.crc32("") & 0xffffffffL |
838 | - if mode == "r": |
839 | - self._init_read_gz() |
840 | - else: |
841 | - self._init_write_gz() |
842 | - |
843 | - if comptype == "bz2": |
844 | - try: |
845 | - import bz2 |
846 | - except ImportError: |
847 | - raise CompressionError("bz2 module is not available") |
848 | - if mode == "r": |
849 | - self.dbuf = "" |
850 | - self.cmp = bz2.BZ2Decompressor() |
851 | - else: |
852 | - self.cmp = bz2.BZ2Compressor() |
853 | - |
854 | - def __del__(self): |
855 | - if hasattr(self, "closed") and not self.closed: |
856 | - self.close() |
857 | - |
858 | - def _init_write_gz(self): |
859 | - """Initialize for writing with gzip compression. |
860 | - """ |
861 | - self.cmp = self.zlib.compressobj(9, self.zlib.DEFLATED, |
862 | - -self.zlib.MAX_WBITS, |
863 | - self.zlib.DEF_MEM_LEVEL, |
864 | - 0) |
865 | - timestamp = struct.pack("<L", long(time.time())) |
866 | - self.__write("\037\213\010\010%s\002\377" % timestamp) |
867 | - if self.name.endswith(".gz"): |
868 | - self.name = self.name[:-3] |
869 | - self.__write(self.name + NUL) |
870 | - |
871 | - def write(self, s): |
872 | - """Write string s to the stream. |
873 | - """ |
874 | - if self.comptype == "gz": |
875 | - self.crc = self.zlib.crc32(s, self.crc) & 0xffffffffL |
876 | - self.pos += len(s) |
877 | - if self.comptype != "tar": |
878 | - s = self.cmp.compress(s) |
879 | - self.__write(s) |
880 | - |
881 | - def __write(self, s): |
882 | - """Write string s to the stream if a whole new block |
883 | - is ready to be written. |
884 | - """ |
885 | - self.buf += s |
886 | - while len(self.buf) > self.bufsize: |
887 | - self.fileobj.write(self.buf[:self.bufsize]) |
888 | - self.buf = self.buf[self.bufsize:] |
889 | - |
890 | - def close(self): |
891 | - """Close the _Stream object. No operation should be |
892 | - done on it afterwards. |
893 | - """ |
894 | - if self.closed: |
895 | - return |
896 | - |
897 | - if self.mode == "w" and self.comptype != "tar": |
898 | - self.buf += self.cmp.flush() |
899 | - |
900 | - if self.mode == "w" and self.buf: |
901 | - self.fileobj.write(self.buf) |
902 | - self.buf = "" |
903 | - if self.comptype == "gz": |
904 | - # The native zlib crc is an unsigned 32-bit integer, but |
905 | - # the Python wrapper implicitly casts that to a signed C |
906 | - # long. So, on a 32-bit box self.crc may "look negative", |
907 | - # while the same crc on a 64-bit box may "look positive". |
908 | - # To avoid irksome warnings from the `struct` module, force |
909 | - # it to look positive on all boxes. |
910 | - self.fileobj.write(struct.pack("<L", self.crc & 0xffffffffL)) |
911 | - self.fileobj.write(struct.pack("<L", self.pos & 0xffffFFFFL)) |
912 | - |
913 | - if not self._extfileobj: |
914 | - self.fileobj.close() |
915 | - |
916 | - self.closed = True |
917 | - |
918 | - def _init_read_gz(self): |
919 | - """Initialize for reading a gzip compressed fileobj. |
920 | - """ |
921 | - self.cmp = self.zlib.decompressobj(-self.zlib.MAX_WBITS) |
922 | - self.dbuf = "" |
923 | - |
924 | - # taken from gzip.GzipFile with some alterations |
925 | - if self.__read(2) != "\037\213": |
926 | - raise ReadError("not a gzip file") |
927 | - if self.__read(1) != "\010": |
928 | - raise CompressionError("unsupported compression method") |
929 | - |
930 | - flag = ord(self.__read(1)) |
931 | - self.__read(6) |
932 | - |
933 | - if flag & 4: |
934 | - xlen = ord(self.__read(1)) + 256 * ord(self.__read(1)) |
935 | - self.read(xlen) |
936 | - if flag & 8: |
937 | - while True: |
938 | - s = self.__read(1) |
939 | - if not s or s == NUL: |
940 | - break |
941 | - if flag & 16: |
942 | - while True: |
943 | - s = self.__read(1) |
944 | - if not s or s == NUL: |
945 | - break |
946 | - if flag & 2: |
947 | - self.__read(2) |
948 | - |
949 | - def tell(self): |
950 | - """Return the stream's file pointer position. |
951 | - """ |
952 | - return self.pos |
953 | - |
954 | - def seek(self, pos=0): |
955 | - """Set the stream's file pointer to pos. Negative seeking |
956 | - is forbidden. |
957 | - """ |
958 | - if pos - self.pos >= 0: |
959 | - blocks, remainder = divmod(pos - self.pos, self.bufsize) |
960 | - for i in xrange(blocks): |
961 | - self.read(self.bufsize) |
962 | - self.read(remainder) |
963 | - else: |
964 | - raise StreamError("seeking backwards is not allowed") |
965 | - return self.pos |
966 | - |
967 | - def read(self, size=None): |
968 | - """Return the next size number of bytes from the stream. |
969 | - If size is not defined, return all bytes of the stream |
970 | - up to EOF. |
971 | - """ |
972 | - if size is None: |
973 | - t = [] |
974 | - while True: |
975 | - buf = self._read(self.bufsize) |
976 | - if not buf: |
977 | - break |
978 | - t.append(buf) |
979 | - buf = "".join(t) |
980 | - else: |
981 | - buf = self._read(size) |
982 | - self.pos += len(buf) |
983 | - return buf |
984 | - |
985 | - def _read(self, size): |
986 | - """Return size bytes from the stream. |
987 | - """ |
988 | - if self.comptype == "tar": |
989 | - return self.__read(size) |
990 | - |
991 | - c = len(self.dbuf) |
992 | - t = [self.dbuf] |
993 | - while c < size: |
994 | - buf = self.__read(self.bufsize) |
995 | - if not buf: |
996 | - break |
997 | - try: |
998 | - buf = self.cmp.decompress(buf) |
999 | - except IOError: |
1000 | - raise ReadError("invalid compressed data") |
1001 | - t.append(buf) |
1002 | - c += len(buf) |
1003 | - t = "".join(t) |
1004 | - self.dbuf = t[size:] |
1005 | - return t[:size] |
1006 | - |
1007 | - def __read(self, size): |
1008 | - """Return size bytes from stream. If internal buffer is empty, |
1009 | - read another block from the stream. |
1010 | - """ |
1011 | - c = len(self.buf) |
1012 | - t = [self.buf] |
1013 | - while c < size: |
1014 | - buf = self.fileobj.read(self.bufsize) |
1015 | - if not buf: |
1016 | - break |
1017 | - t.append(buf) |
1018 | - c += len(buf) |
1019 | - t = "".join(t) |
1020 | - self.buf = t[size:] |
1021 | - return t[:size] |
1022 | -# class _Stream |
1023 | - |
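The `_Stream` adapter deleted above (forward-only access with transparent gzip/bzip2 detection) is what the stdlib `tarfile` exposes through its `"r|*"` stream mode. A quick sketch, building a gzip-compressed archive in memory and reading it back through the stream interface:

```python
import io
import tarfile

# Write a one-member gzip-compressed archive into memory.
raw = io.BytesIO()
with tarfile.open(fileobj=raw, mode="w:gz") as tf:
    data = b"hello"
    info = tarfile.TarInfo("greeting.txt")
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))

# "r|*" is the stream interface: forward-only reads with transparent
# gz/bz2 detection, which is what _Stream and _StreamProxy provided.
raw.seek(0)
with tarfile.open(fileobj=raw, mode="r|*") as tf:
    member = tf.next()
    content = tf.extractfile(member).read()
print(member.name, content)  # greeting.txt b'hello'
```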
1024 | -class _StreamProxy(object): |
1025 | - """Small proxy class that enables transparent compression |
1026 | - detection for the Stream interface (mode 'r|*'). |
1027 | - """ |
1028 | - |
1029 | - def __init__(self, fileobj): |
1030 | - self.fileobj = fileobj |
1031 | - self.buf = self.fileobj.read(BLOCKSIZE) |
1032 | - |
1033 | - def read(self, size): |
1034 | - self.read = self.fileobj.read |
1035 | - return self.buf |
1036 | - |
1037 | - def getcomptype(self): |
1038 | - if self.buf.startswith("\037\213\010"): |
1039 | - return "gz" |
1040 | - if self.buf.startswith("BZh91"): |
1041 | - return "bz2" |
1042 | - return "tar" |
1043 | - |
1044 | - def close(self): |
1045 | - self.fileobj.close() |
1046 | -# class StreamProxy |
1047 | - |
1048 | -class _BZ2Proxy(object): |
1049 | - """Small proxy class that enables external file object |
1050 | - support for "r:bz2" and "w:bz2" modes. This is actually |
1051 | - a workaround for a limitation in bz2 module's BZ2File |
1052 | - class which (unlike gzip.GzipFile) has no support for |
1053 | - a file object argument. |
1054 | - """ |
1055 | - |
1056 | - blocksize = 16 * 1024 |
1057 | - |
1058 | - def __init__(self, fileobj, mode): |
1059 | - self.fileobj = fileobj |
1060 | - self.mode = mode |
1061 | - self.name = getattr(self.fileobj, "name", None) |
1062 | - self.init() |
1063 | - |
1064 | - def init(self): |
1065 | - import bz2 |
1066 | - self.pos = 0 |
1067 | - if self.mode == "r": |
1068 | - self.bz2obj = bz2.BZ2Decompressor() |
1069 | - self.fileobj.seek(0) |
1070 | - self.buf = "" |
1071 | - else: |
1072 | - self.bz2obj = bz2.BZ2Compressor() |
1073 | - |
1074 | - def read(self, size): |
1075 | - b = [self.buf] |
1076 | - x = len(self.buf) |
1077 | - while x < size: |
1078 | - raw = self.fileobj.read(self.blocksize) |
1079 | - if not raw: |
1080 | - break |
1081 | - data = self.bz2obj.decompress(raw) |
1082 | - b.append(data) |
1083 | - x += len(data) |
1084 | - self.buf = "".join(b) |
1085 | - |
1086 | - buf = self.buf[:size] |
1087 | - self.buf = self.buf[size:] |
1088 | - self.pos += len(buf) |
1089 | - return buf |
1090 | - |
1091 | - def seek(self, pos): |
1092 | - if pos < self.pos: |
1093 | - self.init() |
1094 | - self.read(pos - self.pos) |
1095 | - |
1096 | - def tell(self): |
1097 | - return self.pos |
1098 | - |
1099 | - def write(self, data): |
1100 | - self.pos += len(data) |
1101 | - raw = self.bz2obj.compress(data) |
1102 | - self.fileobj.write(raw) |
1103 | - |
1104 | - def close(self): |
1105 | - if self.mode == "w": |
1106 | - raw = self.bz2obj.flush() |
1107 | - self.fileobj.write(raw) |
1108 | -# class _BZ2Proxy |
1109 | - |
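`_BZ2Proxy` existed only because the old `bz2.BZ2File` could not wrap an arbitrary file object. Modern Python (3.3+) accepts one directly, and closing the `BZ2File` leaves the underlying object open, so the proxy is redundant:

```python
import bz2
import io

buf = io.BytesIO()
# BZ2File now takes an arbitrary file object directly, which is the
# capability _BZ2Proxy existed to emulate.
with bz2.BZ2File(buf, mode="wb") as f:
    f.write(b"payload")

buf.seek(0)
with bz2.BZ2File(buf, mode="rb") as f:
    restored = f.read()
print(restored)  # b'payload'
```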
1110 | -#------------------------ |
1111 | -# Extraction file object |
1112 | -#------------------------ |
1113 | -class _FileInFile(object): |
1114 | - """A thin wrapper around an existing file object that |
1115 | - provides a part of its data as an individual file |
1116 | - object. |
1117 | - """ |
1118 | - |
1119 | - def __init__(self, fileobj, offset, size, sparse=None): |
1120 | - self.fileobj = fileobj |
1121 | - self.offset = offset |
1122 | - self.size = size |
1123 | - self.sparse = sparse |
1124 | - self.position = 0 |
1125 | - |
1126 | - def tell(self): |
1127 | - """Return the current file position. |
1128 | - """ |
1129 | - return self.position |
1130 | - |
1131 | - def seek(self, position): |
1132 | - """Seek to a position in the file. |
1133 | - """ |
1134 | - self.position = position |
1135 | - |
1136 | - def read(self, size=None): |
1137 | - """Read data from the file. |
1138 | - """ |
1139 | - if size is None: |
1140 | - size = self.size - self.position |
1141 | - else: |
1142 | - size = min(size, self.size - self.position) |
1143 | - |
1144 | - if self.sparse is None: |
1145 | - return self.readnormal(size) |
1146 | - else: |
1147 | - return self.readsparse(size) |
1148 | - |
1149 | - def readnormal(self, size): |
1150 | - """Read operation for regular files. |
1151 | - """ |
1152 | - self.fileobj.seek(self.offset + self.position) |
1153 | - self.position += size |
1154 | - return self.fileobj.read(size) |
1155 | - |
1156 | - def readsparse(self, size): |
1157 | - """Read operation for sparse files. |
1158 | - """ |
1159 | - data = [] |
1160 | - while size > 0: |
1161 | - buf = self.readsparsesection(size) |
1162 | - if not buf: |
1163 | - break |
1164 | - size -= len(buf) |
1165 | - data.append(buf) |
1166 | - return "".join(data) |
1167 | - |
1168 | - def readsparsesection(self, size): |
1169 | - """Read a single section of a sparse file. |
1170 | - """ |
1171 | - section = self.sparse.find(self.position) |
1172 | - |
1173 | - if section is None: |
1174 | - return "" |
1175 | - |
1176 | - size = min(size, section.offset + section.size - self.position) |
1177 | - |
1178 | - if isinstance(section, _data): |
1179 | - realpos = section.realpos + self.position - section.offset |
1180 | - self.fileobj.seek(self.offset + realpos) |
1181 | - self.position += size |
1182 | - return self.fileobj.read(size) |
1183 | - else: |
1184 | - self.position += size |
1185 | - return NUL * size |
1186 | -#class _FileInFile |
1187 | - |
1188 | - |
1189 | -class ExFileObject(object): |
1190 | - """File-like object for reading an archive member. |
1191 | - Is returned by TarFile.extractfile(). |
1192 | - """ |
1193 | - blocksize = 1024 |
1194 | - |
1195 | - def __init__(self, tarfile, tarinfo): |
1196 | - self.fileobj = _FileInFile(tarfile.fileobj, |
1197 | - tarinfo.offset_data, |
1198 | - tarinfo.size, |
1199 | - getattr(tarinfo, "sparse", None)) |
1200 | - self.name = tarinfo.name |
1201 | - self.mode = "r" |
1202 | - self.closed = False |
1203 | - self.size = tarinfo.size |
1204 | - |
1205 | - self.position = 0 |
1206 | - self.buffer = "" |
1207 | - |
1208 | - def read(self, size=None): |
1209 | - """Read at most size bytes from the file. If size is not |
1210 | - present or None, read all data until EOF is reached. |
1211 | - """ |
1212 | - if self.closed: |
1213 | - raise ValueError("I/O operation on closed file") |
1214 | - |
1215 | - buf = "" |
1216 | - if self.buffer: |
1217 | - if size is None: |
1218 | - buf = self.buffer |
1219 | - self.buffer = "" |
1220 | - else: |
1221 | - buf = self.buffer[:size] |
1222 | - self.buffer = self.buffer[size:] |
1223 | - |
1224 | - if size is None: |
1225 | - buf += self.fileobj.read() |
1226 | - else: |
1227 | - buf += self.fileobj.read(size - len(buf)) |
1228 | - |
1229 | - self.position += len(buf) |
1230 | - return buf |
1231 | - |
1232 | - def readline(self, size=-1): |
1233 | - """Read one entire line from the file. If size is present |
1234 | - and non-negative, return a string with at most that |
1235 | - size, which may be an incomplete line. |
1236 | - """ |
1237 | - if self.closed: |
1238 | - raise ValueError("I/O operation on closed file") |
1239 | - |
1240 | - if "\n" in self.buffer: |
1241 | - pos = self.buffer.find("\n") + 1 |
1242 | - else: |
1243 | - buffers = [self.buffer] |
1244 | - while True: |
1245 | - buf = self.fileobj.read(self.blocksize) |
1246 | - buffers.append(buf) |
1247 | - if not buf or "\n" in buf: |
1248 | - self.buffer = "".join(buffers) |
1249 | - pos = self.buffer.find("\n") + 1 |
1250 | - if pos == 0: |
1251 | - # no newline found. |
1252 | - pos = len(self.buffer) |
1253 | - break |
1254 | - |
1255 | - if size != -1: |
1256 | - pos = min(size, pos) |
1257 | - |
1258 | - buf = self.buffer[:pos] |
1259 | - self.buffer = self.buffer[pos:] |
1260 | - self.position += len(buf) |
1261 | - return buf |
1262 | - |
1263 | - def readlines(self): |
1264 | - """Return a list with all remaining lines. |
1265 | - """ |
1266 | - result = [] |
1267 | - while True: |
1268 | - line = self.readline() |
1269 | - if not line: break |
1270 | - result.append(line) |
1271 | - return result |
1272 | - |
1273 | - def tell(self): |
1274 | - """Return the current file position. |
1275 | - """ |
1276 | - if self.closed: |
1277 | - raise ValueError("I/O operation on closed file") |
1278 | - |
1279 | - return self.position |
1280 | - |
1281 | - def seek(self, pos, whence=0): |
1282 | - """Seek to a position in the file. |
1283 | - """ |
1284 | - if self.closed: |
1285 | - raise ValueError("I/O operation on closed file") |
1286 | - |
1287 | - if whence == 0: |
1288 | - self.position = min(max(pos, 0), self.size) |
1289 | - elif whence == 1: |
1290 | - if pos < 0: |
1291 | - self.position = max(self.position + pos, 0) |
1292 | - else: |
1293 | - self.position = min(self.position + pos, self.size) |
1294 | - elif whence == 2: |
1295 | - self.position = max(min(self.size + pos, self.size), 0) |
1296 | - else: |
1297 | - raise ValueError("Invalid argument") |
1298 | - |
1299 | - self.buffer = "" |
1300 | - self.fileobj.seek(self.position) |
1301 | - |
1302 | - def close(self): |
1303 | - """Close the file object. |
1304 | - """ |
1305 | - self.closed = True |
1306 | - |
1307 | - def __iter__(self): |
1308 | - """Get an iterator over the file's lines. |
1309 | - """ |
1310 | - while True: |
1311 | - line = self.readline() |
1312 | - if not line: |
1313 | - break |
1314 | - yield line |
1315 | -#class ExFileObject |
1316 | - |
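The `_FileInFile`/`ExFileObject` pair removed above is likewise provided by the stdlib: `TarFile.extractfile()` returns a file-like object over a member's data, with `read()`, `readline()`, and iteration. A minimal sketch:

```python
import io
import tarfile

raw = io.BytesIO()
with tarfile.open(fileobj=raw, mode="w") as tf:
    data = b"line1\nline2\n"
    info = tarfile.TarInfo("notes.txt")
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))

raw.seek(0)
with tarfile.open(fileobj=raw, mode="r") as tf:
    # extractfile() returns the stdlib's member file object,
    # replacing the vendored ExFileObject.
    f = tf.extractfile("notes.txt")
    first = f.readline()
    rest = f.read()
print(first, rest)  # b'line1\n' b'line2\n'
```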
1317 | -#------------------ |
1318 | -# Exported Classes |
1319 | -#------------------ |
1320 | -class TarInfo(object): |
1321 | - """Informational class which holds the details about an |
1322 | - archive member given by a tar header block. |
1323 | - TarInfo objects are returned by TarFile.getmember(), |
1324 | - TarFile.getmembers() and TarFile.gettarinfo() and are |
1325 | - usually created internally. |
1326 | - """ |
1327 | - |
1328 | - def __init__(self, name=""): |
1329 | - """Construct a TarInfo object. name is the optional name |
1330 | - of the member. |
1331 | - """ |
1332 | - self.name = name # member name |
1333 | - self.mode = 0644 # file permissions |
1334 | - self.uid = 0 # user id |
1335 | - self.gid = 0 # group id |
1336 | - self.size = 0 # file size |
1337 | - self.mtime = 0 # modification time |
1338 | - self.chksum = 0 # header checksum |
1339 | - self.type = REGTYPE # member type |
1340 | - self.linkname = "" # link name |
1341 | - self.uname = "" # user name |
1342 | - self.gname = "" # group name |
1343 | - self.devmajor = 0 # device major number |
1344 | - self.devminor = 0 # device minor number |
1345 | - |
1346 | - self.offset = 0 # the tar header starts here |
1347 | - self.offset_data = 0 # the file's data starts here |
1348 | - |
1349 | - self.pax_headers = {} # pax header information |
1350 | - |
1351 | - # In pax headers the "name" and "linkname" field are called |
1352 | - # "path" and "linkpath". |
1353 | - def _getpath(self): |
1354 | - return self.name |
1355 | - def _setpath(self, name): |
1356 | - self.name = name |
1357 | - path = property(_getpath, _setpath) |
1358 | - |
1359 | - def _getlinkpath(self): |
1360 | - return self.linkname |
1361 | - def _setlinkpath(self, linkname): |
1362 | - self.linkname = linkname |
1363 | - linkpath = property(_getlinkpath, _setlinkpath) |
1364 | - |
1365 | - def __repr__(self): |
1366 | - return "<%s %r at %#x>" % (self.__class__.__name__,self.name,id(self)) |
1367 | - |
1368 | - def get_info(self, encoding, errors): |
1369 | - """Return the TarInfo's attributes as a dictionary. |
1370 | - """ |
1371 | - info = { |
1372 | - "name": self.name, |
1373 | - "mode": self.mode & 07777, |
1374 | - "uid": self.uid, |
1375 | - "gid": self.gid, |
1376 | - "size": self.size, |
1377 | - "mtime": self.mtime, |
1378 | - "chksum": self.chksum, |
1379 | - "type": self.type, |
1380 | - "linkname": self.linkname, |
1381 | - "uname": self.uname, |
1382 | - "gname": self.gname, |
1383 | - "devmajor": self.devmajor, |
1384 | - "devminor": self.devminor |
1385 | - } |
1386 | - |
1387 | - if info["type"] == DIRTYPE and not info["name"].endswith("/"): |
1388 | - info["name"] += "/" |
1389 | - |
1390 | - for key in ("name", "linkname", "uname", "gname"): |
1391 | - if type(info[key]) is unicode: |
1392 | - info[key] = info[key].encode(encoding, errors) |
1393 | - |
1394 | - return info |
1395 | - |
1396 | - def tobuf(self, format=DEFAULT_FORMAT, encoding=ENCODING, errors="strict"): |
1397 | - """Return a tar header as a string of 512 byte blocks. |
1398 | - """ |
1399 | - info = self.get_info(encoding, errors) |
1400 | - |
1401 | - if format == USTAR_FORMAT: |
1402 | - return self.create_ustar_header(info) |
1403 | - elif format == GNU_FORMAT: |
1404 | - return self.create_gnu_header(info) |
1405 | - elif format == PAX_FORMAT: |
1406 | - return self.create_pax_header(info, encoding, errors) |
1407 | - else: |
1408 | - raise ValueError("invalid format") |
1409 | - |
1410 | - def create_ustar_header(self, info): |
1411 | - """Return the object as a ustar header block. |
1412 | - """ |
1413 | - info["magic"] = POSIX_MAGIC |
1414 | - |
1415 | - if len(info["linkname"]) > LENGTH_LINK: |
1416 | - raise ValueError("linkname is too long") |
1417 | - |
1418 | - if len(info["name"]) > LENGTH_NAME: |
1419 | - info["prefix"], info["name"] = self._posix_split_name(info["name"]) |
1420 | - |
1421 | - return self._create_header(info, USTAR_FORMAT) |
1422 | - |
1423 | - def create_gnu_header(self, info): |
1424 | - """Return the object as a GNU header block sequence. |
1425 | - """ |
1426 | - info["magic"] = GNU_MAGIC |
1427 | - |
1428 | - buf = "" |
1429 | - if len(info["linkname"]) > LENGTH_LINK: |
1430 | - buf += self._create_gnu_long_header(info["linkname"], GNUTYPE_LONGLINK) |
1431 | - |
1432 | - if len(info["name"]) > LENGTH_NAME: |
1433 | - buf += self._create_gnu_long_header(info["name"], GNUTYPE_LONGNAME) |
1434 | - |
1435 | - return buf + self._create_header(info, GNU_FORMAT) |
1436 | - |
1437 | - def create_pax_header(self, info, encoding, errors): |
1438 | - """Return the object as a ustar header block. If it cannot be |
1439 | - represented this way, prepend a pax extended header sequence |
1440 | - with supplement information. |
1441 | - """ |
1442 | - info["magic"] = POSIX_MAGIC |
1443 | - pax_headers = self.pax_headers.copy() |
1444 | - |
1445 | - # Test string fields for values that exceed the field length or cannot |
1446 | - # be represented in ASCII encoding. |
1447 | - for name, hname, length in ( |
1448 | - ("name", "path", LENGTH_NAME), ("linkname", "linkpath", LENGTH_LINK), |
1449 | - ("uname", "uname", 32), ("gname", "gname", 32)): |
1450 | - |
1451 | - if hname in pax_headers: |
1452 | - # The pax header has priority. |
1453 | - continue |
1454 | - |
1455 | - val = info[name].decode(encoding, errors) |
1456 | - |
1457 | - # Try to encode the string as ASCII. |
1458 | - try: |
1459 | - val.encode("ascii") |
1460 | - except UnicodeEncodeError: |
1461 | - pax_headers[hname] = val |
1462 | - continue |
1463 | - |
1464 | - if len(info[name]) > length: |
1465 | - pax_headers[hname] = val |
1466 | - |
1467 | - # Test number fields for values that exceed the field limit or values |
1468 | - # that like to be stored as float. |
1469 | - for name, digits in (("uid", 8), ("gid", 8), ("size", 12), ("mtime", 12)): |
1470 | - if name in pax_headers: |
1471 | - # The pax header has priority. Avoid overflow. |
1472 | - info[name] = 0 |
1473 | - continue |
1474 | - |
1475 | - val = info[name] |
1476 | - if not 0 <= val < 8 ** (digits - 1) or isinstance(val, float): |
1477 | - pax_headers[name] = unicode(val) |
1478 | - info[name] = 0 |
1479 | - |
1480 | - # Create a pax extended header if necessary. |
1481 | - if pax_headers: |
1482 | - buf = self._create_pax_generic_header(pax_headers) |
1483 | - else: |
1484 | - buf = "" |
1485 | - |
1486 | - return buf + self._create_header(info, USTAR_FORMAT) |
1487 | - |
1488 | - @classmethod |
1489 | - def create_pax_global_header(cls, pax_headers): |
1490 | - """Return the object as a pax global header block sequence. |
1491 | - """ |
1492 | - return cls._create_pax_generic_header(pax_headers, type=XGLTYPE) |
1493 | - |
1494 | - def _posix_split_name(self, name): |
1495 | - """Split a name longer than 100 chars into a prefix |
1496 | - and a name part. |
1497 | - """ |
1498 | - prefix = name[:LENGTH_PREFIX + 1] |
1499 | - while prefix and prefix[-1] != "/": |
1500 | - prefix = prefix[:-1] |
1501 | - |
1502 | - name = name[len(prefix):] |
1503 | - prefix = prefix[:-1] |
1504 | - |
1505 | - if not prefix or len(name) > LENGTH_NAME: |
1506 | - raise ValueError("name is too long") |
1507 | - return prefix, name |
1508 | - |
1509 | - @staticmethod |
1510 | - def _create_header(info, format): |
1511 | - """Return a header block. info is a dictionary with file |
1512 | - information, format must be one of the *_FORMAT constants. |
1513 | - """ |
1514 | - parts = [ |
1515 | - stn(info.get("name", ""), 100), |
1516 | - itn(info.get("mode", 0) & 07777, 8, format), |
1517 | - itn(info.get("uid", 0), 8, format), |
1518 | - itn(info.get("gid", 0), 8, format), |
1519 | - itn(info.get("size", 0), 12, format), |
1520 | - itn(info.get("mtime", 0), 12, format), |
1521 | - " ", # checksum field |
1522 | - info.get("type", REGTYPE), |
1523 | - stn(info.get("linkname", ""), 100), |
1524 | - stn(info.get("magic", POSIX_MAGIC), 8), |
1525 | - stn(info.get("uname", ""), 32), |
1526 | - stn(info.get("gname", ""), 32), |
1527 | - itn(info.get("devmajor", 0), 8, format), |
1528 | - itn(info.get("devminor", 0), 8, format), |
1529 | - stn(info.get("prefix", ""), 155) |
1530 | - ] |
1531 | - |
1532 | - buf = struct.pack("%ds" % BLOCKSIZE, "".join(parts)) |
1533 | - chksum = calc_chksums(buf[-BLOCKSIZE:])[0] |
1534 | - buf = buf[:-364] + "%06o\0" % chksum + buf[-357:] |
1535 | - return buf |
1536 | - |
1537 | - @staticmethod |
1538 | - def _create_payload(payload): |
1539 | - """Return the string payload filled with zero bytes |
1540 | - up to the next 512 byte border. |
1541 | - """ |
1542 | - blocks, remainder = divmod(len(payload), BLOCKSIZE) |
1543 | - if remainder > 0: |
1544 | - payload += (BLOCKSIZE - remainder) * NUL |
1545 | - return payload |
1546 | - |
1547 | - @classmethod |
1548 | - def _create_gnu_long_header(cls, name, type): |
1549 | - """Return a GNUTYPE_LONGNAME or GNUTYPE_LONGLINK sequence |
1550 | - for name. |
1551 | - """ |
1552 | - name += NUL |
1553 | - |
1554 | - info = {} |
1555 | - info["name"] = "././@LongLink" |
1556 | - info["type"] = type |
1557 | - info["size"] = len(name) |
1558 | - info["magic"] = GNU_MAGIC |
1559 | - |
1560 | - # create extended header + name blocks. |
1561 | - return cls._create_header(info, USTAR_FORMAT) + \ |
1562 | - cls._create_payload(name) |
1563 | - |
1564 | - @classmethod |
1565 | - def _create_pax_generic_header(cls, pax_headers, type=XHDTYPE): |
1566 | - """Return a POSIX.1-2001 extended or global header sequence |
1567 | - that contains a list of keyword, value pairs. The values |
1568 | - must be unicode objects. |
1569 | - """ |
1570 | - records = [] |
1571 | - for keyword, value in pax_headers.iteritems(): |
1572 | - keyword = keyword.encode("utf8") |
1573 | - value = value.encode("utf8") |
1574 | - l = len(keyword) + len(value) + 3 # ' ' + '=' + '\n' |
1575 | - n = p = 0 |
1576 | - while True: |
1577 | - n = l + len(str(p)) |
1578 | - if n == p: |
1579 | - break |
1580 | - p = n |
1581 | - records.append("%d %s=%s\n" % (p, keyword, value)) |
1582 | - records = "".join(records) |
1583 | - |
1584 | - # We use a hardcoded "././@PaxHeader" name like star does |
1585 | - # instead of the one that POSIX recommends. |
1586 | - info = {} |
1587 | - info["name"] = "././@PaxHeader" |
1588 | - info["type"] = type |
1589 | - info["size"] = len(records) |
1590 | - info["magic"] = POSIX_MAGIC |
1591 | - |
1592 | - # Create pax header + record blocks. |
1593 | - return cls._create_header(info, USTAR_FORMAT) + \ |
1594 | - cls._create_payload(records) |
1595 | - |
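The hand-rolled pax record construction above is handled by the stdlib when a `TarFile` is opened with `format=tarfile.PAX_FORMAT`; per-member `pax_headers` are emitted as extended-header blocks and restored on read. A sketch:

```python
import io
import tarfile

raw = io.BytesIO()
# PAX_FORMAT makes TarFile emit the extended-header records that the
# removed _create_pax_generic_header() built by hand.
with tarfile.open(fileobj=raw, mode="w", format=tarfile.PAX_FORMAT) as tf:
    info = tarfile.TarInfo("data.bin")
    info.pax_headers = {"comment": "kept in an extended header"}
    tf.addfile(info)

raw.seek(0)
with tarfile.open(fileobj=raw, mode="r") as tf:
    member = tf.getmember("data.bin")
    comment = member.pax_headers["comment"]
print(comment)  # kept in an extended header
```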
1596 | - @classmethod |
1597 | - def frombuf(cls, buf): |
1598 | - """Construct a TarInfo object from a 512 byte string buffer. |
1599 | - """ |
1600 | - if len(buf) == 0: |
1601 | - raise EmptyHeaderError("empty header") |
1602 | - if len(buf) != BLOCKSIZE: |
1603 | - raise TruncatedHeaderError("truncated header") |
1604 | - if buf.count(NUL) == BLOCKSIZE: |
1605 | - raise EOFHeaderError("end of file header") |
1606 | - |
1607 | - chksum = nti(buf[148:156]) |
1608 | - if chksum not in calc_chksums(buf): |
1609 | - raise InvalidHeaderError("bad checksum") |
1610 | - |
1611 | - obj = cls() |
1612 | - obj.buf = buf |
1613 | - obj.name = nts(buf[0:100]) |
1614 | - obj.mode = nti(buf[100:108]) |
1615 | - obj.uid = nti(buf[108:116]) |
1616 | - obj.gid = nti(buf[116:124]) |
1617 | - obj.size = nti(buf[124:136]) |
1618 | - obj.mtime = nti(buf[136:148]) |
1619 | - obj.chksum = chksum |
1620 | - obj.type = buf[156:157] |
1621 | - obj.linkname = nts(buf[157:257]) |
1622 | - obj.uname = nts(buf[265:297]) |
1623 | - obj.gname = nts(buf[297:329]) |
1624 | - obj.devmajor = nti(buf[329:337]) |
1625 | - obj.devminor = nti(buf[337:345]) |
1626 | - prefix = nts(buf[345:500]) |
1627 | - |
1628 | - # Old V7 tar format represents a directory as a regular |
1629 | - # file with a trailing slash. |
1630 | - if obj.type == AREGTYPE and obj.name.endswith("/"): |
1631 | - obj.type = DIRTYPE |
1632 | - |
1633 | - # Remove redundant slashes from directories. |
1634 | - if obj.isdir(): |
1635 | - obj.name = obj.name.rstrip("/") |
1636 | - |
1637 | - # Reconstruct a ustar longname. |
1638 | - if prefix and obj.type not in GNU_TYPES: |
1639 | - obj.name = prefix + "/" + obj.name |
1640 | - return obj |
1641 | - |
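The `tobuf()`/`frombuf()` round trip survives unchanged in the stdlib; the only difference worth noting is that Python 3's `frombuf()` takes explicit encoding arguments that the removed 2.x copy defaulted internally:

```python
import tarfile

info = tarfile.TarInfo("example.txt")
info.size = 42

# Serialize to a single 512-byte ustar header block...
buf = info.tobuf(format=tarfile.USTAR_FORMAT)
assert len(buf) == 512

# ...and parse it back. Python 3's frombuf() requires the encoding
# and errors arguments explicitly.
parsed = tarfile.TarInfo.frombuf(buf, "utf-8", "surrogateescape")
print(parsed.name, parsed.size)  # example.txt 42
```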
1642 | - @classmethod |
1643 | - def fromtarfile(cls, tarfile): |
1644 | - """Return the next TarInfo object from TarFile object |
1645 | - tarfile. |
1646 | - """ |
1647 | - buf = tarfile.fileobj.read(BLOCKSIZE) |
1648 | - obj = cls.frombuf(buf) |
1649 | - obj.offset = tarfile.fileobj.tell() - BLOCKSIZE |
1650 | - return obj._proc_member(tarfile) |
1651 | - |
1652 | - #-------------------------------------------------------------------------- |
1653 | - # The following are methods that are called depending on the type of a |
1654 | - # member. The entry point is _proc_member() which can be overridden in a |
1655 | - # subclass to add custom _proc_*() methods. A _proc_*() method MUST |
1656 | - # implement the following |
1657 | - # operations: |
1658 | - # 1. Set self.offset_data to the position where the data blocks begin, |
1659 | - # if there is data that follows. |
1660 | - # 2. Set tarfile.offset to the position where the next member's header will |
1661 | - # begin. |
1662 | - # 3. Return self or another valid TarInfo object. |
1663 | - def _proc_member(self, tarfile): |
1664 | - """Choose the right processing method depending on |
1665 | - the type and call it. |
1666 | - """ |
1667 | - if self.type in (GNUTYPE_LONGNAME, GNUTYPE_LONGLINK): |
1668 | - return self._proc_gnulong(tarfile) |
1669 | - elif self.type == GNUTYPE_SPARSE: |
1670 | - return self._proc_sparse(tarfile) |
1671 | - elif self.type in (XHDTYPE, XGLTYPE, SOLARIS_XHDTYPE): |
1672 | - return self._proc_pax(tarfile) |
1673 | - else: |
1674 | - return self._proc_builtin(tarfile) |
1675 | - |
1676 | - def _proc_builtin(self, tarfile): |
1677 | - """Process a builtin type or an unknown type which |
1678 | - will be treated as a regular file. |
1679 | - """ |
1680 | - self.offset_data = tarfile.fileobj.tell() |
1681 | - offset = self.offset_data |
1682 | - if self.isreg() or self.type not in SUPPORTED_TYPES: |
1683 | - # Skip the following data blocks. |
1684 | - offset += self._block(self.size) |
1685 | - tarfile.offset = offset |
1686 | - |
1687 | - # Patch the TarInfo object with saved global |
1688 | - # header information. |
1689 | - self._apply_pax_info(tarfile.pax_headers, tarfile.encoding, tarfile.errors) |
1690 | - |
1691 | - return self |
1692 | - |
1693 | - def _proc_gnulong(self, tarfile): |
1694 | - """Process the blocks that hold a GNU longname |
1695 | - or longlink member. |
1696 | - """ |
1697 | - buf = tarfile.fileobj.read(self._block(self.size)) |
1698 | - |
1699 | - # Fetch the next header and process it. |
1700 | - try: |
1701 | - next = self.fromtarfile(tarfile) |
1702 | - except HeaderError: |
1703 | - raise SubsequentHeaderError("missing or bad subsequent header") |
1704 | - |
1705 | - # Patch the TarInfo object from the next header with |
1706 | - # the longname information. |
1707 | - next.offset = self.offset |
1708 | - if self.type == GNUTYPE_LONGNAME: |
1709 | - next.name = nts(buf) |
1710 | - elif self.type == GNUTYPE_LONGLINK: |
1711 | - next.linkname = nts(buf) |
1712 | - |
1713 | - return next |
1714 | - |
1715 | - def _proc_sparse(self, tarfile): |
1716 | - """Process a GNU sparse header plus extra headers. |
1717 | - """ |
1718 | - buf = self.buf |
1719 | - sp = _ringbuffer() |
1720 | - pos = 386 |
1721 | - lastpos = 0L |
1722 | - realpos = 0L |
1723 | - # There are 4 possible sparse structs in the |
1724 | - # first header. |
1725 | - for i in xrange(4): |
1726 | - try: |
1727 | - offset = nti(buf[pos:pos + 12]) |
1728 | - numbytes = nti(buf[pos + 12:pos + 24]) |
1729 | - except ValueError: |
1730 | - break |
1731 | - if offset > lastpos: |
1732 | - sp.append(_hole(lastpos, offset - lastpos)) |
1733 | - sp.append(_data(offset, numbytes, realpos)) |
1734 | - realpos += numbytes |
1735 | - lastpos = offset + numbytes |
1736 | - pos += 24 |
1737 | - |
1738 | - isextended = ord(buf[482]) |
1739 | - origsize = nti(buf[483:495]) |
1740 | - |
1741 | - # If the isextended flag is given, |
1742 | - # there are extra headers to process. |
1743 | - while isextended == 1: |
1744 | - buf = tarfile.fileobj.read(BLOCKSIZE) |
1745 | - pos = 0 |
1746 | - for i in xrange(21): |
1747 | - try: |
1748 | - offset = nti(buf[pos:pos + 12]) |
1749 | - numbytes = nti(buf[pos + 12:pos + 24]) |
1750 | - except ValueError: |
1751 | - break |
1752 | - if offset > lastpos: |
1753 | - sp.append(_hole(lastpos, offset - lastpos)) |
1754 | - sp.append(_data(offset, numbytes, realpos)) |
1755 | - realpos += numbytes |
1756 | - lastpos = offset + numbytes |
1757 | - pos += 24 |
1758 | - isextended = ord(buf[504]) |
1759 | - |
1760 | - if lastpos < origsize: |
1761 | - sp.append(_hole(lastpos, origsize - lastpos)) |
1762 | - |
1763 | - self.sparse = sp |
1764 | - |
1765 | - self.offset_data = tarfile.fileobj.tell() |
1766 | - tarfile.offset = self.offset_data + self._block(self.size) |
1767 | - self.size = origsize |
1768 | - |
1769 | - return self |
1770 | - |
1771 | - def _proc_pax(self, tarfile): |
1772 | - """Process an extended or global header as described in |
1773 | - POSIX.1-2001. |
1774 | - """ |
1775 | - # Read the header information. |
1776 | - buf = tarfile.fileobj.read(self._block(self.size)) |
1777 | - |
1778 | - # A pax header stores supplemental information for either |
1779 | - # the following file (extended) or all following files |
1780 | - # (global). |
1781 | - if self.type == XGLTYPE: |
1782 | - pax_headers = tarfile.pax_headers |
1783 | - else: |
1784 | - pax_headers = tarfile.pax_headers.copy() |
1785 | - |
1786 | - # Parse pax header information. A record looks like that: |
1787 | - # "%d %s=%s\n" % (length, keyword, value). length is the size |
1788 | - # of the complete record including the length field itself and |
1789 | - # the newline. keyword and value are both UTF-8 encoded strings. |
1790 | - regex = re.compile(r"(\d+) ([^=]+)=", re.U) |
1791 | - pos = 0 |
1792 | - while True: |
1793 | - match = regex.match(buf, pos) |
1794 | - if not match: |
1795 | - break |
1796 | - |
1797 | - length, keyword = match.groups() |
1798 | - length = int(length) |
1799 | - value = buf[match.end(2) + 1:match.start(1) + length - 1] |
1800 | - |
1801 | - keyword = keyword.decode("utf8") |
1802 | - value = value.decode("utf8") |
1803 | - |
1804 | - pax_headers[keyword] = value |
1805 | - pos += length |
1806 | - |
1807 | - # Fetch the next header. |
1808 | - try: |
1809 | - next = self.fromtarfile(tarfile) |
1810 | - except HeaderError: |
1811 | - raise SubsequentHeaderError("missing or bad subsequent header") |
1812 | - |
1813 | - if self.type in (XHDTYPE, SOLARIS_XHDTYPE): |
1814 | - # Patch the TarInfo object with the extended header info. |
1815 | - next._apply_pax_info(pax_headers, tarfile.encoding, tarfile.errors) |
1816 | - next.offset = self.offset |
1817 | - |
1818 | - if "size" in pax_headers: |
1819 | - # If the extended header replaces the size field, |
1820 | - # we need to recalculate the offset where the next |
1821 | - # header starts. |
1822 | - offset = next.offset_data |
1823 | - if next.isreg() or next.type not in SUPPORTED_TYPES: |
1824 | - offset += next._block(next.size) |
1825 | - tarfile.offset = offset |
1826 | - |
1827 | - return next |
1828 | - |
1829 | - def _apply_pax_info(self, pax_headers, encoding, errors): |
1830 | - """Replace fields with supplemental information from a previous |
1831 | - pax extended or global header. |
1832 | - """ |
1833 | - for keyword, value in pax_headers.iteritems(): |
1834 | - if keyword not in PAX_FIELDS: |
1835 | - continue |
1836 | - |
1837 | - if keyword == "path": |
1838 | - value = value.rstrip("/") |
1839 | - |
1840 | - if keyword in PAX_NUMBER_FIELDS: |
1841 | - try: |
1842 | - value = PAX_NUMBER_FIELDS[keyword](value) |
1843 | - except ValueError: |
1844 | - value = 0 |
1845 | - else: |
1846 | - value = uts(value, encoding, errors) |
1847 | - |
1848 | - setattr(self, keyword, value) |
1849 | - |
1850 | - self.pax_headers = pax_headers.copy() |
1851 | - |
1852 | - def _block(self, count): |
1853 | - """Round up a byte count by BLOCKSIZE and return it, |
1854 | - e.g. _block(834) => 1024. |
1855 | - """ |
1856 | - blocks, remainder = divmod(count, BLOCKSIZE) |
1857 | - if remainder: |
1858 | - blocks += 1 |
1859 | - return blocks * BLOCKSIZE |
1860 | - |
1861 | - def isreg(self): |
1862 | - return self.type in REGULAR_TYPES |
1863 | - def isfile(self): |
1864 | - return self.isreg() |
1865 | - def isdir(self): |
1866 | - return self.type == DIRTYPE |
1867 | - def issym(self): |
1868 | - return self.type == SYMTYPE |
1869 | - def islnk(self): |
1870 | - return self.type == LNKTYPE |
1871 | - def ischr(self): |
1872 | - return self.type == CHRTYPE |
1873 | - def isblk(self): |
1874 | - return self.type == BLKTYPE |
1875 | - def isfifo(self): |
1876 | - return self.type == FIFOTYPE |
1877 | - def issparse(self): |
1878 | - return self.type == GNUTYPE_SPARSE |
1879 | - def isdev(self): |
1880 | - return self.type in (CHRTYPE, BLKTYPE, FIFOTYPE) |
1881 | -# class TarInfo |
1882 | - |
1883 | -class TarFile(object): |
1884 | - """The TarFile Class provides an interface to tar archives. |
1885 | - """ |
1886 | - |
1887 | - debug = 0 # May be set from 0 (no msgs) to 3 (all msgs) |
1888 | - |
1889 | - dereference = False # If true, add content of linked file to the |
1890 | - # tar file, else the link. |
1891 | - |
1892 | - ignore_zeros = False # If true, skips empty or invalid blocks and |
1893 | - # continues processing. |
1894 | - |
1895 | - errorlevel = 1 # If 0, fatal errors only appear in debug |
1896 | - # messages (if debug >= 0). If > 0, errors |
1897 | - # are passed to the caller as exceptions. |
1898 | - |
1899 | - format = DEFAULT_FORMAT # The format to use when creating an archive. |
1900 | - |
1901 | - encoding = ENCODING # Encoding for 8-bit character strings. |
1902 | - |
1903 | - errors = None # Error handler for unicode conversion. |
1904 | - |
1905 | - tarinfo = TarInfo # The default TarInfo class to use. |
1906 | - |
1907 | - fileobject = ExFileObject # The default ExFileObject class to use. |
1908 | - |
1909 | - def __init__(self, name=None, mode="r", fileobj=None, format=None, |
1910 | - tarinfo=None, dereference=None, ignore_zeros=None, encoding=None, |
1911 | - errors=None, pax_headers=None, debug=None, errorlevel=None): |
1912 | - """Open an (uncompressed) tar archive `name'. `mode' is either 'r' to |
1913 | - read from an existing archive, 'a' to append data to an existing |
1914 | - file or 'w' to create a new file overwriting an existing one. `mode' |
1915 | - defaults to 'r'. |
1916 | - If `fileobj' is given, it is used for reading or writing data. If it |
1917 | - can be determined, `mode' is overridden by `fileobj's mode. |
1918 | - `fileobj' is not closed, when TarFile is closed. |
1919 | - """ |
1920 | - if len(mode) > 1 or mode not in "raw": |
1921 | - raise ValueError("mode must be 'r', 'a' or 'w'") |
1922 | - self.mode = mode |
1923 | - self._mode = {"r": "rb", "a": "r+b", "w": "wb"}[mode] |
1924 | - |
1925 | - if not fileobj: |
1926 | - if self.mode == "a" and not os.path.exists(name): |
1927 | - # Create nonexistent files in append mode. |
1928 | - self.mode = "w" |
1929 | - self._mode = "wb" |
1930 | - fileobj = bltn_open(name, self._mode) |
1931 | - self._extfileobj = False |
1932 | - else: |
1933 | - if name is None and hasattr(fileobj, "name"): |
1934 | - name = fileobj.name |
1935 | - if hasattr(fileobj, "mode"): |
1936 | - self._mode = fileobj.mode |
1937 | - self._extfileobj = True |
1938 | - if name: |
1939 | - self.name = os.path.abspath(name) |
1940 | - else: |
1941 | - self.name = None |
1942 | - self.fileobj = fileobj |
1943 | - |
1944 | - # Init attributes. |
1945 | - if format is not None: |
1946 | - self.format = format |
1947 | - if tarinfo is not None: |
1948 | - self.tarinfo = tarinfo |
1949 | - if dereference is not None: |
1950 | - self.dereference = dereference |
1951 | - if ignore_zeros is not None: |
1952 | - self.ignore_zeros = ignore_zeros |
1953 | - if encoding is not None: |
1954 | - self.encoding = encoding |
1955 | - |
1956 | - if errors is not None: |
1957 | - self.errors = errors |
1958 | - elif mode == "r": |
1959 | - self.errors = "utf-8" |
1960 | - else: |
1961 | - self.errors = "strict" |
1962 | - |
1963 | - if pax_headers is not None and self.format == PAX_FORMAT: |
1964 | - self.pax_headers = pax_headers |
1965 | - else: |
1966 | - self.pax_headers = {} |
1967 | - |
1968 | - if debug is not None: |
1969 | - self.debug = debug |
1970 | - if errorlevel is not None: |
1971 | - self.errorlevel = errorlevel |
1972 | - |
1973 | - # Init datastructures. |
1974 | - self.closed = False |
1975 | - self.members = [] # list of members as TarInfo objects |
1976 | - self._loaded = False # flag if all members have been read |
1977 | - self.offset = self.fileobj.tell() |
1978 | - # current position in the archive file |
1979 | - self.inodes = {} # dictionary caching the inodes of |
1980 | - # archive members already added |
1981 | - |
1982 | - try: |
1983 | - if self.mode == "r": |
1984 | - self.firstmember = None |
1985 | - self.firstmember = self.next() |
1986 | - |
1987 | - if self.mode == "a": |
1988 | - # Move to the end of the archive, |
1989 | - # before the first empty block. |
1990 | - while True: |
1991 | - self.fileobj.seek(self.offset) |
1992 | - try: |
1993 | - tarinfo = self.tarinfo.fromtarfile(self) |
1994 | - self.members.append(tarinfo) |
1995 | - except EOFHeaderError: |
1996 | - self.fileobj.seek(self.offset) |
1997 | - break |
1998 | - except HeaderError, e: |
1999 | - raise ReadError(str(e)) |
2000 | - |
2001 | - if self.mode in "aw": |
2002 | - self._loaded = True |
2003 | - |
2004 | - if self.pax_headers: |
2005 | - buf = self.tarinfo.create_pax_global_header(self.pax_headers.copy()) |
2006 | - self.fileobj.write(buf) |
2007 | - self.offset += len(buf) |
2008 | - except: |
2009 | - if not self._extfileobj: |
2010 | - self.fileobj.close() |
2011 | - self.closed = True |
2012 | - raise |
2013 | - |
2014 | - def _getposix(self): |
2015 | - return self.format == USTAR_FORMAT |
2016 | - def _setposix(self, value): |
2017 | - import warnings |
2018 | - warnings.warn("use the format attribute instead", DeprecationWarning, |
2019 | - 2) |
2020 | - if value: |
2021 | - self.format = USTAR_FORMAT |
2022 | - else: |
2023 | - self.format = GNU_FORMAT |
2024 | - posix = property(_getposix, _setposix) |
2025 | - |
2026 | - #-------------------------------------------------------------------------- |
2027 | - # Below are the classmethods which act as alternate constructors to the |
2028 | - # TarFile class. The open() method is the only one that is needed for |
2029 | - # public use; it is the "super"-constructor and is able to select an |
2030 | - # adequate "sub"-constructor for a particular compression using the mapping |
2031 | - # from OPEN_METH. |
2032 | - # |
2033 | - # This concept allows one to subclass TarFile without losing the comfort of |
2034 | - # the super-constructor. A sub-constructor is registered and made available |
2035 | - # by adding it to the mapping in OPEN_METH. |
2036 | - |
2037 | - @classmethod |
2038 | - def open(cls, name=None, mode="r", fileobj=None, bufsize=RECORDSIZE, **kwargs): |
2039 | - """Open a tar archive for reading, writing or appending. Return |
2040 | - an appropriate TarFile class. |
2041 | - |
2042 | - mode: |
2043 | - 'r' or 'r:*' open for reading with transparent compression |
2044 | - 'r:' open for reading exclusively uncompressed |
2045 | - 'r:gz' open for reading with gzip compression |
2046 | - 'r:bz2' open for reading with bzip2 compression |
2047 | - 'a' or 'a:' open for appending, creating the file if necessary |
2048 | - 'w' or 'w:' open for writing without compression |
2049 | - 'w:gz' open for writing with gzip compression |
2050 | - 'w:bz2' open for writing with bzip2 compression |
2051 | - |
2052 | - 'r|*' open a stream of tar blocks with transparent compression |
2053 | - 'r|' open an uncompressed stream of tar blocks for reading |
2054 | - 'r|gz' open a gzip compressed stream of tar blocks |
2055 | - 'r|bz2' open a bzip2 compressed stream of tar blocks |
2056 | - 'w|' open an uncompressed stream for writing |
2057 | - 'w|gz' open a gzip compressed stream for writing |
2058 | - 'w|bz2' open a bzip2 compressed stream for writing |
2059 | - """ |
2060 | - |
2061 | - if not name and not fileobj: |
2062 | - raise ValueError("nothing to open") |
2063 | - |
2064 | - if mode in ("r", "r:*"): |
2065 | - # Find out which *open() is appropriate for opening the file. |
2066 | - for comptype in cls.OPEN_METH: |
2067 | - func = getattr(cls, cls.OPEN_METH[comptype]) |
2068 | - if fileobj is not None: |
2069 | - saved_pos = fileobj.tell() |
2070 | - try: |
2071 | - return func(name, "r", fileobj, **kwargs) |
2072 | - except (ReadError, CompressionError), e: |
2073 | - if fileobj is not None: |
2074 | - fileobj.seek(saved_pos) |
2075 | - continue |
2076 | - raise ReadError("file could not be opened successfully") |
2077 | - |
2078 | - elif ":" in mode: |
2079 | - filemode, comptype = mode.split(":", 1) |
2080 | - filemode = filemode or "r" |
2081 | - comptype = comptype or "tar" |
2082 | - |
2083 | - # Select the *open() function according to |
2084 | - # given compression. |
2085 | - if comptype in cls.OPEN_METH: |
2086 | - func = getattr(cls, cls.OPEN_METH[comptype]) |
2087 | - else: |
2088 | - raise CompressionError("unknown compression type %r" % comptype) |
2089 | - return func(name, filemode, fileobj, **kwargs) |
2090 | - |
2091 | - elif "|" in mode: |
2092 | - filemode, comptype = mode.split("|", 1) |
2093 | - filemode = filemode or "r" |
2094 | - comptype = comptype or "tar" |
2095 | - |
2096 | - if filemode not in "rw": |
2097 | - raise ValueError("mode must be 'r' or 'w'") |
2098 | - |
2099 | - t = cls(name, filemode, |
2100 | - _Stream(name, filemode, comptype, fileobj, bufsize), |
2101 | - **kwargs) |
2102 | - t._extfileobj = False |
2103 | - return t |
2104 | - |
2105 | - elif mode in "aw": |
2106 | - return cls.taropen(name, mode, fileobj, **kwargs) |
2107 | - |
2108 | - raise ValueError("undiscernible mode") |
2109 | - |
2110 | - @classmethod |
2111 | - def taropen(cls, name, mode="r", fileobj=None, **kwargs): |
2112 | - """Open uncompressed tar archive name for reading or writing. |
2113 | - """ |
2114 | - if len(mode) > 1 or mode not in "raw": |
2115 | - raise ValueError("mode must be 'r', 'a' or 'w'") |
2116 | - return cls(name, mode, fileobj, **kwargs) |
2117 | - |
2118 | - @classmethod |
2119 | - def gzopen(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs): |
2120 | - """Open gzip compressed tar archive name for reading or writing. |
2121 | - Appending is not allowed. |
2122 | - """ |
2123 | - if len(mode) > 1 or mode not in "rw": |
2124 | - raise ValueError("mode must be 'r' or 'w'") |
2125 | - |
2126 | - try: |
2127 | - import gzip |
2128 | - gzip.GzipFile |
2129 | - except (ImportError, AttributeError): |
2130 | - raise CompressionError("gzip module is not available") |
2131 | - |
2132 | - if fileobj is None: |
2133 | - fileobj = bltn_open(name, mode + "b") |
2134 | - |
2135 | - try: |
2136 | - t = cls.taropen(name, mode, |
2137 | - gzip.GzipFile(name, mode, compresslevel, fileobj), |
2138 | - **kwargs) |
2139 | - except IOError: |
2140 | - raise ReadError("not a gzip file") |
2141 | - t._extfileobj = False |
2142 | - return t |
2143 | - |
2144 | - @classmethod |
2145 | - def bz2open(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs): |
2146 | - """Open bzip2 compressed tar archive name for reading or writing. |
2147 | - Appending is not allowed. |
2148 | - """ |
2149 | - if len(mode) > 1 or mode not in "rw": |
2150 | - raise ValueError("mode must be 'r' or 'w'.") |
2151 | - |
2152 | - try: |
2153 | - import bz2 |
2154 | - except ImportError: |
2155 | - raise CompressionError("bz2 module is not available") |
2156 | - |
2157 | - if fileobj is not None: |
2158 | - fileobj = _BZ2Proxy(fileobj, mode) |
2159 | - else: |
2160 | - fileobj = bz2.BZ2File(name, mode, compresslevel=compresslevel) |
2161 | - |
2162 | - try: |
2163 | - t = cls.taropen(name, mode, fileobj, **kwargs) |
2164 | - except (IOError, EOFError): |
2165 | - raise ReadError("not a bzip2 file") |
2166 | - t._extfileobj = False |
2167 | - return t |
2168 | - |
2169 | - # All *open() methods are registered here. |
2170 | - OPEN_METH = { |
2171 | - "tar": "taropen", # uncompressed tar |
2172 | - "gz": "gzopen", # gzip compressed tar |
2173 | - "bz2": "bz2open" # bzip2 compressed tar |
2174 | - } |
2175 | - |
2176 | - #-------------------------------------------------------------------------- |
2177 | - # The public methods which TarFile provides: |
2178 | - |
2179 | - def close(self): |
2180 | - """Close the TarFile. In write-mode, two finishing zero blocks are |
2181 | - appended to the archive. |
2182 | - """ |
2183 | - if self.closed: |
2184 | - return |
2185 | - |
2186 | - if self.mode in "aw": |
2187 | - self.fileobj.write(NUL * (BLOCKSIZE * 2)) |
2188 | - self.offset += (BLOCKSIZE * 2) |
2189 | - # fill up the end with zero-blocks |
2190 | - # (like option -b20 for tar does) |
2191 | - blocks, remainder = divmod(self.offset, RECORDSIZE) |
2192 | - if remainder > 0: |
2193 | - self.fileobj.write(NUL * (RECORDSIZE - remainder)) |
2194 | - |
2195 | - if not self._extfileobj: |
2196 | - self.fileobj.close() |
2197 | - self.closed = True |
2198 | - |
2199 | - def getmember(self, name): |
2200 | - """Return a TarInfo object for member `name'. If `name' can not be |
2201 | - found in the archive, KeyError is raised. If a member occurs more |
2202 | - than once in the archive, its last occurrence is assumed to be the |
2203 | - most up-to-date version. |
2204 | - """ |
2205 | - tarinfo = self._getmember(name) |
2206 | - if tarinfo is None: |
2207 | - raise KeyError("filename %r not found" % name) |
2208 | - return tarinfo |
2209 | - |
2210 | - def getmembers(self): |
2211 | - """Return the members of the archive as a list of TarInfo objects. The |
2212 | - list has the same order as the members in the archive. |
2213 | - """ |
2214 | - self._check() |
2215 | - if not self._loaded: # if we want to obtain a list of |
2216 | - self._load() # all members, we first have to |
2217 | - # scan the whole archive. |
2218 | - return self.members |
2219 | - |
2220 | - def getnames(self): |
2221 | - """Return the members of the archive as a list of their names. It has |
2222 | - the same order as the list returned by getmembers(). |
2223 | - """ |
2224 | - return [tarinfo.name for tarinfo in self.getmembers()] |
2225 | - |
2226 | - def gettarinfo(self, name=None, arcname=None, fileobj=None): |
2227 | - """Create a TarInfo object for either the file `name' or the file |
2228 | - object `fileobj' (using os.fstat on its file descriptor). You can |
2229 | - modify some of the TarInfo's attributes before you add it using |
2230 | - addfile(). If given, `arcname' specifies an alternative name for the |
2231 | - file in the archive. |
2232 | - """ |
2233 | - self._check("aw") |
2234 | - |
2235 | - # When fileobj is given, replace name by |
2236 | - # fileobj's real name. |
2237 | - if fileobj is not None: |
2238 | - name = fileobj.name |
2239 | - |
2240 | - # Building the name of the member in the archive. |
2241 | - # Backward slashes are converted to forward slashes, |
2242 | - # Absolute paths are turned to relative paths. |
2243 | - if arcname is None: |
2244 | - arcname = name |
2245 | - drv, arcname = os.path.splitdrive(arcname) |
2246 | - arcname = arcname.replace(os.sep, "/") |
2247 | - arcname = arcname.lstrip("/") |
2248 | - |
2249 | - # Now, fill the TarInfo object with |
2250 | - # information specific for the file. |
2251 | - tarinfo = self.tarinfo() |
2252 | - tarinfo.tarfile = self |
2253 | - |
2254 | - # Use os.stat or os.lstat, depending on platform |
2255 | - # and if symlinks shall be resolved. |
2256 | - if fileobj is None: |
2257 | - if hasattr(os, "lstat") and not self.dereference: |
2258 | - statres = os.lstat(name) |
2259 | - else: |
2260 | - statres = os.stat(name) |
2261 | - else: |
2262 | - statres = os.fstat(fileobj.fileno()) |
2263 | - linkname = "" |
2264 | - |
2265 | - stmd = statres.st_mode |
2266 | - if stat.S_ISREG(stmd): |
2267 | - inode = (statres.st_ino, statres.st_dev) |
2268 | - if not self.dereference and statres.st_nlink > 1 and \ |
2269 | - inode in self.inodes and arcname != self.inodes[inode]: |
2270 | - # Is it a hardlink to an already |
2271 | - # archived file? |
2272 | - type = LNKTYPE |
2273 | - linkname = self.inodes[inode] |
2274 | - else: |
2275 | - # The inode is added only if its valid. |
2276 | - # For win32 it is always 0. |
2277 | - type = REGTYPE |
2278 | - if inode[0]: |
2279 | - self.inodes[inode] = arcname |
2280 | - elif stat.S_ISDIR(stmd): |
2281 | - type = DIRTYPE |
2282 | - elif stat.S_ISFIFO(stmd): |
2283 | - type = FIFOTYPE |
2284 | - elif stat.S_ISLNK(stmd): |
2285 | - type = SYMTYPE |
2286 | - linkname = os.readlink(name) |
2287 | - elif stat.S_ISCHR(stmd): |
2288 | - type = CHRTYPE |
2289 | - elif stat.S_ISBLK(stmd): |
2290 | - type = BLKTYPE |
2291 | - else: |
2292 | - return None |
2293 | - |
2294 | - # Fill the TarInfo object with all |
2295 | - # information we can get. |
2296 | - tarinfo.name = arcname |
2297 | - tarinfo.mode = stmd |
2298 | - tarinfo.uid = statres.st_uid |
2299 | - tarinfo.gid = statres.st_gid |
2300 | - if type == REGTYPE: |
2301 | - tarinfo.size = statres.st_size |
2302 | - else: |
2303 | - tarinfo.size = 0L |
2304 | - tarinfo.mtime = statres.st_mtime |
2305 | - tarinfo.type = type |
2306 | - tarinfo.linkname = linkname |
2307 | - if pwd: |
2308 | - try: |
2309 | - tarinfo.uname = pwd.getpwuid(tarinfo.uid)[0] |
2310 | - except KeyError: |
2311 | - pass |
2312 | - if grp: |
2313 | - try: |
2314 | - tarinfo.gname = grp.getgrgid(tarinfo.gid)[0] |
2315 | - except KeyError: |
2316 | - pass |
2317 | - |
2318 | - if type in (CHRTYPE, BLKTYPE): |
2319 | - if hasattr(os, "major") and hasattr(os, "minor"): |
2320 | - tarinfo.devmajor = os.major(statres.st_rdev) |
2321 | - tarinfo.devminor = os.minor(statres.st_rdev) |
2322 | - return tarinfo |
2323 | - |
2324 | - def list(self, verbose=True): |
2325 | - """Print a table of contents to sys.stdout. If `verbose' is False, only |
2326 | - the names of the members are printed. If it is True, an `ls -l'-like |
2327 | - output is produced. |
2328 | - """ |
2329 | - self._check() |
2330 | - |
2331 | - for tarinfo in self: |
2332 | - if verbose: |
2333 | - print filemode(tarinfo.mode), |
2334 | - print "%s/%s" % (tarinfo.uname or tarinfo.uid, |
2335 | - tarinfo.gname or tarinfo.gid), |
2336 | - if tarinfo.ischr() or tarinfo.isblk(): |
2337 | - print "%10s" % ("%d,%d" \ |
2338 | - % (tarinfo.devmajor, tarinfo.devminor)), |
2339 | - else: |
2340 | - print "%10d" % tarinfo.size, |
2341 | - print "%d-%02d-%02d %02d:%02d:%02d" \ |
2342 | - % time.localtime(tarinfo.mtime)[:6], |
2343 | - |
2344 | - if tarinfo.isdir(): |
2345 | - print tarinfo.name + "/", |
2346 | - else: |
2347 | - print tarinfo.name, |
2348 | - |
2349 | - if verbose: |
2350 | - if tarinfo.issym(): |
2351 | - print "->", tarinfo.linkname, |
2352 | - if tarinfo.islnk(): |
2353 | - print "link to", tarinfo.linkname, |
2354 | |
2355 | - |
2356 | - def add(self, name, arcname=None, recursive=True, exclude=None, filter=None): |
2357 | - """Add the file `name' to the archive. `name' may be any type of file |
2358 | - (directory, fifo, symbolic link, etc.). If given, `arcname' |
2359 | - specifies an alternative name for the file in the archive. |
2360 | - Directories are added recursively by default. This can be avoided by |
2361 | - setting `recursive' to False. `exclude' is a function that should |
2362 | - return True for each filename to be excluded. `filter' is a function |
2363 | - that expects a TarInfo object argument and returns the changed |
2364 | - TarInfo object, if it returns None the TarInfo object will be |
2365 | - excluded from the archive. |
2366 | - """ |
2367 | - self._check("aw") |
2368 | - |
2369 | - if arcname is None: |
2370 | - arcname = name |
2371 | - |
2372 | - # Exclude pathnames. |
2373 | - if exclude is not None: |
2374 | - import warnings |
2375 | - warnings.warn("use the filter argument instead", |
2376 | - DeprecationWarning, 2) |
2377 | - if exclude(name): |
2378 | - self._dbg(2, "tarfile: Excluded %r" % name) |
2379 | - return |
2380 | - |
2381 | - # Skip if somebody tries to archive the archive... |
2382 | - if self.name is not None and os.path.abspath(name) == self.name: |
2383 | - self._dbg(2, "tarfile: Skipped %r" % name) |
2384 | - return |
2385 | - |
2386 | - self._dbg(1, name) |
2387 | - |
2388 | - # Create a TarInfo object from the file. |
2389 | - tarinfo = self.gettarinfo(name, arcname) |
2390 | - |
2391 | - if tarinfo is None: |
2392 | - self._dbg(1, "tarfile: Unsupported type %r" % name) |
2393 | - return |
2394 | - |
2395 | - # Change or exclude the TarInfo object. |
2396 | - if filter is not None: |
2397 | - tarinfo = filter(tarinfo) |
2398 | - if tarinfo is None: |
2399 | - self._dbg(2, "tarfile: Excluded %r" % name) |
2400 | - return |
2401 | - |
2402 | - # Append the tar header and data to the archive. |
2403 | - if tarinfo.isreg(): |
2404 | - f = bltn_open(name, "rb") |
2405 | - self.addfile(tarinfo, f) |
2406 | - f.close() |
2407 | - |
2408 | - elif tarinfo.isdir(): |
2409 | - self.addfile(tarinfo) |
2410 | - if recursive: |
2411 | - for f in os.listdir(name): |
2412 | - self.add(os.path.join(name, f), os.path.join(arcname, f), |
2413 | - recursive, exclude, filter) |
2414 | - |
2415 | - else: |
2416 | - self.addfile(tarinfo) |
2417 | - |
2418 | - def addfile(self, tarinfo, fileobj=None): |
2419 | - """Add the TarInfo object `tarinfo' to the archive. If `fileobj' is |
2420 | - given, tarinfo.size bytes are read from it and added to the archive. |
2421 | - You can create TarInfo objects using gettarinfo(). |
2422 | - On Windows platforms, `fileobj' should always be opened with mode |
2423 | - 'rb' to avoid irritation about the file size. |
2424 | - """ |
2425 | - self._check("aw") |
2426 | - |
2427 | - tarinfo = copy.copy(tarinfo) |
2428 | - |
2429 | - buf = tarinfo.tobuf(self.format, self.encoding, self.errors) |
2430 | - self.fileobj.write(buf) |
2431 | - self.offset += len(buf) |
2432 | - |
2433 | - # If there's data to follow, append it. |
2434 | - if fileobj is not None: |
2435 | - copyfileobj(fileobj, self.fileobj, tarinfo.size) |
2436 | - blocks, remainder = divmod(tarinfo.size, BLOCKSIZE) |
2437 | - if remainder > 0: |
2438 | - self.fileobj.write(NUL * (BLOCKSIZE - remainder)) |
2439 | - blocks += 1 |
2440 | - self.offset += blocks * BLOCKSIZE |
2441 | - |
2442 | - self.members.append(tarinfo) |
2443 | - |
2444 | - def extractall(self, path=".", members=None): |
2445 | - """Extract all members from the archive to the current working |
2446 | - directory and set owner, modification time and permissions on |
2447 | - directories afterwards. `path' specifies a different directory |
2448 | - to extract to. `members' is optional and must be a subset of the |
2449 | - list returned by getmembers(). |
2450 | - """ |
2451 | - directories = [] |
2452 | - |
2453 | - if members is None: |
2454 | - members = self |
2455 | - |
2456 | - for tarinfo in members: |
2457 | - if tarinfo.isdir(): |
2458 | - # Extract directories with a safe mode. |
2459 | - directories.append(tarinfo) |
2460 | - tarinfo = copy.copy(tarinfo) |
2461 | - tarinfo.mode = 0700 |
2462 | - self.extract(tarinfo, path) |
2463 | - |
2464 | - # Reverse sort directories. |
2465 | - directories.sort(key=operator.attrgetter('name')) |
2466 | - directories.reverse() |
2467 | - |
2468 | - # Set correct owner, mtime and filemode on directories. |
2469 | - for tarinfo in directories: |
2470 | - dirpath = os.path.join(path, tarinfo.name) |
2471 | - try: |
2472 | - self.chown(tarinfo, dirpath) |
2473 | - self.utime(tarinfo, dirpath) |
2474 | - self.chmod(tarinfo, dirpath) |
2475 | - except ExtractError, e: |
2476 | - if self.errorlevel > 1: |
2477 | - raise |
2478 | - else: |
2479 | - self._dbg(1, "tarfile: %s" % e) |
2480 | - |
2481 | - def extract(self, member, path=""): |
2482 | - """Extract a member from the archive to the current working directory, |
2483 | - using its full name. Its file information is extracted as accurately |
2484 | - as possible. `member' may be a filename or a TarInfo object. You can |
2485 | - specify a different directory using `path'. |
2486 | - """ |
2487 | - self._check("r") |
2488 | - |
2489 | - if isinstance(member, basestring): |
2490 | - tarinfo = self.getmember(member) |
2491 | - else: |
2492 | - tarinfo = member |
2493 | - |
2494 | - # Prepare the link target for makelink(). |
2495 | - if tarinfo.islnk(): |
2496 | - tarinfo._link_target = os.path.join(path, tarinfo.linkname) |
2497 | - |
2498 | - try: |
2499 | - self._extract_member(tarinfo, os.path.join(path, tarinfo.name)) |
2500 | - except EnvironmentError, e: |
2501 | - if self.errorlevel > 0: |
2502 | - raise |
2503 | - else: |
2504 | - if e.filename is None: |
2505 | - self._dbg(1, "tarfile: %s" % e.strerror) |
2506 | - else: |
2507 | - self._dbg(1, "tarfile: %s %r" % (e.strerror, e.filename)) |
2508 | - except ExtractError, e: |
2509 | - if self.errorlevel > 1: |
2510 | - raise |
2511 | - else: |
2512 | - self._dbg(1, "tarfile: %s" % e) |
2513 | - |
2514 | - def extractfile(self, member): |
2515 | - """Extract a member from the archive as a file object. `member' may be |
2516 | - a filename or a TarInfo object. If `member' is a regular file, a |
2517 | - file-like object is returned. If `member' is a link, a file-like |
2518 | - object is constructed from the link's target. If `member' is none of |
2519 | - the above, None is returned. |
2520 | - The file-like object is read-only and provides the following |
2521 | - methods: read(), readline(), readlines(), seek() and tell() |
2522 | - """ |
2523 | - self._check("r") |
2524 | - |
2525 | - if isinstance(member, basestring): |
2526 | - tarinfo = self.getmember(member) |
2527 | - else: |
2528 | - tarinfo = member |
2529 | - |
2530 | - if tarinfo.isreg(): |
2531 | - return self.fileobject(self, tarinfo) |
2532 | - |
2533 | - elif tarinfo.type not in SUPPORTED_TYPES: |
2534 | - # If a member's type is unknown, it is treated as a |
2535 | - # regular file. |
2536 | - return self.fileobject(self, tarinfo) |
2537 | - |
2538 | - elif tarinfo.islnk() or tarinfo.issym(): |
2539 | - if isinstance(self.fileobj, _Stream): |
2540 | - # A small but ugly workaround for the case that someone tries |
2541 | - # to extract a (sym)link as a file-object from a non-seekable |
2542 | - # stream of tar blocks. |
2543 | - raise StreamError("cannot extract (sym)link as file object") |
2544 | - else: |
2545 | - # A (sym)link's file object is its target's file object. |
2546 | - return self.extractfile(self._find_link_target(tarinfo)) |
2547 | - else: |
2548 | - # If there's no data associated with the member (directory, chrdev, |
2549 | - # blkdev, etc.), return None instead of a file object. |
2550 | - return None |
2551 | - |
2552 | - def _extract_member(self, tarinfo, targetpath): |
2553 | - """Extract the TarInfo object tarinfo to a physical |
2554 | - file called targetpath. |
2555 | - """ |
2556 | - # Fetch the TarInfo object for the given name |
2557 | - # and build the destination pathname, replacing |
2558 | - # forward slashes to platform specific separators. |
2559 | - targetpath = targetpath.rstrip("/") |
2560 | - targetpath = targetpath.replace("/", os.sep) |
2561 | - |
2562 | - # Create all upper directories. |
2563 | - upperdirs = os.path.dirname(targetpath) |
2564 | - if upperdirs and not os.path.exists(upperdirs): |
2565 | - # Create directories that are not part of the archive with |
2566 | - # default permissions. |
2567 | - os.makedirs(upperdirs) |
2568 | - |
2569 | - if tarinfo.islnk() or tarinfo.issym(): |
2570 | - self._dbg(1, "%s -> %s" % (tarinfo.name, tarinfo.linkname)) |
2571 | - else: |
2572 | - self._dbg(1, tarinfo.name) |
2573 | - |
2574 | - if tarinfo.isreg(): |
2575 | - self.makefile(tarinfo, targetpath) |
2576 | - elif tarinfo.isdir(): |
2577 | - self.makedir(tarinfo, targetpath) |
2578 | - elif tarinfo.isfifo(): |
2579 | - self.makefifo(tarinfo, targetpath) |
2580 | - elif tarinfo.ischr() or tarinfo.isblk(): |
2581 | - self.makedev(tarinfo, targetpath) |
2582 | - elif tarinfo.islnk() or tarinfo.issym(): |
2583 | - self.makelink(tarinfo, targetpath) |
2584 | - elif tarinfo.type not in SUPPORTED_TYPES: |
2585 | - self.makeunknown(tarinfo, targetpath) |
2586 | - else: |
2587 | - self.makefile(tarinfo, targetpath) |
2588 | - |
2589 | - self.chown(tarinfo, targetpath) |
2590 | - if not tarinfo.issym(): |
2591 | - self.chmod(tarinfo, targetpath) |
2592 | - self.utime(tarinfo, targetpath) |
2593 | - |
2594 | - #-------------------------------------------------------------------------- |
2595 | - # Below are the different file methods. They are called via |
2596 | - # _extract_member() when extract() is called. They can be replaced in a |
2597 | - # subclass to implement other functionality. |
2598 | - |
2599 | - def makedir(self, tarinfo, targetpath): |
2600 | - """Make a directory called targetpath. |
2601 | - """ |
2602 | - try: |
2603 | - # Use a safe mode for the directory, the real mode is set |
2604 | - # later in _extract_member(). |
2605 | - os.mkdir(targetpath, 0700) |
2606 | - except EnvironmentError, e: |
2607 | - if e.errno != errno.EEXIST: |
2608 | - raise |
2609 | - |
2610 | - def makefile(self, tarinfo, targetpath): |
2611 | - """Make a file called targetpath. |
2612 | - """ |
2613 | - source = self.extractfile(tarinfo) |
2614 | - target = bltn_open(targetpath, "wb") |
2615 | - copyfileobj(source, target) |
2616 | - source.close() |
2617 | - target.close() |
2618 | - |
2619 | - def makeunknown(self, tarinfo, targetpath): |
2620 | - """Make a file from a TarInfo object with an unknown type |
2621 | - at targetpath. |
2622 | - """ |
2623 | - self.makefile(tarinfo, targetpath) |
2624 | - self._dbg(1, "tarfile: Unknown file type %r, " \ |
2625 | - "extracted as regular file." % tarinfo.type) |
2626 | - |
2627 | - def makefifo(self, tarinfo, targetpath): |
2628 | - """Make a fifo called targetpath. |
2629 | - """ |
2630 | - if hasattr(os, "mkfifo"): |
2631 | - os.mkfifo(targetpath) |
2632 | - else: |
2633 | - raise ExtractError("fifo not supported by system") |
2634 | - |
2635 | - def makedev(self, tarinfo, targetpath): |
2636 | - """Make a character or block device called targetpath. |
2637 | - """ |
2638 | - if not hasattr(os, "mknod") or not hasattr(os, "makedev"): |
2639 | - raise ExtractError("special devices not supported by system") |
2640 | - |
2641 | - mode = tarinfo.mode |
2642 | - if tarinfo.isblk(): |
2643 | - mode |= stat.S_IFBLK |
2644 | - else: |
2645 | - mode |= stat.S_IFCHR |
2646 | - |
2647 | - os.mknod(targetpath, mode, |
2648 | - os.makedev(tarinfo.devmajor, tarinfo.devminor)) |
2649 | - |
2650 | - def makelink(self, tarinfo, targetpath): |
2651 | - """Make a (symbolic) link called targetpath. If it cannot be created |
2652 | - (platform limitation), we try to make a copy of the referenced file |
2653 | - instead of a link. |
2654 | - """ |
2655 | - if hasattr(os, "symlink") and hasattr(os, "link"): |
2656 | - # For systems that support symbolic and hard links. |
2657 | - if tarinfo.issym(): |
2658 | - os.symlink(tarinfo.linkname, targetpath) |
2659 | - else: |
2660 | - # See extract(). |
2661 | - if os.path.exists(tarinfo._link_target): |
2662 | - os.link(tarinfo._link_target, targetpath) |
2663 | - else: |
2664 | - self._extract_member(self._find_link_target(tarinfo), targetpath) |
2665 | - else: |
2666 | - try: |
2667 | - self._extract_member(self._find_link_target(tarinfo), targetpath) |
2668 | - except KeyError: |
2669 | - raise ExtractError("unable to resolve link inside archive") |
2670 | - |
2671 | - def chown(self, tarinfo, targetpath): |
2672 | - """Set owner of targetpath according to tarinfo. |
2673 | - """ |
2674 | - if pwd and hasattr(os, "geteuid") and os.geteuid() == 0: |
2675 | - # We have to be root to do so. |
2676 | - try: |
2677 | - g = grp.getgrnam(tarinfo.gname)[2] |
2678 | - except KeyError: |
2679 | - try: |
2680 | - g = grp.getgrgid(tarinfo.gid)[2] |
2681 | - except KeyError: |
2682 | - g = os.getgid() |
2683 | - try: |
2684 | - u = pwd.getpwnam(tarinfo.uname)[2] |
2685 | - except KeyError: |
2686 | - try: |
2687 | - u = pwd.getpwuid(tarinfo.uid)[2] |
2688 | - except KeyError: |
2689 | - u = os.getuid() |
2690 | - try: |
2691 | - if tarinfo.issym() and hasattr(os, "lchown"): |
2692 | - os.lchown(targetpath, u, g) |
2693 | - else: |
2694 | - if sys.platform != "os2emx": |
2695 | - os.chown(targetpath, u, g) |
2696 | - except EnvironmentError, e: |
2697 | - raise ExtractError("could not change owner to %d:%d" % (u, g)) |
2698 | - |
2699 | - def chmod(self, tarinfo, targetpath): |
2700 | - """Set file permissions of targetpath according to tarinfo. |
2701 | - """ |
2702 | - if hasattr(os, 'chmod'): |
2703 | - try: |
2704 | - os.chmod(targetpath, tarinfo.mode) |
2705 | - except EnvironmentError, e: |
2706 | - raise ExtractError("could not change mode") |
2707 | - |
2708 | - def utime(self, tarinfo, targetpath): |
2709 | - """Set modification time of targetpath according to tarinfo. |
2710 | - """ |
2711 | - if not hasattr(os, 'utime'): |
2712 | - return |
2713 | - try: |
2714 | - os.utime(targetpath, (tarinfo.mtime, tarinfo.mtime)) |
2715 | - except EnvironmentError, e: |
2716 | - raise ExtractError("could not change modification time") |
2717 | - |
2718 | - #-------------------------------------------------------------------------- |
2719 | - def next(self): |
2720 | - """Return the next member of the archive as a TarInfo object, when |
2721 | - TarFile is opened for reading. Return None if there is no more |
2722 | - available. |
2723 | - """ |
2724 | - self._check("ra") |
2725 | - if self.firstmember is not None: |
2726 | - m = self.firstmember |
2727 | - self.firstmember = None |
2728 | - return m |
2729 | - |
2730 | - # Read the next block. |
2731 | - self.fileobj.seek(self.offset) |
2732 | - tarinfo = None |
2733 | - while True: |
2734 | - try: |
2735 | - tarinfo = self.tarinfo.fromtarfile(self) |
2736 | - except EOFHeaderError, e: |
2737 | - if self.ignore_zeros: |
2738 | - self._dbg(2, "0x%X: %s" % (self.offset, e)) |
2739 | - self.offset += BLOCKSIZE |
2740 | - continue |
2741 | - except InvalidHeaderError, e: |
2742 | - if self.ignore_zeros: |
2743 | - self._dbg(2, "0x%X: %s" % (self.offset, e)) |
2744 | - self.offset += BLOCKSIZE |
2745 | - continue |
2746 | - elif self.offset == 0: |
2747 | - raise ReadError(str(e)) |
2748 | - except EmptyHeaderError: |
2749 | - if self.offset == 0: |
2750 | - raise ReadError("empty file") |
2751 | - except TruncatedHeaderError, e: |
2752 | - if self.offset == 0: |
2753 | - raise ReadError(str(e)) |
2754 | - except SubsequentHeaderError, e: |
2755 | - raise ReadError(str(e)) |
2756 | - break |
2757 | - |
2758 | - if tarinfo is not None: |
2759 | - self.members.append(tarinfo) |
2760 | - else: |
2761 | - self._loaded = True |
2762 | - |
2763 | - return tarinfo |
2764 | - |
2765 | - #-------------------------------------------------------------------------- |
2766 | - # Little helper methods: |
2767 | - |
2768 | - def _getmember(self, name, tarinfo=None, normalize=False): |
2769 | - """Find an archive member by name from bottom to top. |
2770 | - If tarinfo is given, it is used as the starting point. |
2771 | - """ |
2772 | - # Ensure that all members have been loaded. |
2773 | - members = self.getmembers() |
2774 | - |
2775 | - # Limit the member search list up to tarinfo. |
2776 | - if tarinfo is not None: |
2777 | - members = members[:members.index(tarinfo)] |
2778 | - |
2779 | - if normalize: |
2780 | - name = os.path.normpath(name) |
2781 | - |
2782 | - for member in reversed(members): |
2783 | - if normalize: |
2784 | - member_name = os.path.normpath(member.name) |
2785 | - else: |
2786 | - member_name = member.name |
2787 | - |
2788 | - if name == member_name: |
2789 | - return member |
2790 | - |
2791 | - def _load(self): |
2792 | - """Read through the entire archive file and look for readable |
2793 | - members. |
2794 | - """ |
2795 | - while True: |
2796 | - tarinfo = self.next() |
2797 | - if tarinfo is None: |
2798 | - break |
2799 | - self._loaded = True |
2800 | - |
2801 | - def _check(self, mode=None): |
2802 | - """Check if TarFile is still open, and if the operation's mode |
2803 | - corresponds to TarFile's mode. |
2804 | - """ |
2805 | - if self.closed: |
2806 | - raise IOError("%s is closed" % self.__class__.__name__) |
2807 | - if mode is not None and self.mode not in mode: |
2808 | - raise IOError("bad operation for mode %r" % self.mode) |
2809 | - |
2810 | - def _find_link_target(self, tarinfo): |
2811 | - """Find the target member of a symlink or hardlink member in the |
2812 | - archive. |
2813 | - """ |
2814 | - if tarinfo.issym(): |
2815 | - # Always search the entire archive. |
2816 | - linkname = os.path.dirname(tarinfo.name) + "/" + tarinfo.linkname |
2817 | - limit = None |
2818 | - else: |
2819 | - # Search the archive before the link, because a hard link is |
2820 | - # just a reference to an already archived file. |
2821 | - linkname = tarinfo.linkname |
2822 | - limit = tarinfo |
2823 | - |
2824 | - member = self._getmember(linkname, tarinfo=limit, normalize=True) |
2825 | - if member is None: |
2826 | - raise KeyError("linkname %r not found" % linkname) |
2827 | - return member |
2828 | - |
2829 | - def __iter__(self): |
2830 | - """Provide an iterator object. |
2831 | - """ |
2832 | - if self._loaded: |
2833 | - return iter(self.members) |
2834 | - else: |
2835 | - return TarIter(self) |
2836 | - |
2837 | - def _dbg(self, level, msg): |
2838 | - """Write debugging output to sys.stderr. |
2839 | - """ |
2840 | - if level <= self.debug: |
2841 | - print >> sys.stderr, msg |
2842 | - |
2843 | - def __enter__(self): |
2844 | - self._check() |
2845 | - return self |
2846 | - |
2847 | - def __exit__(self, type, value, traceback): |
2848 | - if type is None: |
2849 | - self.close() |
2850 | - else: |
2851 | - # An exception occurred. We must not call close() because |
2852 | - # it would try to write end-of-archive blocks and padding. |
2853 | - if not self._extfileobj: |
2854 | - self.fileobj.close() |
2855 | - self.closed = True |
2856 | -# class TarFile |
2857 | - |
2858 | -class TarIter: |
2859 | - """Iterator Class. |
2860 | - |
2861 | - for tarinfo in TarFile(...): |
2862 | - suite... |
2863 | - """ |
2864 | - |
2865 | - def __init__(self, tarfile): |
2866 | - """Construct a TarIter object. |
2867 | - """ |
2868 | - self.tarfile = tarfile |
2869 | - self.index = 0 |
2870 | - def __iter__(self): |
2871 | - """Return iterator object. |
2872 | - """ |
2873 | - return self |
2874 | - def next(self): |
2875 | - """Return the next item using TarFile's next() method. |
2876 | - When all members have been read, set TarFile as _loaded. |
2877 | - """ |
2878 | - # Fix for SF #1100429: Under rare circumstances it can |
2879 | - # happen that getmembers() is called during iteration, |
2880 | - # which will cause TarIter to stop prematurely. |
2881 | - if not self.tarfile._loaded: |
2882 | - tarinfo = self.tarfile.next() |
2883 | - if not tarinfo: |
2884 | - self.tarfile._loaded = True |
2885 | - raise StopIteration |
2886 | - else: |
2887 | - try: |
2888 | - tarinfo = self.tarfile.members[self.index] |
2889 | - except IndexError: |
2890 | - raise StopIteration |
2891 | - self.index += 1 |
2892 | - return tarinfo |
2893 | - |
2894 | -# Helper classes for sparse file support |
2895 | -class _section: |
2896 | - """Base class for _data and _hole. |
2897 | - """ |
2898 | - def __init__(self, offset, size): |
2899 | - self.offset = offset |
2900 | - self.size = size |
2901 | - def __contains__(self, offset): |
2902 | - return self.offset <= offset < self.offset + self.size |
2903 | - |
2904 | -class _data(_section): |
2905 | - """Represent a data section in a sparse file. |
2906 | - """ |
2907 | - def __init__(self, offset, size, realpos): |
2908 | - _section.__init__(self, offset, size) |
2909 | - self.realpos = realpos |
2910 | - |
2911 | -class _hole(_section): |
2912 | - """Represent a hole section in a sparse file. |
2913 | - """ |
2914 | - pass |
2915 | - |
2916 | -class _ringbuffer(list): |
2917 | - """Ringbuffer class which increases performance |
2918 | - over a regular list. |
2919 | - """ |
2920 | - def __init__(self): |
2921 | - self.idx = 0 |
2922 | - def find(self, offset): |
2923 | - idx = self.idx |
2924 | - while True: |
2925 | - item = self[idx] |
2926 | - if offset in item: |
2927 | - break |
2928 | - idx += 1 |
2929 | - if idx == len(self): |
2930 | - idx = 0 |
2931 | - if idx == self.idx: |
2932 | - # End of File |
2933 | - return None |
2934 | - self.idx = idx |
2935 | - return item |
2936 | - |
2937 | -#--------------------------------------------- |
2938 | -# zipfile compatible TarFile class |
2939 | -#--------------------------------------------- |
2940 | -TAR_PLAIN = 0 # zipfile.ZIP_STORED |
2941 | -TAR_GZIPPED = 8 # zipfile.ZIP_DEFLATED |
2942 | -class TarFileCompat: |
2943 | - """TarFile class compatible with standard module zipfile's |
2944 | - ZipFile class. |
2945 | - """ |
2946 | - def __init__(self, file, mode="r", compression=TAR_PLAIN): |
2947 | - from warnings import warnpy3k |
2948 | - warnpy3k("the TarFileCompat class has been removed in Python 3.0", |
2949 | - stacklevel=2) |
2950 | - if compression == TAR_PLAIN: |
2951 | - self.tarfile = TarFile.taropen(file, mode) |
2952 | - elif compression == TAR_GZIPPED: |
2953 | - self.tarfile = TarFile.gzopen(file, mode) |
2954 | - else: |
2955 | - raise ValueError("unknown compression constant") |
2956 | - if mode[0:1] == "r": |
2957 | - members = self.tarfile.getmembers() |
2958 | - for m in members: |
2959 | - m.filename = m.name |
2960 | - m.file_size = m.size |
2961 | - m.date_time = time.gmtime(m.mtime)[:6] |
2962 | - def namelist(self): |
2963 | - return map(lambda m: m.name, self.infolist()) |
2964 | - def infolist(self): |
2965 | - return filter(lambda m: m.type in REGULAR_TYPES, |
2966 | - self.tarfile.getmembers()) |
2967 | - def printdir(self): |
2968 | - self.tarfile.list() |
2969 | - def testzip(self): |
2970 | - return |
2971 | - def getinfo(self, name): |
2972 | - return self.tarfile.getmember(name) |
2973 | - def read(self, name): |
2974 | - return self.tarfile.extractfile(self.tarfile.getmember(name)).read() |
2975 | - def write(self, filename, arcname=None, compress_type=None): |
2976 | - self.tarfile.add(filename, arcname) |
2977 | - def writestr(self, zinfo, bytes): |
2978 | - try: |
2979 | - from cStringIO import StringIO |
2980 | - except ImportError: |
2981 | - from StringIO import StringIO |
2982 | - import calendar |
2983 | - tinfo = TarInfo(zinfo.filename) |
2984 | - tinfo.size = len(bytes) |
2985 | - tinfo.mtime = calendar.timegm(zinfo.date_time) |
2986 | - self.tarfile.addfile(tinfo, StringIO(bytes)) |
2987 | - def close(self): |
2988 | - self.tarfile.close() |
2989 | -#class TarFileCompat |
2990 | - |
2991 | -#-------------------- |
2992 | -# exported functions |
2993 | -#-------------------- |
2994 | -def is_tarfile(name): |
2995 | - """Return True if name points to a tar archive that we |
2996 | - are able to handle, else return False. |
2997 | - """ |
2998 | - try: |
2999 | - t = open(name) |
3000 | - t.close() |
3001 | - return True |
3002 | - except TarError: |
3003 | - return False |
3004 | - |
3005 | -bltn_open = open |
3006 | -open = TarFile.open |
3007 | |
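The extraction machinery removed above (getmember/extractfile/extract and friends) duplicates what the standard library's tarfile module already provides in Python ≥ 2.6, which is why this branch can drop it. A minimal sketch of the equivalent stdlib usage (written in Python 3 spelling here; the `open`/`getmember`/`extractfile` API is the same in 2.6):

```python
import io
import tarfile

# Build a small archive in memory, then read it back, exercising the
# same getmember()/extractfile() calls that duplicity's internal copy
# of tarfile duplicated from the standard library.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello"
    info = tarfile.TarInfo(name="greeting.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    member = tar.getmember("greeting.txt")
    content = tar.extractfile(member).read()

print(content)
```

The file name and payload are illustrative only; the point is that nothing duplicity needs from this ~2,600-line internal copy is missing from the 2.6 stdlib module.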
3008 | === removed file 'duplicity/urlparse_2_5.py' |
3009 | --- duplicity/urlparse_2_5.py 2011-10-08 16:22:30 +0000 |
3010 | +++ duplicity/urlparse_2_5.py 1970-01-01 00:00:00 +0000 |
3011 | @@ -1,385 +0,0 @@ |
3012 | -# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*- |
3013 | - |
3014 | -"""Parse (absolute and relative) URLs. |
3015 | - |
3016 | -See RFC 1808: "Relative Uniform Resource Locators", by R. Fielding, |
3017 | -UC Irvine, June 1995. |
3018 | -""" |
3019 | - |
3020 | -__all__ = ["urlparse", "urlunparse", "urljoin", "urldefrag", |
3021 | - "urlsplit", "urlunsplit"] |
3022 | - |
3023 | -# A classification of schemes ('' means apply by default) |
3024 | -uses_relative = ['ftp', 'ftps', 'http', 'gopher', 'nntp', |
3025 | - 'wais', 'file', 'https', 'shttp', 'mms', |
3026 | - 'prospero', 'rtsp', 'rtspu', '', 'sftp', 'imap', 'imaps'] |
3027 | -uses_netloc = ['ftp', 'ftps', 'http', 'gopher', 'nntp', 'telnet', |
3028 | - 'wais', 'file', 'mms', 'https', 'shttp', |
3029 | - 'snews', 'prospero', 'rtsp', 'rtspu', 'rsync', '', |
3030 | - 'svn', 'svn+ssh', 'sftp', 'imap', 'imaps'] |
3031 | -non_hierarchical = ['gopher', 'hdl', 'mailto', 'news', |
3032 | - 'telnet', 'wais', 'snews', 'sip', 'sips', 'imap', 'imaps'] |
3033 | -uses_params = ['ftp', 'ftps', 'hdl', 'prospero', 'http', |
3034 | - 'https', 'shttp', 'rtsp', 'rtspu', 'sip', 'sips', |
3035 | - 'mms', '', 'sftp', 'imap', 'imaps'] |
3036 | -uses_query = ['http', 'wais', 'https', 'shttp', 'mms', |
3037 | - 'gopher', 'rtsp', 'rtspu', 'sip', 'sips', 'imap', 'imaps', ''] |
3038 | -uses_fragment = ['ftp', 'ftps', 'hdl', 'http', 'gopher', 'news', |
3039 | - 'nntp', 'wais', 'https', 'shttp', 'snews', |
3040 | - 'file', 'prospero', ''] |
3041 | - |
3042 | -# Characters valid in scheme names |
3043 | -scheme_chars = ('abcdefghijklmnopqrstuvwxyz' |
3044 | - 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' |
3045 | - '0123456789' |
3046 | - '+-.') |
3047 | - |
3048 | -MAX_CACHE_SIZE = 20 |
3049 | -_parse_cache = {} |
3050 | - |
3051 | -def clear_cache(): |
3052 | - """Clear the parse cache.""" |
3053 | - global _parse_cache |
3054 | - _parse_cache = {} |
3055 | - |
3056 | -import string |
3057 | -def _rsplit(str, delim, numsplit): |
3058 | - parts = string.split(str, delim) |
3059 | - if len(parts) <= numsplit + 1: |
3060 | - return parts |
3061 | - else: |
3062 | - left = string.join(parts[0:-numsplit], delim) |
3063 | - right = string.join(parts[len(parts)-numsplit:], delim) |
3064 | - return [left, right] |
3065 | - |
3066 | -class BaseResult(tuple): |
3067 | - """Base class for the parsed result objects. |
3068 | - |
3069 | - This provides the attributes shared by the two derived result |
3070 | - objects as read-only properties. The derived classes are |
3071 | - responsible for checking the right number of arguments were |
3072 | - supplied to the constructor. |
3073 | - |
3074 | - """ |
3075 | - |
3076 | - __slots__ = () |
3077 | - |
3078 | - # Attributes that access the basic components of the URL: |
3079 | - |
3080 | - def get_scheme(self): |
3081 | - return self[0] |
3082 | - scheme = property(get_scheme) |
3083 | - |
3084 | - def get_netloc(self): |
3085 | - return self[1] |
3086 | - netloc = property(get_netloc) |
3087 | - |
3088 | - def get_path(self): |
3089 | - return self[2] |
3090 | - path = property(get_path) |
3091 | - |
3092 | - def get_query(self): |
3093 | - return self[-2] |
3094 | - query = property(get_query) |
3095 | - |
3096 | - def get_fragment(self): |
3097 | - return self[-1] |
3098 | - fragment = property(get_fragment) |
3099 | - |
3100 | - # Additional attributes that provide access to parsed-out portions |
3101 | - # of the netloc: |
3102 | - |
3103 | - def get_username(self): |
3104 | - netloc = self.netloc |
3105 | - if "@" in netloc: |
3106 | - userinfo = _rsplit(netloc, "@", 1)[0] |
3107 | - if ":" in userinfo: |
3108 | - userinfo = userinfo.split(":", 1)[0] |
3109 | - return userinfo |
3110 | - return None |
3111 | - username = property(get_username) |
3112 | - |
3113 | - def get_password(self): |
3114 | - netloc = self.netloc |
3115 | - if "@" in netloc: |
3116 | - userinfo = _rsplit(netloc, "@", 1)[0] |
3117 | - if ":" in userinfo: |
3118 | - return userinfo.split(":", 1)[1] |
3119 | - return None |
3120 | - password = property(get_password) |
3121 | - |
3122 | - def get_hostname(self): |
3123 | - netloc = self.netloc.split('@')[-1] |
3124 | - if '[' in netloc and ']' in netloc: |
3125 | - return netloc.split(']')[0][1:].lower() |
3126 | - elif ':' in netloc: |
3127 | - return netloc.split(':')[0].lower() |
3128 | - elif netloc == '': |
3129 | - return None |
3130 | - else: |
3131 | - return netloc.lower() |
3132 | - hostname = property(get_hostname) |
3133 | - |
3134 | - def get_port(self): |
3135 | - netloc = self.netloc.split('@')[-1].split(']')[-1] |
3136 | - if ":" in netloc: |
3137 | - port = netloc.split(":", 1)[1] |
3138 | - return int(port, 10) |
3139 | - return None |
3140 | - port = property(get_port) |
3141 | - |
3142 | - |
3143 | -class SplitResult(BaseResult): |
3144 | - |
3145 | - __slots__ = () |
3146 | - |
3147 | - def __new__(cls, scheme, netloc, path, query, fragment): |
3148 | - return BaseResult.__new__( |
3149 | - cls, (scheme, netloc, path, query, fragment)) |
3150 | - |
3151 | - def geturl(self): |
3152 | - return urlunsplit(self) |
3153 | - |
3154 | - |
3155 | -class ParseResult(BaseResult): |
3156 | - |
3157 | - __slots__ = () |
3158 | - |
3159 | - def __new__(cls, scheme, netloc, path, params, query, fragment): |
3160 | - return BaseResult.__new__( |
3161 | - cls, (scheme, netloc, path, params, query, fragment)) |
3162 | - |
3163 | - def get_params(self): |
3164 | - return self[3] |
3165 | - params = property(get_params) |
3166 | - |
3167 | - def geturl(self): |
3168 | - return urlunparse(self) |
3169 | - |
3170 | - |
3171 | -def urlparse(url, scheme='', allow_fragments=True): |
3172 | - """Parse a URL into 6 components: |
3173 | - <scheme>://<netloc>/<path>;<params>?<query>#<fragment> |
3174 | - Return a 6-tuple: (scheme, netloc, path, params, query, fragment). |
3175 | - Note that we don't break the components up in smaller bits |
3176 | - (e.g. netloc is a single string) and we don't expand % escapes.""" |
3177 | - tuple = urlsplit(url, scheme, allow_fragments) |
3178 | - scheme, netloc, url, query, fragment = tuple |
3179 | - if scheme in uses_params and ';' in url: |
3180 | - url, params = _splitparams(url) |
3181 | - else: |
3182 | - params = '' |
3183 | - return ParseResult(scheme, netloc, url, params, query, fragment) |
3184 | - |
3185 | -def _splitparams(url): |
3186 | - if '/' in url: |
3187 | - i = url.find(';', url.rfind('/')) |
3188 | - if i < 0: |
3189 | - return url, '' |
3190 | - else: |
3191 | - i = url.find(';') |
3192 | - return url[:i], url[i+1:] |
3193 | - |
3194 | -def _splitnetloc(url, start=0): |
3195 | - for c in '/?#': # the order is important! |
3196 | - delim = url.find(c, start) |
3197 | - if delim >= 0: |
3198 | - break |
3199 | - else: |
3200 | - delim = len(url) |
3201 | - return url[start:delim], url[delim:] |
3202 | - |
3203 | -def urlsplit(url, scheme='', allow_fragments=True): |
3204 | - """Parse a URL into 5 components: |
3205 | - <scheme>://<netloc>/<path>?<query>#<fragment> |
3206 | - Return a 5-tuple: (scheme, netloc, path, query, fragment). |
3207 | - Note that we don't break the components up in smaller bits |
3208 | - (e.g. netloc is a single string) and we don't expand % escapes.""" |
3209 | - allow_fragments = bool(allow_fragments) |
3210 | - key = url, scheme, allow_fragments |
3211 | - cached = _parse_cache.get(key, None) |
3212 | - if cached: |
3213 | - return cached |
3214 | - if len(_parse_cache) >= MAX_CACHE_SIZE: # avoid runaway growth |
3215 | - clear_cache() |
3216 | - netloc = query = fragment = '' |
3217 | - i = url.find(':') |
3218 | - if i > 0: |
3219 | - if url[:i] == 'http': # optimize the common case |
3220 | - scheme = url[:i].lower() |
3221 | - url = url[i+1:] |
3222 | - if url[:2] == '//': |
3223 | - netloc, url = _splitnetloc(url, 2) |
3224 | - if allow_fragments and '#' in url: |
3225 | - url, fragment = url.split('#', 1) |
3226 | - if '?' in url: |
3227 | - url, query = url.split('?', 1) |
3228 | - v = SplitResult(scheme, netloc, url, query, fragment) |
3229 | - _parse_cache[key] = v |
3230 | - return v |
3231 | - for c in url[:i]: |
3232 | - if c not in scheme_chars: |
3233 | - break |
3234 | - else: |
3235 | - scheme, url = url[:i].lower(), url[i+1:] |
3236 | - if scheme in uses_netloc and url[:2] == '//': |
3237 | - netloc, url = _splitnetloc(url, 2) |
3238 | - if allow_fragments and scheme in uses_fragment and '#' in url: |
3239 | - url, fragment = url.split('#', 1) |
3240 | - if scheme in uses_query and '?' in url: |
3241 | - url, query = url.split('?', 1) |
3242 | - v = SplitResult(scheme, netloc, url, query, fragment) |
3243 | - _parse_cache[key] = v |
3244 | - return v |
3245 | - |
3246 | -def urlunparse((scheme, netloc, url, params, query, fragment)): |
3247 | - """Put a parsed URL back together again. This may result in a |
3248 | - slightly different, but equivalent URL, if the URL that was parsed |
3249 | - originally had redundant delimiters, e.g. a ? with an empty query |
3250 | - (the draft states that these are equivalent).""" |
3251 | - if params: |
3252 | - url = "%s;%s" % (url, params) |
3253 | - return urlunsplit((scheme, netloc, url, query, fragment)) |
3254 | - |
3255 | -def urlunsplit((scheme, netloc, url, query, fragment)): |
3256 | - if netloc or (scheme and scheme in uses_netloc and url[:2] != '//'): |
3257 | - if url and url[:1] != '/': url = '/' + url |
3258 | - url = '//' + (netloc or '') + url |
3259 | - if scheme: |
3260 | - url = scheme + ':' + url |
3261 | - if query: |
3262 | - url = url + '?' + query |
3263 | - if fragment: |
3264 | - url = url + '#' + fragment |
3265 | - return url |
3266 | - |
3267 | -def urljoin(base, url, allow_fragments=True): |
3268 | - """Join a base URL and a possibly relative URL to form an absolute |
3269 | - interpretation of the latter.""" |
3270 | - if not base: |
3271 | - return url |
3272 | - if not url: |
3273 | - return base |
3274 | - bscheme, bnetloc, bpath, bparams, bquery, bfragment = urlparse(base, '', allow_fragments) #@UnusedVariable |
3275 | - scheme, netloc, path, params, query, fragment = urlparse(url, bscheme, allow_fragments) |
3276 | - if scheme != bscheme or scheme not in uses_relative: |
3277 | - return url |
3278 | - if scheme in uses_netloc: |
3279 | - if netloc: |
3280 | - return urlunparse((scheme, netloc, path, |
3281 | - params, query, fragment)) |
3282 | - netloc = bnetloc |
3283 | - if path[:1] == '/': |
3284 | - return urlunparse((scheme, netloc, path, |
3285 | - params, query, fragment)) |
3286 | - if not (path or params or query): |
3287 | - return urlunparse((scheme, netloc, bpath, |
3288 | - bparams, bquery, fragment)) |
3289 | - segments = bpath.split('/')[:-1] + path.split('/') |
3290 | - # XXX The stuff below is bogus in various ways... |
3291 | - if segments[-1] == '.': |
3292 | - segments[-1] = '' |
3293 | - while '.' in segments: |
3294 | - segments.remove('.') |
3295 | - while 1: |
3296 | - i = 1 |
3297 | - n = len(segments) - 1 |
3298 | - while i < n: |
3299 | - if (segments[i] == '..' |
3300 | - and segments[i-1] not in ('', '..')): |
3301 | - del segments[i-1:i+1] |
3302 | - break |
3303 | - i = i+1 |
3304 | - else: |
3305 | - break |
3306 | - if segments == ['', '..']: |
3307 | - segments[-1] = '' |
3308 | - elif len(segments) >= 2 and segments[-1] == '..': |
3309 | - segments[-2:] = [''] |
3310 | - return urlunparse((scheme, netloc, '/'.join(segments), |
3311 | - params, query, fragment)) |
3312 | - |
3313 | -def urldefrag(url): |
3314 | - """Removes any existing fragment from URL. |
3315 | - |
3316 | - Returns a tuple of the defragmented URL and the fragment. If |
3317 | - the URL contained no fragments, the second element is the |
3318 | - empty string. |
3319 | - """ |
3320 | - if '#' in url: |
3321 | - s, n, p, a, q, frag = urlparse(url) |
3322 | - defrag = urlunparse((s, n, p, a, q, '')) |
3323 | - return defrag, frag |
3324 | - else: |
3325 | - return url, '' |
3326 | - |
3327 | - |
3328 | -test_input = """ |
3329 | - http://a/b/c/d |
3330 | - |
3331 | - g:h = <URL:g:h> |
3332 | - http:g = <URL:http://a/b/c/g> |
3333 | - http: = <URL:http://a/b/c/d> |
3334 | - g = <URL:http://a/b/c/g> |
3335 | - ./g = <URL:http://a/b/c/g> |
3336 | - g/ = <URL:http://a/b/c/g/> |
3337 | - /g = <URL:http://a/g> |
3338 | - //g = <URL:http://g> |
3339 | - ?y = <URL:http://a/b/c/d?y> |
3340 | - g?y = <URL:http://a/b/c/g?y> |
3341 | - g?y/./x = <URL:http://a/b/c/g?y/./x> |
3342 | - . = <URL:http://a/b/c/> |
3343 | - ./ = <URL:http://a/b/c/> |
3344 | - .. = <URL:http://a/b/> |
3345 | - ../ = <URL:http://a/b/> |
3346 | - ../g = <URL:http://a/b/g> |
3347 | - ../.. = <URL:http://a/> |
3348 | - ../../g = <URL:http://a/g> |
3349 | - ../../../g = <URL:http://a/../g> |
3350 | - ./../g = <URL:http://a/b/g> |
3351 | - ./g/. = <URL:http://a/b/c/g/> |
3352 | - /./g = <URL:http://a/./g> |
3353 | - g/./h = <URL:http://a/b/c/g/h> |
3354 | - g/../h = <URL:http://a/b/c/h> |
3355 | - http:g = <URL:http://a/b/c/g> |
3356 | - http: = <URL:http://a/b/c/d> |
3357 | - http:?y = <URL:http://a/b/c/d?y> |
3358 | - http:g?y = <URL:http://a/b/c/g?y> |
3359 | - http:g?y/./x = <URL:http://a/b/c/g?y/./x> |
3360 | -""" |
3361 | - |
3362 | -def test(): |
3363 | - import sys |
3364 | - base = '' |
3365 | - if sys.argv[1:]: |
3366 | - fn = sys.argv[1] |
3367 | - if fn == '-': |
3368 | - fp = sys.stdin |
3369 | - else: |
3370 | - fp = open(fn) |
3371 | - else: |
3372 | - try: |
3373 | - from cStringIO import StringIO |
3374 | - except ImportError: |
3375 | - from StringIO import StringIO |
3376 | - fp = StringIO(test_input) |
3377 | - while 1: |
3378 | - line = fp.readline() |
3379 | - if not line: break |
3380 | - words = line.split() |
3381 | - if not words: |
3382 | - continue |
3383 | - url = words[0] |
3384 | - parts = urlparse(url) |
3385 | - print '%-10s : %s' % (url, parts) |
3386 | - abs = urljoin(base, url) |
3387 | - if not base: |
3388 | - base = abs |
3389 | - wrapped = '<URL:%s>' % abs |
3390 | - print '%-10s = %s' % (url, wrapped) |
3391 | - if len(words) == 3 and words[1] == '=': |
3392 | - if wrapped != words[2]: |
3393 | - print 'EXPECTED', words[2], '!!!!!!!!!!' |
3394 | - |
3395 | -if __name__ == '__main__': |
3396 | - test() |
3397 | |
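The module removed above duplicated behavior that the standard library has provided since well before 2.6, which is why this branch can drop it outright. As a sanity check against the removed test table, the stdlib equivalents of `urljoin` and `urldefrag` produce the same results (shown here with the Python 3 `urllib.parse` names; on 2.6 the identical functions live in the `urlparse` module):

```python
from urllib.parse import urljoin, urldefrag

# Relative-reference resolution, matching entries from the removed test table.
base = 'http://a/b/c/d'
assert urljoin(base, 'g') == 'http://a/b/c/g'
assert urljoin(base, '../g') == 'http://a/b/g'
assert urljoin(base, '//g') == 'http://g'

# urldefrag splits a URL into (url-without-fragment, fragment),
# returning an empty fragment when there is none.
assert urldefrag('http://a/b/c#frag') == ('http://a/b/c', 'frag')
assert urldefrag('http://a/b/c') == ('http://a/b/c', '')
```

Because the stdlib behavior matches, deleting the 385-line internal copy loses nothing.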
3398 | === modified file 'po/POTFILES.in' |
3399 | --- po/POTFILES.in 2014-01-24 14:44:45 +0000 |
3400 | +++ po/POTFILES.in 2014-04-16 20:51:42 +0000 |
3401 | @@ -7,7 +7,6 @@ |
3402 | duplicity/selection.py |
3403 | duplicity/globals.py |
3404 | duplicity/commandline.py |
3405 | -duplicity/urlparse_2_5.py |
3406 | duplicity/dup_temp.py |
3407 | duplicity/backend.py |
3408 | duplicity/asyncscheduler.py |
3409 | |
3410 | === modified file 'po/duplicity.pot' |
3411 | --- po/duplicity.pot 2014-01-24 14:44:45 +0000 |
3412 | +++ po/duplicity.pot 2014-04-16 20:51:42 +0000 |
3413 | @@ -8,7 +8,7 @@ |
3414 | msgstr "" |
3415 | "Project-Id-Version: PACKAGE VERSION\n" |
3416 | "Report-Msgid-Bugs-To: Kenneth Loafman <kenneth@loafman.com>\n" |
3417 | -"POT-Creation-Date: 2014-01-24 06:47-0600\n" |
3418 | +"POT-Creation-Date: 2014-04-16 16:34-0400\n" |
3419 | "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" |
3420 | "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" |
3421 | "Language-Team: LANGUAGE <LL@li.org>\n" |
3422 | @@ -118,194 +118,194 @@ |
3423 | msgid "Processed volume %d of %d" |
3424 | msgstr "" |
3425 | |
3426 | -#: ../bin/duplicity:756 |
3427 | +#: ../bin/duplicity:765 |
3428 | #, python-format |
3429 | msgid "Invalid data - %s hash mismatch for file:" |
3430 | msgstr "" |
3431 | |
3432 | -#: ../bin/duplicity:758 |
3433 | +#: ../bin/duplicity:767 |
3434 | #, python-format |
3435 | msgid "Calculated hash: %s" |
3436 | msgstr "" |
3437 | |
3438 | -#: ../bin/duplicity:759 |
3439 | +#: ../bin/duplicity:768 |
3440 | #, python-format |
3441 | msgid "Manifest hash: %s" |
3442 | msgstr "" |
3443 | |
3444 | -#: ../bin/duplicity:797 |
3445 | +#: ../bin/duplicity:806 |
3446 | #, python-format |
3447 | msgid "Volume was signed by key %s, not %s" |
3448 | msgstr "" |
3449 | |
3450 | -#: ../bin/duplicity:827 |
3451 | +#: ../bin/duplicity:836 |
3452 | #, python-format |
3453 | msgid "Verify complete: %s, %s." |
3454 | msgstr "" |
3455 | |
3456 | -#: ../bin/duplicity:828 |
3457 | +#: ../bin/duplicity:837 |
3458 | #, python-format |
3459 | msgid "%d file compared" |
3460 | msgid_plural "%d files compared" |
3461 | msgstr[0] "" |
3462 | msgstr[1] "" |
3463 | |
3464 | -#: ../bin/duplicity:830 |
3465 | +#: ../bin/duplicity:839 |
3466 | #, python-format |
3467 | msgid "%d difference found" |
3468 | msgid_plural "%d differences found" |
3469 | msgstr[0] "" |
3470 | msgstr[1] "" |
3471 | |
3472 | -#: ../bin/duplicity:849 |
3473 | +#: ../bin/duplicity:858 |
3474 | msgid "No extraneous files found, nothing deleted in cleanup." |
3475 | msgstr "" |
3476 | |
3477 | -#: ../bin/duplicity:854 |
3478 | +#: ../bin/duplicity:863 |
3479 | msgid "Deleting this file from backend:" |
3480 | msgid_plural "Deleting these files from backend:" |
3481 | msgstr[0] "" |
3482 | msgstr[1] "" |
3483 | |
3484 | -#: ../bin/duplicity:866 |
3485 | +#: ../bin/duplicity:875 |
3486 | msgid "Found the following file to delete:" |
3487 | msgid_plural "Found the following files to delete:" |
3488 | msgstr[0] "" |
3489 | msgstr[1] "" |
3490 | |
3491 | -#: ../bin/duplicity:870 |
3492 | +#: ../bin/duplicity:879 |
3493 | msgid "Run duplicity again with the --force option to actually delete." |
3494 | msgstr "" |
3495 | |
3496 | -#: ../bin/duplicity:913 |
3497 | +#: ../bin/duplicity:922 |
3498 | msgid "There are backup set(s) at time(s):" |
3499 | msgstr "" |
3500 | |
3501 | -#: ../bin/duplicity:915 |
3502 | +#: ../bin/duplicity:924 |
3503 | msgid "Which can't be deleted because newer sets depend on them." |
3504 | msgstr "" |
3505 | |
3506 | -#: ../bin/duplicity:919 |
3507 | +#: ../bin/duplicity:928 |
3508 | msgid "" |
3509 | "Current active backup chain is older than specified time. However, it will " |
3510 | "not be deleted. To remove all your backups, manually purge the repository." |
3511 | msgstr "" |
3512 | |
3513 | -#: ../bin/duplicity:925 |
3514 | +#: ../bin/duplicity:934 |
3515 | msgid "No old backup sets found, nothing deleted." |
3516 | msgstr "" |
3517 | |
3518 | -#: ../bin/duplicity:928 |
3519 | +#: ../bin/duplicity:937 |
3520 | msgid "Deleting backup chain at time:" |
3521 | msgid_plural "Deleting backup chains at times:" |
3522 | msgstr[0] "" |
3523 | msgstr[1] "" |
3524 | |
3525 | -#: ../bin/duplicity:939 |
3526 | +#: ../bin/duplicity:948 |
3527 | #, python-format |
3528 | msgid "Deleting incremental signature chain %s" |
3529 | msgstr "" |
3530 | |
3531 | -#: ../bin/duplicity:941 |
3532 | +#: ../bin/duplicity:950 |
3533 | #, python-format |
3534 | msgid "Deleting incremental backup chain %s" |
3535 | msgstr "" |
3536 | |
3537 | -#: ../bin/duplicity:944 |
3538 | +#: ../bin/duplicity:953 |
3539 | #, python-format |
3540 | msgid "Deleting complete signature chain %s" |
3541 | msgstr "" |
3542 | |
3543 | -#: ../bin/duplicity:946 |
3544 | +#: ../bin/duplicity:955 |
3545 | #, python-format |
3546 | msgid "Deleting complete backup chain %s" |
3547 | msgstr "" |
3548 | |
3549 | -#: ../bin/duplicity:952 |
3550 | +#: ../bin/duplicity:961 |
3551 | msgid "Found old backup chain at the following time:" |
3552 | msgid_plural "Found old backup chains at the following times:" |
3553 | msgstr[0] "" |
3554 | msgstr[1] "" |
3555 | |
3556 | -#: ../bin/duplicity:956 |
3557 | +#: ../bin/duplicity:965 |
3558 | msgid "Rerun command with --force option to actually delete." |
3559 | msgstr "" |
3560 | |
3561 | -#: ../bin/duplicity:1033 |
3562 | +#: ../bin/duplicity:1042 |
3563 | #, python-format |
3564 | msgid "Deleting local %s (not authoritative at backend)." |
3565 | msgstr "" |
3566 | |
3567 | -#: ../bin/duplicity:1037 |
3568 | +#: ../bin/duplicity:1046 |
3569 | #, python-format |
3570 | msgid "Unable to delete %s: %s" |
3571 | msgstr "" |
3572 | |
3573 | -#: ../bin/duplicity:1065 ../duplicity/dup_temp.py:263 |
3574 | +#: ../bin/duplicity:1074 ../duplicity/dup_temp.py:263 |
3575 | #, python-format |
3576 | msgid "Failed to read %s: %s" |
3577 | msgstr "" |
3578 | |
3579 | -#: ../bin/duplicity:1079 |
3580 | +#: ../bin/duplicity:1088 |
3581 | #, python-format |
3582 | msgid "Copying %s to local cache." |
3583 | msgstr "" |
3584 | |
3585 | -#: ../bin/duplicity:1127 |
3586 | +#: ../bin/duplicity:1136 |
3587 | msgid "Local and Remote metadata are synchronized, no sync needed." |
3588 | msgstr "" |
3589 | |
3590 | -#: ../bin/duplicity:1132 |
3591 | +#: ../bin/duplicity:1141 |
3592 | msgid "Synchronizing remote metadata to local cache..." |
3593 | msgstr "" |
3594 | |
3595 | -#: ../bin/duplicity:1145 |
3596 | +#: ../bin/duplicity:1156 |
3597 | msgid "Sync would copy the following from remote to local:" |
3598 | msgstr "" |
3599 | |
3600 | -#: ../bin/duplicity:1148 |
3601 | +#: ../bin/duplicity:1159 |
3602 | msgid "Sync would remove the following spurious local files:" |
3603 | msgstr "" |
3604 | |
3605 | -#: ../bin/duplicity:1191 |
3606 | +#: ../bin/duplicity:1202 |
3607 | msgid "Unable to get free space on temp." |
3608 | msgstr "" |
3609 | |
3610 | -#: ../bin/duplicity:1199 |
3611 | +#: ../bin/duplicity:1210 |
3612 | #, python-format |
3613 | msgid "Temp space has %d available, backup needs approx %d." |
3614 | msgstr "" |
3615 | |
3616 | -#: ../bin/duplicity:1202 |
3617 | +#: ../bin/duplicity:1213 |
3618 | #, python-format |
3619 | msgid "Temp has %d available, backup will use approx %d." |
3620 | msgstr "" |
3621 | |
3622 | -#: ../bin/duplicity:1210 |
3623 | +#: ../bin/duplicity:1221 |
3624 | msgid "Unable to get max open files." |
3625 | msgstr "" |
3626 | |
3627 | -#: ../bin/duplicity:1214 |
3628 | +#: ../bin/duplicity:1225 |
3629 | #, python-format |
3630 | msgid "" |
3631 | "Max open files of %s is too low, should be >= 1024.\n" |
3632 | "Use 'ulimit -n 1024' or higher to correct.\n" |
3633 | msgstr "" |
3634 | |
3635 | -#: ../bin/duplicity:1263 |
3636 | +#: ../bin/duplicity:1274 |
3637 | msgid "" |
3638 | "RESTART: The first volume failed to upload before termination.\n" |
3639 | " Restart is impossible...starting backup from beginning." |
3640 | msgstr "" |
3641 | |
3642 | -#: ../bin/duplicity:1269 |
3643 | +#: ../bin/duplicity:1280 |
3644 | #, python-format |
3645 | msgid "" |
3646 | "RESTART: Volumes %d to %d failed to upload before termination.\n" |
3647 | " Restarting backup at volume %d." |
3648 | msgstr "" |
3649 | |
3650 | -#: ../bin/duplicity:1276 |
3651 | +#: ../bin/duplicity:1287 |
3652 | #, python-format |
3653 | msgid "" |
3654 | "RESTART: Impossible backup state: manifest has %d vols, remote has %d vols.\n" |
3655 | @@ -314,7 +314,7 @@ |
3656 | " backup then restart the backup from the beginning." |
3657 | msgstr "" |
3658 | |
3659 | -#: ../bin/duplicity:1298 |
3660 | +#: ../bin/duplicity:1309 |
3661 | msgid "" |
3662 | "\n" |
3663 | "PYTHONOPTIMIZE in the environment causes duplicity to fail to\n" |
3664 | @@ -324,59 +324,59 @@ |
3665 | "See https://bugs.launchpad.net/duplicity/+bug/931175\n" |
3666 | msgstr "" |
3667 | |
3668 | -#: ../bin/duplicity:1388 |
3669 | +#: ../bin/duplicity:1400 |
3670 | #, python-format |
3671 | msgid "Last %s backup left a partial set, restarting." |
3672 | msgstr "" |
3673 | |
3674 | -#: ../bin/duplicity:1392 |
3675 | +#: ../bin/duplicity:1404 |
3676 | #, python-format |
3677 | msgid "Cleaning up previous partial %s backup set, restarting." |
3678 | msgstr "" |
3679 | |
3680 | -#: ../bin/duplicity:1403 |
3681 | +#: ../bin/duplicity:1415 |
3682 | msgid "Last full backup date:" |
3683 | msgstr "" |
3684 | |
3685 | -#: ../bin/duplicity:1405 |
3686 | +#: ../bin/duplicity:1417 |
3687 | msgid "Last full backup date: none" |
3688 | msgstr "" |
3689 | |
3690 | -#: ../bin/duplicity:1407 |
3691 | +#: ../bin/duplicity:1419 |
3692 | msgid "Last full backup is too old, forcing full backup" |
3693 | msgstr "" |
3694 | |
3695 | -#: ../bin/duplicity:1450 |
3696 | +#: ../bin/duplicity:1462 |
3697 | msgid "" |
3698 | "When using symmetric encryption, the signing passphrase must equal the " |
3699 | "encryption passphrase." |
3700 | msgstr "" |
3701 | |
3702 | -#: ../bin/duplicity:1503 |
3703 | +#: ../bin/duplicity:1515 |
3704 | msgid "INT intercepted...exiting." |
3705 | msgstr "" |
3706 | |
3707 | -#: ../bin/duplicity:1511 |
3708 | +#: ../bin/duplicity:1523 |
3709 | #, python-format |
3710 | msgid "GPG error detail: %s" |
3711 | msgstr "" |
3712 | |
3713 | -#: ../bin/duplicity:1521 |
3714 | +#: ../bin/duplicity:1533 |
3715 | #, python-format |
3716 | msgid "User error detail: %s" |
3717 | msgstr "" |
3718 | |
3719 | -#: ../bin/duplicity:1531 |
3720 | +#: ../bin/duplicity:1543 |
3721 | #, python-format |
3722 | msgid "Backend error detail: %s" |
3723 | msgstr "" |
3724 | |
3725 | -#: ../bin/rdiffdir:59 ../duplicity/commandline.py:237 |
3726 | +#: ../bin/rdiffdir:56 ../duplicity/commandline.py:237 |
3727 | #, python-format |
3728 | msgid "Error opening file %s" |
3729 | msgstr "" |
3730 | |
3731 | -#: ../bin/rdiffdir:122 |
3732 | +#: ../bin/rdiffdir:119 |
3733 | #, python-format |
3734 | msgid "File %s already exists, will not overwrite." |
3735 | msgstr "" |
3736 | @@ -493,8 +493,8 @@ |
3737 | #. Used in usage help to represent a Unix-style path name. Example: |
3738 | #. --archive-dir <path> |
3739 | #: ../duplicity/commandline.py:258 ../duplicity/commandline.py:268 |
3740 | -#: ../duplicity/commandline.py:285 ../duplicity/commandline.py:342 |
3741 | -#: ../duplicity/commandline.py:530 ../duplicity/commandline.py:746 |
3742 | +#: ../duplicity/commandline.py:285 ../duplicity/commandline.py:351 |
3743 | +#: ../duplicity/commandline.py:548 ../duplicity/commandline.py:764 |
3744 | msgid "path" |
3745 | msgstr "" |
3746 | |
3747 | @@ -505,8 +505,8 @@ |
3748 | #. Used in usage help to represent an ID for a GnuPG key. Example: |
3749 | #. --encrypt-key <gpg_key_id> |
3750 | #: ../duplicity/commandline.py:280 ../duplicity/commandline.py:287 |
3751 | -#: ../duplicity/commandline.py:362 ../duplicity/commandline.py:511 |
3752 | -#: ../duplicity/commandline.py:719 |
3753 | +#: ../duplicity/commandline.py:371 ../duplicity/commandline.py:529 |
3754 | +#: ../duplicity/commandline.py:737 |
3755 | msgid "gpg-key-id" |
3756 | msgstr "" |
3757 | |
3758 | @@ -514,42 +514,42 @@ |
3759 | #. matching one or more files, as described in the documentation. |
3760 | #. Example: |
3761 | #. --exclude <shell_pattern> |
3762 | -#: ../duplicity/commandline.py:295 ../duplicity/commandline.py:388 |
3763 | -#: ../duplicity/commandline.py:769 |
3764 | +#: ../duplicity/commandline.py:295 ../duplicity/commandline.py:397 |
3765 | +#: ../duplicity/commandline.py:787 |
3766 | msgid "shell_pattern" |
3767 | msgstr "" |
3768 | |
3769 | #. Used in usage help to represent the name of a file. Example: |
3770 | #. --log-file <filename> |
3771 | #: ../duplicity/commandline.py:301 ../duplicity/commandline.py:308 |
3772 | -#: ../duplicity/commandline.py:313 ../duplicity/commandline.py:390 |
3773 | -#: ../duplicity/commandline.py:395 ../duplicity/commandline.py:406 |
3774 | -#: ../duplicity/commandline.py:715 |
3775 | +#: ../duplicity/commandline.py:313 ../duplicity/commandline.py:399 |
3776 | +#: ../duplicity/commandline.py:404 ../duplicity/commandline.py:415 |
3777 | +#: ../duplicity/commandline.py:733 |
3778 | msgid "filename" |
3779 | msgstr "" |
3780 | |
3781 | #. Used in usage help to represent a regular expression (regexp). |
3782 | -#: ../duplicity/commandline.py:320 ../duplicity/commandline.py:397 |
3783 | +#: ../duplicity/commandline.py:320 ../duplicity/commandline.py:406 |
3784 | msgid "regular_expression" |
3785 | msgstr "" |
3786 | |
3787 | #. Used in usage help to represent a time spec for a previous |
3788 | #. point in time, as described in the documentation. Example: |
3789 | #. duplicity remove-older-than time [options] target_url |
3790 | -#: ../duplicity/commandline.py:354 ../duplicity/commandline.py:462 |
3791 | -#: ../duplicity/commandline.py:801 |
3792 | +#: ../duplicity/commandline.py:363 ../duplicity/commandline.py:474 |
3793 | +#: ../duplicity/commandline.py:819 |
3794 | msgid "time" |
3795 | msgstr "" |
3796 | |
3797 | #. Used in usage help. (Should be consistent with the "Options:" |
3798 | #. header.) Example: |
3799 | #. duplicity [full|incremental] [options] source_dir target_url |
3800 | -#: ../duplicity/commandline.py:358 ../duplicity/commandline.py:465 |
3801 | -#: ../duplicity/commandline.py:522 ../duplicity/commandline.py:734 |
3802 | +#: ../duplicity/commandline.py:367 ../duplicity/commandline.py:477 |
3803 | +#: ../duplicity/commandline.py:540 ../duplicity/commandline.py:752 |
3804 | msgid "options" |
3805 | msgstr "" |
3806 | |
3807 | -#: ../duplicity/commandline.py:373 |
3808 | +#: ../duplicity/commandline.py:382 |
3809 | #, python-format |
3810 | msgid "" |
3811 | "Running in 'ignore errors' mode due to %s; please re-consider if this was " |
3812 | @@ -557,150 +557,152 @@ |
3813 | msgstr "" |
3814 | |
3815 | #. Used in usage help to represent an imap mailbox |
3816 | -#: ../duplicity/commandline.py:386 |
3817 | +#: ../duplicity/commandline.py:395 |
3818 | msgid "imap_mailbox" |
3819 | msgstr "" |
3820 | |
3821 | -#: ../duplicity/commandline.py:400 |
3822 | +#: ../duplicity/commandline.py:409 |
3823 | msgid "file_descriptor" |
3824 | msgstr "" |
3825 | |
3826 | #. Used in usage help to represent a desired number of |
3827 | #. something. Example: |
3828 | #. --num-retries <number> |
3829 | -#: ../duplicity/commandline.py:411 ../duplicity/commandline.py:433 |
3830 | -#: ../duplicity/commandline.py:448 ../duplicity/commandline.py:486 |
3831 | -#: ../duplicity/commandline.py:560 ../duplicity/commandline.py:729 |
3832 | +#: ../duplicity/commandline.py:420 ../duplicity/commandline.py:442 |
3833 | +#: ../duplicity/commandline.py:454 ../duplicity/commandline.py:460 |
3834 | +#: ../duplicity/commandline.py:498 ../duplicity/commandline.py:503 |
3835 | +#: ../duplicity/commandline.py:507 ../duplicity/commandline.py:578 |
3836 | +#: ../duplicity/commandline.py:747 |
3837 | msgid "number" |
3838 | msgstr "" |
3839 | |
3840 | #. Used in usage help (noun) |
3841 | -#: ../duplicity/commandline.py:414 |
3842 | +#: ../duplicity/commandline.py:423 |
3843 | msgid "backup name" |
3844 | msgstr "" |
3845 | |
3846 | #. noun |
3847 | -#: ../duplicity/commandline.py:495 ../duplicity/commandline.py:498 |
3848 | -#: ../duplicity/commandline.py:501 ../duplicity/commandline.py:700 |
3849 | +#: ../duplicity/commandline.py:513 ../duplicity/commandline.py:516 |
3850 | +#: ../duplicity/commandline.py:519 ../duplicity/commandline.py:718 |
3851 | msgid "command" |
3852 | msgstr "" |
3853 | |
3854 | -#: ../duplicity/commandline.py:519 |
3855 | +#: ../duplicity/commandline.py:537 |
3856 | msgid "paramiko|pexpect" |
3857 | msgstr "" |
3858 | |
3859 | -#: ../duplicity/commandline.py:525 |
3860 | +#: ../duplicity/commandline.py:543 |
3861 | msgid "pem formatted bundle of certificate authorities" |
3862 | msgstr "" |
3863 | |
3864 | #. Used in usage help. Example: |
3865 | #. --timeout <seconds> |
3866 | -#: ../duplicity/commandline.py:535 ../duplicity/commandline.py:763 |
3867 | +#: ../duplicity/commandline.py:553 ../duplicity/commandline.py:781 |
3868 | msgid "seconds" |
3869 | msgstr "" |
3870 | |
3871 | #. abbreviation for "character" (noun) |
3872 | -#: ../duplicity/commandline.py:541 ../duplicity/commandline.py:697 |
3873 | +#: ../duplicity/commandline.py:559 ../duplicity/commandline.py:715 |
3874 | msgid "char" |
3875 | msgstr "" |
3876 | |
3877 | -#: ../duplicity/commandline.py:663 |
3878 | +#: ../duplicity/commandline.py:681 |
3879 | #, python-format |
3880 | msgid "Using archive dir: %s" |
3881 | msgstr "" |
3882 | |
3883 | -#: ../duplicity/commandline.py:664 |
3884 | +#: ../duplicity/commandline.py:682 |
3885 | #, python-format |
3886 | msgid "Using backup name: %s" |
3887 | msgstr "" |
3888 | |
3889 | -#: ../duplicity/commandline.py:671 |
3890 | +#: ../duplicity/commandline.py:689 |
3891 | #, python-format |
3892 | msgid "Command line error: %s" |
3893 | msgstr "" |
3894 | |
3895 | -#: ../duplicity/commandline.py:672 |
3896 | +#: ../duplicity/commandline.py:690 |
3897 | msgid "Enter 'duplicity --help' for help screen." |
3898 | msgstr "" |
3899 | |
3900 | #. Used in usage help to represent a Unix-style path name. Example: |
3901 | #. rsync://user[:password]@other_host[:port]//absolute_path |
3902 | -#: ../duplicity/commandline.py:685 |
3903 | +#: ../duplicity/commandline.py:703 |
3904 | msgid "absolute_path" |
3905 | msgstr "" |
3906 | |
3907 | #. Used in usage help. Example: |
3908 | #. tahoe://alias/some_dir |
3909 | -#: ../duplicity/commandline.py:689 |
3910 | +#: ../duplicity/commandline.py:707 |
3911 | msgid "alias" |
3912 | msgstr "" |
3913 | |
3914 | #. Used in help to represent a "bucket name" for Amazon Web |
3915 | #. Services' Simple Storage Service (S3). Example: |
3916 | #. s3://other.host/bucket_name[/prefix] |
3917 | -#: ../duplicity/commandline.py:694 |
3918 | +#: ../duplicity/commandline.py:712 |
3919 | msgid "bucket_name" |
3920 | msgstr "" |
3921 | |
3922 | #. Used in usage help to represent the name of a container in |
3923 | #. Amazon Web Services' Cloudfront. Example: |
3924 | #. cf+http://container_name |
3925 | -#: ../duplicity/commandline.py:705 |
3926 | +#: ../duplicity/commandline.py:723 |
3927 | msgid "container_name" |
3928 | msgstr "" |
3929 | |
3930 | #. noun |
3931 | -#: ../duplicity/commandline.py:708 |
3932 | +#: ../duplicity/commandline.py:726 |
3933 | msgid "count" |
3934 | msgstr "" |
3935 | |
3936 | #. Used in usage help to represent the name of a file directory |
3937 | -#: ../duplicity/commandline.py:711 |
3938 | +#: ../duplicity/commandline.py:729 |
3939 | msgid "directory" |
3940 | msgstr "" |
3941 | |
3942 | #. Used in usage help, e.g. to represent the name of a code |
3943 | #. module. Example: |
3944 | #. rsync://user[:password]@other.host[:port]::/module/some_dir |
3945 | -#: ../duplicity/commandline.py:724 |
3946 | +#: ../duplicity/commandline.py:742 |
3947 | msgid "module" |
3948 | msgstr "" |
3949 | |
3950 | #. Used in usage help to represent an internet hostname. Example: |
3951 | #. ftp://user[:password]@other.host[:port]/some_dir |
3952 | -#: ../duplicity/commandline.py:738 |
3953 | +#: ../duplicity/commandline.py:756 |
3954 | msgid "other.host" |
3955 | msgstr "" |
3956 | |
3957 | #. Used in usage help. Example: |
3958 | #. ftp://user[:password]@other.host[:port]/some_dir |
3959 | -#: ../duplicity/commandline.py:742 |
3960 | +#: ../duplicity/commandline.py:760 |
3961 | msgid "password" |
3962 | msgstr "" |
3963 | |
3964 | #. Used in usage help to represent a TCP port number. Example: |
3965 | #. ftp://user[:password]@other.host[:port]/some_dir |
3966 | -#: ../duplicity/commandline.py:750 |
3967 | +#: ../duplicity/commandline.py:768 |
3968 | msgid "port" |
3969 | msgstr "" |
3970 | |
3971 | #. Used in usage help. This represents a string to be used as a |
3972 | #. prefix to names for backup files created by Duplicity. Example: |
3973 | #. s3://other.host/bucket_name[/prefix] |
3974 | -#: ../duplicity/commandline.py:755 |
3975 | +#: ../duplicity/commandline.py:773 |
3976 | msgid "prefix" |
3977 | msgstr "" |
3978 | |
3979 | #. Used in usage help to represent a Unix-style path name. Example: |
3980 | #. rsync://user[:password]@other.host[:port]/relative_path |
3981 | -#: ../duplicity/commandline.py:759 |
3982 | +#: ../duplicity/commandline.py:777 |
3983 | msgid "relative_path" |
3984 | msgstr "" |
3985 | |
3986 | #. Used in usage help to represent the name of a single file |
3987 | #. directory or a Unix-style path to a directory. Example: |
3988 | #. file:///some_dir |
3989 | -#: ../duplicity/commandline.py:774 |
3990 | +#: ../duplicity/commandline.py:792 |
3991 | msgid "some_dir" |
3992 | msgstr "" |
3993 | |
3994 | @@ -708,14 +710,14 @@ |
3995 | #. directory or a Unix-style path to a directory where files will be |
3996 | #. coming FROM. Example: |
3997 | #. duplicity [full|incremental] [options] source_dir target_url |
3998 | -#: ../duplicity/commandline.py:780 |
3999 | +#: ../duplicity/commandline.py:798 |
4000 | msgid "source_dir" |
4001 | msgstr "" |
4002 | |
4003 | #. Used in usage help to represent a URL files will be coming |
4004 | #. FROM. Example: |
4005 | #. duplicity [restore] [options] source_url target_dir |
4006 | -#: ../duplicity/commandline.py:785 |
4007 | +#: ../duplicity/commandline.py:803 |
4008 | msgid "source_url" |
4009 | msgstr "" |
4010 | |
4011 | @@ -723,75 +725,75 @@ |
4012 | #. directory or a Unix-style path to a directory. where files will be |
4013 | #. going TO. Example: |
4014 | #. duplicity [restore] [options] source_url target_dir |
4015 | -#: ../duplicity/commandline.py:791 |
4016 | +#: ../duplicity/commandline.py:809 |
4017 | msgid "target_dir" |
4018 | msgstr "" |
4019 | |
4020 | #. Used in usage help to represent a URL files will be going TO. |
4021 | #. Example: |
4022 | #. duplicity [full|incremental] [options] source_dir target_url |
4023 | -#: ../duplicity/commandline.py:796 |
4024 | +#: ../duplicity/commandline.py:814 |
4025 | msgid "target_url" |
4026 | msgstr "" |
4027 | |
4028 | #. Used in usage help to represent a user name (i.e. login). |
4029 | #. Example: |
4030 | #. ftp://user[:password]@other.host[:port]/some_dir |
4031 | -#: ../duplicity/commandline.py:806 |
4032 | +#: ../duplicity/commandline.py:824 |
4033 | msgid "user" |
4034 | msgstr "" |
4035 | |
4036 | #. Header in usage help |
4037 | -#: ../duplicity/commandline.py:823 |
4038 | +#: ../duplicity/commandline.py:841 |
4039 | msgid "Backends and their URL formats:" |
4040 | msgstr "" |
4041 | |
4042 | #. Header in usage help |
4043 | -#: ../duplicity/commandline.py:848 |
4044 | +#: ../duplicity/commandline.py:866 |
4045 | msgid "Commands:" |
4046 | msgstr "" |
4047 | |
4048 | -#: ../duplicity/commandline.py:872 |
4049 | +#: ../duplicity/commandline.py:890 |
4050 | #, python-format |
4051 | msgid "Specified archive directory '%s' does not exist, or is not a directory" |
4052 | msgstr "" |
4053 | |
4054 | -#: ../duplicity/commandline.py:881 |
4055 | +#: ../duplicity/commandline.py:899 |
4056 | #, python-format |
4057 | msgid "" |
4058 | "Sign key should be an 8 character hex string, like 'AA0E73D2'.\n" |
4059 | "Received '%s' instead." |
4060 | msgstr "" |
4061 | |
4062 | -#: ../duplicity/commandline.py:941 |
4063 | +#: ../duplicity/commandline.py:959 |
4064 | #, python-format |
4065 | msgid "" |
4066 | "Restore destination directory %s already exists.\n" |
4067 | "Will not overwrite." |
4068 | msgstr "" |
4069 | |
4070 | -#: ../duplicity/commandline.py:946 |
4071 | +#: ../duplicity/commandline.py:964 |
4072 | #, python-format |
4073 | msgid "Verify directory %s does not exist" |
4074 | msgstr "" |
4075 | |
4076 | -#: ../duplicity/commandline.py:952 |
4077 | +#: ../duplicity/commandline.py:970 |
4078 | #, python-format |
4079 | msgid "Backup source directory %s does not exist." |
4080 | msgstr "" |
4081 | |
4082 | -#: ../duplicity/commandline.py:981 |
4083 | +#: ../duplicity/commandline.py:999 |
4084 | #, python-format |
4085 | msgid "Command line warning: %s" |
4086 | msgstr "" |
4087 | |
4088 | -#: ../duplicity/commandline.py:981 |
4089 | +#: ../duplicity/commandline.py:999 |
4090 | msgid "" |
4091 | "Selection options --exclude/--include\n" |
4092 | "currently work only when backing up,not restoring." |
4093 | msgstr "" |
4094 | |
4095 | -#: ../duplicity/commandline.py:1029 |
4096 | +#: ../duplicity/commandline.py:1047 |
4097 | #, python-format |
4098 | msgid "" |
4099 | "Bad URL '%s'.\n" |
4100 | @@ -799,61 +801,61 @@ |
4101 | "\"file:///usr/local\". See the man page for more information." |
4102 | msgstr "" |
4103 | |
4104 | -#: ../duplicity/commandline.py:1054 |
4105 | +#: ../duplicity/commandline.py:1072 |
4106 | msgid "Main action: " |
4107 | msgstr "" |
4108 | |
4109 | -#: ../duplicity/backend.py:87 |
4110 | +#: ../duplicity/backend.py:109 |
4111 | #, python-format |
4112 | msgid "Import of %s %s" |
4113 | msgstr "" |
4114 | |
4115 | -#: ../duplicity/backend.py:164 |
4116 | +#: ../duplicity/backend.py:186 |
4117 | #, python-format |
4118 | msgid "Could not initialize backend: %s" |
4119 | msgstr "" |
4120 | |
4121 | -#: ../duplicity/backend.py:320 |
4122 | +#: ../duplicity/backend.py:311 |
4123 | #, python-format |
4124 | msgid "Attempt %s failed: %s: %s" |
4125 | msgstr "" |
4126 | |
4127 | -#: ../duplicity/backend.py:322 ../duplicity/backend.py:352 |
4128 | -#: ../duplicity/backend.py:359 |
4129 | +#: ../duplicity/backend.py:313 ../duplicity/backend.py:343 |
4130 | +#: ../duplicity/backend.py:350 |
4131 | #, python-format |
4132 | msgid "Backtrace of previous error: %s" |
4133 | msgstr "" |
4134 | |
4135 | -#: ../duplicity/backend.py:350 |
4136 | +#: ../duplicity/backend.py:341 |
4137 | #, python-format |
4138 | msgid "Attempt %s failed. %s: %s" |
4139 | msgstr "" |
4140 | |
4141 | -#: ../duplicity/backend.py:361 |
4142 | +#: ../duplicity/backend.py:352 |
4143 | #, python-format |
4144 | msgid "Giving up after %s attempts. %s: %s" |
4145 | msgstr "" |
4146 | |
4147 | -#: ../duplicity/backend.py:546 ../duplicity/backend.py:570 |
4148 | +#: ../duplicity/backend.py:537 ../duplicity/backend.py:561 |
4149 | #, python-format |
4150 | msgid "Reading results of '%s'" |
4151 | msgstr "" |
4152 | |
4153 | -#: ../duplicity/backend.py:585 |
4154 | +#: ../duplicity/backend.py:576 |
4155 | #, python-format |
4156 | msgid "Running '%s' failed with code %d (attempt #%d)" |
4157 | msgid_plural "Running '%s' failed with code %d (attempt #%d)" |
4158 | msgstr[0] "" |
4159 | msgstr[1] "" |
4160 | |
4161 | -#: ../duplicity/backend.py:589 |
4162 | +#: ../duplicity/backend.py:580 |
4163 | #, python-format |
4164 | msgid "" |
4165 | "Error is:\n" |
4166 | "%s" |
4167 | msgstr "" |
4168 | |
4169 | -#: ../duplicity/backend.py:591 |
4170 | +#: ../duplicity/backend.py:582 |
4171 | #, python-format |
4172 | msgid "Giving up trying to execute '%s' after %d attempt" |
4173 | msgid_plural "Giving up trying to execute '%s' after %d attempts" |
4174 | |
4175 | === modified file 'setup.py' |
4176 | --- setup.py 2014-04-16 20:51:42 +0000 |
4177 | +++ setup.py 2014-04-16 20:51:42 +0000 |
4178 | @@ -28,8 +28,8 @@ |
4179 | |
4180 | version_string = "$version" |
4181 | |
4182 | -if sys.version_info[:2] < (2,4): |
4183 | - print "Sorry, duplicity requires version 2.4 or later of python" |
4184 | +if sys.version_info[:2] < (2, 6): |
4185 | + print "Sorry, duplicity requires version 2.6 or later of python" |
4186 | sys.exit(1) |
4187 | |
4188 | incdir_list = libdir_list = None |
4189 | @@ -53,8 +53,6 @@ |
4190 | 'README', |
4191 | 'README-REPO', |
4192 | 'README-LOG', |
4193 | - 'tarfile-LICENSE', |
4194 | - 'tarfile-CHANGES', |
4195 | 'CHANGELOG']), |
4196 | ] |
4197 | |
4198 | |
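The setup.py hunk above bumps the interpreter guard from 2.4 to 2.6 by comparing `sys.version_info[:2]` against a minimum tuple. A minimal sketch of that pattern (the helper name `check_python_version` is illustrative, not from setup.py, which inlines the comparison and exits on failure):

```python
import sys

REQUIRED = (2, 6)

def check_python_version(version_info=sys.version_info, required=REQUIRED):
    """Return True if the (major, minor) version meets the minimum."""
    # Tuple comparison is elementwise, so (2, 5) < (2, 6) < (3, 0).
    return tuple(version_info[:2]) >= required

assert check_python_version((2, 6, 0)) is True
assert check_python_version((2, 5, 1)) is False
assert check_python_version((3, 4, 0)) is True
```

Comparing only the first two components keeps the guard insensitive to micro-release and release-level fields.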
4199 | === removed file 'tarfile-CHANGES' |
4200 | --- tarfile-CHANGES 2011-08-23 18:14:17 +0000 |
4201 | +++ tarfile-CHANGES 1970-01-01 00:00:00 +0000 |
4202 | @@ -1,3 +0,0 @@ |
4203 | -tarfile.py is a copy of python2.7's tarfile.py. |
4204 | - |
4205 | -No changes besides 2.4 compatibility have been made. |
4206 | |
4207 | === removed file 'tarfile-LICENSE' |
4208 | --- tarfile-LICENSE 2011-10-05 14:13:31 +0000 |
4209 | +++ tarfile-LICENSE 1970-01-01 00:00:00 +0000 |
4210 | @@ -1,92 +0,0 @@ |
4211 | -irdu-backup uses tarfile, written by Lars Gustäbel. The following |
4212 | -notice was included in the tarfile distribution: |
4213 | - |
4214 | ------------------------------------------------------------------ |
4215 | - tarfile - python module for accessing TAR archives |
4216 | - |
4217 | - Lars Gustäbel <lars@gustaebel.de> |
4218 | ------------------------------------------------------------------ |
4219 | - |
4220 | - |
4221 | -Description |
4222 | ------------ |
4223 | - |
4224 | -The tarfile module provides a set of functions for accessing TAR |
4225 | -format archives. Because it is written in pure Python, it does |
4226 | -not require any platform specific functions. GZIP compressed TAR |
4227 | -archives are seamlessly supported. |
4228 | - |
4229 | - |
4230 | -Requirements |
4231 | ------------- |
4232 | - |
4233 | -tarfile needs at least Python version 2.2. |
4234 | -(For a tarfile for Python 1.5.2 take a look on the webpage.) |
4235 | - |
4236 | - |
4237 | -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! |
4238 | -IMPORTANT NOTE (*NIX only) |
4239 | --------------------------- |
4240 | - |
4241 | -The addition of character and block devices is enabled by a C |
4242 | -extension module (_tarfile.c), because Python does not yet |
4243 | -provide the major() and minor() macros. |
4244 | -Currently Linux and FreeBSD are implemented. If your OS is not |
4245 | -supported, then please send me a patch. |
4246 | - |
4247 | -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! |
4248 | - |
4249 | - |
4250 | -Download |
4251 | --------- |
4252 | - |
4253 | -You can download the newest version at URL: |
4254 | -http://www.gustaebel.de/lars/tarfile/ |
4255 | - |
4256 | - |
4257 | -Installation |
4258 | ------------- |
4259 | - |
4260 | -1. extract the tarfile-x.x.x.tar.gz archive to a temporary folder |
4261 | -2. type "python setup.py install" |
4262 | - |
4263 | - |
4264 | -Contact |
4265 | -------- |
4266 | - |
4267 | -Suggestions, comments, bug reports and patches to: |
4268 | -lars@gustaebel.de |
4269 | - |
4270 | - |
4271 | -License |
4272 | -------- |
4273 | - |
4274 | -Copyright (C) 2002 Lars Gustäbel <lars@gustaebel.de> |
4275 | -All rights reserved. |
4276 | - |
4277 | -Permission is hereby granted, free of charge, to any person |
4278 | -obtaining a copy of this software and associated documentation |
4279 | -files (the "Software"), to deal in the Software without |
4280 | -restriction, including without limitation the rights to use, |
4281 | -copy, modify, merge, publish, distribute, sublicense, and/or sell |
4282 | -copies of the Software, and to permit persons to whom the |
4283 | -Software is furnished to do so, subject to the following |
4284 | -conditions: |
4285 | - |
4286 | -The above copyright notice and this permission notice shall be |
4287 | -included in all copies or substantial portions of the Software. |
4288 | - |
4289 | -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, |
4290 | -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES |
4291 | -OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND |
4292 | -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT |
4293 | -HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, |
4294 | -WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING |
4295 | -FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR |
4296 | -OTHER DEALINGS IN THE SOFTWARE. |
4297 | - |
4298 | - |
4299 | -README Version |
4300 | --------------- |
4301 | - |
4302 | -$Id: tarfile-LICENSE,v 1.1 2002/10/29 01:49:46 bescoto Exp $ |
4303 | |
4304 | === modified file 'testing/__init__.py' |
4305 | --- testing/__init__.py 2014-04-16 20:51:42 +0000 |
4306 | +++ testing/__init__.py 2014-04-16 20:51:42 +0000 |
4307 | @@ -1,3 +0,0 @@ |
4308 | -import sys |
4309 | -if sys.version_info < (2, 5,): |
4310 | - import tests |
4311 | |
4312 | === modified file 'testing/run-tests' |
4313 | --- testing/run-tests 2014-04-16 20:51:42 +0000 |
4314 | +++ testing/run-tests 2014-04-16 20:51:42 +0000 |
4315 | @@ -46,7 +46,7 @@ |
4316 | done |
4317 | |
4318 | # run against all supported python versions |
4319 | -for v in 2.4 2.5 2.6 2.7; do |
4320 | +for v in 2.6 2.7; do |
4321 | type python$v >& /dev/null |
4322 | if [ $? == 1 ]; then |
4323 | echo "python$v not found on system" |
4324 | |
4325 | === modified file 'testing/run-tests-ve' |
4326 | --- testing/run-tests-ve 2014-04-16 20:51:42 +0000 |
4327 | +++ testing/run-tests-ve 2014-04-16 20:51:42 +0000 |
4328 | @@ -46,7 +46,7 @@ |
4329 | done |
4330 | |
4331 | # run against all supported python versions |
4332 | -for v in 2.4 2.5 2.6 2.7; do |
4333 | +for v in 2.6 2.7; do |
4334 | ve=~/virtual$v |
4335 | if [ $? == 1 ]; then |
4336 | echo "virtual$v not found on system" |
4337 | |
4338 | === modified file 'testing/tests/__init__.py' |
4339 | --- testing/tests/__init__.py 2014-04-16 20:51:42 +0000 |
4340 | +++ testing/tests/__init__.py 2014-04-16 20:51:42 +0000 |
4341 | @@ -41,12 +41,3 @@ |
4342 | # Standardize time |
4343 | os.environ['TZ'] = 'US/Central' |
4344 | time.tzset() |
4345 | - |
4346 | -# Automatically add all submodules into this namespace. Helps python2.4 |
4347 | -# unittest work. |
4348 | -if sys.version_info < (2, 5,): |
4349 | - for module in os.listdir(_this_dir): |
4350 | - if module == '__init__.py' or module[-3:] != '.py': |
4351 | - continue |
4352 | - __import__(module[:-3], locals(), globals()) |
4353 | - del module |
4354 | |
4355 | === modified file 'testing/tests/test_parsedurl.py' |
4356 | --- testing/tests/test_parsedurl.py 2011-11-04 04:33:06 +0000 |
4357 | +++ testing/tests/test_parsedurl.py 2014-04-16 20:51:42 +0000 |
4358 | @@ -55,6 +55,13 @@ |
4359 | assert pu.username is None, pu.username |
4360 | assert pu.port is None, pu.port |
4361 | |
4362 | + pu = duplicity.backend.ParsedUrl("file://home") |
4363 | + assert pu.scheme == "file", pu.scheme |
4364 | + assert pu.netloc == "", pu.netloc |
4365 | + assert pu.path == "//home", pu.path |
4366 | + assert pu.username is None, pu.username |
4367 | + assert pu.port is None, pu.port |
4368 | + |
4369 | pu = duplicity.backend.ParsedUrl("ftp://foo@bar:pass@example.com:123/home") |
4370 | assert pu.scheme == "ftp", pu.scheme |
4371 | assert pu.netloc == "foo@bar:pass@example.com:123", pu.netloc |
4372 | @@ -121,7 +128,9 @@ |
4373 | def test_errors(self): |
4374 | """Test various url errors""" |
4375 | self.assertRaises(InvalidBackendURL, duplicity.backend.ParsedUrl, |
4376 | - "ssh://foo@bar:pass@example.com:/home") |
4377 | + "ssh:///home") # we require a hostname for ssh |
4378 | + self.assertRaises(InvalidBackendURL, duplicity.backend.ParsedUrl, |
4379 | + "file:path") # no relative paths for non-netloc schemes |
4380 | self.assertRaises(UnsupportedBackendScheme, duplicity.backend.get_backend, |
4381 | "foo://foo@bar:pass@example.com/home") |
4382 | |
4383 | |
4384 | === modified file 'testing/tests/test_tarfile.py' |
4385 | --- testing/tests/test_tarfile.py 2013-07-12 19:47:32 +0000 |
4386 | +++ testing/tests/test_tarfile.py 2014-04-16 20:51:42 +0000 |
4387 | @@ -1,7 +1,6 @@ |
4388 | # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*- |
4389 | # |
4390 | -# Copyright 2002 Ben Escoto <ben@emerose.org> |
4391 | -# Copyright 2007 Kenneth Loafman <kenneth@loafman.com> |
4392 | +# Copyright 2013 Michael Terry <mike@mterry.name> |
4393 | # |
4394 | # This file is part of duplicity. |
4395 | # |
4396 | @@ -19,309 +18,18 @@ |
4397 | # along with duplicity; if not, write to the Free Software Foundation, |
4398 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
4399 | |
4400 | -# |
4401 | -# unittest for the tarfile module |
4402 | -# |
4403 | -# $Id: test_tarfile.py,v 1.11 2009/04/02 14:47:12 loafman Exp $ |
4404 | - |
4405 | import helper |
4406 | -import sys, os, shutil, StringIO, tempfile, unittest, stat |
4407 | - |
4408 | +import unittest |
4409 | +from duplicity import cached_ops |
4410 | from duplicity import tarfile |
4411 | |
4412 | helper.setup() |
4413 | |
4414 | -SAMPLETAR = "testtar.tar" |
4415 | -TEMPDIR = tempfile.mktemp() |
4416 | - |
4417 | -def join(*args): |
4418 | - return os.path.normpath(apply(os.path.join, args)) |
4419 | - |
4420 | -class BaseTest(unittest.TestCase): |
4421 | - """Base test for tarfile. |
4422 | - """ |
4423 | - |
4424 | - def setUp(self): |
4425 | - os.mkdir(TEMPDIR) |
4426 | - self.tar = tarfile.open(SAMPLETAR) |
4427 | - self.tar.errorlevel = 1 |
4428 | - |
4429 | - def tearDown(self): |
4430 | - self.tar.close() |
4431 | - shutil.rmtree(TEMPDIR) |
4432 | - |
4433 | - def isroot(self): |
4434 | - return hasattr(os, "geteuid") and os.geteuid() == 0 |
4435 | - |
4436 | -class Test_All(BaseTest): |
4437 | - """Allround test. |
4438 | - """ |
4439 | - files_in_tempdir = ["tempdir", |
4440 | - "tempdir/0length", |
4441 | - "tempdir/large", |
4442 | - "tempdir/hardlinked1", |
4443 | - "tempdir/hardlinked2", |
4444 | - "tempdir/fifo", |
4445 | - "tempdir/symlink"] |
4446 | - |
4447 | - tempdir_data = {"0length": "", |
4448 | - "large": "hello, world!" * 10000, |
4449 | - "hardlinked1": "foo", |
4450 | - "hardlinked2": "foo"} |
4451 | - |
4452 | - def test_iteration(self): |
4453 | - """Test iteration through temp2.tar""" |
4454 | - self.make_temptar() |
4455 | - i = 0 |
4456 | - tf = tarfile.TarFile("none", "r", FileLogger(open("temp2.tar", "rb"))) |
4457 | - tf.debug = 3 |
4458 | - for tarinfo in tf: i += 1 #@UnusedVariable |
4459 | - assert i >= 6, i |
4460 | - |
4461 | - def _test_extraction(self): |
4462 | - """Test if regular files and links are extracted correctly. |
4463 | - """ |
4464 | - for tarinfo in self.tar: |
4465 | - if tarinfo.isreg() or tarinfo.islnk() or tarinfo.issym(): |
4466 | - self.tar.extract(tarinfo, TEMPDIR) |
4467 | - name = join(TEMPDIR, tarinfo.name) |
4468 | - data1 = file(name, "rb").read() |
4469 | - data2 = self.tar.extractfile(tarinfo).read() |
4470 | - self.assert_(data1 == data2, |
4471 | - "%s was not extracted successfully." |
4472 | - % tarinfo.name) |
4473 | - |
4474 | - if not tarinfo.issym(): |
4475 | - self.assert_(tarinfo.mtime == os.path.getmtime(name), |
4476 | - "%s's modification time was not set correctly." |
4477 | - % tarinfo.name) |
4478 | - |
4479 | - if tarinfo.isdev(): |
4480 | - if hasattr(os, "mkfifo") and tarinfo.isfifo(): |
4481 | - self.tar.extract(tarinfo, TEMPDIR) |
4482 | - name = join(TEMPDIR, tarinfo.name) |
4483 | - self.assert_(tarinfo.mtime == os.path.getmtime(name), |
4484 | - "%s's modification time was not set correctly." |
4485 | - % tarinfo.name) |
4486 | - |
4487 | - elif hasattr(os, "mknod") and self.isroot(): |
4488 | - self.tar.extract(tarinfo, TEMPDIR) |
4489 | - name = join(TEMPDIR, tarinfo.name) |
4490 | - self.assert_(tarinfo.mtime == os.path.getmtime(name), |
4491 | - "%s's modification time was not set correctly." |
4492 | - % tarinfo.name) |
4493 | - |
4494 | - def test_addition(self): |
4495 | - """Test if regular files are added correctly. |
4496 | - For this, we extract all regular files from our sample tar |
4497 | - and add them to a new one, which we check afterwards. |
4498 | - """ |
4499 | - files = [] |
4500 | - for tarinfo in self.tar: |
4501 | - if tarinfo.isreg(): |
4502 | - self.tar.extract(tarinfo, TEMPDIR) |
4503 | - files.append(tarinfo.name) |
4504 | - |
4505 | - buf = StringIO.StringIO() |
4506 | - tar = tarfile.open("test.tar", "w", buf) |
4507 | - for f in files: |
4508 | - path = join(TEMPDIR, f) |
4509 | - tarinfo = tar.gettarinfo(path) |
4510 | - tarinfo.name = f |
4511 | - tar.addfile(tarinfo, file(path, "rb")) |
4512 | - tar.close() |
4513 | - |
4514 | - buf.seek(0) |
4515 | - tar = tarfile.open("test.tar", "r", buf) |
4516 | - for tarinfo in tar: |
4517 | - data1 = file(join(TEMPDIR, tarinfo.name), "rb").read() |
4518 | - data2 = tar.extractfile(tarinfo).read() |
4519 | - self.assert_(data1 == data2) |
4520 | - tar.close() |
4521 | - |
4522 | - def make_tempdir(self): |
4523 | - """Make a temp directory with assorted files in it""" |
4524 | - try: |
4525 | - os.lstat("tempdir") |
4526 | - except OSError: |
4527 | - pass |
4528 | - else: # assume already exists |
4529 | - assert not os.system("rm -r tempdir") |
4530 | - os.mkdir("tempdir") |
4531 | - |
4532 | - def write_file(name): |
4533 | - """Write appropriate data into file named name in tempdir""" |
4534 | - fp = open("tempdir/%s" % (name,), "wb") |
4535 | - fp.write(self.tempdir_data[name]) |
4536 | - fp.close() |
4537 | - |
4538 | - # Make 0length file |
4539 | - write_file("0length") |
4540 | - os.chmod("tempdir/%s" % ("0length",), 0604) |
4541 | - |
4542 | - # Make regular file 130000 bytes in length |
4543 | - write_file("large") |
4544 | - |
4545 | - # Make hard linked files |
4546 | - write_file("hardlinked1") |
4547 | - os.link("tempdir/hardlinked1", "tempdir/hardlinked2") |
4548 | - |
4549 | - # Make a fifo |
4550 | - os.mkfifo("tempdir/fifo") |
4551 | - |
4552 | - # Make symlink |
4553 | - os.symlink("foobar", "tempdir/symlink") |
4554 | - |
4555 | - def make_temptar(self): |
4556 | - """Tar up tempdir, write to "temp2.tar" """ |
4557 | - try: |
4558 | - os.lstat("temp2.tar") |
4559 | - except OSError: |
4560 | - pass |
4561 | - else: |
4562 | - assert not os.system("rm temp2.tar") |
4563 | - |
4564 | - self.make_tempdir() |
4565 | - tf = tarfile.TarFile("temp2.tar", "w") |
4566 | - for filename in self.files_in_tempdir: |
4567 | - tf.add(filename, filename, 0) |
4568 | - tf.close() |
4569 | - |
4570 | - def test_tarfile_creation(self): |
4571 | - """Create directory, make tarfile, extract using gnutar, compare""" |
4572 | - self.make_temptar() |
4573 | - self.extract_and_compare_tarfile() |
4574 | - |
4575 | - def extract_and_compare_tarfile(self): |
4576 | - old_umask = os.umask(022) |
4577 | - os.system("rm -r tempdir") |
4578 | - assert not os.system("tar -xf temp2.tar") |
4579 | - |
4580 | - def compare_data(name): |
4581 | - """Assert data is what should be""" |
4582 | - fp = open("tempdir/" + name, "rb") |
4583 | - buf = fp.read() |
4584 | - fp.close() |
4585 | - assert buf == self.tempdir_data[name] |
4586 | - |
4587 | - s = os.lstat("tempdir") |
4588 | - assert stat.S_ISDIR(s.st_mode) |
4589 | - |
4590 | - for key in self.tempdir_data: compare_data(key) |
4591 | - |
4592 | - # Check to make sure permissions saved |
4593 | - s = os.lstat("tempdir/0length") |
4594 | - assert stat.S_IMODE(s.st_mode) == 0604, stat.S_IMODE(s.st_mode) |
4595 | - |
4596 | - s = os.lstat("tempdir/fifo") |
4597 | - assert stat.S_ISFIFO(s.st_mode) |
4598 | - |
4599 | - # Check to make sure hardlinked files still hardlinked |
4600 | - s1 = os.lstat("tempdir/hardlinked1") |
4601 | - s2 = os.lstat("tempdir/hardlinked2") |
4602 | - assert s1.st_ino == s2.st_ino |
4603 | - |
4604 | - # Check symlink |
4605 | - s = os.lstat("tempdir/symlink") |
4606 | - assert stat.S_ISLNK(s.st_mode) |
4607 | - |
4608 | - os.umask(old_umask) |
4609 | - |
4610 | -class Test_FObj(BaseTest): |
4611 | - """Test for read operations via file-object. |
4612 | - """ |
4613 | - |
4614 | - def _test_sparse(self): |
4615 | - """Test extraction of the sparse file. |
4616 | - """ |
4617 | - BLOCK = 4096 |
4618 | - for tarinfo in self.tar: |
4619 | - if tarinfo.issparse(): |
4620 | - f = self.tar.extractfile(tarinfo) |
4621 | - b = 0 |
4622 | - block = 0 |
4623 | - while 1: |
4624 | - buf = f.read(BLOCK) |
4625 | - if not buf: |
4626 | - break |
4627 | - block += 1 |
4628 | - self.assert_(BLOCK == len(buf)) |
4629 | - if not b: |
4630 | - self.assert_("\0" * BLOCK == buf, |
4631 | - "sparse block is broken") |
4632 | - else: |
4633 | - self.assert_("0123456789ABCDEF" * 256 == buf, |
4634 | - "sparse block is broken") |
4635 | - b = 1 - b |
4636 | - self.assert_(block == 24, "too few sparse blocks") |
4637 | - f.close() |
4638 | - |
4639 | - def _test_readlines(self): |
4640 | - """Test readlines() method of _FileObject. |
4641 | - """ |
4642 | - self.tar.extract("pep.txt", TEMPDIR) |
4643 | - lines1 = file(join(TEMPDIR, "pep.txt"), "r").readlines() |
4644 | - lines2 = self.tar.extractfile("pep.txt").readlines() |
4645 | - self.assert_(lines1 == lines2, "readline() does not work correctly") |
4646 | - |
4647 | - def _test_seek(self): |
4648 | - """Test seek() method of _FileObject, incl. random reading. |
4649 | - """ |
4650 | - self.tar.extract("pep.txt", TEMPDIR) |
4651 | - data = file(join(TEMPDIR, "pep.txt"), "rb").read() |
4652 | - |
4653 | - tarinfo = self.tar.getmember("pep.txt") |
4654 | - fobj = self.tar.extractfile(tarinfo) |
4655 | - |
4656 | - text = fobj.read() #@UnusedVariable |
4657 | - fobj.seek(0) |
4658 | - self.assert_(0 == fobj.tell(), |
4659 | - "seek() to file's start failed") |
4660 | - fobj.seek(4096, 0) |
4661 | - self.assert_(4096 == fobj.tell(), |
4662 | - "seek() to absolute position failed") |
4663 | - fobj.seek(-2048, 1) |
4664 | - self.assert_(2048 == fobj.tell(), |
4665 | - "seek() to negative relative position failed") |
4666 | - fobj.seek(2048, 1) |
4667 | - self.assert_(4096 == fobj.tell(), |
4668 | - "seek() to positive relative position failed") |
4669 | - s = fobj.read(10) |
4670 | - self.assert_(s == data[4096:4106], |
4671 | - "read() after seek failed") |
4672 | - fobj.seek(0, 2) |
4673 | - self.assert_(tarinfo.size == fobj.tell(), |
4674 | - "seek() to file's end failed") |
4675 | - self.assert_(fobj.read() == "", |
4676 | - "read() at file's end did not return empty string") |
4677 | - fobj.seek(-tarinfo.size, 2) |
4678 | - self.assert_(0 == fobj.tell(), |
4679 | - "relative seek() to file's start failed") |
4680 | - fobj.seek(1024) |
4681 | - s1 = fobj.readlines() |
4682 | - fobj.seek(1024) |
4683 | - s2 = fobj.readlines() |
4684 | - self.assert_(s1 == s2, |
4685 | - "readlines() after seek failed") |
4686 | - fobj.close() |
4687 | - |
4688 | -class FileLogger: |
4689 | - """Like a file but log requests""" |
4690 | - def __init__(self, infp): |
4691 | - self.infp = infp |
4692 | - def read(self, length): |
4693 | - #print "Reading ", length |
4694 | - return self.infp.read(length) |
4695 | - def seek(self, position): |
4696 | - #print "Seeking to ", position |
4697 | - return self.infp.seek(position) |
4698 | - def tell(self): |
4699 | - #print "Telling" |
4700 | - return self.infp.tell() |
4701 | - def close(self): |
4702 | - #print "Closing" |
4703 | - return self.infp.close() |
4704 | - |
4705 | + |
4706 | +class TarfileTest(unittest.TestCase): |
4707 | + def test_cached_ops(self): |
4708 | + self.assertTrue(tarfile.grp is cached_ops) |
4709 | + self.assertTrue(tarfile.pwd is cached_ops) |
4710 | |
4711 | if __name__ == "__main__": |
4712 | unittest.main() |
4713 | |
4714 | === modified file 'testing/tests/test_unicode.py' |
4715 | --- testing/tests/test_unicode.py 2013-12-27 06:39:00 +0000 |
4716 | +++ testing/tests/test_unicode.py 2014-04-16 20:51:42 +0000 |
4717 | @@ -29,13 +29,11 @@ |
4718 | if 'duplicity' in sys.modules: |
4719 | del(sys.modules["duplicity"]) |
4720 | |
4721 | - @patch('gettext.translation') |
4722 | + @patch('gettext.install') |
4723 | def test_module_install(self, gettext_mock): |
4724 | """Make sure we convert translations to unicode""" |
4725 | import duplicity |
4726 | - gettext_mock.assert_called_once_with('duplicity', fallback=True) |
4727 | - gettext_mock.return_value.install.assert_called_once_with(unicode=True) |
4728 | - assert ngettext is gettext_mock.return_value.ungettext |
4729 | + gettext_mock.assert_called_once_with('duplicity', unicode=True, names=['ngettext']) |
4730 | |
4731 | if __name__ == "__main__": |
4732 | unittest.main() |
4733 | |
4734 | === removed file 'testing/testtar.tar' |
4735 | Binary files testing/testtar.tar 2002-10-29 01:49:46 +0000 and testing/testtar.tar 1970-01-01 00:00:00 +0000 differ |