Merge lp:~kevinoid/duplicity/windows-port into lp:duplicity/0.6
- windows-port
- Merge into 0.6-series
Status: Rejected
Rejected by: Kenneth Loafman
Proposed branch: lp:~kevinoid/duplicity/windows-port
Merge into: lp:duplicity/0.6
Diff against target: 1552 lines (+668/-245), 18 files modified:
  dist/makedist (+58/-35)
  dist/setup.py (+9/-10)
  duplicity-bin (+96/-17)
  duplicity.1 (+5/-3)
  duplicity/GnuPGInterface.py (+215/-127)
  duplicity/backend.py (+14/-1)
  duplicity/backends/localbackend.py (+10/-1)
  duplicity/commandline.py (+14/-1)
  duplicity/compilec.py (+16/-3)
  duplicity/dup_temp.py (+9/-5)
  duplicity/globals.py (+48/-5)
  duplicity/manifest.py (+9/-3)
  duplicity/patchdir.py (+15/-0)
  duplicity/path.py (+102/-17)
  duplicity/selection.py (+21/-11)
  duplicity/tarfile.py (+6/-2)
  po/update-pot (+0/-4)
  po/update-pot.py (+21/-0)
To merge this branch: bzr merge lp:~kevinoid/duplicity/windows-port
Related bugs: (none listed)
Reviewer: duplicity-team (status: Pending)
Review via email: mp+39287@code.launchpad.net
Commit message
Description of the change
This branch includes changes to support running Duplicity natively on Windows, as requested in bug 451582. I have done my best to separate out each change into logical units for commits and provide a detailed explanation and rationale for each change in the commit message.
The current work is only intended to port the main functionality of Duplicity and the local backend. The other backends have not been tested (and several, particularly ssh, are known not to work on Windows).
Most of the commits should not change any of the functionality of Duplicity. However, you may wish to take particular notice of revisions 677 and 698, which do introduce functionality changes.
Note: Revision 682 added some portability improvements for the restart process and fixed bug 637556 in the process.
Kenneth Loafman (kenneth-loafman) wrote:
Unmerged revisions
- 703. By Kevin Locke <email address hidden>
Include updates to GnuPGInterface

These updates are from the current state of a branch of development that
I created to port GnuPGInterface to Windows, fix some bugs, and add
Python 3 support. The branch is available on GitHub at
<http://github.com/kevinoid/py-gnupg>. These changes are taken from
commit 91667c.

I have assurances from the original author of GnuPGInterface that the
changes will be merged into his sources with minimal changes (if any)
once he has time, and that it is safe to merge these changes into other
projects that need them without introducing a significant maintenance
burden.

Note: This version of GnuPGInterface now uses subprocess rather than
"raw" fork/exec. The threaded waitpid was removed due to threading
issues with subprocess (particularly Issue 1731717). It should be
largely unnecessary, as any zombie child processes are reaped when a new
subprocess is started. If more immediate reaping is required, the
threaded wait can easily be re-added (and less easily be made thread
safe with subprocess).

Signed-off-by: Kevin Locke <email address hidden>
- 702. By Kevin Locke <email address hidden>
Unwrap temporary file instances

tempfile.TemporaryFile returns an instance of a wrapper class on
non-POSIX, non-Cygwin systems, yet the librsync code requires a
standard file object. On Windows, the wrapper is unnecessary since the
file is opened with O_TEMPORARY and is deleted automatically on close.
On other systems the wrapper may be necessary to delete the file, so we
error out (in a way that future developers on those systems should be
able to find and understand).

Note: This is a bit of a hack and relies on undocumented properties of
the object returned from tempfile.TemporaryFile. However, it avoids
the need to track and delete the temporary file ourselves.

Signed-off-by: Kevin Locke <email address hidden>
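A minimal sketch of the unwrapping idea, assuming the wrapper exposes the underlying file object via a `file` attribute (which is exactly the undocumented detail the note warns about):

```python
import sys
import tempfile

def raw_tempfile():
    # librsync needs a plain file object, so unwrap the wrapper class
    # that tempfile.TemporaryFile returns on some platforms.
    tmp = tempfile.TemporaryFile()
    if not hasattr(tmp, "file"):
        return tmp  # already a plain file object (POSIX, Cygwin)
    if sys.platform == "win32":
        # Opened with O_TEMPORARY: the OS deletes it on close, so the
        # wrapper's cleanup is unnecessary.
        return tmp.file
    # Elsewhere the wrapper may be needed to delete the file; error out
    # loudly so future developers on those systems can find this spot.
    raise NotImplementedError("cannot safely unwrap TemporaryFile here")
```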
- 701. By Kevin Locke <email address hidden>
Make the manifest format more interoperable

At the cost of losing a bit of information, use the POSIX path
and EOL conventions as part of the file format for manifest files.
This way, backups created on one platform can be restored on another
with a different path and/or EOL convention.

To accomplish this, change the file mode to binary and convert native
paths to POSIX paths during manifest creation, and back during load.

Note: During load, the drive specifier for the target directory (if
there is one) is added to the POSIX-converted path. This allows
restoring cross-platform files more easily, at the cost of losing a
warning about restoring files to a different drive than the original
backup. If this is unacceptable, the drive could be made the first
component of the POSIX path, or the cross-platform interoperability
could be dropped.

Signed-off-by: Kevin Locke <email address hidden>
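The path conversion described above can be sketched with the standard library; these helper names are illustrative, not the names used in the branch:

```python
import os

def to_manifest_path(native_path):
    # Store the POSIX form in the manifest: drop the drive specifier
    # (if any) and use "/" as the separator.
    drive, rest = os.path.splitdrive(native_path)
    return rest.replace(os.sep, "/")

def from_manifest_path(posix_path, target_drive=""):
    # Convert back to native form, re-attaching the drive specifier of
    # the target directory (always empty on POSIX systems).
    return target_drive + posix_path.replace("/", os.sep)
```

On POSIX both functions are the identity; on Windows they map `C:\foo\bar` to `/foo/bar` and back, which is where the drive information (and hence the cross-drive warning) is lost.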
- 700. By Kevin Locke <email address hidden>
Prevent Windows paths from being parsed as URLs

Windows paths which begin with a drive specifier are parsed as having a
scheme which is the drive letter. To avoid these paths being treated as
URLs, check whether the potential url_string looks like a Windows path.

Note: The regex is not perfect, since it is possible that the "c" URL
scheme would not require slashes after the protocol. So limit the
checks to Windows platforms, where paths are more likely to have this
form.

Signed-off-by: Kevin Locke <email address hidden>
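A check along these lines could look like the following (an illustrative sketch, not the branch's exact regex):

```python
import re

# A single letter followed by ":" and a slash or backslash is far more
# likely to be a Windows drive-letter path than a one-letter URL scheme.
_windows_path_re = re.compile(r"^[A-Za-z]:[/\\]")

def looks_like_windows_path(url_string):
    return _windows_path_re.match(url_string) is not None
```

As the note says, a hypothetical `c:foo` URL with no slash would slip past this, which is why the branch only applies the check on Windows.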
- 699. By Kevin Locke <email address hidden>
Make file:// URL parsing more portable

Create a path.from_url_path utility function which takes the path
component of a file URL and converts it into the native path
representation of the platform.

Note: RFC 1738 specifies that anything after file:// and before the
next slash is a hostname (and therefore, implicitly, that all paths are
absolute). However, to maintain backwards compatibility, allow relative
paths to be specified in this manner.

Signed-off-by: Kevin Locke <email address hidden>
- 698. By Kevin Locke <email address hidden>
Ensure prefix ends with empty path component

WARNING: Functionality change

In order to prevent the given prefix from matching directories which
begin with the last component of the prefix, ensure that the prefix
ends with an empty path component (causing a trailing slash on POSIX
systems). Otherwise, the prefix /foo/b would also include /foo/bar.

Note: If this behavior really was intended, then wherever the prefix is
removed from a path, the result should not start with a directory
separator (since it would result in a "/" first element in the return
value from path.split_all).

Signed-off-by: Kevin Locke <email address hidden>
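The /foo/b versus /foo/bar problem and the fix can be shown concretely (an illustrative sketch; the helper name is hypothetical):

```python
import os

def ensure_trailing_component(prefix):
    # Joining with "" appends an empty final path component, which
    # renders as a trailing separator on POSIX systems.
    if prefix and not prefix.endswith(os.sep):
        prefix = os.path.join(prefix, "")
    return prefix

# Without the trailing component, /foo/b is a string prefix of
# /foo/bar, so a naive startswith() check matches the wrong directory.
```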
- 697. By Kevin Locke <email address hidden>
Replace path manipulation based on "/"

Make use of path.split_all and os.path.join where paths were
manipulated based on "/".

In selection, change the path import to keep the path namespace, but
alias path.Path to Path to minimize the diff. The bare uses of
split_all seemed unnecessarily confusing, and the other members of path
are not used directly, so there is no real need for the * import.

Note: Removed an unnecessary use of filter to remove blanks, as this is
handled implicitly by os.path.split and therefore by path.split_all.

Note 2: Be careful that os.path.join must have at least one argument,
while "/".join() also works on a 0-length array.

Signed-off-by: Kevin Locke <email address hidden>
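The arity difference in Note 2 means rejoining split components needs a guard; a small sketch (the helper name is hypothetical):

```python
import os

def join_all(components):
    # os.path.join requires at least one argument, unlike "/".join(),
    # which happily accepts an empty sequence; guard the empty case.
    return os.path.join(*components) if components else ""
```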
- 696. By Kevin Locke <email address hidden>
Create split_all utility function in path

This function is intended to replace the pathstr.split("/") idiom used
in many places, and provides a convenient way to break an arbitrary
path into its constituent components.

Note: Although it would be possible to split on os.altsep and os.sep,
which would work in many cases and run a bit faster, doing so doesn't
handle more complex path systems and is likely to fail in corner cases
even on less complex path systems.

Signed-off-by: Kevin Locke <email address hidden>
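A function like this is commonly built by iterating os.path.split rather than splitting on separator characters; a minimal sketch of the idea (not the branch's exact code):

```python
import os

def split_all(path):
    # Repeatedly peel off the last component with os.path.split until
    # the head stops changing (the root, a drive, or an empty relative
    # head). Blank components from doubled separators vanish because
    # os.path.split never returns them.
    parts = []
    while True:
        head, tail = os.path.split(path)
        if tail:
            parts.insert(0, tail)
        if head == path:
            if head:
                parts.insert(0, head)
            return parts
        path = head
```

Delegating to os.path.split is what makes this portable: each platform's path module already knows its own separator and root conventions.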
- 695. By Kevin Locke <email address hidden>
Support systems without pwd and/or grp in tarfile

Inside tarfile this case is normally checked by the calling code, but
since uid2uname and gid2gname are called by Path without checking for
pwd and grp (and we want to support this case), remove the assertion
that these modules are defined. When they are not, throw KeyError,
indicating the ID could not be mapped.

Signed-off-by: Kevin Locke <email address hidden>
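The pattern reads roughly as follows (a sketch of the approach, not the tarfile diff itself):

```python
try:
    import pwd
except ImportError:
    pwd = None  # e.g. native Windows, where pwd does not exist

def uid2uname(uid):
    # Raise KeyError both when the uid has no passwd entry and when the
    # pwd module itself is unavailable, so callers handle one exception.
    if pwd is None:
        raise KeyError(uid)
    return pwd.getpwuid(uid).pw_name
```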
- 694. By Kevin Locke <email address hidden>
Support more path systems in glob->regex conversion

- Match against the system path separators, rather than "/"
- Update the duplicity.1 man page to indicate that the metacharacters
  match against the directory separator, rather than "/"

Note: This won't be portable to systems with more complex path schemes
(like VMS), but that case is not trivial, so it is worth waiting for a
need to arise.

Signed-off-by: Kevin Locke <email address hidden>
Preview Diff
1 | === modified file 'dist/makedist' | |||
2 | --- dist/makedist 2010-07-22 19:15:11 +0000 | |||
3 | +++ dist/makedist 2010-10-25 15:49:45 +0000 | |||
4 | @@ -20,14 +20,14 @@ | |||
5 | 20 | # along with duplicity; if not, write to the Free Software Foundation, | 20 | # along with duplicity; if not, write to the Free Software Foundation, |
6 | 21 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 21 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
7 | 22 | 22 | ||
9 | 23 | import os, re, shutil, time, sys | 23 | import os, re, shutil, subprocess, tarfile, traceback, time, sys, zipfile |
10 | 24 | 24 | ||
11 | 25 | SourceDir = "duplicity" | 25 | SourceDir = "duplicity" |
12 | 26 | DistDir = "dist" | 26 | DistDir = "dist" |
13 | 27 | 27 | ||
14 | 28 | # Various details about the files must also be specified by the rpm | 28 | # Various details about the files must also be specified by the rpm |
15 | 29 | # spec template. | 29 | # spec template. |
17 | 30 | spec_template = "dist/duplicity.spec.template" | 30 | spec_template = os.path.join("dist", "duplicity.spec.template") |
18 | 31 | 31 | ||
19 | 32 | def VersionedCopy(source, dest): | 32 | def VersionedCopy(source, dest): |
20 | 33 | """ | 33 | """ |
21 | @@ -48,14 +48,13 @@ | |||
22 | 48 | fout.write(outbuf) | 48 | fout.write(outbuf) |
23 | 49 | assert not fout.close() | 49 | assert not fout.close() |
24 | 50 | 50 | ||
26 | 51 | def MakeTar(): | 51 | def MakeArchives(): |
27 | 52 | """Create duplicity tar file""" | 52 | """Create duplicity tar file""" |
28 | 53 | tardir = "duplicity-%s" % Version | 53 | tardir = "duplicity-%s" % Version |
34 | 54 | tarfile = "duplicity-%s.tar.gz" % Version | 54 | tarname = "duplicity-%s.tar.gz" % Version |
35 | 55 | try: | 55 | zipname = "duplicity-%s.zip" % Version |
36 | 56 | os.lstat(tardir) | 56 | if os.path.exists(tardir): |
37 | 57 | os.system("rm -rf " + tardir) | 57 | shutil.rmtree(tardir) |
33 | 58 | except OSError: pass | ||
38 | 59 | 58 | ||
39 | 60 | os.mkdir(tardir) | 59 | os.mkdir(tardir) |
40 | 61 | for filename in [ | 60 | for filename in [ |
41 | @@ -66,10 +65,10 @@ | |||
42 | 66 | "LOG-README", | 65 | "LOG-README", |
43 | 67 | "README", | 66 | "README", |
44 | 68 | "tarfile-LICENSE", | 67 | "tarfile-LICENSE", |
47 | 69 | SourceDir + "/_librsyncmodule.c", | 68 | os.path.join(SourceDir, "_librsyncmodule.c"), |
48 | 70 | DistDir + "/setup.py", | 69 | os.path.join(DistDir, "setup.py"), |
49 | 71 | ]: | 70 | ]: |
51 | 72 | assert not os.system("cp %s %s" % (filename, tardir)), filename | 71 | shutil.copy(filename, tardir) |
52 | 73 | 72 | ||
53 | 74 | os.mkdir(os.path.join(tardir, "src")) | 73 | os.mkdir(os.path.join(tardir, "src")) |
54 | 75 | for filename in [ | 74 | for filename in [ |
55 | @@ -104,39 +103,46 @@ | |||
56 | 104 | "urlparse_2_5.py", | 103 | "urlparse_2_5.py", |
57 | 105 | "util.py", | 104 | "util.py", |
58 | 106 | ]: | 105 | ]: |
60 | 107 | assert not os.system("cp %s/%s %s/src" % (SourceDir, filename, tardir)), filename | 106 | shutil.copy(os.path.join(SourceDir, filename), |
61 | 107 | os.path.join(tardir, "src")) | ||
62 | 108 | 108 | ||
63 | 109 | os.mkdir(os.path.join(tardir, "src", "backends")) | 109 | os.mkdir(os.path.join(tardir, "src", "backends")) |
64 | 110 | for filename in [ | 110 | for filename in [ |
77 | 111 | "backends/botobackend.py", | 111 | "botobackend.py", |
78 | 112 | "backends/cloudfilesbackend.py", | 112 | "cloudfilesbackend.py", |
79 | 113 | "backends/ftpbackend.py", | 113 | "ftpbackend.py", |
80 | 114 | "backends/giobackend.py", | 114 | "giobackend.py", |
81 | 115 | "backends/hsibackend.py", | 115 | "hsibackend.py", |
82 | 116 | "backends/imapbackend.py", | 116 | "imapbackend.py", |
83 | 117 | "backends/__init__.py", | 117 | "__init__.py", |
84 | 118 | "backends/localbackend.py", | 118 | "localbackend.py", |
85 | 119 | "backends/rsyncbackend.py", | 119 | "rsyncbackend.py", |
86 | 120 | "backends/sshbackend.py", | 120 | "sshbackend.py", |
87 | 121 | "backends/tahoebackend.py", | 121 | "tahoebackend.py", |
88 | 122 | "backends/webdavbackend.py", | 122 | "webdavbackend.py", |
89 | 123 | ]: | 123 | ]: |
92 | 124 | assert not os.system("cp %s/%s %s/src/backends" % | 124 | shutil.copy(os.path.join(SourceDir, "backends", filename), |
93 | 125 | (SourceDir, filename, tardir)), filename | 125 | os.path.join(tardir, "src", "backends")) |
94 | 126 | |||
95 | 127 | if subprocess.call([sys.executable, "update-pot.py"], cwd="po") != 0: | ||
96 | 128 | sys.stderr.write("update-pot.py failed, translation files not updated!\n") | ||
97 | 126 | 129 | ||
98 | 127 | os.mkdir(os.path.join(tardir, "po")) | 130 | os.mkdir(os.path.join(tardir, "po")) |
99 | 128 | assert not os.system("cd po && ./update-pot") | ||
100 | 129 | for filename in [ | 131 | for filename in [ |
101 | 130 | "duplicity.pot", | 132 | "duplicity.pot", |
102 | 131 | ]: | 133 | ]: |
105 | 132 | assert not os.system("cp po/%s %s/po" % (filename, tardir)), filename | 134 | shutil.copy(os.path.join("po", filename), os.path.join(tardir, "po")) |
106 | 133 | linguas = open('po/LINGUAS') | 135 | linguas = open(os.path.join("po", "LINGUAS")) |
107 | 134 | for line in linguas: | 136 | for line in linguas: |
108 | 135 | langs = line.split() | 137 | langs = line.split() |
109 | 136 | for lang in langs: | 138 | for lang in langs: |
110 | 137 | assert not os.mkdir(os.path.join(tardir, "po", lang)), lang | 139 | assert not os.mkdir(os.path.join(tardir, "po", lang)), lang |
113 | 138 | assert not os.system("cp po/%s.po %s/po/%s" % (lang, tardir, lang)), lang | 140 | shutil.copy(os.path.join("po", lang + ".po"), |
114 | 139 | assert not os.system("msgfmt po/%s.po -o %s/po/%s/duplicity.mo" % (lang, tardir, lang)), lang | 141 | os.path.join(tardir, "po", lang)) |
115 | 142 | if os.system("msgfmt %s -o %s" % | ||
116 | 143 | (os.path.join("po", lang + ".po"), | ||
117 | 144 | os.path.join(tardir, "po", lang, "duplicity.mo"))) != 0: | ||
118 | 145 | sys.stderr.write("Translation for " + lang + " NOT updated!\n") | ||
119 | 140 | linguas.close() | 146 | linguas.close() |
120 | 141 | 147 | ||
121 | 142 | VersionedCopy(os.path.join(SourceDir, "globals.py"), | 148 | VersionedCopy(os.path.join(SourceDir, "globals.py"), |
122 | @@ -154,9 +160,26 @@ | |||
123 | 154 | 160 | ||
124 | 155 | os.chmod(os.path.join(tardir, "setup.py"), 0755) | 161 | os.chmod(os.path.join(tardir, "setup.py"), 0755) |
125 | 156 | os.chmod(os.path.join(tardir, "rdiffdir"), 0644) | 162 | os.chmod(os.path.join(tardir, "rdiffdir"), 0644) |
127 | 157 | os.system("tar -czf %s %s" % (tarfile, tardir)) | 163 | |
128 | 164 | with tarfile.open(tarname, "w:gz") as tar: | ||
129 | 165 | tar.add(tardir) | ||
130 | 166 | |||
131 | 167 | def add_dir_to_zip(dir, zip, clen=None): | ||
132 | 168 | if clen == None: | ||
133 | 169 | clen = len(dir) | ||
134 | 170 | |||
135 | 171 | for entry in os.listdir(dir): | ||
136 | 172 | entrypath = os.path.join(dir, entry) | ||
137 | 173 | if os.path.isdir(entrypath): | ||
138 | 174 | add_dir_to_zip(entrypath, zip, clen) | ||
139 | 175 | else: | ||
140 | 176 | zip.write(entrypath, entrypath[clen:]) | ||
141 | 177 | |||
142 | 178 | with zipfile.ZipFile(zipname, "w") as zip: | ||
143 | 179 | add_dir_to_zip(tardir, zip) | ||
144 | 180 | |||
145 | 158 | shutil.rmtree(tardir) | 181 | shutil.rmtree(tardir) |
147 | 159 | return tarfile | 182 | return (tarname, zipname) |
148 | 160 | 183 | ||
149 | 161 | def MakeSpecFile(): | 184 | def MakeSpecFile(): |
150 | 162 | """Create spec file using spec template""" | 185 | """Create spec file using spec template""" |
151 | @@ -166,8 +189,8 @@ | |||
152 | 166 | 189 | ||
153 | 167 | def Main(): | 190 | def Main(): |
154 | 168 | print "Processing version " + Version | 191 | print "Processing version " + Version |
157 | 169 | tarfile = MakeTar() | 192 | archives = MakeArchives() |
158 | 170 | print "Made tar file " + tarfile | 193 | print "Made archives: %s" % (archives,) |
159 | 171 | specfile = MakeSpecFile() | 194 | specfile = MakeSpecFile() |
160 | 172 | print "Made specfile " + specfile | 195 | print "Made specfile " + specfile |
161 | 173 | 196 | ||
162 | 174 | 197 | ||
163 | === modified file 'dist/setup.py' | |||
164 | --- dist/setup.py 2010-10-06 14:37:22 +0000 | |||
165 | +++ dist/setup.py 2010-10-25 15:49:45 +0000 | |||
166 | @@ -31,16 +31,15 @@ | |||
167 | 31 | 31 | ||
168 | 32 | incdir_list = libdir_list = None | 32 | incdir_list = libdir_list = None |
169 | 33 | 33 | ||
180 | 34 | if os.name == 'posix': | 34 | LIBRSYNC_DIR = os.environ.get('LIBRSYNC_DIR', '') |
181 | 35 | LIBRSYNC_DIR = os.environ.get('LIBRSYNC_DIR', '') | 35 | args = sys.argv[:] |
182 | 36 | args = sys.argv[:] | 36 | for arg in args: |
183 | 37 | for arg in args: | 37 | if arg.startswith('--librsync-dir='): |
184 | 38 | if arg.startswith('--librsync-dir='): | 38 | LIBRSYNC_DIR = arg.split('=')[1] |
185 | 39 | LIBRSYNC_DIR = arg.split('=')[1] | 39 | sys.argv.remove(arg) |
186 | 40 | sys.argv.remove(arg) | 40 | if LIBRSYNC_DIR: |
187 | 41 | if LIBRSYNC_DIR: | 41 | incdir_list = [os.path.join(LIBRSYNC_DIR, 'include')] |
188 | 42 | incdir_list = [os.path.join(LIBRSYNC_DIR, 'include')] | 42 | libdir_list = [os.path.join(LIBRSYNC_DIR, 'lib')] |
179 | 43 | libdir_list = [os.path.join(LIBRSYNC_DIR, 'lib')] | ||
189 | 44 | 43 | ||
190 | 45 | data_files = [('share/man/man1', | 44 | data_files = [('share/man/man1', |
191 | 46 | ['duplicity.1', | 45 | ['duplicity.1', |
192 | 47 | 46 | ||
193 | === modified file 'duplicity-bin' | |||
194 | --- duplicity-bin 2010-08-26 13:01:10 +0000 | |||
195 | +++ duplicity-bin 2010-10-25 15:49:45 +0000 | |||
196 | @@ -28,11 +28,24 @@ | |||
197 | 28 | # any suggestions. | 28 | # any suggestions. |
198 | 29 | 29 | ||
199 | 30 | import getpass, gzip, os, sys, time, types | 30 | import getpass, gzip, os, sys, time, types |
201 | 31 | import traceback, platform, statvfs, resource, re | 31 | import traceback, platform, statvfs, re |
202 | 32 | 32 | ||
203 | 33 | import gettext | 33 | import gettext |
204 | 34 | gettext.install('duplicity') | 34 | gettext.install('duplicity') |
205 | 35 | 35 | ||
206 | 36 | try: | ||
207 | 37 | import resource | ||
208 | 38 | have_resource = True | ||
209 | 39 | except ImportError: | ||
210 | 40 | have_resource = False | ||
211 | 41 | |||
212 | 42 | if sys.platform == "win32": | ||
213 | 43 | import ctypes | ||
214 | 44 | import ctypes.util | ||
215 | 45 | # Not to be confused with Python's msvcrt module which wraps part of msvcrt | ||
216 | 46 | # Note: Load same msvcrt as Python to avoid cross-CRT problems | ||
217 | 47 | ctmsvcrt = ctypes.cdll[ctypes.util.find_msvcrt()] | ||
218 | 48 | |||
219 | 36 | from duplicity import log | 49 | from duplicity import log |
220 | 37 | log.setup() | 50 | log.setup() |
221 | 38 | 51 | ||
222 | @@ -554,7 +567,10 @@ | |||
223 | 554 | @param col_stats: collection status | 567 | @param col_stats: collection status |
224 | 555 | """ | 568 | """ |
225 | 556 | if globals.restore_dir: | 569 | if globals.restore_dir: |
227 | 557 | index = tuple(globals.restore_dir.split("/")) | 570 | index = path.split_all(globals.restore_dir) |
228 | 571 | if index[-1] == "": | ||
229 | 572 | del index[-1] | ||
230 | 573 | index = tuple(index) | ||
231 | 558 | else: | 574 | else: |
232 | 559 | index = () | 575 | index = () |
233 | 560 | time = globals.restore_time or dup_time.curtime | 576 | time = globals.restore_time or dup_time.curtime |
234 | @@ -994,16 +1010,13 @@ | |||
235 | 994 | # First check disk space in temp area. | 1010 | # First check disk space in temp area. |
236 | 995 | tempfile, tempname = tempdir.default().mkstemp() | 1011 | tempfile, tempname = tempdir.default().mkstemp() |
237 | 996 | os.close(tempfile) | 1012 | os.close(tempfile) |
238 | 1013 | |||
239 | 997 | # strip off the temp dir and file | 1014 | # strip off the temp dir and file |
246 | 998 | tempfs = os.path.sep.join(tempname.split(os.path.sep)[:-2]) | 1015 | tempfs = os.path.split(os.path.split(tempname)[0])[0] |
247 | 999 | try: | 1016 | |
242 | 1000 | stats = os.statvfs(tempfs) | ||
243 | 1001 | except: | ||
244 | 1002 | log.FatalError(_("Unable to get free space on temp."), | ||
245 | 1003 | log.ErrorCode.get_freespace_failed) | ||
248 | 1004 | # Calculate space we need for at least 2 volumes of full or inc | 1017 | # Calculate space we need for at least 2 volumes of full or inc |
249 | 1005 | # plus about 30% of one volume for the signature files. | 1018 | # plus about 30% of one volume for the signature files. |
251 | 1006 | freespace = stats[statvfs.F_FRSIZE] * stats[statvfs.F_BAVAIL] | 1019 | freespace = get_free_space(tempfs) |
252 | 1007 | needspace = (((globals.async_concurrency + 1) * globals.volsize) | 1020 | needspace = (((globals.async_concurrency + 1) * globals.volsize) |
253 | 1008 | + int(0.30 * globals.volsize)) | 1021 | + int(0.30 * globals.volsize)) |
254 | 1009 | if freespace < needspace: | 1022 | if freespace < needspace: |
255 | @@ -1015,16 +1028,82 @@ | |||
256 | 1015 | 1028 | ||
257 | 1016 | # Some environments like Cygwin run with an artificially | 1029 | # Some environments like Cygwin run with an artificially |
258 | 1017 | # low value for max open files. Check for safe number. | 1030 | # low value for max open files. Check for safe number. |
259 | 1031 | check_resource_limits() | ||
260 | 1032 | |||
261 | 1033 | |||
262 | 1034 | def check_resource_limits(): | ||
263 | 1035 | """ | ||
264 | 1036 | Check for sufficient resource limits: | ||
265 | 1037 | - enough max open files | ||
266 | 1038 | Attempt to increase limits to sufficient values if insufficient | ||
267 | 1039 | Put out fatal error if not sufficient to run | ||
268 | 1040 | |||
269 | 1041 | Requires the resource module | ||
270 | 1042 | |||
271 | 1043 | @rtype: void | ||
272 | 1044 | @return: void | ||
273 | 1045 | """ | ||
274 | 1046 | if have_resource: | ||
275 | 1018 | try: | 1047 | try: |
276 | 1019 | soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE) | 1048 | soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE) |
277 | 1020 | except resource.error: | 1049 | except resource.error: |
278 | 1021 | log.FatalError(_("Unable to get max open files."), | 1050 | log.FatalError(_("Unable to get max open files."), |
279 | 1022 | log.ErrorCode.get_ulimit_failed) | 1051 | log.ErrorCode.get_ulimit_failed) |
285 | 1023 | maxopen = min([l for l in (soft, hard) if l > -1]) | 1052 | |
286 | 1024 | if maxopen < 1024: | 1053 | if soft > -1 and soft < 1024 and soft < hard: |
287 | 1025 | log.FatalError(_("Max open files of %s is too low, should be >= 1024.\n" | 1054 | try: |
288 | 1026 | "Use 'ulimit -n 1024' or higher to correct.\n") % (maxopen,), | 1055 | newsoft = min(1024, hard) |
289 | 1027 | log.ErrorCode.maxopen_too_low) | 1056 | resource.setrlimit(resource.RLIMIT_NOFILE, (newsoft, hard)) |
290 | 1057 | soft = newsoft | ||
291 | 1058 | except resource.error: | ||
292 | 1059 | pass | ||
293 | 1060 | elif sys.platform == "win32": | ||
294 | 1061 | # 2048 from http://msdn.microsoft.com/en-us/library/6e3b887c.aspx | ||
295 | 1062 | soft, hard = ctmsvcrt._getmaxstdio(), 2048 | ||
296 | 1063 | |||
297 | 1064 | if soft < 1024: | ||
298 | 1065 | newsoft = ctmsvcrt._setmaxstdio(1024) | ||
299 | 1066 | if newsoft > -1: | ||
300 | 1067 | soft = newsoft | ||
301 | 1068 | else: | ||
302 | 1069 | log.FatalError(_("Unable to get max open files."), | ||
303 | 1070 | log.ErrorCode.get_ulimit_failed) | ||
304 | 1071 | |||
305 | 1072 | maxopen = min([l for l in (soft, hard) if l > -1]) | ||
306 | 1073 | if maxopen < 1024: | ||
307 | 1074 | log.FatalError(_("Max open files of %s is too low, should be >= 1024.\n" | ||
308 | 1075 | "Use 'ulimit -n 1024' or higher to correct.\n") % (maxopen,), | ||
309 | 1076 | log.ErrorCode.maxopen_too_low) | ||
310 | 1077 | |||
311 | 1078 | |||
312 | 1079 | def get_free_space(dir): | ||
313 | 1080 | """ | ||
314 | 1081 | Get the free space available in a given directory | ||
315 | 1082 | |||
316 | 1083 | @type dir: string | ||
317 | 1084 | @param dir: directory in which to measure free space | ||
318 | 1085 | |||
319 | 1086 | @rtype: int | ||
320 | 1087 | @return: amount of free space on the filesystem containing dir (in bytes) | ||
321 | 1088 | """ | ||
322 | 1089 | if hasattr(os, "statvfs"): | ||
323 | 1090 | try: | ||
324 | 1091 | stats = os.statvfs(dir) | ||
325 | 1092 | except: | ||
326 | 1093 | log.FatalError(_("Unable to get free space on temp."), | ||
327 | 1094 | log.ErrorCode.get_freespace_failed) | ||
328 | 1095 | |||
329 | 1096 | return stats[statvfs.F_FRSIZE] * stats[statvfs.F_BAVAIL] | ||
330 | 1097 | elif sys.platform == "win32": | ||
331 | 1098 | freespaceull = ctypes.c_ulonglong(0) | ||
332 | 1099 | ctypes.windll.kernel32.GetDiskFreeSpaceExW(ctypes.c_wchar_p(dir), | ||
333 | 1100 | None, None, ctypes.pointer(freespaceull)) | ||
334 | 1101 | |||
335 | 1102 | return freespaceull.value | ||
336 | 1103 | else: | ||
337 | 1104 | log.FatalError(_("Unable to get free space on temp."), | ||
338 | 1105 | log.ErrorCode.get_freespace_failed) | ||
339 | 1106 | |||
340 | 1028 | 1107 | ||
341 | 1029 | def log_startup_parms(verbosity=log.INFO): | 1108 | def log_startup_parms(verbosity=log.INFO): |
342 | 1030 | """ | 1109 | """ |
343 | @@ -1071,7 +1150,7 @@ | |||
344 | 1071 | log.Notice(_("RESTART: The first volume failed to upload before termination.\n" | 1150 | log.Notice(_("RESTART: The first volume failed to upload before termination.\n" |
345 | 1072 | " Restart is impossible...starting backup from beginning.")) | 1151 | " Restart is impossible...starting backup from beginning.")) |
346 | 1073 | self.last_backup.delete() | 1152 | self.last_backup.delete() |
348 | 1074 | os.execve(sys.argv[0], sys.argv[1:], os.environ) | 1153 | self.execv(sys.executable, sys.argv) |
349 | 1075 | elif mf_len - self.start_vol > 0: | 1154 | elif mf_len - self.start_vol > 0: |
350 | 1076 | # upload of N vols failed, fix manifest and restart | 1155 | # upload of N vols failed, fix manifest and restart |
351 | 1077 | log.Notice(_("RESTART: Volumes %d to %d failed to upload before termination.\n" | 1156 | log.Notice(_("RESTART: Volumes %d to %d failed to upload before termination.\n" |
352 | @@ -1087,7 +1166,7 @@ | |||
353 | 1087 | " backup then restart the backup from the beginning.") % | 1166 | " backup then restart the backup from the beginning.") % |
354 | 1088 | (mf_len, self.start_vol)) | 1167 | (mf_len, self.start_vol)) |
355 | 1089 | self.last_backup.delete() | 1168 | self.last_backup.delete() |
357 | 1090 | os.execve(sys.argv[0], sys.argv[1:], os.environ) | 1169 | self.execv(sys.executable, sys.argv) |
358 | 1091 | 1170 | ||
359 | 1092 | 1171 | ||
360 | 1093 | def setLastSaved(self, mf): | 1172 | def setLastSaved(self, mf): |
361 | @@ -1112,7 +1191,7 @@ | |||
362 | 1112 | 1191 | ||
363 | 1113 | # if python is run setuid, it's only partway set, | 1192 | # if python is run setuid, it's only partway set, |
364 | 1114 | # so make sure to run with euid/egid of root | 1193 | # so make sure to run with euid/egid of root |
366 | 1115 | if os.geteuid() == 0: | 1194 | if hasattr(os, "geteuid") and os.geteuid() == 0: |
367 | 1116 | # make sure uid/gid match euid/egid | 1195 | # make sure uid/gid match euid/egid |
368 | 1117 | os.setuid(os.geteuid()) | 1196 | os.setuid(os.geteuid()) |
369 | 1118 | os.setgid(os.getegid()) | 1197 | os.setgid(os.getegid()) |
370 | 1119 | 1198 | ||
371 | === modified file 'duplicity.1' | |||
372 | --- duplicity.1 2010-08-26 14:11:14 +0000 | |||
373 | +++ duplicity.1 2010-10-25 15:49:45 +0000 | |||
374 | @@ -871,14 +871,16 @@ | |||
375 | 871 | .BR [...] . | 871 | .BR [...] . |
376 | 872 | As in a normal shell, | 872 | As in a normal shell, |
377 | 873 | .B * | 873 | .B * |
379 | 874 | can be expanded to any string of characters not containing "/", | 874 | can be expanded to any string of characters not containing a directory |
380 | 875 | separator, | ||
381 | 875 | .B ? | 876 | .B ? |
383 | 876 | expands to any character except "/", and | 877 | expands to any character except a directory separator, and |
384 | 877 | .B [...] | 878 | .B [...] |
385 | 878 | expands to a single character of those characters specified (ranges | 879 | expands to a single character of those characters specified (ranges |
386 | 879 | are acceptable). The new special pattern, | 880 | are acceptable). The new special pattern, |
387 | 880 | .BR ** , | 881 | .BR ** , |
389 | 881 | expands to any string of characters whether or not it contains "/". | 882 | expands to any string of characters whether or not it contains a |
390 | 883 | directory separator. | ||
391 | 882 | Furthermore, if the pattern starts with "ignorecase:" (case | 884 | Furthermore, if the pattern starts with "ignorecase:" (case |
392 | 883 | insensitive), then this prefix will be removed and any character in | 885 | insensitive), then this prefix will be removed and any character in |
393 | 884 | the string can be replaced with an upper- or lowercase version of | 886 | the string can be replaced with an upper- or lowercase version of |
394 | 885 | 887 | ||
395 | === modified file 'duplicity/GnuPGInterface.py' | |||
--- duplicity/GnuPGInterface.py 2010-07-22 19:15:11 +0000
+++ duplicity/GnuPGInterface.py 2010-10-25 15:49:45 +0000
@@ -220,42 +220,55 @@
 or see http://www.gnu.org/copyleft/lesser.html
 """
 
+import errno
 import os
+import subprocess
 import sys
-import fcntl
 
-from duplicity import log
-
-try:
-    import threading
-except ImportError:
-    import dummy_threading #@UnusedImport
-    log.Warn("Threading not available -- zombie processes may appear")
+if sys.platform == "win32":
+    # Required windows-only imports
+    import msvcrt
+    import _subprocess
+
+# Define next function for Python pre-2.6
+try:
+    next
+except NameError:
+    def next(itr):
+        return itr.next()
+
+try:
+    import fcntl
+except ImportError:
+    # import success/failure is checked before use
+    pass
 
 __author__ = "Frank J. Tobin, ftobin@neverending.org"
 __version__ = "0.3.2"
-__revision__ = "$Id: GnuPGInterface.py,v 1.6 2009/06/06 17:35:19 loafman Exp $"
+__revision__ = "$Id$"
 
 # "standard" filehandles attached to processes
 _stds = [ 'stdin', 'stdout', 'stderr' ]
 
 # the permissions each type of fh needs to be opened with
-_fd_modes = { 'stdin': 'w',
-              'stdout': 'r',
-              'stderr': 'r',
-              'passphrase': 'w',
-              'command': 'w',
-              'logger': 'r',
-              'status': 'r'
+_fd_modes = { 'stdin': 'wb',
+              'stdout': 'rb',
+              'stderr': 'rb',
+              'passphrase': 'wb',
+              'attribute': 'rb',
+              'command': 'wb',
+              'logger': 'rb',
+              'status': 'rb'
               }
 
 # correlation between handle names and the arguments we'll pass
 _fd_options = { 'passphrase': '--passphrase-fd',
                 'logger': '--logger-fd',
                 'status': '--status-fd',
-                'command': '--command-fd' }
+                'command': '--command-fd',
+                'attribute': '--attribute-fd' }
 
-class GnuPG:
+class GnuPG(object):
     """Class instances represent GnuPG.
 
     Instance attributes of a GnuPG object are:
@@ -276,6 +289,8 @@
     the command-line options used when calling GnuPG.
     """
 
+    __slots__ = ['call', 'passphrase', 'options']
+
     def __init__(self):
         self.call = 'gpg'
         self.passphrase = None
@@ -349,14 +364,14 @@
         if attach_fhs == None: attach_fhs = {}
 
         for std in _stds:
-            if not attach_fhs.has_key(std) \
+            if std not in attach_fhs \
                and std not in create_fhs:
                 attach_fhs.setdefault(std, getattr(sys, std))
 
         handle_passphrase = 0
 
         if self.passphrase != None \
-           and not attach_fhs.has_key('passphrase') \
+           and 'passphrase' not in attach_fhs \
            and 'passphrase' not in create_fhs:
             handle_passphrase = 1
             create_fhs.append('passphrase')
@@ -366,7 +381,10 @@
 
         if handle_passphrase:
             passphrase_fh = process.handles['passphrase']
-            passphrase_fh.write( self.passphrase )
+            if sys.version_info >= (3, 0) and isinstance(self.passphrase, str):
+                passphrase_fh.write( self.passphrase.encode() )
+            else:
+                passphrase_fh.write( self.passphrase )
             passphrase_fh.close()
             del process.handles['passphrase']
 
@@ -379,45 +397,40 @@
 
         process = Process()
 
-        for fh_name in create_fhs + attach_fhs.keys():
-            if not _fd_modes.has_key(fh_name):
-                raise KeyError, \
-                      "unrecognized filehandle name '%s'; must be one of %s" \
-                      % (fh_name, _fd_modes.keys())
+        for fh_name in create_fhs + list(attach_fhs.keys()):
+            if fh_name not in _fd_modes:
+                raise KeyError("unrecognized filehandle name '%s'; must be one of %s" \
+                               % (fh_name, list(_fd_modes.keys())))
 
         for fh_name in create_fhs:
             # make sure the user doesn't specify a filehandle
             # to be created *and* attached
-            if attach_fhs.has_key(fh_name):
-                raise ValueError, \
-                      "cannot have filehandle '%s' in both create_fhs and attach_fhs" \
-                      % fh_name
+            if fh_name in attach_fhs:
+                raise ValueError("cannot have filehandle '%s' in both create_fhs and attach_fhs" \
+                                 % fh_name)
 
             pipe = os.pipe()
             # fix by drt@un.bewaff.net noting
             # that since pipes are unidirectional on some systems,
             # so we have to 'turn the pipe around'
             # if we are writing
-            if _fd_modes[fh_name] == 'w': pipe = (pipe[1], pipe[0])
+            if _fd_modes[fh_name][0] == 'w': pipe = (pipe[1], pipe[0])
+
+            # Close the parent end in child to prevent deadlock
+            if "fcntl" in globals():
+                fcntl.fcntl(pipe[0], fcntl.F_SETFD, fcntl.FD_CLOEXEC)
+
             process._pipes[fh_name] = Pipe(pipe[0], pipe[1], 0)
 
         for fh_name, fh in attach_fhs.items():
             process._pipes[fh_name] = Pipe(fh.fileno(), fh.fileno(), 1)
 
-        process.pid = os.fork()
-        if process.pid != 0:
-            # start a threaded_waitpid on the child
-            process.thread = threading.Thread(target=threaded_waitpid,
-                                              name="wait%d" % process.pid,
-                                              args=(process,))
-            process.thread.start()
-
-        if process.pid == 0: self._as_child(process, gnupg_commands, args)
-        return self._as_parent(process)
-
-
-    def _as_parent(self, process):
-        """Stuff run after forking in parent"""
+        self._launch_process(process, gnupg_commands, args)
+        return self._handle_pipes(process)
+
+
+    def _handle_pipes(self, process):
+        """Deal with pipes after the child process has been created"""
         for k, p in process._pipes.items():
             if not p.direct:
                 os.close(p.child)
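Reviewer aid, not part of the patch: the hunk above makes two portability tweaks to pipe setup. The `os.pipe()` ends are swapped when the handle will be written to, so index 0 is always the parent's side, and the parent end is marked `FD_CLOEXEC` when `fcntl` is available so gpg does not inherit it. A minimal POSIX-only sketch of the same pattern (the helper name is mine):

```python
import fcntl
import os

def make_pipe(mode):
    """Return (parent_end, child_end) of a pipe, as in the patched run().

    os.pipe() returns (read_end, write_end); when the parent intends to
    write (mode starts with 'w'), the tuple is flipped so index 0 is
    always the parent's side.  FD_CLOEXEC on the parent end keeps the
    child from inheriting it; an inherited copy would hold the pipe open
    and stall the parent's final read.
    """
    pipe = os.pipe()
    if mode[0] == 'w':
        pipe = (pipe[1], pipe[0])
    fcntl.fcntl(pipe[0], fcntl.F_SETFD, fcntl.FD_CLOEXEC)
    return pipe

parent_fd, child_fd = make_pipe('rb')      # parent reads, child writes
os.write(child_fd, b"status line\n")
print(os.read(parent_fd, 64))              # → b'status line\n'
```

On Python 3.4+ descriptors from `os.pipe()` are non-inheritable by default, which is why this explicit dance only matters on the Python 2 versions this branch targets.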
@@ -428,43 +441,137 @@
 
         return process
 
-
-    def _as_child(self, process, gnupg_commands, args):
-        """Stuff run after forking in child"""
-        # child
-        for std in _stds:
-            p = process._pipes[std]
-            os.dup2( p.child, getattr(sys, "__%s__" % std).fileno() )
-
-        for k, p in process._pipes.items():
-            if p.direct and k not in _stds:
-                # we want the fh to stay open after execing
-                fcntl.fcntl( p.child, fcntl.F_SETFD, 0 )
-
+    def _create_preexec_fn(self, process):
+        """Create and return a function to do cleanup before exec
+
+        The cleanup function will close all file descriptors which are not
+        needed by the child process.  This is required to prevent unnecessary
+        blocking on the final read of pipes not set FD_CLOEXEC due to gpg
+        inheriting an open copy of the input end of the pipe.  This can cause
+        delays in unrelated parts of the program or deadlocks in the case that
+        one end of the pipe is passed to attach_fds.
+
+        FIXME: There is a race condition where a pipe can be created in
+        another thread after this function runs before exec is called and it
+        will not be closed.  This race condition will remain until a better
+        way to avoid closing the error pipe created by submodule is identified.
+        """
+        if sys.platform == "win32":
+            return None # No cleanup necessary
+
+        try:
+            MAXFD = os.sysconf("SC_OPEN_MAX")
+            if MAXFD == -1:
+                MAXFD = 256
+        except:
+            MAXFD = 256
+
+        # Get list of fds to close now, so we don't close the error pipe
+        # created by submodule for reporting exec errors
+        child_fds = [p.child for p in process._pipes.values()]
+        child_fds.sort()
+        child_fds.append(MAXFD) # Sentinel value, simplifies code greatly
+
+        child_fds_iter = iter(child_fds)
+        child_fd = next(child_fds_iter)
+        while child_fd < 3:
+            child_fd = next(child_fds_iter)
+
+        extra_fds = []
+        # FIXME: Is there a better (portable) way to list all open FDs?
+        for fd in range(3, MAXFD):
+            if fd > child_fd:
+                child_fd = next(child_fds_iter)
+
+            if fd == child_fd:
+                continue
+
+            try:
+                # Note: Can't use lseek, can cause nul byte in pipes
+                # where the position has not been set by read/write
+                #os.lseek(fd, os.SEEK_CUR, 0)
+                os.tcgetpgrp(fd)
+            except OSError:
+                # FIXME: When support for Python 2.5 is dropped, use 'as'
+                oe = sys.exc_info()[1]
+                if oe.errno == errno.EBADF:
+                    continue
+
+            extra_fds.append(fd)
+
+        def preexec_fn():
+            # Note: This function runs after standard FDs have been renumbered
+            # from their original values to 0, 1, 2
+
+            for fd in extra_fds:
+                try:
+                    os.close(fd)
+                except OSError:
+                    pass
+
+            # Ensure that all descriptors passed to the child will remain open
+            # Arguably FD_CLOEXEC descriptors should be an argument error
+            # But for backwards compatibility, we just fix it here (after fork)
+            for fd in [0, 1, 2] + child_fds[:-1]:
+                try:
+                    fcntl.fcntl(fd, fcntl.F_SETFD, 0)
+                except OSError:
+                    # Will happen for renumbered FDs
+                    pass
+
+        return preexec_fn
+
+
+    def _launch_process(self, process, gnupg_commands, args):
+        """Run the child process"""
         fd_args = []
-
         for k, p in process._pipes.items():
             # set command-line options for non-standard fds
-            if k not in _stds:
-                fd_args.extend([ _fd_options[k], "%d" % p.child ])
+            if k in _stds:
+                continue
 
-            if not p.direct: os.close(p.parent)
+            if sys.platform == "win32":
+                # Must pass inheritable os file handle
+                curproc = _subprocess.GetCurrentProcess()
+                pchandle = msvcrt.get_osfhandle(p.child)
+                pcihandle = _subprocess.DuplicateHandle(
+                    curproc, pchandle, curproc, 0, 1,
+                    _subprocess.DUPLICATE_SAME_ACCESS)
+                fdarg = pcihandle.Detach()
+            else:
+                # Must pass file descriptor
+                fdarg = p.child
+            fd_args.extend([ _fd_options[k], str(fdarg) ])
 
         command = [ self.call ] + fd_args + self.options.get_args() \
                   + gnupg_commands + args
 
-        os.execvp( command[0], command )
-
-
-class Pipe:
+        if len(fd_args) > 0:
+            # Can't close all file descriptors
+            # Create preexec function to close what we can
+            preexec_fn = self._create_preexec_fn(process)
+
+        process._subproc = subprocess.Popen(command,
+                                            stdin=process._pipes['stdin'].child,
+                                            stdout=process._pipes['stdout'].child,
+                                            stderr=process._pipes['stderr'].child,
+                                            close_fds=not len(fd_args) > 0,
+                                            preexec_fn=preexec_fn,
+                                            shell=False)
+        process.pid = process._subproc.pid
+
+
+class Pipe(object):
     """simple struct holding stuff about pipes we use"""
+    __slots__ = ['parent', 'child', 'direct']
+
     def __init__(self, parent, child, direct):
         self.parent = parent
         self.child = child
         self.direct = direct
 
 
-class Options:
+class Options(object):
    """Objects of this class encompass options passed to GnuPG.
    This class is responsible for determining command-line arguments
    which are based on options.  It can be said that a GnuPG
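Reviewer aid, not part of the patch: `_launch_process()` must keep non-standard descriptors (for `--status-fd` and friends) open across the `subprocess.Popen` call, either by duplicating an inheritable OS handle on Windows or, on POSIX, by disabling `close_fds` and closing the leftovers in a `preexec_fn`. On modern Python the same effect is available directly via `pass_fds`; a small POSIX sketch (the argv and fd plumbing are illustrative, not duplicity's):

```python
import os
import subprocess
import sys

def run_with_extra_fd():
    """Spawn a child that writes to an inherited, non-standard fd.

    Mirrors the patch's idea: create the pipe first, pass the child
    end's *number* on the command line, and make sure that one fd
    survives exec while every other stray descriptor is closed.
    """
    r, w = os.pipe()
    # pass_fds keeps w open in the child (and implies close_fds=True),
    # which is what the hand-rolled preexec_fn achieves on Python 2.
    child = [sys.executable, "-c",
             "import os, sys; os.write(int(sys.argv[1]), b'hi')", str(w)]
    proc = subprocess.Popen(child, pass_fds=(w,))
    os.close(w)                   # parent's copy is no longer needed
    out = os.read(r, 1024)
    proc.wait()
    os.close(r)
    return out

print(run_with_extra_fd())        # → b'hi'
```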
@@ -493,6 +600,8 @@
 
     * homedir
     * default_key
+    * keyring
+    * secret_keyring
     * comment
     * compress_algo
     * options
@@ -536,38 +645,34 @@
     ['--armor', '--recipient', 'Alice', '--recipient', 'Bob', '--no-secmem-warning']
     """
 
+    booleans = ('armor', 'no_greeting', 'verbose', 'no_verbose',
+                'batch', 'always_trust', 'rfc1991', 'openpgp',
+                'quiet', 'no_options', 'textmode', 'force_v3_sigs')
+
+    metas = ('meta_pgp_5_compatible', 'meta_pgp_2_compatible',
+             'meta_interactive')
+
+    strings = ('homedir', 'default_key', 'comment', 'compress_algo',
+               'options', 'keyring', 'secret_keyring')
+
+    lists = ('encrypt_to', 'recipients')
+
+    __slots__ = booleans + metas + strings + lists + ('extra_args',)
+
     def __init__(self):
-        # booleans
-        self.armor = 0
-        self.no_greeting = 0
-        self.verbose = 0
-        self.no_verbose = 0
-        self.quiet = 0
-        self.batch = 0
-        self.always_trust = 0
-        self.rfc1991 = 0
-        self.openpgp = 0
-        self.force_v3_sigs = 0
-        self.no_options = 0
-        self.textmode = 0
+        for b in self.booleans:
+            setattr(self, b, 0)
 
-        # meta-option booleans
-        self.meta_pgp_5_compatible = 0
-        self.meta_pgp_2_compatible = 0
+        for m in self.metas:
+            setattr(self, m, 0)
         self.meta_interactive = 1
 
-        # strings
-        self.homedir = None
-        self.default_key = None
-        self.comment = None
-        self.compress_algo = None
-        self.options = None
-
-        # lists
-        self.encrypt_to = []
-        self.recipients = []
-
-        # miscellaneous arguments
+        for s in self.strings:
+            setattr(self, s, None)
+
+        for l in self.lists:
+            setattr(self, l, [])
+
         self.extra_args = []
 
     def get_args( self ):
@@ -583,6 +688,8 @@
         if self.comment != None: args.extend( [ '--comment', self.comment ] )
         if self.compress_algo != None: args.extend( [ '--compress-algo', self.compress_algo ] )
         if self.default_key != None: args.extend( [ '--default-key', self.default_key ] )
+        if self.keyring != None: args.extend( [ '--keyring', self.keyring ] )
+        if self.secret_keyring != None: args.extend( [ '--secret-keyring', self.secret_keyring ] )
 
         if self.no_options: args.append( '--no-options' )
         if self.armor: args.append( '--armor' )
@@ -615,7 +722,7 @@
         return args
 
 
-class Process:
+class Process(object):
     """Objects of this class encompass properties of a GnuPG
     process spawned by GnuPG.run().
 
@@ -637,43 +744,24 @@
     os.waitpid() to clean up the process, especially
     if multiple calls are made to run().
     """
+    __slots__ = ['_pipes', 'handles', 'pid', '_subproc']
 
     def __init__(self):
         self._pipes = {}
         self.handles = {}
         self.pid = None
-        self._waited = None
-        self.thread = None
-        self.returned = None
+        self._subproc = None
 
     def wait(self):
-        """
-        Wait on threaded_waitpid to exit and examine results.
-        Will raise an IOError if the process exits non-zero.
-        """
-        if self.returned == None:
-            self.thread.join()
-        if self.returned != 0:
-            raise IOError, "GnuPG exited non-zero, with code %d" % (self.returned >> 8)
-
-
-def threaded_waitpid(process):
-    """
-    When started as a thread with the Process object, thread
-    will execute an immediate waitpid() against the process
-    pid and will collect the process termination info.  This
-    will allow us to reap child processes as soon as possible,
-    thus freeing resources quickly.
-    """
-    try:
-        process.returned = os.waitpid(process.pid, 0)[1]
-    except:
-        log.Debug("GPG process %d terminated before wait()" % process.pid)
-        process.returned = 0
-
+        """Wait on the process to exit, allowing for child cleanup.
+        Will raise an IOError if the process exits non-zero."""
+
+        e = self._subproc.wait()
+        if e != 0:
+            raise IOError("GnuPG exited non-zero, with code %d" % e)
 
 def _run_doctests():
-    import doctest, GnuPGInterface #@UnresolvedImport
+    import doctest, GnuPGInterface
     return doctest.testmod(GnuPGInterface)
 
 # deprecated
 
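Reviewer aid, not part of the patch: with `subprocess` doing the reaping, the `threaded_waitpid` helper and its zombie-avoidance thread disappear; `Process.wait()` reduces to checking `Popen.wait()`'s return code. Note the old code had to shift the raw `waitpid` status with `>> 8`, while `Popen.returncode` is already decoded. A stand-alone sketch of the new contract (the class name is mine):

```python
import subprocess
import sys

class SubprocWrapper(object):
    """Minimal stand-in for the patched Process.wait() behaviour."""
    def __init__(self, argv):
        self._subproc = subprocess.Popen(argv)

    def wait(self):
        # Popen.wait() reaps the child itself, so no waitpid thread is
        # needed; its result is the decoded exit status, not a raw
        # waitpid status word.
        e = self._subproc.wait()
        if e != 0:
            raise IOError("GnuPG exited non-zero, with code %d" % e)

p = SubprocWrapper([sys.executable, "-c", "import sys; sys.exit(3)"])
try:
    p.wait()
except IOError as err:
    print(err)                    # → GnuPG exited non-zero, with code 3
```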
=== modified file 'duplicity/backend.py'
--- duplicity/backend.py 2010-08-09 18:56:03 +0000
+++ duplicity/backend.py 2010-10-25 15:49:45 +0000
@@ -64,7 +64,8 @@
     @return: void
     """
     path = duplicity.backends.__path__[0]
-    assert path.endswith("duplicity/backends"), duplicity.backends.__path__
+    assert os.path.normcase(path).endswith("duplicity" + os.path.sep + "backends"), \
+           duplicity.backends.__path__
 
     files = os.listdir(path)
     for fn in files:
@@ -201,6 +202,12 @@
 
     Raise InvalidBackendURL on invalid URL's
     """
+
+    if sys.platform == "win32":
+        # Regex to match a path containing a Windows drive specifier
+        # Valid paths include "C:" "C:/" "C:stuff" "C:/stuff", not "C://stuff"
+        _drivespecre = re.compile("^[a-z]:(?![/\\\\]{2})", re.IGNORECASE)
+
     def __init__(self, url_string):
         self.url_string = url_string
         _ensure_urlparser_initialized()
@@ -266,6 +273,12 @@
         if not pu.scheme:
             return
 
+        # This happens with implicit local paths with a drive specifier
+        if sys.platform == "win32" and \
+           re.match(ParsedUrl._drivespecre, url_string):
+            self.scheme = ""
+            return
+
         # Our backends do not handle implicit hosts.
         if pu.scheme in urlparser.uses_netloc and not pu.hostname:
             raise InvalidBackendURL("Missing hostname in a backend URL which "
 
=== modified file 'duplicity/backends/localbackend.py'
--- duplicity/backends/localbackend.py 2010-07-22 19:15:11 +0000
+++ duplicity/backends/localbackend.py 2010-10-25 15:49:45 +0000
@@ -39,7 +39,16 @@
         # The URL form "file:MyFile" is not a valid duplicity target.
         if not parsed_url.path.startswith( '//' ):
             raise BackendException( "Bad file:// path syntax." )
-        self.remote_pathdir = path.Path(parsed_url.path[2:])
+
+        # According to RFC 1738, file URLs take the form
+        # file://<hostname>/<path> where <hostname> == "" is localhost
+        # However, for backwards compatibility, interpret file://stuff/... as
+        # being a relative path starting with directory stuff
+        if parsed_url.path[2] == '/':
+            pathstr = path.from_url_path(parsed_url.path[3:], is_abs=True)
+        else:
+            pathstr = path.from_url_path(parsed_url.path[2:], is_abs=False)
+        self.remote_pathdir = path.Path(pathstr)
 
     def put(self, source_path, remote_filename = None, rename = None):
         """If rename is set, try that first, copying if doesn't work"""
 
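Reviewer aid, not part of the patch: `path.from_url_path` is a helper added elsewhere in this branch, so only the branching logic is visible in this hunk. A third slash after `file://` means an absolute RFC 1738 path, while anything else is kept relative for backwards compatibility. A plain-string sketch of that decision (the function name and return shape are mine, not duplicity's):

```python
def split_file_url(url):
    """Return (is_abs, remainder) for a file: URL, as the patch does.

    'file:///etc/hosts' -> (True, 'etc/hosts')   # absolute, hostname ""
    'file://stuff/x'    -> (False, 'stuff/x')    # legacy relative form
    """
    if not url.startswith("file://"):
        raise ValueError("Bad file:// path syntax.")
    path = url[len("file:"):]          # like parsed_url.path, keeps '//'
    if path[2] == '/':
        return True, path[3:]
    return False, path[2:]

print(split_file_url("file:///etc/hosts"))   # → (True, 'etc/hosts')
print(split_file_url("file://stuff/x"))      # → (False, 'stuff/x')
```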
=== modified file 'duplicity/commandline.py'
--- duplicity/commandline.py 2010-10-06 15:57:51 +0000
+++ duplicity/commandline.py 2010-10-25 15:49:45 +0000
@@ -184,8 +184,21 @@
 def set_log_fd(fd):
     if fd < 1:
         raise optparse.OptionValueError("log-fd must be greater than zero.")
+    if sys.platform == "win32":
+        # Convert OS file handle to C file descriptor
+        import msvcrt
+        fd = msvcrt.open_osfhandle(fd, 1) # 1 = _O_WRONLY
+        raise optparse.OptionValueError("Unable to open log-fd.")
     log.add_fd(fd)
 
+def set_restore_dir(dir):
+    # Remove empty tail component, if any
+    head, tail = os.path.split(dir)
+    if not tail:
+        dir = head
+
+    globals.restore_dir = dir
+
 def set_time_sep(sep, opt):
     if sep == '-':
         raise optparse.OptionValueError("Dash ('-') not valid for time-separator.")
@@ -291,7 +304,7 @@
     # --archive-dir <path>
     parser.add_option("--file-to-restore", "-r", action="callback", type="file",
                       metavar=_("path"), dest="restore_dir",
-                      callback=lambda o, s, v, p: setattr(p.values, "restore_dir", v.rstrip('/')))
+                      callback=set_restore_dir)
 
     # Used to confirm certain destructive operations like deleting old files.
     parser.add_option("--force", action="store_true")
 
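Reviewer aid, not part of the patch: replacing the lambda's `v.rstrip('/')` with `os.path.split` makes the trailing-separator cleanup honour the platform's separators, since on Windows both `\` and `/` can end a path. The effect, shown with both path flavours explicitly (the wrapper function is mine):

```python
import ntpath
import posixpath

def strip_empty_tail(path, split=posixpath.split):
    # os.path.split yields an empty tail exactly when the path ends in a
    # separator; dropping it is the portable version of rstrip('/')
    head, tail = split(path)
    return head if not tail else path

print(strip_empty_tail("backup/dir/"))                 # → backup/dir
print(strip_empty_tail("backup\\dir\\", ntpath.split)) # → backup\dir
print(strip_empty_tail("backup/dir"))                  # → backup/dir
```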
=== modified file 'duplicity/compilec.py'
--- duplicity/compilec.py 2009-04-01 15:07:45 +0000
+++ duplicity/compilec.py 2010-10-25 15:49:45 +0000
@@ -20,7 +20,9 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-import sys, os
+import os
+import shutil
+import sys
 from distutils.core import setup, Extension
 
 assert len(sys.argv) == 1
@@ -33,5 +35,16 @@
                         ["_librsyncmodule.c"],
                         libraries=["rsync"])])
 
-assert not os.system("mv `find build -name _librsync.so` .")
-assert not os.system("rm -rf build")
+def find_any_of(filenames, basedir="."):
+    for dirpath, dirnames, dirfilenames in os.walk(basedir):
+        for filename in filenames:
+            if filename in dirfilenames:
+                return os.path.join(dirpath, filename)
+
+extfile = find_any_of(("_librsync.pyd", "_librsync.so"), "build")
+if not extfile:
+    sys.stderr.write("Can't find _librsync extension binary, build failed?\n")
+    sys.exit(1)
+
+os.rename(extfile, os.path.basename(extfile))
+shutil.rmtree("build")
 
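Reviewer aid, not part of the patch: the shell pipeline `mv \`find build -name _librsync.so\` .` is replaced by a pure-Python walk, which also lets the script pick up the `.pyd` name that Windows builds produce. The helper returns the first hit in `os.walk` order, or `None`. A quick self-contained check against a throwaway tree (directory names are made up):

```python
import os
import shutil
import tempfile

def find_any_of(filenames, basedir="."):
    """Return the path of the first listed file found under basedir."""
    for dirpath, dirnames, dirfilenames in os.walk(basedir):
        for filename in filenames:
            if filename in dirfilenames:
                return os.path.join(dirpath, filename)

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "lib.linux-x86_64", "duplicity"))
open(os.path.join(base, "lib.linux-x86_64", "duplicity", "_librsync.so"), "w").close()

found = find_any_of(("_librsync.pyd", "_librsync.so"), base)
print(os.path.basename(found))   # → _librsync.so
assert find_any_of(("nothing.here",), base) is None
shutil.rmtree(base)
```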
1013 | === modified file 'duplicity/dup_temp.py' | |||
1014 | --- duplicity/dup_temp.py 2010-07-22 19:15:11 +0000 | |||
1015 | +++ duplicity/dup_temp.py 2010-10-25 15:49:45 +0000 | |||
1016 | @@ -161,9 +161,11 @@ | |||
1017 | 161 | We have achieved the first checkpoint, make file visible and permanent. | 161 | We have achieved the first checkpoint, make file visible and permanent. |
1018 | 162 | """ | 162 | """ |
1019 | 163 | assert not globals.restart | 163 | assert not globals.restart |
1021 | 164 | self.tdp.rename(self.dirpath.append(self.partname)) | 164 | # Can't rename files open for write on Windows. Wait for close hook |
1022 | 165 | if sys.platform != "win32": | ||
1023 | 166 | self.tdp.rename(self.dirpath.append(self.partname)) | ||
1024 | 167 | del self.hooklist[0] | ||
1025 | 165 | self.fileobj.flush() | 168 | self.fileobj.flush() |
1026 | 166 | del self.hooklist[0] | ||
1027 | 167 | 169 | ||
1028 | 168 | def to_remote(self): | 170 | def to_remote(self): |
1029 | 169 | """ | 171 | """ |
1030 | @@ -173,13 +175,15 @@ | |||
1031 | 173 | pr = file_naming.parse(self.remname) | 175 | pr = file_naming.parse(self.remname) |
1032 | 174 | src = self.dirpath.append(self.partname) | 176 | src = self.dirpath.append(self.partname) |
1033 | 175 | tgt = self.dirpath.append(self.remname) | 177 | tgt = self.dirpath.append(self.remname) |
1034 | 176 | src_iter = SrcIter(src) | ||
1035 | 177 | if pr.compressed: | 178 | if pr.compressed: |
1036 | 179 | src_iter = SrcIter(src) | ||
1037 | 178 | gpg.GzipWriteFile(src_iter, tgt.name, size = sys.maxint) | 180 | gpg.GzipWriteFile(src_iter, tgt.name, size = sys.maxint) |
1038 | 179 | elif pr.encrypted: | 181 | elif pr.encrypted: |
1039 | 182 | src_iter = SrcIter(src) | ||
1040 | 180 | gpg.GPGWriteFile(src_iter, tgt.name, globals.gpg_profile, size = sys.maxint) | 183 | gpg.GPGWriteFile(src_iter, tgt.name, globals.gpg_profile, size = sys.maxint) |
1041 | 181 | else: | 184 | else: |
1043 | 182 | os.system("cp -p %s %s" % (src.name, tgt.name)) | 185 | src.copy(tgt) |
1044 | 186 | src.copy_attribs(tgt) | ||
1045 | 183 | globals.backend.put(tgt) #@UndefinedVariable | 187 | globals.backend.put(tgt) #@UndefinedVariable |
1046 | 184 | os.unlink(tgt.name) | 188 | os.unlink(tgt.name) |
1047 | 185 | 189 | ||
1048 | @@ -189,9 +193,9 @@ | |||
1049 | 189 | """ | 193 | """ |
1050 | 190 | src = self.dirpath.append(self.partname) | 194 | src = self.dirpath.append(self.partname) |
1051 | 191 | tgt = self.dirpath.append(self.permname) | 195 | tgt = self.dirpath.append(self.permname) |
1052 | 192 | src_iter = SrcIter(src) | ||
1053 | 193 | pr = file_naming.parse(self.permname) | 196 | pr = file_naming.parse(self.permname) |
1054 | 194 | if pr.compressed: | 197 | if pr.compressed: |
1055 | 198 | src_iter = SrcIter(src) | ||
1056 | 195 | gpg.GzipWriteFile(src_iter, tgt.name, size = sys.maxint) | 199 | gpg.GzipWriteFile(src_iter, tgt.name, size = sys.maxint) |
1057 | 196 | os.unlink(src.name) | 200 | os.unlink(src.name) |
1058 | 197 | else: | 201 | else: |
1059 | 198 | 202 | ||
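The replacement of the shelled-out `cp -p` above with `src.copy(tgt)` followed by `src.copy_attribs(tgt)` is the portable split of "copy contents, then copy attributes". The same split can be sketched with the standard library (file names below are hypothetical):

```python
import os
import shutil
import tempfile

# "cp -p" = copy contents plus attributes. The patch splits this into
# Path.copy / Path.copy_attribs; shutil offers the same split portably.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "src.dat")   # hypothetical file names
tgt = os.path.join(tmpdir, "tgt.dat")
with open(src, "wb") as f:
    f.write(b"payload")

shutil.copyfile(src, tgt)   # contents only, like Path.copy
shutil.copystat(src, tgt)   # mode and timestamps, like Path.copy_attribs
```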
1060 | === modified file 'duplicity/globals.py' | |||
1061 | --- duplicity/globals.py 2010-08-26 13:01:10 +0000 | |||
1062 | +++ duplicity/globals.py 2010-10-25 15:49:45 +0000 | |||
1063 | @@ -21,7 +21,7 @@ | |||
1064 | 21 | 21 | ||
1065 | 22 | """Store global configuration information""" | 22 | """Store global configuration information""" |
1066 | 23 | 23 | ||
1068 | 24 | import socket, os | 24 | import sys, socket, os |
1069 | 25 | 25 | ||
1070 | 26 | # The current version of duplicity | 26 | # The current version of duplicity |
1071 | 27 | version = "$version" | 27 | version = "$version" |
1072 | @@ -36,16 +36,59 @@ | |||
1073 | 36 | # The symbolic name of the backup being operated upon. | 36 | # The symbolic name of the backup being operated upon. |
1074 | 37 | backup_name = None | 37 | backup_name = None |
1075 | 38 | 38 | ||
1076 | 39 | # On Windows, use SHGetFolderPath for determining program directories | ||
1077 | 40 | if sys.platform == "win32": | ||
1078 | 41 | import ctypes | ||
1079 | 42 | import ctypes.wintypes as wintypes | ||
1080 | 43 | windll = ctypes.windll | ||
1081 | 44 | |||
1082 | 45 | CSIDL_APPDATA = 0x001a | ||
1083 | 46 | CSIDL_LOCAL_APPDATA = 0x001c | ||
1084 | 47 | def get_csidl_folder_path(csidl): | ||
1085 | 48 | SHGetFolderPath = windll.shell32.SHGetFolderPathW | ||
1086 | 49 | SHGetFolderPath.argtypes = [ | ||
1087 | 50 | wintypes.HWND, | ||
1088 | 51 | ctypes.c_int, | ||
1089 | 52 | wintypes.HANDLE, | ||
1090 | 53 | wintypes.DWORD, | ||
1091 | 54 | wintypes.LPWSTR, | ||
1092 | 55 | ] | ||
1093 | 56 | folderpath = wintypes.create_unicode_buffer(wintypes.MAX_PATH) | ||
1094 | 57 | result = SHGetFolderPath(0, csidl, 0, 0, folderpath) | ||
1095 | 58 | if result != 0: | ||
1096 | 59 | raise WindowsError(result, "Unable to get folder path") | ||
1097 | 60 | return folderpath.value | ||
1098 | 61 | |||
1099 | 62 | |||
1100 | 39 | # Set to the Path of the archive directory (the directory which | 63 | # Set to the Path of the archive directory (the directory which |
1101 | 40 | # contains the signatures and manifests of the relevent backup | 64 | # contains the signatures and manifests of the relevent backup |
1102 | 41 | # collection), and for checkpoint state between volumes. | 65 | # collection), and for checkpoint state between volumes. |
1103 | 42 | # NOTE: this gets expanded in duplicity.commandline | 66 | # NOTE: this gets expanded in duplicity.commandline |
1106 | 43 | os.environ["XDG_CACHE_HOME"] = os.getenv("XDG_CACHE_HOME", os.path.expanduser("~/.cache")) | 67 | if sys.platform == "win32": |
1107 | 44 | archive_dir = os.path.expandvars("$XDG_CACHE_HOME/duplicity") | 68 | try: |
1108 | 69 | archive_dir = get_csidl_folder_path(CSIDL_LOCAL_APPDATA) | ||
1109 | 70 | except WindowsError: | ||
1110 | 71 | try: | ||
1111 | 72 | archive_dir = get_csidl_folder_path(CSIDL_APPDATA) | ||
1112 | 73 | except WindowsError: | ||
1113 | 74 | archive_dir = os.getenv("LOCALAPPDATA") or \ | ||
1114 | 75 | os.getenv("APPDATA") or \ | ||
1115 | 76 | os.path.expanduser("~") | ||
1116 | 77 | archive_dir = os.path.join(archive_dir, "Duplicity", "Archives") | ||
1117 | 78 | else: | ||
1118 | 79 | os.environ["XDG_CACHE_HOME"] = os.getenv("XDG_CACHE_HOME", os.path.expanduser("~/.cache")) | ||
1119 | 80 | archive_dir = os.path.expandvars("$XDG_CACHE_HOME/duplicity") | ||
1120 | 45 | 81 | ||
1121 | 46 | # config dir for future use | 82 | # config dir for future use |
1124 | 47 | os.environ["XDG_CONFIG_HOME"] = os.getenv("XDG_CONFIG_HOME", os.path.expanduser("~/.config")) | 83 | if sys.platform == "win32": |
1125 | 48 | config_dir = os.path.expandvars("$XDG_CONFIG_HOME/duplicity") | 84 | try: |
1126 | 85 | config_dir = get_csidl_folder_path(CSIDL_APPDATA) | ||
1127 | 86 | except WindowsError: | ||
1128 | 87 | config_dir = os.getenv("APPDATA") or os.path.expanduser("~") | ||
1129 | 88 | config_dir = os.path.join(archive_dir, "Duplicity", "Config") | ||
1130 | 89 | else: | ||
1131 | 90 | os.environ["XDG_CONFIG_HOME"] = os.getenv("XDG_CONFIG_HOME", os.path.expanduser("~/.config")) | ||
1132 | 91 | config_dir = os.path.expandvars("$XDG_CONFIG_HOME/duplicity") | ||
1133 | 49 | 92 | ||
1134 | 50 | # Restores will try to bring back the state as of the following time. | 93 | # Restores will try to bring back the state as of the following time. |
1135 | 51 | # If it is None, default to current time. | 94 | # If it is None, default to current time. |
1136 | 52 | 95 | ||
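The `SHGetFolderPathW` lookup above can be exercised standalone. A hedged sketch with a non-Windows fallback so it runs anywhere; note that `create_unicode_buffer` lives in `ctypes` itself rather than `ctypes.wintypes`, which the hunk above appears to assume:

```python
import ctypes
import os
import sys

CSIDL_LOCAL_APPDATA = 0x001c

def cache_root():
    """Best-effort cache directory, mirroring the patch's fallback chain:
    shell API first, then the APPDATA environment variables, then the home
    directory; on other platforms, the XDG cache convention."""
    if sys.platform == "win32":
        buf = ctypes.create_unicode_buffer(260)  # MAX_PATH
        # SHGetFolderPathW(hwnd, csidl, access_token, flags, out_buffer)
        res = ctypes.windll.shell32.SHGetFolderPathW(
            0, CSIDL_LOCAL_APPDATA, 0, 0, buf)
        if res == 0:
            return buf.value
        return (os.getenv("LOCALAPPDATA") or os.getenv("APPDATA")
                or os.path.expanduser("~"))
    return os.getenv("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
```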
1137 | === modified file 'duplicity/manifest.py' | |||
1138 | --- duplicity/manifest.py 2010-07-22 19:15:11 +0000 | |||
1139 | +++ duplicity/manifest.py 2010-10-25 15:49:45 +0000 | |||
1140 | @@ -25,6 +25,7 @@ | |||
1141 | 25 | 25 | ||
1142 | 26 | from duplicity import log | 26 | from duplicity import log |
1143 | 27 | from duplicity import globals | 27 | from duplicity import globals |
1144 | 28 | from duplicity import path | ||
1145 | 28 | from duplicity import util | 29 | from duplicity import util |
1146 | 29 | 30 | ||
1147 | 30 | class ManifestError(Exception): | 31 | class ManifestError(Exception): |
1148 | @@ -67,7 +68,8 @@ | |||
1149 | 67 | if self.hostname: | 68 | if self.hostname: |
1150 | 68 | self.fh.write("Hostname %s\n" % self.hostname) | 69 | self.fh.write("Hostname %s\n" % self.hostname) |
1151 | 69 | if self.local_dirname: | 70 | if self.local_dirname: |
1153 | 70 | self.fh.write("Localdir %s\n" % Quote(self.local_dirname)) | 71 | self.fh.write("Localdir %s\n" % \ |
1154 | 72 | Quote(path.to_posix(self.local_dirname))) | ||
1155 | 71 | return self | 73 | return self |
1156 | 72 | 74 | ||
1157 | 73 | def check_dirinfo(self): | 75 | def check_dirinfo(self): |
1158 | @@ -146,7 +148,8 @@ | |||
1159 | 146 | if self.hostname: | 148 | if self.hostname: |
1160 | 147 | result += "Hostname %s\n" % self.hostname | 149 | result += "Hostname %s\n" % self.hostname |
1161 | 148 | if self.local_dirname: | 150 | if self.local_dirname: |
1163 | 149 | result += "Localdir %s\n" % Quote(self.local_dirname) | 151 | result += "Localdir %s\n" % \ |
1164 | 152 | Quote(path.to_posix(self.local_dirname)) | ||
1165 | 150 | 153 | ||
1166 | 151 | vol_num_list = self.volume_info_dict.keys() | 154 | vol_num_list = self.volume_info_dict.keys() |
1167 | 152 | vol_num_list.sort() | 155 | vol_num_list.sort() |
1168 | @@ -173,6 +176,9 @@ | |||
1169 | 173 | return Unquote(m.group(2)) | 176 | return Unquote(m.group(2)) |
1170 | 174 | self.hostname = get_field("hostname") | 177 | self.hostname = get_field("hostname") |
1171 | 175 | self.local_dirname = get_field("localdir") | 178 | self.local_dirname = get_field("localdir") |
1172 | 179 | if self.local_dirname: | ||
1173 | 180 | self.local_dirname = path.from_posix(self.local_dirname, | ||
1174 | 181 | globals.local_path and globals.local_path.name) | ||
1175 | 176 | 182 | ||
1176 | 177 | next_vi_string_regexp = re.compile("(^|\\n)(volume\\s.*?)" | 183 | next_vi_string_regexp = re.compile("(^|\\n)(volume\\s.*?)" |
1177 | 178 | "(\\nvolume\\s|$)", re.I | re.S) | 184 | "(\\nvolume\\s|$)", re.I | re.S) |
1178 | @@ -221,7 +227,7 @@ | |||
1179 | 221 | Write string version of manifest to given path | 227 | Write string version of manifest to given path |
1180 | 222 | """ | 228 | """ |
1181 | 223 | assert not path.exists() | 229 | assert not path.exists() |
1183 | 224 | fout = path.open("w") | 230 | fout = path.open("wb") |
1184 | 225 | fout.write(self.to_string()) | 231 | fout.write(self.to_string()) |
1185 | 226 | assert not fout.close() | 232 | assert not fout.close() |
1186 | 227 | path.setdata() | 233 | path.setdata() |
1187 | 228 | 234 | ||
1188 | === modified file 'duplicity/patchdir.py' | |||
1189 | --- duplicity/patchdir.py 2010-07-22 19:15:11 +0000 | |||
1190 | +++ duplicity/patchdir.py 2010-10-25 15:49:45 +0000 | |||
1191 | @@ -19,7 +19,9 @@ | |||
1192 | 19 | # along with duplicity; if not, write to the Free Software Foundation, | 19 | # along with duplicity; if not, write to the Free Software Foundation, |
1193 | 20 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 20 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
1194 | 21 | 21 | ||
1195 | 22 | import os | ||
1196 | 22 | import re #@UnusedImport | 23 | import re #@UnusedImport |
1197 | 24 | import sys | ||
1198 | 23 | import types | 25 | import types |
1199 | 24 | import tempfile | 26 | import tempfile |
1200 | 25 | 27 | ||
1201 | @@ -470,6 +472,19 @@ | |||
1202 | 470 | if not isinstance( current_file, file ): | 472 | if not isinstance( current_file, file ): |
1203 | 471 | # librsync needs true file | 473 | # librsync needs true file |
1204 | 472 | tempfp = tempfile.TemporaryFile( dir=globals.temproot ) | 474 | tempfp = tempfile.TemporaryFile( dir=globals.temproot ) |
1205 | 475 | if os.name == "nt": | ||
1206 | 476 | # Temp wrapper is unnecessary, file opened O_TEMPORARY | ||
1207 | 477 | tempfp = tempfp.file | ||
1208 | 478 | elif os.name != "posix" and sys.platform != "cygwin": | ||
1209 | 479 | # Note to future developers: | ||
1210 | 480 | # librsync needs direct access to the underlying file. | ||
1211 | 481 | # On these systems temporary files are wrapper objects | ||
1212 | 482 | # The wrapper must be retained until file access is finished | ||
1213 | 483 | # so that when it is released the file can be deleted | ||
1214 | 484 | raise NotImplementedError( | ||
1215 | 485 | "No support for direct access of temporary files " + | ||
1216 | 486 | "on this platform") | ||
1217 | 487 | |||
1218 | 473 | misc.copyfileobj( current_file, tempfp ) | 488 | misc.copyfileobj( current_file, tempfp ) |
1219 | 474 | assert not current_file.close() | 489 | assert not current_file.close() |
1220 | 475 | tempfp.seek( 0 ) | 490 | tempfp.seek( 0 ) |
1221 | 476 | 491 | ||
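The unwrap in the hunk above exists because `tempfile.TemporaryFile` returns a wrapper object on some platforms, while librsync needs the real file underneath. A portable sketch of the same idea:

```python
import tempfile

fp = tempfile.TemporaryFile()
# On platforms where TemporaryFile returns a wrapper, the true file object
# is its .file attribute; elsewhere the object is already a real file.
raw = getattr(fp, "file", fp)
# Keep `fp` referenced while `raw` is in use: as the hunk's comment warns,
# releasing the wrapper can delete the file out from under the raw handle.
raw.write(b"block")
raw.seek(0)
data = raw.read()
```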
1222 | === modified file 'duplicity/path.py' | |||
1223 | --- duplicity/path.py 2010-07-22 19:15:11 +0000 | |||
1224 | +++ duplicity/path.py 2010-10-25 15:49:45 +0000 | |||
1225 | @@ -41,6 +41,99 @@ | |||
1226 | 41 | _copy_blocksize = 64 * 1024 | 41 | _copy_blocksize = 64 * 1024 |
1227 | 42 | _tmp_path_counter = 1 | 42 | _tmp_path_counter = 1 |
1228 | 43 | 43 | ||
1229 | 44 | def split_all(path): | ||
1230 | 45 | """ | ||
1231 | 46 | Split path into components | ||
1232 | 47 | |||
1233 | 48 | Invariant: os.path.join(*split_all(path)) == path for a normalized path | ||
1234 | 49 | |||
1235 | 50 | @rtype list | ||
1236 | 51 | @return List of components of path, beginning with "/" or a drive specifier | ||
1237 | 52 | if absolute, ending with "" if path ended with a separator | ||
1238 | 53 | """ | ||
1239 | 54 | parts = [] | ||
1240 | 55 | path, part = os.path.split(path) | ||
1241 | 56 | |||
1242 | 57 | # Special case for paths which end with a separator so rest is still split | ||
1243 | 58 | if part == "": | ||
1244 | 59 | parts.append(part) | ||
1245 | 60 | path, part = os.path.split(path) | ||
1246 | 61 | |||
1247 | 62 | while path != "" and part != "": | ||
1248 | 63 | parts.append(part) | ||
1249 | 64 | path, part = os.path.split(path) | ||
1250 | 65 | |||
1251 | 66 | # Append root (path) for absolute path, or first relative component (part) | ||
1252 | 67 | if path != "": | ||
1253 | 68 | parts.append(path) | ||
1254 | 69 | else: | ||
1255 | 70 | parts.append(part) | ||
1256 | 71 | |||
1257 | 72 | parts.reverse() | ||
1258 | 73 | return parts | ||
1259 | 74 | |||
1260 | 75 | |||
1261 | 76 | def from_posix(path, refpath=None): | ||
1262 | 77 | """ | ||
1263 | 78 | Convert a POSIX-style path to the native path representation | ||
1264 | 79 | |||
1265 | 80 | Copy drive specification (if any) from refpath (if given) | ||
1266 | 81 | """ | ||
1267 | 82 | |||
1268 | 83 | # If native path representation is POSIX, no work needs to be done | ||
1269 | 84 | if os.path.__name__ == "posixpath": | ||
1270 | 85 | return path | ||
1271 | 86 | |||
1272 | 87 | parts = path.split("/") | ||
1273 | 88 | if parts[0] == "": | ||
1274 | 89 | parts[0] = os.path.sep | ||
1275 | 90 | |||
1276 | 91 | if refpath is not None: | ||
1277 | 92 | drive = os.path.splitdrive(refpath)[0] | ||
1278 | 93 | if drive: | ||
1279 | 94 | parts.insert(0, drive) | ||
1280 | 95 | |||
1281 | 96 | return os.path.join(*parts) | ||
1282 | 97 | |||
1283 | 98 | |||
1284 | 99 | def to_posix(path): | ||
1285 | 100 | """ | ||
1286 | 101 | Convert a path from the native path representation to a POSIX-style path | ||
1287 | 102 | |||
1288 | 103 | The path is broken into components according to split_all, then recombined | ||
1289 | 104 | with "/" separating components. Any drive specifier is omitted. | ||
1290 | 105 | """ | ||
1291 | 106 | |||
1292 | 107 | # If native path representation is POSIX, no work needs to be done | ||
1293 | 108 | if os.path.__name__ == "posixpath": | ||
1294 | 109 | return path | ||
1295 | 110 | |||
1296 | 111 | parts = split_all(path) | ||
1297 | 112 | if os.path.isabs(path): | ||
1298 | 113 | return "/" + "/".join(parts[1:]) | ||
1299 | 114 | else: | ||
1300 | 115 | return "/".join(parts) | ||
1301 | 116 | |||
1302 | 117 | |||
1303 | 118 | def from_url_path(url_path, is_abs=True): | ||
1304 | 119 | """ | ||
1305 | 120 | Convert the <path> component of a file URL into a path in the native path | ||
1306 | 121 | representation. | ||
1307 | 122 | """ | ||
1308 | 123 | |||
1309 | 124 | parts = url_path.split("/") | ||
1310 | 125 | if is_abs: | ||
1311 | 126 | if os.path.__name__ == "posixpath": | ||
1312 | 127 | parts.insert(0, "/") | ||
1313 | 128 | elif os.path.__name__ == "ntpath": | ||
1314 | 129 | parts[0] += os.path.sep | ||
1315 | 130 | else: | ||
1316 | 131 | raise NotImplementedException( | ||
1317 | 132 | "Method to create an absolute path not known") | ||
1318 | 133 | |||
1319 | 134 | return os.path.join(*parts) | ||
1320 | 135 | |||
1321 | 136 | |||
1322 | 44 | class StatResult: | 137 | class StatResult: |
1323 | 45 | """Used to emulate the output of os.stat() and related""" | 138 | """Used to emulate the output of os.stat() and related""" |
1324 | 46 | # st_mode is required by the TarInfo class, but it's unclear how | 139 | # st_mode is required by the TarInfo class, but it's unclear how |
1325 | @@ -142,7 +235,7 @@ | |||
1326 | 142 | def get_relative_path(self): | 235 | def get_relative_path(self): |
1327 | 143 | """Return relative path, created from index""" | 236 | """Return relative path, created from index""" |
1328 | 144 | if self.index: | 237 | if self.index: |
1330 | 145 | return "/".join(self.index) | 238 | return os.path.join(*self.index) |
1331 | 146 | else: | 239 | else: |
1332 | 147 | return "." | 240 | return "." |
1333 | 148 | 241 | ||
1334 | @@ -435,7 +528,8 @@ | |||
1335 | 435 | def copy_attribs(self, other): | 528 | def copy_attribs(self, other): |
1336 | 436 | """Only copy attributes from self to other""" | 529 | """Only copy attributes from self to other""" |
1337 | 437 | if isinstance(other, Path): | 530 | if isinstance(other, Path): |
1339 | 438 | util.maybe_ignore_errors(lambda: os.chown(other.name, self.stat.st_uid, self.stat.st_gid)) | 531 | if hasattr(os, "chown"): |
1340 | 532 | util.maybe_ignore_errors(lambda: os.chown(other.name, self.stat.st_uid, self.stat.st_gid)) | ||
1341 | 439 | util.maybe_ignore_errors(lambda: os.chmod(other.name, self.mode)) | 533 | util.maybe_ignore_errors(lambda: os.chmod(other.name, self.mode)) |
1342 | 440 | util.maybe_ignore_errors(lambda: os.utime(other.name, (time.time(), self.stat.st_mtime))) | 534 | util.maybe_ignore_errors(lambda: os.utime(other.name, (time.time(), self.stat.st_mtime))) |
1343 | 441 | other.setdata() | 535 | other.setdata() |
1344 | @@ -490,7 +584,7 @@ | |||
1345 | 490 | try: | 584 | try: |
1346 | 491 | self.stat = os.lstat(self.name) | 585 | self.stat = os.lstat(self.name) |
1347 | 492 | except OSError, e: | 586 | except OSError, e: |
1349 | 493 | err_string = errno.errorcode[e[0]] | 587 | err_string = errno.errorcode.get(e[0]) |
1350 | 494 | if err_string == "ENOENT" or err_string == "ENOTDIR" or err_string == "ELOOP": | 588 | if err_string == "ENOENT" or err_string == "ENOTDIR" or err_string == "ELOOP": |
1351 | 495 | self.stat, self.type = None, None # file doesn't exist | 589 | self.stat, self.type = None, None # file doesn't exist |
1352 | 496 | self.mode = None | 590 | self.mode = None |
1353 | @@ -578,11 +672,7 @@ | |||
1354 | 578 | if self.index: | 672 | if self.index: |
1355 | 579 | return Path(self.base, self.index[:-1]) | 673 | return Path(self.base, self.index[:-1]) |
1356 | 580 | else: | 674 | else: |
1362 | 581 | components = self.base.split("/") | 675 | return Path(os.path.dirname(self.base)) |
1358 | 582 | if len(components) == 2 and not components[0]: | ||
1359 | 583 | return Path("/") # already in root directory | ||
1360 | 584 | else: | ||
1361 | 585 | return Path("/".join(components[:-1])) | ||
1363 | 586 | 676 | ||
1364 | 587 | def writefileobj(self, fin): | 677 | def writefileobj(self, fin): |
1365 | 588 | """Copy file object fin to self. Close both when done.""" | 678 | """Copy file object fin to self. Close both when done.""" |
1366 | @@ -672,9 +762,7 @@ | |||
1367 | 672 | 762 | ||
1368 | 673 | def get_filename(self): | 763 | def get_filename(self): |
1369 | 674 | """Return filename of last component""" | 764 | """Return filename of last component""" |
1373 | 675 | components = self.name.split("/") | 765 | return os.path.basename(self.name) |
1371 | 676 | assert components and components[-1] | ||
1372 | 677 | return components[-1] | ||
1374 | 678 | 766 | ||
1375 | 679 | def get_canonical(self): | 767 | def get_canonical(self): |
1376 | 680 | """ | 768 | """ |
1377 | @@ -684,12 +772,9 @@ | |||
1378 | 684 | it's harder to remove "..", as "foo/bar/.." is not necessarily | 772 | it's harder to remove "..", as "foo/bar/.." is not necessarily |
1379 | 685 | "foo", so we can't use path.normpath() | 773 | "foo", so we can't use path.normpath() |
1380 | 686 | """ | 774 | """ |
1387 | 687 | newpath = "/".join(filter(lambda x: x and x != ".", | 775 | pathparts = filter(lambda x: x and x != ".", split_all(self.name)) |
1388 | 688 | self.name.split("/"))) | 776 | if pathparts: |
1389 | 689 | if self.name[0] == "/": | 777 | return os.path.join(*pathparts) |
1384 | 690 | return "/" + newpath | ||
1385 | 691 | elif newpath: | ||
1386 | 692 | return newpath | ||
1390 | 693 | else: | 778 | else: |
1391 | 694 | return "." | 779 | return "." |
1392 | 695 | 780 | ||
1393 | 696 | 781 | ||
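The new `split_all` helper is the linchpin of the path rework: the `"/".join(...)` / `.split("/")` pairs elsewhere in the diff are rewritten in terms of it. Its documented invariant can be checked in isolation; this sketch restates the helper outside the `Path` module:

```python
import os.path

def split_all(path):
    """Split a path into components, as in the patch: begins with "/" (or a
    drive) if absolute, ends with "" if the path ended with a separator.
    Invariant: os.path.join(*split_all(path)) == path for normalized paths."""
    parts = []
    path, part = os.path.split(path)
    # Special-case a trailing separator so the rest still splits cleanly
    if part == "":
        parts.append(part)
        path, part = os.path.split(path)
    while path != "" and part != "":
        parts.append(part)
        path, part = os.path.split(path)
    # Root (path) for an absolute path, else the first relative component
    parts.append(path if path != "" else part)
    parts.reverse()
    return parts
```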
1394 | === modified file 'duplicity/selection.py' | |||
1395 | --- duplicity/selection.py 2010-07-22 19:15:11 +0000 | |||
1396 | +++ duplicity/selection.py 2010-10-25 15:49:45 +0000 | |||
1397 | @@ -23,12 +23,15 @@ | |||
1398 | 23 | import re #@UnusedImport | 23 | import re #@UnusedImport |
1399 | 24 | import stat #@UnusedImport | 24 | import stat #@UnusedImport |
1400 | 25 | 25 | ||
1402 | 26 | from duplicity.path import * #@UnusedWildImport | 26 | from duplicity import path |
1403 | 27 | from duplicity import log #@Reimport | 27 | from duplicity import log #@Reimport |
1404 | 28 | from duplicity import globals #@Reimport | 28 | from duplicity import globals #@Reimport |
1405 | 29 | from duplicity import diffdir | 29 | from duplicity import diffdir |
1406 | 30 | from duplicity import util #@Reimport | 30 | from duplicity import util #@Reimport |
1407 | 31 | 31 | ||
1408 | 32 | # For convenience | ||
1409 | 33 | Path = path.Path | ||
1410 | 34 | |||
1411 | 32 | """Iterate exactly the requested files in a directory | 35 | """Iterate exactly the requested files in a directory |
1412 | 33 | 36 | ||
1413 | 34 | Parses includes and excludes to yield correct files. More | 37 | Parses includes and excludes to yield correct files. More |
1414 | @@ -93,6 +96,11 @@ | |||
1415 | 93 | self.rootpath = path | 96 | self.rootpath = path |
1416 | 94 | self.prefix = self.rootpath.name | 97 | self.prefix = self.rootpath.name |
1417 | 95 | 98 | ||
1418 | 99 | # Make sure prefix names a directory so prefix matching doesn't | ||
1419 | 100 | # match partial directory names | ||
1420 | 101 | if os.path.basename(self.prefix) != "": | ||
1421 | 102 | self.prefix = os.path.join(self.prefix, "") | ||
1422 | 103 | |||
1423 | 96 | def set_iter(self): | 104 | def set_iter(self): |
1424 | 97 | """Initialize generator, prepare to iterate.""" | 105 | """Initialize generator, prepare to iterate.""" |
1425 | 98 | self.rootpath.setdata() # this may have changed since Select init | 106 | self.rootpath.setdata() # this may have changed since Select init |
1426 | @@ -381,7 +389,7 @@ | |||
1427 | 381 | if not line.startswith(self.prefix): | 389 | if not line.startswith(self.prefix): |
1428 | 382 | raise FilePrefixError(line) | 390 | raise FilePrefixError(line) |
1429 | 383 | line = line[len(self.prefix):] # Discard prefix | 391 | line = line[len(self.prefix):] # Discard prefix |
1431 | 384 | index = tuple(filter(lambda x: x, line.split("/"))) # remove empties | 392 | index = tuple(path.split_all(line)) # remove empties |
1432 | 385 | return (index, include) | 393 | return (index, include) |
1433 | 386 | 394 | ||
1434 | 387 | def filelist_pair_match(self, path, pair): | 395 | def filelist_pair_match(self, path, pair): |
1435 | @@ -532,8 +540,7 @@ | |||
1436 | 532 | """ | 540 | """ |
1437 | 533 | if not filename.startswith(self.prefix): | 541 | if not filename.startswith(self.prefix): |
1438 | 534 | raise FilePrefixError(filename) | 542 | raise FilePrefixError(filename) |
1441 | 535 | index = tuple(filter(lambda x: x, | 543 | index = tuple(path.split_all(filename[len(self.prefix):])) |
1440 | 536 | filename[len(self.prefix):].split("/"))) | ||
1442 | 537 | return self.glob_get_tuple_sf(index, include) | 544 | return self.glob_get_tuple_sf(index, include) |
1443 | 538 | 545 | ||
1444 | 539 | def glob_get_tuple_sf(self, tuple, include): | 546 | def glob_get_tuple_sf(self, tuple, include): |
1445 | @@ -614,17 +621,14 @@ | |||
1446 | 614 | 621 | ||
1447 | 615 | def glob_get_prefix_res(self, glob_str): | 622 | def glob_get_prefix_res(self, glob_str): |
1448 | 616 | """Return list of regexps equivalent to prefixes of glob_str""" | 623 | """Return list of regexps equivalent to prefixes of glob_str""" |
1450 | 617 | glob_parts = glob_str.split("/") | 624 | glob_parts = path.split_all(glob_str) |
1451 | 618 | if "" in glob_parts[1:-1]: | 625 | if "" in glob_parts[1:-1]: |
1452 | 619 | # "" OK if comes first or last, as in /foo/ | 626 | # "" OK if comes first or last, as in /foo/ |
1453 | 620 | raise GlobbingError("Consecutive '/'s found in globbing string " | 627 | raise GlobbingError("Consecutive '/'s found in globbing string " |
1454 | 621 | + glob_str) | 628 | + glob_str) |
1455 | 622 | 629 | ||
1457 | 623 | prefixes = map(lambda i: "/".join(glob_parts[:i+1]), | 630 | prefixes = map(lambda i: os.path.join(*glob_parts[:i+1]), |
1458 | 624 | range(len(glob_parts))) | 631 | range(len(glob_parts))) |
1459 | 625 | # we must make exception for root "/", only dir to end in slash | ||
1460 | 626 | if prefixes[0] == "": | ||
1461 | 627 | prefixes[0] = "/" | ||
1462 | 628 | return map(self.glob_to_re, prefixes) | 632 | return map(self.glob_to_re, prefixes) |
1463 | 629 | 633 | ||
1464 | 630 | def glob_to_re(self, pat): | 634 | def glob_to_re(self, pat): |
1465 | @@ -638,6 +642,12 @@ | |||
1466 | 638 | by Donovan Baarda. | 642 | by Donovan Baarda. |
1467 | 639 | 643 | ||
1468 | 640 | """ | 644 | """ |
1469 | 645 | # Build regex for non-directory separator characters | ||
1470 | 646 | notsep = os.path.sep | ||
1471 | 647 | if os.path.altsep: | ||
1472 | 648 | notsep += os.path.altsep | ||
1473 | 649 | notsep = "[^" + notsep.replace("\\", "\\\\") + "]" | ||
1474 | 650 | |||
1475 | 641 | i, n, res = 0, len(pat), '' | 651 | i, n, res = 0, len(pat), '' |
1476 | 642 | while i < n: | 652 | while i < n: |
1477 | 643 | c, s = pat[i], pat[i:i+2] | 653 | c, s = pat[i], pat[i:i+2] |
1478 | @@ -646,9 +656,9 @@ | |||
1479 | 646 | res = res + '.*' | 656 | res = res + '.*' |
1480 | 647 | i = i + 1 | 657 | i = i + 1 |
1481 | 648 | elif c == '*': | 658 | elif c == '*': |
1483 | 649 | res = res + '[^/]*' | 659 | res = res + notsep + '*' |
1484 | 650 | elif c == '?': | 660 | elif c == '?': |
1486 | 651 | res = res + '[^/]' | 661 | res = res + notsep |
1487 | 652 | elif c == '[': | 662 | elif c == '[': |
1488 | 653 | j = i | 663 | j = i |
1489 | 654 | if j < n and pat[j] in '!^': | 664 | if j < n and pat[j] in '!^': |
1490 | 655 | 665 | ||
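The `notsep` character class above generalizes the hard-coded `[^/]` so that `*` and `?` stop at either separator on Windows. A reduced sketch of just that translation (the full `glob_to_re` also handles `[...]` classes and `**`):

```python
import os
import re

# Character class matching anything but a directory separator, built as in
# the patch (os.path.altsep is "/" on Windows, None on POSIX).
notsep = os.path.sep + (os.path.altsep or "")
notsep = "[^" + notsep.replace("\\", "\\\\") + "]"

def glob_star_to_re(pat):
    """Translate only '*' and '?' of a glob pattern to a regex (sketch)."""
    out = ""
    for c in pat:
        if c == "*":
            out += notsep + "*"
        elif c == "?":
            out += notsep
        else:
            out += re.escape(c)
    return out + "$"
```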
1491 | === modified file 'duplicity/tarfile.py' | |||
1492 | --- duplicity/tarfile.py 2010-07-22 19:15:11 +0000 | |||
1493 | +++ duplicity/tarfile.py 2010-10-25 15:49:45 +0000 | |||
1494 | @@ -1683,8 +1683,10 @@ | |||
1495 | 1683 | def set_pwd_dict(): | 1683 | def set_pwd_dict(): |
1496 | 1684 | """Set global pwd caching dictionaries uid_dict and uname_dict""" | 1684 | """Set global pwd caching dictionaries uid_dict and uname_dict""" |
1497 | 1685 | global uid_dict, uname_dict | 1685 | global uid_dict, uname_dict |
1499 | 1686 | assert uid_dict is None and uname_dict is None and pwd | 1686 | assert uid_dict is None and uname_dict is None |
1500 | 1687 | uid_dict = {}; uname_dict = {} | 1687 | uid_dict = {}; uname_dict = {} |
1501 | 1688 | if pwd is None: | ||
1502 | 1689 | return | ||
1503 | 1688 | for entry in pwd.getpwall(): | 1690 | for entry in pwd.getpwall(): |
1504 | 1689 | uname = entry[0]; uid = entry[2] | 1691 | uname = entry[0]; uid = entry[2] |
1505 | 1690 | uid_dict[uid] = uname | 1692 | uid_dict[uid] = uname |
1506 | @@ -1702,8 +1704,10 @@ | |||
1507 | 1702 | 1704 | ||
1508 | 1703 | def set_grp_dict(): | 1705 | def set_grp_dict(): |
1509 | 1704 | global gid_dict, gname_dict | 1706 | global gid_dict, gname_dict |
1511 | 1705 | assert gid_dict is None and gname_dict is None and grp | 1707 | assert gid_dict is None and gname_dict is None |
1512 | 1706 | gid_dict = {}; gname_dict = {} | 1708 | gid_dict = {}; gname_dict = {} |
1513 | 1709 | if grp is None: | ||
1514 | 1710 | return | ||
1515 | 1707 | for entry in grp.getgrall(): | 1711 | for entry in grp.getgrall(): |
1516 | 1708 | gname = entry[0]; gid = entry[2] | 1712 | gname = entry[0]; gid = entry[2] |
1517 | 1709 | gid_dict[gid] = gname | 1713 | gid_dict[gid] = gname |
1518 | 1710 | 1714 | ||
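The relaxed assertions above let the uid/gid caches initialize to empty maps when `pwd`/`grp` are unavailable, as on Windows. The guarded-import pattern they rely on:

```python
# pwd is POSIX-only; tarfile.py imports it optionally, and the patch lets
# the cache builders return empty maps instead of asserting it exists.
try:
    import pwd
except ImportError:
    pwd = None

uid_dict = {}
if pwd is not None:
    for entry in pwd.getpwall():
        uid_dict[entry[2]] = entry[0]   # uid -> user name
```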
1519 | === removed file 'po/update-pot' | |||
1520 | --- po/update-pot 2009-09-15 02:13:01 +0000 | |||
1521 | +++ po/update-pot 1970-01-01 00:00:00 +0000 | |||
1522 | @@ -1,4 +0,0 @@ | |||
1523 | 1 | #!/bin/sh | ||
1524 | 2 | |||
1525 | 3 | intltool-update --pot -g duplicity | ||
1526 | 4 | sed -e 's/^#\. TRANSL:/#./' -i duplicity.pot | ||
1527 | 5 | 0 | ||
1528 | === added file 'po/update-pot.py' | |||
1529 | --- po/update-pot.py 1970-01-01 00:00:00 +0000 | |||
1530 | +++ po/update-pot.py 2010-10-25 15:49:45 +0000 | |||
1531 | @@ -0,0 +1,21 @@ | |||
1532 | 1 | #!/usr/bin/env python | ||
1533 | 2 | |||
1534 | 3 | import os | ||
1535 | 4 | import re | ||
1536 | 5 | import sys | ||
1537 | 6 | import tempfile | ||
1538 | 7 | |||
1539 | 8 | retval = os.system('intltool-update --pot -g duplicity') | ||
1540 | 9 | if retval != 0: | ||
1541 | 10 | # intltool-update failed and already wrote errors, propagate failure | ||
1542 | 11 | sys.exit(retval) | ||
1543 | 12 | |||
1544 | 13 | replre = re.compile('^#\. TRANSL:') | ||
1545 | 14 | with open("duplicity.pot", "rb") as potfile: | ||
1546 | 15 | with tempfile.NamedTemporaryFile(delete=False) as tmpfile: | ||
1547 | 16 | tmpfilename = tmpfile.name | ||
1548 | 17 | for line in potfile: | ||
1549 | 18 | tmpfile.write(re.sub(replre, "", line)) | ||
1550 | 19 | |||
1551 | 20 | os.remove("duplicity.pot") | ||
1552 | 21 | os.rename(tmpfilename, "duplicity.pot") |
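One behavioural nit in the rewrite above: the original sed command `s/^#\. TRANSL:/#./` replaced the marker with `#.`, keeping the extracted-comment prefix, while the Python version substitutes the empty string and so drops it. A faithful port of the sed expression might look like this (sketch):

```python
import re

# sed: s/^#\. TRANSL:/#./  -- keep the "#." extracted-comment prefix
transl = re.compile(r"^#\. TRANSL:")

def strip_transl(line):
    return transl.sub("#.", line)
```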
Rejecting for now. This needs to be reworked to make it viable against the current codebase.