Merge lp:~mterry/duplicity/backend-unification into lp:duplicity/0.6
Status: Merged
Merged at revision: 981
Proposed branch: lp:~mterry/duplicity/backend-unification
Merge into: lp:duplicity/0.6
Diff against target: 7230 lines (+2212/-2635), 43 files modified:
bin/duplicity (+0/-2) duplicity/backend.py (+293/-279) duplicity/backends/README (+79/-0) duplicity/backends/_boto_multi.py (+2/-2) duplicity/backends/_boto_single.py (+77/-186) duplicity/backends/_cf_cloudfiles.py (+33/-124) duplicity/backends/_cf_pyrax.py (+34/-124) duplicity/backends/_ssh_paramiko.py (+84/-156) duplicity/backends/_ssh_pexpect.py (+131/-168) duplicity/backends/botobackend.py (+6/-8) duplicity/backends/cfbackend.py (+5/-2) duplicity/backends/dpbxbackend.py (+17/-32) duplicity/backends/ftpbackend.py (+11/-20) duplicity/backends/ftpsbackend.py (+10/-24) duplicity/backends/gdocsbackend.py (+84/-143) duplicity/backends/giobackend.py (+79/-114) duplicity/backends/hsibackend.py (+8/-22) duplicity/backends/imapbackend.py (+46/-50) duplicity/backends/localbackend.py (+30/-86) duplicity/backends/megabackend.py (+48/-98) duplicity/backends/par2backend.py (+63/-79) duplicity/backends/rsyncbackend.py (+13/-27) duplicity/backends/sshbackend.py (+7/-2) duplicity/backends/swiftbackend.py (+25/-119) duplicity/backends/tahoebackend.py (+11/-22) duplicity/backends/webdavbackend.py (+46/-60) duplicity/commandline.py (+5/-10) duplicity/errors.py (+6/-2) duplicity/globals.py (+3/-0) po/duplicity.pot (+270/-293) testing/__init__.py (+5/-1) testing/functional/__init__.py (+5/-3) testing/functional/test_badupload.py (+1/-1) testing/manual/backendtest.py (+208/-290) testing/manual/config.py.tmpl (+18/-86) testing/overrides/bin/hsi (+16/-0) testing/overrides/bin/lftp (+23/-0) testing/overrides/bin/ncftpget (+6/-0) testing/overrides/bin/ncftpls (+19/-0) testing/overrides/bin/ncftpput (+6/-0) testing/overrides/bin/tahoe (+10/-0) testing/unit/test_backend.py (+128/-0) testing/unit/test_backend_instance.py (+241/-0) |
To merge this branch: bzr merge lp:~mterry/duplicity/backend-unification
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| duplicity-team | Pending | | |
Review via email: mp+216764@code.launchpad.net
Commit message
Description of the change
Reorganize and simplify backend code. Specifically:
- Formalize the expected API between backends and duplicity. See the new file duplicity/
- Add some tests for our backend wrapper class as well as some tests for individual backends. For several backends that have some commands do all the heavy lifting (hsi, tahoe, ftp), I've added fake little mock commands so that we can test them locally. This doesn't truly test our integration with those commands, but at least lets us test the backend glue code itself.
- Removed a lot of duplicate and unused code which backends were using (or not using). This branch drops 700 lines of code (~20%) in duplicity/backends!
- Simplified expectations of backends. Our wrapper code now does all the retrying and all the exception handling. Backends can 'fire and forget', trusting our wrappers to give the user a reasonable error message. Obviously, backends can also add more detail and make nicer error messages, but they don't *have* to.
- Separated the backend classes from our wrapper class. Now there is no possibility of namespace collision. All our API methods use one underscore. Anything else (zero or two underscores) is for the backend class's use.
- Added the concept of a 'backend prefix', which the par2 and gio backends use to provide generic support for "scheme+" in URLs -- like par2+ or gio+. I've since marked the '--gio' flag as deprecated, in favor of 'gio+'. Now you can even nest such backends, like par2+gio+
- The switch to control which cloudfiles backend to use had a typo. I fixed this, but I'm not sure I should have? If we haven't had complaints, maybe we can just drop the old backend.
- I manually tested all the backends we have (except hsi and tahoe -- but those are simple wrappers around commands and I did test those via mocks per above). I also added a bunch more manual backend tests to ./testing/
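The 'backend prefix' mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not the branch's code: `register_backend_prefix` matches the name added in the branch, while `split_prefix` and the tuple-returning factories are invented for the example.

```python
# Hypothetical sketch of prefix resolution: a URL like
# "par2+gio+file:///backups" peels one registered prefix at a time,
# which is what lets prefix backends nest.
_backend_prefixes = {}

def register_backend_prefix(prefix, factory):
    _backend_prefixes[prefix] = factory

def split_prefix(url_string):
    for prefix, factory in _backend_prefixes.items():
        if url_string.startswith(prefix + '+'):
            # strip exactly one "prefix+" by length; note that
            # str.lstrip would strip a character set, not a literal prefix
            return factory, url_string[len(prefix) + 1:]
    return None, url_string

# toy factories standing in for the real backend classes
register_backend_prefix('par2', lambda url: ('par2', url))
register_backend_prefix('gio', lambda url: ('gio', url))

factory, rest = split_prefix('par2+gio+file:///backups')
print(factory(rest))  # ('par2', 'gio+file:///backups')
```

Resolving the outermost prefix leaves `gio+file:///backups` behind, so the par2 backend can hand the remainder to another round of resolution.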
edso (ed.so) wrote:
Kenneth Loafman (kenneth-loafman) wrote:
ede, I don't see a problem with a missing key.
As to 0.7 -- I'm thinking that some of the latest changes, not bug fixes,
should be removed from 0.6 and applied to 0.7. We could make the next
release, 0.6.24, the last of the 0.6 series. We probably need to start an
email thread to discuss this, but I think it's time.
On Mon, Apr 28, 2014 at 4:51 AM, edso <email address hidden> wrote:
> A+ for effort ;) .. we should probably make that a 0.7 release, stating
> that there might be some hiccups (bugs) due to major code revision.
> two more comments below..
>
> On 28.04.2014 04:55, Michael Terry wrote:
> SNIP
> >
> >
> > === modified file 'bin/duplicity'
> > --- bin/duplicity 2014-04-19 19:54:54 +0000
> > +++ bin/duplicity 2014-04-28 02:49:55 +0000
> > @@ -289,8 +289,6 @@
> >
> > def validate_
> > info = backend.
> > - if 'size' not in info:
> > - return # backend didn't know how to query size
>
> this could raise a key not found error. can you really guarantee the key
> will be there?
>
> > size = info['size']
> > if size is None:
> > return # error querying file
> >
> > === modified file 'duplicity/
> > --- duplicity/
> > +++ duplicity/
> > @@ -24,6 +24,7 @@
> SNIP
> >
> > # These URL schemes have a backend with a notion of an RFC "network
> location".
> > # The 'file' and 's3+http' schemes should not be in this list.
> > @@ -69,7 +74,6 @@
> > uses_netloc = ['ftp',
> > 'ftps',
> > 'hsi',
> > - 'rsync',
> > 's3',
> > 'scp', 'ssh', 'sftp',
> > 'webdav', 'webdavs',
>
> any reason why you removed rsync here?
>
> ..ede
>
> --
>
> https:/
> Your team duplicity-team is requested to review the proposed merge of
> lp:~mterry/duplicity/backend-unification into lp:duplicity.
>
> _______
> Mailing list: https:/
> Post to : <email address hidden>
> Unsubscribe : https:/
> More help : https:/
>
edso (ed.so) wrote:
On 28.04.2014 13:21, Kenneth Loafman wrote:
> ede, I don't see a problem with a missing key.
size = info['size']
will throw an error if the dictionary misses the key. we had that earlier. but MT will probably know if he hacked the info routines safely enough to guarantee its existence.
> As to 0.7 -- I'm thinking that some of the latest changes, not bug fixes,
> should be removed from 0.6 and applied to 0.7. We could make the next
> release, 0.6.24, the last of the 0.6 series. We probably need to start an
> email thread to discuss this, but I think it's time.
that's an alternative route to go, although i am not aware of major bugs since 0.6.23 ..ede
Kenneth Loafman (kenneth-loafman) wrote:
Sorry, was looking above the comment, not below. Not enough coffee!
There are a number of bugs/additions in the current CHANGELOG slated for 0.6.24. Looking at them again, this change is the trigger for 0.7 startup.
Mike, I really like tox. It's really easy to merge changes and retest.
Michael Terry (mterry) wrote:
> this could raise a key not found error. can you really guarantee the key will be there?
Yes, because now query_info calls the backend query function, then makes sure that 'size' exists for each filename passed. This was part of the "allow backends to be dumb" strategy -- if they skip a filename or don't provide 'size', the rest of duplicity doesn't have to care. Guarantees come from the wrapper class, not the backends.
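The guarantee described here can be sketched as wrapper-side normalization. This is a hypothetical stand-alone illustration, not duplicity's actual wrapper method; `backend_query` stands in for whatever the backend provides.

```python
def query_info(backend_query, filenames):
    # Wrapper-side guarantee: every requested filename maps to a dict
    # containing a 'size' key; None signals "could not determine size".
    # The backend may skip files, omit 'size', or even raise -- the
    # caller never has to care.
    try:
        info = backend_query(filenames)
    except Exception:
        info = {}
    for fn in filenames:
        entry = info.get(fn)
        if not isinstance(entry, dict):
            entry = {}
        entry.setdefault('size', None)
        info[fn] = entry
    return info

# A "dumb" backend that returns nothing at all:
print(query_info(lambda fns: {}, ['vol1.gpg']))
# {'vol1.gpg': {'size': None}}
```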
> any reason why you removed rsync here?
Yeah, I changed the rsync backend to support relative/local urls. This was so we could test it as part of our normal automated tests (without needing a server). The list I removed 'rsync' from was a list of backends that require a hostname.
edso (ed.so) wrote:
well done.. thx ede
Kenneth Loafman (kenneth-loafman) wrote:
Either of you see a show-stopper to keep 0.6.24 from going out as-is? We
have an impressive list of changes already.
I've resurrected the old 0.7 series, merged in the current 0.6, and will
merge this set of changes in later. Does everyone agree that this is a
good place to make the switch to 0.7? Do we want to make 0.7 the current
focus of development after the release of 0.6.24?
Michael Terry (mterry) wrote:
Well, two separate decisions: release 0.6.24 or not and whether we want to go to 0.7.
But I don't have a reason to wait on releasing 0.6.24.
And as for 0.7, I'm fine with that too. Just a number. :)
But this branch itself doesn't feel like a very momentous change. By which I mean, end users shouldn't care about this at all. It doesn't fix any big bugs (except maybe the --cf-backend argument) and doesn't add any new features.
The bump to require Python 2.6 feels like a more natural jumping-off point, but as edso noted, 0.6.23 already has 2.6isms in it. So not really a change there either.
So... :shrug:
edso (ed.so) wrote:
main concern for me is to have a "stable" version, to point users to if they stumble over a showstopper after Mike's reworks. that can be 0.6.24
i wonder if it'd make sense to also exclude all the other modifications Mike made in his current run (since drop-pexpect), as they may also contain some side effects that may be better kept in 0.7dev.
considering the major backend differences of Mike's latest commit i agree to make a 0.7 the new devel focus.
..ede
edso (ed.so) wrote:
it's just the sheer amount of modified code that worries me a bit. as i wrote in the other email
also,
1. old backend code, e.g. released by somebody privately, will not work anymore when dropped into duplicity/backends
2. true, officially switching to 2.6 might be another point. somebody might want to remove the 2.6'isms from the 0.6 branch if they really need duplicity on some old, old distro. supposing we also keep the 2.6 changes only in the 0.7 branch.
..ede
Kenneth Loafman (kenneth-loafman) wrote:
Hmmm, this is getting complicated, but I see your points.
We can test against 2.4 and 2.5 using the answers here:
http://
deadsnakes repositories, love the name). So, if we wanted, we could
move the modernizations over to 0.7 and fix 0.6.24 to work on older
Pythons.
That seems like a cleaner and more rational break to me. Thoughts?
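One way to wire 2.4/2.5 into the automated tests would be a tox configuration along these lines. This is a sketch, not the project's actual tox.ini; the environment list and test command are assumptions:

```ini
[tox]
envlist = py24,py25,py26,py27

[testenv]
; run the test suite inside each interpreter's virtualenv
commands = {envpython} setup.py test
```

With such an envlist, plain `tox` exercises every listed interpreter, and `tox -e py24,py25` limits a run to the old ones.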
edso (ed.so) wrote:
exactly my point, from a stability standpoint though :) ..ede
Michael Terry (mterry) wrote:
> it's just the sheer amount of modified code that worries me a bit.
> as i wrote in the other email
Fair. I tried to thoroughly test, but I'm human. :)
> 1. old backend code e.g. released by somebody privately will not work
> anymore when dropped into duplicity/backends
This is a good point. I'm curious how many people do that; I can easily imagine it happening.
As for your comments, Ken:
> So, if we wanted, we could move the modernizations over to 0.7 and
> fix 0.6.24 to work on older Pythons.
As much as I hate giving any thoughts to py2.4/2.5, this is probably a nice thing to do. We clearly have interested users still on RHEL5, from my last survey on the mailing list.
If I have time in the next week, I can do this. Else, if you tackle this, note that you shouldn't do the testing on a recently-released distro. Because recent versions of tox/setuptools/ for <2.6. Maybe try Ubuntu 12.04? I haven't tried an older distro myself yet, but ideally you'd be able to add "py24,py25" to the tox.ini file and just run tox to confirm that we work with older pythons. If that doesn't work, you'd have to go to an earlier release of your distro of choice.
Kenneth Loafman (kenneth-loafman) wrote:
Luckily, I'm still on 12.04. I'll test and see if 2.4/2.5 work.
Preview Diff
1 | === modified file 'bin/duplicity' | |||
2 | --- bin/duplicity 2014-04-19 19:54:54 +0000 | |||
3 | +++ bin/duplicity 2014-04-28 02:49:55 +0000 | |||
4 | @@ -289,8 +289,6 @@ | |||
5 | 289 | 289 | ||
6 | 290 | def validate_block(orig_size, dest_filename): | 290 | def validate_block(orig_size, dest_filename): |
7 | 291 | info = backend.query_info([dest_filename])[dest_filename] | 291 | info = backend.query_info([dest_filename])[dest_filename] |
8 | 292 | if 'size' not in info: | ||
9 | 293 | return # backend didn't know how to query size | ||
10 | 294 | size = info['size'] | 292 | size = info['size'] |
11 | 295 | if size is None: | 293 | if size is None: |
12 | 296 | return # error querying file | 294 | return # error querying file |
13 | 297 | 295 | ||
14 | === modified file 'duplicity/backend.py' | |||
15 | --- duplicity/backend.py 2014-04-25 23:53:46 +0000 | |||
16 | +++ duplicity/backend.py 2014-04-28 02:49:55 +0000 | |||
17 | @@ -24,6 +24,7 @@ | |||
18 | 24 | intended to be used by the backends themselves. | 24 | intended to be used by the backends themselves. |
19 | 25 | """ | 25 | """ |
20 | 26 | 26 | ||
21 | 27 | import errno | ||
22 | 27 | import os | 28 | import os |
23 | 28 | import sys | 29 | import sys |
24 | 29 | import socket | 30 | import socket |
25 | @@ -31,6 +32,7 @@ | |||
26 | 31 | import re | 32 | import re |
27 | 32 | import getpass | 33 | import getpass |
28 | 33 | import gettext | 34 | import gettext |
29 | 35 | import types | ||
30 | 34 | import urllib | 36 | import urllib |
31 | 35 | import urlparse | 37 | import urlparse |
32 | 36 | 38 | ||
33 | @@ -38,11 +40,14 @@ | |||
34 | 38 | from duplicity import file_naming | 40 | from duplicity import file_naming |
35 | 39 | from duplicity import globals | 41 | from duplicity import globals |
36 | 40 | from duplicity import log | 42 | from duplicity import log |
37 | 43 | from duplicity import path | ||
38 | 41 | from duplicity import progress | 44 | from duplicity import progress |
39 | 45 | from duplicity import util | ||
40 | 42 | 46 | ||
41 | 43 | from duplicity.util import exception_traceback | 47 | from duplicity.util import exception_traceback |
42 | 44 | 48 | ||
44 | 45 | from duplicity.errors import BackendException, FatalBackendError | 49 | from duplicity.errors import BackendException |
45 | 50 | from duplicity.errors import FatalBackendException | ||
46 | 46 | from duplicity.errors import TemporaryLoadException | 51 | from duplicity.errors import TemporaryLoadException |
47 | 47 | from duplicity.errors import ConflictingScheme | 52 | from duplicity.errors import ConflictingScheme |
48 | 48 | from duplicity.errors import InvalidBackendURL | 53 | from duplicity.errors import InvalidBackendURL |
49 | @@ -54,8 +59,8 @@ | |||
50 | 54 | # todo: this should really NOT be done here | 59 | # todo: this should really NOT be done here |
51 | 55 | socket.setdefaulttimeout(globals.timeout) | 60 | socket.setdefaulttimeout(globals.timeout) |
52 | 56 | 61 | ||
53 | 57 | _forced_backend = None | ||
54 | 58 | _backends = {} | 62 | _backends = {} |
55 | 63 | _backend_prefixes = {} | ||
56 | 59 | 64 | ||
57 | 60 | # These URL schemes have a backend with a notion of an RFC "network location". | 65 | # These URL schemes have a backend with a notion of an RFC "network location". |
58 | 61 | # The 'file' and 's3+http' schemes should not be in this list. | 66 | # The 'file' and 's3+http' schemes should not be in this list. |
59 | @@ -69,7 +74,6 @@ | |||
60 | 69 | uses_netloc = ['ftp', | 74 | uses_netloc = ['ftp', |
61 | 70 | 'ftps', | 75 | 'ftps', |
62 | 71 | 'hsi', | 76 | 'hsi', |
63 | 72 | 'rsync', | ||
64 | 73 | 's3', | 77 | 's3', |
65 | 74 | 'scp', 'ssh', 'sftp', | 78 | 'scp', 'ssh', 'sftp', |
66 | 75 | 'webdav', 'webdavs', | 79 | 'webdav', 'webdavs', |
67 | @@ -96,8 +100,6 @@ | |||
68 | 96 | if fn.endswith("backend.py"): | 100 | if fn.endswith("backend.py"): |
69 | 97 | fn = fn[:-3] | 101 | fn = fn[:-3] |
70 | 98 | imp = "duplicity.backends.%s" % (fn,) | 102 | imp = "duplicity.backends.%s" % (fn,) |
71 | 99 | # ignore gio as it is explicitly loaded in commandline.parse_cmdline_options() | ||
72 | 100 | if fn == "giobackend": continue | ||
73 | 101 | try: | 103 | try: |
74 | 102 | __import__(imp) | 104 | __import__(imp) |
75 | 103 | res = "Succeeded" | 105 | res = "Succeeded" |
76 | @@ -110,14 +112,6 @@ | |||
77 | 110 | continue | 112 | continue |
78 | 111 | 113 | ||
79 | 112 | 114 | ||
80 | 113 | def force_backend(backend): | ||
81 | 114 | """ | ||
82 | 115 | Forces the use of a particular backend, regardless of schema | ||
83 | 116 | """ | ||
84 | 117 | global _forced_backend | ||
85 | 118 | _forced_backend = backend | ||
86 | 119 | |||
87 | 120 | |||
88 | 121 | def register_backend(scheme, backend_factory): | 115 | def register_backend(scheme, backend_factory): |
89 | 122 | """ | 116 | """ |
90 | 123 | Register a given backend factory responsible for URL:s with the | 117 | Register a given backend factory responsible for URL:s with the |
91 | @@ -144,6 +138,32 @@ | |||
92 | 144 | _backends[scheme] = backend_factory | 138 | _backends[scheme] = backend_factory |
93 | 145 | 139 | ||
94 | 146 | 140 | ||
95 | 141 | def register_backend_prefix(scheme, backend_factory): | ||
96 | 142 | """ | ||
97 | 143 | Register a given backend factory responsible for URL:s with the | ||
98 | 144 | given scheme prefix. | ||
99 | 145 | |||
100 | 146 | The backend must be a callable which, when called with a URL as | ||
101 | 147 | the single parameter, returns an object implementing the backend | ||
102 | 148 | protocol (i.e., a subclass of Backend). | ||
103 | 149 | |||
104 | 150 | Typically the callable will be the Backend subclass itself. | ||
105 | 151 | |||
106 | 152 | This function is not thread-safe and is intended to be called | ||
107 | 153 | during module importation or start-up. | ||
108 | 154 | """ | ||
109 | 155 | global _backend_prefixes | ||
110 | 156 | |||
111 | 157 | assert callable(backend_factory), "backend factory must be callable" | ||
112 | 158 | |||
113 | 159 | if scheme in _backend_prefixes: | ||
114 | 160 | raise ConflictingScheme("the prefix %s already has a backend " | ||
115 | 161 | "associated with it" | ||
116 | 162 | "" % (scheme,)) | ||
117 | 163 | |||
118 | 164 | _backend_prefixes[scheme] = backend_factory | ||
119 | 165 | |||
120 | 166 | |||
121 | 147 | def is_backend_url(url_string): | 167 | def is_backend_url(url_string): |
122 | 148 | """ | 168 | """ |
123 | 149 | @return Whether the given string looks like a backend URL. | 169 | @return Whether the given string looks like a backend URL. |
124 | @@ -157,9 +177,9 @@ | |||
125 | 157 | return False | 177 | return False |
126 | 158 | 178 | ||
127 | 159 | 179 | ||
129 | 160 | def get_backend(url_string): | 180 | def get_backend_object(url_string): |
130 | 161 | """ | 181 | """ |
132 | 162 | Instantiate a backend suitable for the given URL, or return None | 182 | Find the right backend class instance for the given URL, or return None |
133 | 163 | if the given string looks like a local path rather than a URL. | 183 | if the given string looks like a local path rather than a URL. |
134 | 164 | 184 | ||
135 | 165 | Raise InvalidBackendURL if the URL is not a valid URL. | 185 | Raise InvalidBackendURL if the URL is not a valid URL. |
136 | @@ -167,22 +187,44 @@ | |||
137 | 167 | if not is_backend_url(url_string): | 187 | if not is_backend_url(url_string): |
138 | 168 | return None | 188 | return None |
139 | 169 | 189 | ||
140 | 190 | global _backends, _backend_prefixes | ||
141 | 191 | |||
142 | 170 | pu = ParsedUrl(url_string) | 192 | pu = ParsedUrl(url_string) |
143 | 171 | |||
144 | 172 | # Implicit local path | ||
145 | 173 | assert pu.scheme, "should be a backend url according to is_backend_url" | 193 | assert pu.scheme, "should be a backend url according to is_backend_url" |
146 | 174 | 194 | ||
158 | 175 | global _backends, _forced_backend | 195 | factory = None |
159 | 176 | 196 | ||
160 | 177 | if _forced_backend: | 197 | for prefix in _backend_prefixes: |
161 | 178 | return _forced_backend(pu) | 198 | if url_string.startswith(prefix + '+'): |
162 | 179 | elif not pu.scheme in _backends: | 199 | factory = _backend_prefixes[prefix] |
163 | 180 | raise UnsupportedBackendScheme(url_string) | 200 | pu = ParsedUrl(url_string.lstrip(prefix + '+')) |
164 | 181 | else: | 201 | break |
165 | 182 | try: | 202 | |
166 | 183 | return _backends[pu.scheme](pu) | 203 | if factory is None: |
167 | 184 | except ImportError: | 204 | if not pu.scheme in _backends: |
168 | 185 | raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1])) | 205 | raise UnsupportedBackendScheme(url_string) |
169 | 206 | else: | ||
170 | 207 | factory = _backends[pu.scheme] | ||
171 | 208 | |||
172 | 209 | try: | ||
173 | 210 | return factory(pu) | ||
174 | 211 | except ImportError: | ||
175 | 212 | raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1])) | ||
176 | 213 | |||
177 | 214 | |||
178 | 215 | def get_backend(url_string): | ||
179 | 216 | """ | ||
180 | 217 | Instantiate a backend suitable for the given URL, or return None | ||
181 | 218 | if the given string looks like a local path rather than a URL. | ||
182 | 219 | |||
183 | 220 | Raise InvalidBackendURL if the URL is not a valid URL. | ||
184 | 221 | """ | ||
185 | 222 | if globals.use_gio: | ||
186 | 223 | url_string = 'gio+' + url_string | ||
187 | 224 | obj = get_backend_object(url_string) | ||
188 | 225 | if obj: | ||
189 | 226 | obj = BackendWrapper(obj) | ||
190 | 227 | return obj | ||
191 | 186 | 228 | ||
192 | 187 | 229 | ||
193 | 188 | class ParsedUrl: | 230 | class ParsedUrl: |
194 | @@ -296,165 +338,74 @@ | |||
195 | 296 | # Replace the full network location with the stripped copy. | 338 | # Replace the full network location with the stripped copy. |
196 | 297 | return parsed_url.geturl().replace(parsed_url.netloc, straight_netloc, 1) | 339 | return parsed_url.geturl().replace(parsed_url.netloc, straight_netloc, 1) |
197 | 298 | 340 | ||
231 | 299 | 341 | def _get_code_from_exception(backend, operation, e): | |
232 | 300 | # Decorator for backend operation functions to simplify writing one that | 342 | if isinstance(e, BackendException) and e.code != log.ErrorCode.backend_error: |
233 | 301 | # retries. Make sure to add a keyword argument 'raise_errors' to your function | 343 | return e.code |
234 | 302 | # and if it is true, raise an exception on an error. If false, fatal-log it. | 344 | elif hasattr(backend, '_error_code'): |
235 | 303 | def retry(fn): | 345 | return backend._error_code(operation, e) or log.ErrorCode.backend_error |
236 | 304 | def iterate(*args): | 346 | elif hasattr(e, 'errno'): |
237 | 305 | for n in range(1, globals.num_retries): | 347 | # A few backends return such errors (local, paramiko, etc) |
238 | 306 | try: | 348 | if e.errno == errno.EACCES: |
239 | 307 | kwargs = {"raise_errors" : True} | 349 | return log.ErrorCode.backend_permission_denied |
240 | 308 | return fn(*args, **kwargs) | 350 | elif e.errno == errno.ENOENT: |
241 | 309 | except Exception as e: | 351 | return log.ErrorCode.backend_not_found |
242 | 310 | log.Warn(_("Attempt %s failed: %s: %s") | 352 | elif e.errno == errno.ENOSPC: |
243 | 311 | % (n, e.__class__.__name__, str(e))) | 353 | return log.ErrorCode.backend_no_space |
244 | 312 | log.Debug(_("Backtrace of previous error: %s") | 354 | return log.ErrorCode.backend_error |
245 | 313 | % exception_traceback()) | 355 | |
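The new `_get_code_from_exception` helper (right-hand column above) maps OS-level exceptions to coarse backend error codes via `errno`. A standalone sketch of that mapping, using illustrative string constants in place of duplicity's `log.ErrorCode` values, might look like:

```python
import errno

# Hypothetical stand-ins for duplicity's log.ErrorCode constants.
BACKEND_ERROR = 'backend_error'
PERMISSION_DENIED = 'backend_permission_denied'
NOT_FOUND = 'backend_not_found'
NO_SPACE = 'backend_no_space'

def code_from_exception(e):
    """Map an OS-level exception to a coarse backend error code."""
    if hasattr(e, 'errno'):
        # A few backends raise plain OSErrors (local, paramiko, etc.)
        if e.errno == errno.EACCES:
            return PERMISSION_DENIED
        elif e.errno == errno.ENOENT:
            return NOT_FOUND
        elif e.errno == errno.ENOSPC:
            return NO_SPACE
    return BACKEND_ERROR
```

Anything without a recognizable `errno` falls through to the generic code, matching the diff's final `return log.ErrorCode.backend_error`.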
246 | 314 | if isinstance(e, TemporaryLoadException): | 356 | def retry(operation, fatal=True): |
247 | 315 | time.sleep(30) # wait longer before trying again | 357 | # Decorators with arguments introduce a new level of indirection. So we |
248 | 316 | else: | 358 | # have to return a decorator function (which itself returns a function!) |
249 | 317 | time.sleep(10) # wait a bit before trying again | 359 | def outer_retry(fn): |
250 | 318 | # Now try one last time, but fatal-log instead of raising errors | 360 | def inner_retry(self, *args): |
251 | 319 | kwargs = {"raise_errors" : False} | 361 | for n in range(1, globals.num_retries + 1): |
219 | 320 | return fn(*args, **kwargs) | ||
220 | 321 | return iterate | ||
221 | 322 | |||
222 | 323 | # same as above, a bit dumber and always dies fatally if last trial fails | ||
223 | 324 | # hence no need for the raise_errors var ;), we really catch everything here | ||
224 | 325 | # as we don't know what the underlying code comes up with and we really *do* | ||
225 | 326 | # want to retry globals.num_retries times under all circumstances | ||
226 | 327 | def retry_fatal(fn): | ||
227 | 328 | def _retry_fatal(self, *args): | ||
228 | 329 | try: | ||
229 | 330 | n = 0 | ||
230 | 331 | for n in range(1, globals.num_retries): | ||
252 | 332 | try: | 362 | try: |
253 | 333 | self.retry_count = n | ||
254 | 334 | return fn(self, *args) | 363 | return fn(self, *args) |
256 | 335 | except FatalBackendError as e: | 364 | except FatalBackendException as e: |
257 | 336 | # die on fatal errors | 365 | # die on fatal errors |
258 | 337 | raise e | 366 | raise e |
259 | 338 | except Exception as e: | 367 | except Exception as e: |
260 | 339 | # retry on anything else | 368 | # retry on anything else |
261 | 340 | log.Warn(_("Attempt %s failed. %s: %s") | ||
262 | 341 | % (n, e.__class__.__name__, str(e))) | ||
263 | 342 | log.Debug(_("Backtrace of previous error: %s") | 369 | log.Debug(_("Backtrace of previous error: %s") |
264 | 343 | % exception_traceback()) | 370 | % exception_traceback()) |
278 | 344 | time.sleep(10) # wait a bit before trying again | 371 | at_end = n == globals.num_retries |
279 | 345 | # final trial, die on exception | 372 | code = _get_code_from_exception(self.backend, operation, e) |
280 | 346 | self.retry_count = n+1 | 373 | if code == log.ErrorCode.backend_not_found: |
281 | 347 | return fn(self, *args) | 374 | # If we tried to do something, but the file just isn't there, |
282 | 348 | except Exception as e: | 375 | # no need to retry. |
283 | 349 | log.Debug(_("Backtrace of previous error: %s") | 376 | at_end = True |
284 | 350 | % exception_traceback()) | 377 | if at_end and fatal: |
285 | 351 | log.FatalError(_("Giving up after %s attempts. %s: %s") | 378 | def make_filename(f): |
286 | 352 | % (self.retry_count, e.__class__.__name__, str(e)), | 379 | if isinstance(f, path.ROPath): |
287 | 353 | log.ErrorCode.backend_error) | 380 | return util.escape(f.name) |
288 | 354 | self.retry_count = 0 | 381 | else: |
289 | 355 | 382 | return util.escape(f) | |
290 | 356 | return _retry_fatal | 383 | extra = ' '.join([operation] + [make_filename(x) for x in args if x]) |
291 | 384 | log.FatalError(_("Giving up after %s attempts. %s: %s") | ||
292 | 385 | % (n, e.__class__.__name__, | ||
293 | 386 | str(e)), code=code, extra=extra) | ||
294 | 387 | else: | ||
295 | 388 | log.Warn(_("Attempt %s failed. %s: %s") | ||
296 | 389 | % (n, e.__class__.__name__, str(e))) | ||
297 | 390 | if not at_end: | ||
298 | 391 | if isinstance(e, TemporaryLoadException): | ||
299 | 392 | time.sleep(90) # wait longer before trying again | ||
300 | 393 | else: | ||
301 | 394 | time.sleep(30) # wait a bit before trying again | ||
302 | 395 | if hasattr(self.backend, '_retry_cleanup'): | ||
303 | 396 | self.backend._retry_cleanup() | ||
304 | 397 | |||
305 | 398 | return inner_retry | ||
306 | 399 | return outer_retry | ||
307 | 400 | |||
308 | 357 | 401 | ||
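As the comment in the new `retry` notes, decorators with arguments add a level of indirection: a factory returns the decorator, which returns the wrapper. A minimal self-contained sketch of that pattern (names, the `num_retries` parameter, and the `Flaky` test class are illustrative, not duplicity's real API) might be:

```python
import functools

def retry(operation, fatal=True, num_retries=3):
    """Decorator factory: returns a decorator that retries `fn`."""
    def outer_retry(fn):
        @functools.wraps(fn)
        def inner_retry(self, *args):
            for n in range(1, num_retries + 1):
                try:
                    return fn(self, *args)
                except Exception:
                    if n == num_retries:
                        if fatal:
                            raise  # last attempt: propagate
                        return None  # non-fatal: give up quietly
                    # a real implementation would log and sleep here
        return inner_retry
    return outer_retry

class Flaky:
    """Fails `failures` times, then succeeds."""
    def __init__(self, failures):
        self.failures = failures
        self.calls = 0

    @retry('get', fatal=True)
    def get(self):
        self.calls += 1
        if self.calls <= self.failures:
            raise IOError("transient")
        return "ok"
```

The real decorator additionally consults `_retry_cleanup`, maps the exception to an error code, and skips retries entirely for not-found errors.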
309 | 358 | class Backend(object): | 402 | class Backend(object): |
310 | 359 | """ | 403 | """ |
325 | 360 | Represents a generic duplicity backend, capable of storing and | 404 | See README in backends directory for information on how to write a backend. |
312 | 361 | retrieving files. | ||
313 | 362 | |||
314 | 363 | Concrete sub-classes are expected to implement: | ||
315 | 364 | |||
316 | 365 | - put | ||
317 | 366 | - get | ||
318 | 367 | - list | ||
319 | 368 | - delete | ||
320 | 369 | - close (if needed) | ||
321 | 370 | |||
322 | 371 | Optional: | ||
323 | 372 | |||
324 | 373 | - move | ||
326 | 374 | """ | 405 | """ |
327 | 375 | |||
328 | 376 | def __init__(self, parsed_url): | 406 | def __init__(self, parsed_url): |
329 | 377 | self.parsed_url = parsed_url | 407 | self.parsed_url = parsed_url |
330 | 378 | 408 | ||
331 | 379 | def put(self, source_path, remote_filename = None): | ||
332 | 380 | """ | ||
333 | 381 | Transfer source_path (Path object) to remote_filename (string) | ||
334 | 382 | |||
335 | 383 | If remote_filename is None, get the filename from the last | ||
336 | 384 | path component of pathname. | ||
337 | 385 | """ | ||
338 | 386 | raise NotImplementedError() | ||
339 | 387 | |||
340 | 388 | def move(self, source_path, remote_filename = None): | ||
341 | 389 | """ | ||
342 | 390 | Move source_path (Path object) to remote_filename (string) | ||
343 | 391 | |||
344 | 392 | Same as put(), but unlinks source_path in the process. This allows the | ||
345 | 393 | local backend to do this more efficiently using rename. | ||
346 | 394 | """ | ||
347 | 395 | self.put(source_path, remote_filename) | ||
348 | 396 | source_path.delete() | ||
349 | 397 | |||
350 | 398 | def get(self, remote_filename, local_path): | ||
351 | 399 | """Retrieve remote_filename and place in local_path""" | ||
352 | 400 | raise NotImplementedError() | ||
353 | 401 | |||
354 | 402 | def list(self): | ||
355 | 403 | """ | ||
356 | 404 | Return list of filenames (byte strings) present in backend | ||
357 | 405 | """ | ||
358 | 406 | def tobytes(filename): | ||
359 | 407 | "Convert a (maybe unicode) filename to bytes" | ||
360 | 408 | if isinstance(filename, unicode): | ||
361 | 409 | # There shouldn't be any encoding errors for files we care | ||
362 | 410 | # about, since duplicity filenames are ascii. But user files | ||
363 | 411 | # may be in the same directory. So just replace characters. | ||
364 | 412 | return filename.encode(sys.getfilesystemencoding(), 'replace') | ||
365 | 413 | else: | ||
366 | 414 | return filename | ||
367 | 415 | |||
368 | 416 | if hasattr(self, '_list'): | ||
369 | 417 | # Make sure that duplicity internals only ever see byte strings | ||
370 | 418 | # for filenames, no matter what the backend thinks it is talking. | ||
371 | 419 | return [tobytes(x) for x in self._list()] | ||
372 | 420 | else: | ||
373 | 421 | raise NotImplementedError() | ||
374 | 422 | |||
375 | 423 | def delete(self, filename_list): | ||
376 | 424 | """ | ||
377 | 425 | Delete each filename in filename_list, in order if possible. | ||
378 | 426 | """ | ||
379 | 427 | raise NotImplementedError() | ||
380 | 428 | |||
381 | 429 | # Should never cause FatalError. | ||
382 | 430 | # Returns a dictionary of dictionaries. The outer dictionary maps | ||
383 | 431 | # filenames to metadata dictionaries. Supported metadata are: | ||
384 | 432 | # | ||
385 | 433 | # 'size': if >= 0, size of file | ||
386 | 434 | # if -1, file is not found | ||
387 | 435 | # if None, error querying file | ||
388 | 436 | # | ||
389 | 437 | # Returned dictionary is guaranteed to contain a metadata dictionary for | ||
390 | 438 | # each filename, but not all metadata are guaranteed to be present. | ||
391 | 439 | def query_info(self, filename_list, raise_errors=True): | ||
392 | 440 | """ | ||
393 | 441 | Return metadata about each filename in filename_list | ||
394 | 442 | """ | ||
395 | 443 | info = {} | ||
396 | 444 | if hasattr(self, '_query_list_info'): | ||
397 | 445 | info = self._query_list_info(filename_list) | ||
398 | 446 | elif hasattr(self, '_query_file_info'): | ||
399 | 447 | for filename in filename_list: | ||
400 | 448 | info[filename] = self._query_file_info(filename) | ||
401 | 449 | |||
402 | 450 | # Fill out any missing entries (may happen if backend has no support | ||
403 | 451 | # or its query_list support is lazy) | ||
404 | 452 | for filename in filename_list: | ||
405 | 453 | if filename not in info: | ||
406 | 454 | info[filename] = {} | ||
407 | 455 | |||
408 | 456 | return info | ||
409 | 457 | |||
410 | 458 | """ use getpass by default, inherited backends may overwrite this behaviour """ | 409 | """ use getpass by default, inherited backends may overwrite this behaviour """ |
411 | 459 | use_getpass = True | 410 | use_getpass = True |
412 | 460 | 411 | ||
413 | @@ -493,27 +444,7 @@ | |||
414 | 493 | else: | 444 | else: |
415 | 494 | return commandline | 445 | return commandline |
416 | 495 | 446 | ||
438 | 496 | """ | 447 | def __subprocess_popen(self, commandline): |
418 | 497 | DEPRECATED: | ||
419 | 498 | run_command(_persist) - legacy wrappers for subprocess_popen(_persist) | ||
420 | 499 | """ | ||
421 | 500 | def run_command(self, commandline): | ||
422 | 501 | return self.subprocess_popen(commandline) | ||
423 | 502 | def run_command_persist(self, commandline): | ||
424 | 503 | return self.subprocess_popen_persist(commandline) | ||
425 | 504 | |||
426 | 505 | """ | ||
427 | 506 | DEPRECATED: | ||
428 | 507 | popen(_persist) - legacy wrappers for subprocess_popen(_persist) | ||
429 | 508 | """ | ||
430 | 509 | def popen(self, commandline): | ||
431 | 510 | result, stdout, stderr = self.subprocess_popen(commandline) | ||
432 | 511 | return stdout | ||
433 | 512 | def popen_persist(self, commandline): | ||
434 | 513 | result, stdout, stderr = self.subprocess_popen_persist(commandline) | ||
435 | 514 | return stdout | ||
436 | 515 | |||
437 | 516 | def _subprocess_popen(self, commandline): | ||
439 | 517 | """ | 448 | """ |
440 | 518 | For internal use. | 449 | For internal use. |
441 | 519 | Execute the given command line, interpreted as a shell command. | 450 | Execute the given command line, interpreted as a shell command. |
442 | @@ -525,6 +456,10 @@ | |||
443 | 525 | 456 | ||
444 | 526 | return p.returncode, stdout, stderr | 457 | return p.returncode, stdout, stderr |
445 | 527 | 458 | ||
446 | 459 | """ a dictionary for breaking exceptions, syntax is | ||
447 | 460 | { 'command' : [ code1, code2 ], ... } see ftpbackend for an example """ | ||
448 | 461 | popen_breaks = {} | ||
449 | 462 | |||
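The `popen_breaks` dictionary above lets a subclass declare per-command exit codes that should not raise. A toy sketch of the lookup (the `rm` entry and class names are illustrative; the real usage is in ftpbackend, as the comment says) could be:

```python
import re

class CommandRunner:
    # Exit codes to ignore, keyed by command name. For example, a
    # hypothetical backend might treat 'rm' exit code 1 (file already
    # gone) as success.
    popen_breaks = {'rm': [1]}

    def check_result(self, commandline, result):
        """Return True if `result` is an ignorable exit code for
        the first word of `commandline`."""
        m = re.search(r"^\s*(\S+)", commandline)
        try:
            return result in self.popen_breaks[m.group(1)]
        except (KeyError, AttributeError):
            return False
```

In the diff, a miss in this table raises `BackendException` with the command's output attached.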
450 | 528 | def subprocess_popen(self, commandline): | 463 | def subprocess_popen(self, commandline): |
451 | 529 | """ | 464 | """ |
452 | 530 | Execute the given command line with error check. | 465 | Execute the given command line with error check. |
453 | @@ -534,54 +469,179 @@ | |||
454 | 534 | """ | 469 | """ |
455 | 535 | private = self.munge_password(commandline) | 470 | private = self.munge_password(commandline) |
456 | 536 | log.Info(_("Reading results of '%s'") % private) | 471 | log.Info(_("Reading results of '%s'") % private) |
458 | 537 | result, stdout, stderr = self._subprocess_popen(commandline) | 472 | result, stdout, stderr = self.__subprocess_popen(commandline) |
459 | 538 | if result != 0: | 473 | if result != 0: |
460 | 539 | raise BackendException("Error running '%s'" % private) | ||
461 | 540 | return result, stdout, stderr | ||
462 | 541 | |||
463 | 542 | """ a dictionary for persist breaking exceptions, syntax is | ||
464 | 543 | { 'command' : [ code1, code2 ], ... } see ftpbackend for an example """ | ||
465 | 544 | popen_persist_breaks = {} | ||
466 | 545 | |||
467 | 546 | def subprocess_popen_persist(self, commandline): | ||
468 | 547 | """ | ||
469 | 548 | Execute the given command line with error check. | ||
470 | 549 | Retries globals.num_retries times with 30s delay. | ||
471 | 550 | Returns int Exitcode, string StdOut, string StdErr | ||
472 | 551 | |||
473 | 552 | Raise a BackendException on failure. | ||
474 | 553 | """ | ||
475 | 554 | private = self.munge_password(commandline) | ||
476 | 555 | |||
477 | 556 | for n in range(1, globals.num_retries+1): | ||
478 | 557 | # sleep before retry | ||
479 | 558 | if n > 1: | ||
480 | 559 | time.sleep(30) | ||
481 | 560 | log.Info(_("Reading results of '%s'") % private) | ||
482 | 561 | result, stdout, stderr = self._subprocess_popen(commandline) | ||
483 | 562 | if result == 0: | ||
484 | 563 | return result, stdout, stderr | ||
485 | 564 | |||
486 | 565 | try: | 474 | try: |
487 | 566 | m = re.search("^\s*([\S]+)", commandline) | 475 | m = re.search("^\s*([\S]+)", commandline) |
488 | 567 | cmd = m.group(1) | 476 | cmd = m.group(1) |
490 | 568 | ignores = self.popen_persist_breaks[ cmd ] | 477 | ignores = self.popen_breaks[ cmd ] |
491 | 569 | ignores.index(result) | 478 | ignores.index(result) |
492 | 570 | """ ignore a predefined set of error codes """ | 479 | """ ignore a predefined set of error codes """ |
493 | 571 | return 0, '', '' | 480 | return 0, '', '' |
494 | 572 | except (KeyError, ValueError): | 481 | except (KeyError, ValueError): |
507 | 573 | pass | 482 | raise BackendException("Error running '%s': returned %d, with output:\n%s" % |
508 | 574 | 483 | (private, result, stdout + '\n' + stderr)) | |
509 | 575 | log.Warn(ngettext("Running '%s' failed with code %d (attempt #%d)", | 484 | return result, stdout, stderr |
510 | 576 | "Running '%s' failed with code %d (attempt #%d)", n) % | 485 | |
511 | 577 | (private, result, n)) | 486 | |
512 | 578 | if stdout or stderr: | 487 | class BackendWrapper(object): |
513 | 579 | log.Warn(_("Error is:\n%s") % stderr + (stderr and stdout and "\n") + stdout) | 488 | """ |
514 | 580 | 489 | Represents a generic duplicity backend, capable of storing and | |
515 | 581 | log.Warn(ngettext("Giving up trying to execute '%s' after %d attempt", | 490 | retrieving files. |
516 | 582 | "Giving up trying to execute '%s' after %d attempts", | 491 | """ |
517 | 583 | globals.num_retries) % (private, globals.num_retries)) | 492 | |
518 | 584 | raise BackendException("Error running '%s'" % private) | 493 | def __init__(self, backend): |
519 | 494 | self.backend = backend | ||
520 | 495 | |||
521 | 496 | def __do_put(self, source_path, remote_filename): | ||
522 | 497 | if hasattr(self.backend, '_put'): | ||
523 | 498 | log.Info(_("Writing %s") % remote_filename) | ||
524 | 499 | self.backend._put(source_path, remote_filename) | ||
525 | 500 | else: | ||
526 | 501 | raise NotImplementedError() | ||
527 | 502 | |||
528 | 503 | @retry('put', fatal=True) | ||
529 | 504 | def put(self, source_path, remote_filename=None): | ||
530 | 505 | """ | ||
531 | 506 | Transfer source_path (Path object) to remote_filename (string) | ||
532 | 507 | |||
533 | 508 | If remote_filename is None, get the filename from the last | ||
534 | 509 | path component of pathname. | ||
535 | 510 | """ | ||
536 | 511 | if not remote_filename: | ||
537 | 512 | remote_filename = source_path.get_filename() | ||
538 | 513 | self.__do_put(source_path, remote_filename) | ||
539 | 514 | |||
540 | 515 | @retry('move', fatal=True) | ||
541 | 516 | def move(self, source_path, remote_filename=None): | ||
542 | 517 | """ | ||
543 | 518 | Move source_path (Path object) to remote_filename (string) | ||
544 | 519 | |||
545 | 520 | Same as put(), but unlinks source_path in the process. This allows the | ||
546 | 521 | local backend to do this more efficiently using rename. | ||
547 | 522 | """ | ||
548 | 523 | if not remote_filename: | ||
549 | 524 | remote_filename = source_path.get_filename() | ||
550 | 525 | if hasattr(self.backend, '_move'): | ||
551 | 526 | if self.backend._move(source_path, remote_filename) is not False: | ||
552 | 527 | source_path.setdata() | ||
553 | 528 | return | ||
554 | 529 | self.__do_put(source_path, remote_filename) | ||
555 | 530 | source_path.delete() | ||
556 | 531 | |||
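The `move` logic above prefers an optimized `_move` and falls back to put-then-delete, with `False` from `_move` meaning "I could not do this cheaply". A toy model of just that dispatch (all names illustrative; real backends deal in `Path` objects):

```python
class MoveWrapper:
    """Prefer an optimized _move; fall back to _put plus delete."""
    def __init__(self, backend):
        self.backend = backend

    def move(self, source, name):
        if hasattr(self.backend, '_move'):
            # _move may return False to signal "fall back to copy".
            if self.backend._move(source, name) is not False:
                return 'moved'
        self.backend._put(source, name)
        # a real implementation would delete `source` here
        return 'copied'

class RenameBackend:
    def _move(self, source, name):
        return True  # e.g. an os.rename on a local filesystem

class CopyOnlyBackend:
    def _put(self, source, name):
        pass
```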
557 | 532 | @retry('get', fatal=True) | ||
558 | 533 | def get(self, remote_filename, local_path): | ||
559 | 534 | """Retrieve remote_filename and place in local_path""" | ||
560 | 535 | if hasattr(self.backend, '_get'): | ||
561 | 536 | self.backend._get(remote_filename, local_path) | ||
562 | 537 | if not local_path.exists(): | ||
563 | 538 | raise BackendException(_("File %s not found locally after get " | ||
564 | 539 | "from backend") % util.ufn(local_path.name)) | ||
565 | 540 | local_path.setdata() | ||
566 | 541 | else: | ||
567 | 542 | raise NotImplementedError() | ||
568 | 543 | |||
569 | 544 | @retry('list', fatal=True) | ||
570 | 545 | def list(self): | ||
571 | 546 | """ | ||
572 | 547 | Return list of filenames (byte strings) present in backend | ||
573 | 548 | """ | ||
574 | 549 | def tobytes(filename): | ||
575 | 550 | "Convert a (maybe unicode) filename to bytes" | ||
576 | 551 | if isinstance(filename, unicode): | ||
577 | 552 | # There shouldn't be any encoding errors for files we care | ||
578 | 553 | # about, since duplicity filenames are ascii. But user files | ||
579 | 554 | # may be in the same directory. So just replace characters. | ||
580 | 555 | return filename.encode(sys.getfilesystemencoding(), 'replace') | ||
581 | 556 | else: | ||
582 | 557 | return filename | ||
583 | 558 | |||
584 | 559 | if hasattr(self.backend, '_list'): | ||
585 | 560 | # Make sure that duplicity internals only ever see byte strings | ||
586 | 561 | # for filenames, no matter what the backend thinks it is talking. | ||
587 | 562 | return [tobytes(x) for x in self.backend._list()] | ||
588 | 563 | else: | ||
589 | 564 | raise NotImplementedError() | ||
590 | 565 | |||
591 | 566 | def delete(self, filename_list): | ||
592 | 567 | """ | ||
593 | 568 | Delete each filename in filename_list, in order if possible. | ||
594 | 569 | """ | ||
595 | 570 | assert type(filename_list) is not types.StringType | ||
596 | 571 | if hasattr(self.backend, '_delete_list'): | ||
597 | 572 | self._do_delete_list(filename_list) | ||
598 | 573 | elif hasattr(self.backend, '_delete'): | ||
599 | 574 | for filename in filename_list: | ||
600 | 575 | self._do_delete(filename) | ||
601 | 576 | else: | ||
602 | 577 | raise NotImplementedError() | ||
603 | 578 | |||
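The delete dispatch above can be sketched in isolation: batch `_delete_list` wins if present, otherwise each file goes through `_delete` (class names here are illustrative stand-ins, not the real backends):

```python
class DeleteDispatch:
    """Prefer a batch _delete_list; fall back to per-file _delete."""
    def __init__(self, backend):
        self.backend = backend

    def delete(self, filename_list):
        assert not isinstance(filename_list, str)
        if hasattr(self.backend, '_delete_list'):
            self.backend._delete_list(filename_list)
        elif hasattr(self.backend, '_delete'):
            for filename in filename_list:
                self.backend._delete(filename)
        else:
            raise NotImplementedError()

class BatchBackend:
    def __init__(self):
        self.calls = []
    def _delete_list(self, names):
        self.calls.append(list(names))

class SingleBackend:
    def __init__(self):
        self.calls = []
    def _delete(self, name):
        self.calls.append(name)
```

In the diff both paths additionally go through `@retry('delete', fatal=False)` wrappers, which this sketch omits.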
604 | 579 | @retry('delete', fatal=False) | ||
605 | 580 | def _do_delete_list(self, filename_list): | ||
606 | 581 | self.backend._delete_list(filename_list) | ||
607 | 582 | |||
608 | 583 | @retry('delete', fatal=False) | ||
609 | 584 | def _do_delete(self, filename): | ||
610 | 585 | self.backend._delete(filename) | ||
611 | 586 | |||
612 | 587 | # Should never cause FatalError. | ||
613 | 588 | # Returns a dictionary of dictionaries. The outer dictionary maps | ||
614 | 589 | # filenames to metadata dictionaries. Supported metadata are: | ||
615 | 590 | # | ||
616 | 591 | # 'size': if >= 0, size of file | ||
617 | 592 | # if -1, file is not found | ||
618 | 593 | # if None, error querying file | ||
619 | 594 | # | ||
620 | 595 | # Returned dictionary is guaranteed to contain a metadata dictionary for | ||
621 | 596 | # each filename, and all metadata are guaranteed to be present. | ||
622 | 597 | def query_info(self, filename_list): | ||
623 | 598 | """ | ||
624 | 599 | Return metadata about each filename in filename_list | ||
625 | 600 | """ | ||
626 | 601 | info = {} | ||
627 | 602 | if hasattr(self.backend, '_query_list'): | ||
628 | 603 | info = self._do_query_list(filename_list) | ||
629 | 604 | if info is None: | ||
630 | 605 | info = {} | ||
631 | 606 | elif hasattr(self.backend, '_query'): | ||
632 | 607 | for filename in filename_list: | ||
633 | 608 | info[filename] = self._do_query(filename) | ||
634 | 609 | |||
635 | 610 | # Fill out any missing entries (may happen if backend has no support | ||
636 | 611 | # or its query_list support is lazy) | ||
637 | 612 | for filename in filename_list: | ||
638 | 613 | if filename not in info or info[filename] is None: | ||
639 | 614 | info[filename] = {} | ||
640 | 615 | for metadata in ['size']: | ||
641 | 616 | info[filename].setdefault(metadata, None) | ||
642 | 617 | |||
643 | 618 | return info | ||
644 | 619 | |||
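The fill-in step of `query_info` guarantees every filename a metadata dictionary with a `'size'` key, even when the backend's query support is lazy or missing. That normalization, extracted as a standalone sketch:

```python
def normalize_query_info(info, filename_list):
    """Guarantee a metadata dict with a 'size' key for every filename.

    Missing or None entries become {}, and missing metadata keys
    default to None (meaning "unknown").
    """
    for filename in filename_list:
        if filename not in info or info[filename] is None:
            info[filename] = {}
        for metadata in ['size']:
            info[filename].setdefault(metadata, None)
    return info
```

This is why the new docstring can promise that "all metadata are guaranteed to be present", where the old `Backend.query_info` only promised the outer dictionary.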
645 | 620 | @retry('query', fatal=False) | ||
646 | 621 | def _do_query_list(self, filename_list): | ||
647 | 622 | info = self.backend._query_list(filename_list) | ||
648 | 623 | if info is None: | ||
649 | 624 | info = {} | ||
650 | 625 | return info | ||
651 | 626 | |||
652 | 627 | @retry('query', fatal=False) | ||
653 | 628 | def _do_query(self, filename): | ||
654 | 629 | try: | ||
655 | 630 | return self.backend._query(filename) | ||
656 | 631 | except Exception as e: | ||
657 | 632 | code = _get_code_from_exception(self.backend, 'query', e) | ||
658 | 633 | if code == log.ErrorCode.backend_not_found: | ||
659 | 634 | return {'size': -1} | ||
660 | 635 | else: | ||
661 | 636 | raise e | ||
662 | 637 | |||
663 | 638 | def close(self): | ||
664 | 639 | """ | ||
665 | 640 | Close the backend, releasing any resources held and | ||
666 | 641 | invalidating any file objects obtained from the backend. | ||
667 | 642 | """ | ||
668 | 643 | if hasattr(self.backend, '_close'): | ||
669 | 644 | self.backend._close() | ||
670 | 585 | 645 | ||
671 | 586 | def get_fileobj_read(self, filename, parseresults = None): | 646 | def get_fileobj_read(self, filename, parseresults = None): |
672 | 587 | """ | 647 | """ |
673 | @@ -598,37 +658,6 @@ | |||
674 | 598 | tdp.setdata() | 658 | tdp.setdata() |
675 | 599 | return tdp.filtered_open_with_delete("rb") | 659 | return tdp.filtered_open_with_delete("rb") |
676 | 600 | 660 | ||
677 | 601 | def get_fileobj_write(self, filename, | ||
678 | 602 | parseresults = None, | ||
679 | 603 | sizelist = None): | ||
680 | 604 | """ | ||
681 | 605 | Return fileobj opened for writing, which will cause the file | ||
682 | 606 | to be written to the backend on close(). | ||
683 | 607 | |||
684 | 608 | The file will be encoded as specified in parseresults (or as | ||
685 | 609 | read from the filename), and stored in a temp file until it | ||
686 | 610 | can be copied over and deleted. | ||
687 | 611 | |||
688 | 612 | If sizelist is not None, it should be set to an empty list. | ||
689 | 613 | The number of bytes will be inserted into the list. | ||
690 | 614 | """ | ||
691 | 615 | if not parseresults: | ||
692 | 616 | parseresults = file_naming.parse(filename) | ||
693 | 617 | assert parseresults, u"Filename %s not correctly parsed" % util.ufn(filename) | ||
694 | 618 | tdp = dup_temp.new_tempduppath(parseresults) | ||
695 | 619 | |||
696 | 620 | def close_file_hook(): | ||
697 | 621 | """This is called when returned fileobj is closed""" | ||
698 | 622 | self.put(tdp, filename) | ||
699 | 623 | if sizelist is not None: | ||
700 | 624 | tdp.setdata() | ||
701 | 625 | sizelist.append(tdp.getsize()) | ||
702 | 626 | tdp.delete() | ||
703 | 627 | |||
704 | 628 | fh = dup_temp.FileobjHooked(tdp.filtered_open("wb")) | ||
705 | 629 | fh.addhook(close_file_hook) | ||
706 | 630 | return fh | ||
707 | 631 | |||
708 | 632 | def get_data(self, filename, parseresults = None): | 661 | def get_data(self, filename, parseresults = None): |
709 | 633 | """ | 662 | """ |
710 | 634 | Retrieve a file from backend, process it, return contents. | 663 | Retrieve a file from backend, process it, return contents. |
711 | @@ -637,18 +666,3 @@ | |||
712 | 637 | buf = fin.read() | 666 | buf = fin.read() |
713 | 638 | assert not fin.close() | 667 | assert not fin.close() |
714 | 639 | return buf | 668 | return buf |
715 | 640 | |||
716 | 641 | def put_data(self, buffer, filename, parseresults = None): | ||
717 | 642 | """ | ||
718 | 643 | Put buffer into filename on backend after processing. | ||
719 | 644 | """ | ||
720 | 645 | fout = self.get_fileobj_write(filename, parseresults) | ||
721 | 646 | fout.write(buffer) | ||
722 | 647 | assert not fout.close() | ||
723 | 648 | |||
724 | 649 | def close(self): | ||
725 | 650 | """ | ||
726 | 651 | Close the backend, releasing any resources held and | ||
727 | 652 | invalidating any file objects obtained from the backend. | ||
728 | 653 | """ | ||
729 | 654 | pass | ||
730 | 655 | 669 | ||
731 | === added file 'duplicity/backends/README' | |||
732 | --- duplicity/backends/README 1970-01-01 00:00:00 +0000 | |||
733 | +++ duplicity/backends/README 2014-04-28 02:49:55 +0000 | |||
734 | @@ -0,0 +1,79 @@ | |||
735 | 1 | = How to write a backend, in five easy steps! = | ||
736 | 2 | |||
737 | 3 | There are five main methods you want to implement: | ||
738 | 4 | |||
739 | 5 | __init__ - Initial setup | ||
740 | 6 | _get | ||
741 | 7 | - Get one file | ||
742 | 8 | - Retried if an exception is thrown | ||
743 | 9 | _put | ||
744 | 10 | - Upload one file | ||
745 | 11 | - Retried if an exception is thrown | ||
746 | 12 | _list | ||
747 | 13 | - List all files in the backend | ||
748 | 14 | - Return a list of filenames | ||
749 | 15 | - Retried if an exception is thrown | ||
750 | 16 | _delete | ||
751 | 17 | - Delete one file | ||
752 | 18 | - Retried if an exception is thrown | ||
753 | 19 | |||
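A minimal backend implementing the five required methods might look like the following sketch. `FooBackend` and its in-memory dict store are purely illustrative so the example can run standalone; a real backend subclasses `duplicity.backend.Backend` and talks to actual storage:

```python
class FooBackend:
    """Illustrative skeleton of the five required backend methods."""

    def __init__(self, parsed_url):
        self.parsed_url = parsed_url
        self.store = {}  # stand-in for remote storage

    def _put(self, source_path, remote_filename):
        self.store[remote_filename] = source_path

    def _get(self, remote_filename, local_path):
        return self.store[remote_filename]

    def _list(self):
        return list(self.store)

    def _delete(self, filename):
        del self.store[filename]
```

Note that each method can simply raise on failure: the wrapper's retry machinery handles logging, sleeping, and retrying.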
754 | 20 | There are other methods you may optionally implement: | ||
755 | 21 | |||
756 | 22 | _delete_list | ||
757 | 23 | - Delete list of files | ||
758 | 24 | - This is used in preference to _delete if defined | ||
759 | 25 | - Must gracefully handle individual file errors itself | ||
760 | 26 | - Retried if an exception is thrown | ||
761 | 27 | _query | ||
762 | 28 | - Query metadata of one file | ||
763 | 29 | - Return a dict with a 'size' key, and a file size value (-1 for not found) | ||
764 | 30 | - Retried if an exception is thrown | ||
765 | 31 | _query_list | ||
766 | 32 | - Query metadata of a list of files | ||
767 | 33 | - Return a dict of filenames mapping to a dict with a 'size' key, | ||
768 | 34 | and a file size value (-1 for not found) | ||
769 | 35 | - This is used in preference to _query if defined | ||
770 | 36 | - Must gracefully handle individual file errors itself | ||
771 | 37 | - Retried if an exception is thrown | ||
772 | 38 | _retry_cleanup | ||
773 | 39 | - If the backend wants to do any bookkeeping or connection resetting in between | ||
774 | 40 | retries, do it here. | ||
775 | 41 | _error_code | ||
776 | 42 | - Passed an exception thrown by your backend, return a log.ErrorCode that | ||
777 | 43 | corresponds to that exception | ||
778 | 44 | _move | ||
779 | 45 | - If your backend can move a local file into storage more efficiently than a | ||
780 | 46 | copy, implement this. If it's not implemented or returns False, _put will be | ||
781 | 47 | called instead (and duplicity will delete the source file after). | ||
782 | 48 | - Retried if an exception is thrown | ||
783 | 49 | _close | ||
784 | 50 | - If your backend needs to clean up after itself, do that here. | ||
785 | 51 | |||
786 | 52 | == Subclassing == | ||
787 | 53 | |||
788 | 54 | Always subclass from duplicity.backend.Backend | ||
789 | 55 | |||
790 | 56 | == Registering == | ||
791 | 57 | |||
792 | 58 | You can register your class as a single backend like so: | ||
793 | 59 | |||
794 | 60 | duplicity.backend.register_backend("foo", FooBackend) | ||
795 | 61 | |||
796 | 62 | This will allow a URL like so: foo://hostname/path | ||
797 | 63 | |||
798 | 64 | Or you can register your class as a meta backend like so: | ||
799 | 65 | duplicity.backend.register_backend_prefix("bar", BarBackend) | ||
800 | 66 | |||
801 | 67 | This will allow a URL like so: bar+foo://hostname/path. Your class will | ||
802 | 68 | be passed the inner URL, which it can either interpret as it likes or use to | ||
803 | 69 | create an inner backend instance with duplicity.backend.get_backend_object(url). | ||
804 | 70 | |||
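The two registration styles above can be modeled with a toy registry. This is only a sketch of the dispatch idea (the registry dicts and `get_backend_class` are illustrative; the real functions live in `duplicity.backend`):

```python
# Toy registry mirroring register_backend / register_backend_prefix.
_backends = {}
_prefixes = {}

def register_backend(scheme, cls):
    _backends[scheme] = cls

def register_backend_prefix(prefix, cls):
    _prefixes[prefix] = cls

def get_backend_class(url):
    """Resolve 'foo://...' via scheme, 'bar+foo://...' via prefix."""
    scheme = url.split('://', 1)[0]
    if '+' in scheme:
        prefix, _inner = scheme.split('+', 1)
        return _prefixes[prefix]  # meta backend sees the inner URL
    return _backends[scheme]

class FooBackend:
    pass

class BarBackend:
    pass

register_backend("foo", FooBackend)
register_backend_prefix("bar", BarBackend)
```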
805 | 71 | == Naming == | ||
806 | 72 | |||
807 | 73 | Any method that duplicity calls will start with one underscore. Please use | ||
808 | 74 | zero or two underscores in your method names to avoid conflicts. | ||
809 | 75 | |||
810 | 76 | == Testing == | ||
811 | 77 | |||
812 | 78 | Use "./testing/manual/backendtest.py foo://hostname/path" to test your new | ||
813 | 79 | backend. It will load your backend from your current branch. | ||
814 | 0 | 80 | ||
815 | === modified file 'duplicity/backends/_boto_multi.py' | |||
816 | --- duplicity/backends/_boto_multi.py 2014-04-17 21:54:04 +0000 | |||
817 | +++ duplicity/backends/_boto_multi.py 2014-04-28 02:49:55 +0000 | |||
818 | @@ -98,8 +98,8 @@ | |||
819 | 98 | 98 | ||
820 | 99 | self._pool = multiprocessing.Pool(processes=number_of_procs) | 99 | self._pool = multiprocessing.Pool(processes=number_of_procs) |
821 | 100 | 100 | ||
824 | 101 | def close(self): | 101 | def _close(self): |
825 | 102 | BotoSingleBackend.close(self) | 102 | BotoSingleBackend._close(self) |
826 | 103 | log.Debug("Closing pool") | 103 | log.Debug("Closing pool") |
827 | 104 | self._pool.terminate() | 104 | self._pool.terminate() |
828 | 105 | self._pool.join() | 105 | self._pool.join() |
829 | 106 | 106 | ||
830 | === modified file 'duplicity/backends/_boto_single.py' | |||
831 | --- duplicity/backends/_boto_single.py 2014-04-25 23:20:12 +0000 | |||
832 | +++ duplicity/backends/_boto_single.py 2014-04-28 02:49:55 +0000 | |||
833 | @@ -25,9 +25,7 @@ | |||
834 | 25 | import duplicity.backend | 25 | import duplicity.backend |
835 | 26 | from duplicity import globals | 26 | from duplicity import globals |
836 | 27 | from duplicity import log | 27 | from duplicity import log |
840 | 28 | from duplicity.errors import * #@UnusedWildImport | 28 | from duplicity.errors import FatalBackendException, BackendException |
838 | 29 | from duplicity.util import exception_traceback | ||
839 | 30 | from duplicity.backend import retry | ||
841 | 31 | from duplicity import progress | 29 | from duplicity import progress |
842 | 32 | 30 | ||
843 | 33 | BOTO_MIN_VERSION = "2.1.1" | 31 | BOTO_MIN_VERSION = "2.1.1" |
844 | @@ -163,7 +161,7 @@ | |||
         self.resetConnection()
         self._listed_keys = {}
 
-    def close(self):
+    def _close(self):
        del self._listed_keys
        self._listed_keys = {}
        self.bucket = None
@@ -185,137 +183,69 @@
         self.conn = get_connection(self.scheme, self.parsed_url, self.storage_uri)
         self.bucket = self.conn.lookup(self.bucket_name)
 
-    def put(self, source_path, remote_filename=None):
+    def _retry_cleanup(self):
+        self.resetConnection()
+
+    def _put(self, source_path, remote_filename):
         from boto.s3.connection import Location
         if globals.s3_european_buckets:
             if not globals.s3_use_new_style:
-                log.FatalError("European bucket creation was requested, but not new-style "
-                               "bucket addressing (--s3-use-new-style)",
-                               log.ErrorCode.s3_bucket_not_style)
-        #Network glitch may prevent first few attempts of creating/looking up a bucket
-        for n in range(1, globals.num_retries+1):
-            if self.bucket:
-                break
-            if n > 1:
-                time.sleep(30)
-                self.resetConnection()
+                raise FatalBackendException("European bucket creation was requested, but not new-style "
+                                            "bucket addressing (--s3-use-new-style)",
+                                            code=log.ErrorCode.s3_bucket_not_style)
+
+        if self.bucket is None:
             try:
-                try:
-                    self.bucket = self.conn.get_bucket(self.bucket_name, validate=True)
-                except Exception as e:
-                    if "NoSuchBucket" in str(e):
-                        if globals.s3_european_buckets:
-                            self.bucket = self.conn.create_bucket(self.bucket_name,
-                                                                  location=Location.EU)
-                        else:
-                            self.bucket = self.conn.create_bucket(self.bucket_name)
-                    else:
-                        raise e
-            except Exception as e:
-                log.Warn("Failed to create bucket (attempt #%d) '%s' failed (reason: %s: %s)"
-                         "" % (n, self.bucket_name,
-                               e.__class__.__name__,
-                               str(e)))
+                self.bucket = self.conn.get_bucket(self.bucket_name, validate=True)
+            except Exception as e:
+                if "NoSuchBucket" in str(e):
+                    if globals.s3_european_buckets:
+                        self.bucket = self.conn.create_bucket(self.bucket_name,
+                                                              location=Location.EU)
+                    else:
+                        self.bucket = self.conn.create_bucket(self.bucket_name)
+                else:
+                    raise
 
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
         key = self.bucket.new_key(self.key_prefix + remote_filename)
 
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
-                time.sleep(10)
-
-            if globals.s3_use_rrs:
-                storage_class = 'REDUCED_REDUNDANCY'
-            else:
-                storage_class = 'STANDARD'
-            log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
-            try:
-                if globals.s3_use_sse:
-                    headers = {
-                        'Content-Type': 'application/octet-stream',
-                        'x-amz-storage-class': storage_class,
-                        'x-amz-server-side-encryption': 'AES256'
-                    }
-                else:
-                    headers = {
-                        'Content-Type': 'application/octet-stream',
-                        'x-amz-storage-class': storage_class
-                    }
-
-                upload_start = time.time()
-                self.upload(source_path.name, key, headers)
-                upload_end = time.time()
-                total_s = abs(upload_end-upload_start) or 1  # prevent a zero value!
-                rough_upload_speed = os.path.getsize(source_path.name)/total_s
-                self.resetConnection()
-                log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" % (self.straight_url, remote_filename, storage_class, rough_upload_speed))
-                return
-            except Exception as e:
-                log.Warn("Upload '%s/%s' failed (attempt #%d, reason: %s: %s)"
-                         "" % (self.straight_url,
-                               remote_filename,
-                               n,
-                               e.__class__.__name__,
-                               str(e)))
-                log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
-                self.resetConnection()
-        log.Warn("Giving up trying to upload %s/%s after %d attempts" %
-                 (self.straight_url, remote_filename, globals.num_retries))
-        raise BackendException("Error uploading %s/%s" % (self.straight_url, remote_filename))
-
-    def get(self, remote_filename, local_path):
+        if globals.s3_use_rrs:
+            storage_class = 'REDUCED_REDUNDANCY'
+        else:
+            storage_class = 'STANDARD'
+        log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
+        if globals.s3_use_sse:
+            headers = {
+                'Content-Type': 'application/octet-stream',
+                'x-amz-storage-class': storage_class,
+                'x-amz-server-side-encryption': 'AES256'
+            }
+        else:
+            headers = {
+                'Content-Type': 'application/octet-stream',
+                'x-amz-storage-class': storage_class
+            }
+
+        upload_start = time.time()
+        self.upload(source_path.name, key, headers)
+        upload_end = time.time()
+        total_s = abs(upload_end-upload_start) or 1  # prevent a zero value!
+        rough_upload_speed = os.path.getsize(source_path.name)/total_s
+        log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" % (self.straight_url, remote_filename, storage_class, rough_upload_speed))
+
+    def _get(self, remote_filename, local_path):
         key_name = self.key_prefix + remote_filename
         self.pre_process_download(remote_filename, wait=True)
         key = self._listed_keys[key_name]
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
-                time.sleep(10)
-            log.Info("Downloading %s/%s" % (self.straight_url, remote_filename))
-            try:
-                self.resetConnection()
-                key.get_contents_to_filename(local_path.name)
-                local_path.setdata()
-                return
-            except Exception as e:
-                log.Warn("Download %s/%s failed (attempt #%d, reason: %s: %s)"
-                         "" % (self.straight_url,
-                               remote_filename,
-                               n,
-                               e.__class__.__name__,
-                               str(e)), 1)
-                log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
-
-        log.Warn("Giving up trying to download %s/%s after %d attempts" %
-                 (self.straight_url, remote_filename, globals.num_retries))
-        raise BackendException("Error downloading %s/%s" % (self.straight_url, remote_filename))
+        self.resetConnection()
+        key.get_contents_to_filename(local_path.name)
 
     def _list(self):
         if not self.bucket:
             raise BackendException("No connection to backend")
-
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(30)
-                self.resetConnection()
-            log.Info("Listing %s" % self.straight_url)
-            try:
-                return self._list_filenames_in_bucket()
-            except Exception as e:
-                log.Warn("List %s failed (attempt #%d, reason: %s: %s)"
-                         "" % (self.straight_url,
-                               n,
-                               e.__class__.__name__,
-                               str(e)), 1)
-                log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
-        log.Warn("Giving up trying to list %s after %d attempts" %
-                 (self.straight_url, globals.num_retries))
-        raise BackendException("Error listng %s" % self.straight_url)
-
-    def _list_filenames_in_bucket(self):
+        return self.list_filenames_in_bucket()
+
+    def list_filenames_in_bucket(self):
         # We add a 'd' to the prefix to make sure it is not null (for boto) and
         # to optimize the listing of our filenames, which always begin with 'd'.
         # This will cause a failure in the regression tests as below:
@@ -336,76 +266,37 @@
             pass
         return filename_list
 
-    def delete(self, filename_list):
-        for filename in filename_list:
-            self.bucket.delete_key(self.key_prefix + filename)
-            log.Debug("Deleted %s/%s" % (self.straight_url, filename))
+    def _delete(self, filename):
+        self.bucket.delete_key(self.key_prefix + filename)
 
-    @retry
-    def _query_file_info(self, filename, raise_errors=False):
-        try:
-            key = self.bucket.lookup(self.key_prefix + filename)
-            if key is None:
-                return {'size': -1}
-            return {'size': key.size}
-        except Exception as e:
-            log.Warn("Query %s/%s failed: %s"
-                     "" % (self.straight_url,
-                           filename,
-                           str(e)))
-            self.resetConnection()
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
+    def _query(self, filename):
+        key = self.bucket.lookup(self.key_prefix + filename)
+        if key is None:
+            return {'size': -1}
+        return {'size': key.size}
 
     def upload(self, filename, key, headers):
         key.set_contents_from_filename(filename, headers,
                                        cb=progress.report_transfer,
                                        num_cb=(max(2, 8 * globals.volsize / (1024 * 1024)))
                                        )  # Max num of callbacks = 8 times x megabyte
         key.close()
 
-    def pre_process_download(self, files_to_download, wait=False):
+    def pre_process_download(self, remote_filename, wait=False):
         # Used primarily to move files in Glacier to S3
-        if isinstance(files_to_download, (bytes, str, unicode)):
-            files_to_download = [files_to_download]
+        key_name = self.key_prefix + remote_filename
+        if not self._listed_keys.get(key_name, False):
+            self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
+        key = self._listed_keys[key_name]
 
-        for remote_filename in files_to_download:
-            success = False
-            for n in range(1, globals.num_retries+1):
-                if n > 1:
-                    # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
-                    time.sleep(10)
-                    self.resetConnection()
-                try:
-                    key_name = self.key_prefix + remote_filename
-                    if not self._listed_keys.get(key_name, False):
-                        self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
-                    key = self._listed_keys[key_name]
-
-                    if key.storage_class == "GLACIER":
-                        # We need to move the file out of glacier
-                        if not self.bucket.get_key(key.key).ongoing_restore:
-                            log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
-                            key.restore(days=1)  # Shouldn't need this again after 1 day
-                        if wait:
-                            log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
-                            while self.bucket.get_key(key.key).ongoing_restore:
-                                time.sleep(60)
-                                self.resetConnection()
-                            log.Info("File %s was successfully restored from Glacier" % remote_filename)
-                    success = True
-                    break
-                except Exception as e:
-                    log.Warn("Restoration from Glacier for file %s/%s failed (attempt #%d, reason: %s: %s)"
-                             "" % (self.straight_url,
-                                   remote_filename,
-                                   n,
-                                   e.__class__.__name__,
-                                   str(e)), 1)
-                    log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
-            if not success:
-                log.Warn("Giving up trying to restore %s/%s after %d attempts" %
-                         (self.straight_url, remote_filename, globals.num_retries))
-                raise BackendException("Error restoring %s/%s from Glacier to S3" % (self.straight_url, remote_filename))
+        if key.storage_class == "GLACIER":
+            # We need to move the file out of glacier
+            if not self.bucket.get_key(key.key).ongoing_restore:
+                log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
+                key.restore(days=1)  # Shouldn't need this again after 1 day
+            if wait:
+                log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
+                while self.bucket.get_key(key.key).ongoing_restore:
                    time.sleep(60)
                    self.resetConnection()
                log.Info("File %s was successfully restored from Glacier" % remote_filename)
 
=== modified file 'duplicity/backends/_cf_cloudfiles.py'
--- duplicity/backends/_cf_cloudfiles.py	2014-04-17 22:03:10 +0000
+++ duplicity/backends/_cf_cloudfiles.py	2014-04-28 02:49:55 +0000
@@ -19,14 +19,10 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-import time
 
 import duplicity.backend
-from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
-from duplicity.util import exception_traceback
-from duplicity.backend import retry
+from duplicity.errors import BackendException
 
 class CloudFilesBackend(duplicity.backend.Backend):
     """
@@ -69,124 +65,37 @@
                            log.ErrorCode.connection_failed)
         self.container = conn.create_container(container)
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        for n in range(1, globals.num_retries+1):
-            log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
-            try:
-                sobject = self.container.create_object(remote_filename)
-                sobject.load_from_filename(source_path.name)
-                return
-            except self.resp_exc as error:
-                log.Warn("Upload of '%s' failed (attempt %d): CloudFiles returned: %s %s"
-                         % (remote_filename, n, error.status, error.reason))
-            except Exception as e:
-                log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up uploading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error uploading '%s'" % remote_filename)
-
-    def get(self, remote_filename, local_path):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
-            try:
-                sobject = self.container.create_object(remote_filename)
-                f = open(local_path.name, 'w')
-                for chunk in sobject.stream():
-                    f.write(chunk)
-                local_path.setdata()
-                return
-            except self.resp_exc as resperr:
-                log.Warn("Download of '%s' failed (attempt %s): CloudFiles returned: %s %s"
-                         % (remote_filename, n, resperr.status, resperr.reason))
-            except Exception as e:
-                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up downloading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error downloading '%s/%s'"
-                               % (self.container, remote_filename))
+    def _error_code(self, operation, e):
+        from cloudfiles.errors import NoSuchObject
+        if isinstance(e, NoSuchObject):
+            return log.ErrorCode.backend_not_found
+        elif isinstance(e, self.resp_exc):
+            if e.status == 404:
+                return log.ErrorCode.backend_not_found
+
+    def _put(self, source_path, remote_filename):
+        sobject = self.container.create_object(remote_filename)
+        sobject.load_from_filename(source_path.name)
+
+    def _get(self, remote_filename, local_path):
+        sobject = self.container.create_object(remote_filename)
+        with open(local_path.name, 'wb') as f:
+            for chunk in sobject.stream():
+                f.write(chunk)
 
     def _list(self):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Listing '%s'" % (self.container))
-            try:
-                # Cloud Files will return a max of 10,000 objects.  We have
-                # to make multiple requests to get them all.
-                objs = self.container.list_objects()
-                keys = objs
-                while len(objs) == 10000:
-                    objs = self.container.list_objects(marker=keys[-1])
-                    keys += objs
-                return keys
-            except self.resp_exc as resperr:
-                log.Warn("Listing of '%s' failed (attempt %s): CloudFiles returned: %s %s"
-                         % (self.container, n, resperr.status, resperr.reason))
-            except Exception as e:
-                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
-                         % (self.container, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up listing of '%s' after %s attempts"
-                 % (self.container, globals.num_retries))
-        raise BackendException("Error listing '%s'"
-                               % (self.container))
-
-    def delete_one(self, remote_filename):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
-            try:
-                self.container.delete_object(remote_filename)
-                return
-            except self.resp_exc as resperr:
-                if n > 1 and resperr.status == 404:
-                    # We failed on a timeout, but delete succeeded on the server
-                    log.Warn("Delete of '%s' missing after retry - must have succeded earler" % remote_filename )
-                    return
-                log.Warn("Delete of '%s' failed (attempt %s): CloudFiles returned: %s %s"
-                         % (remote_filename, n, resperr.status, resperr.reason))
-            except Exception as e:
-                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up deleting '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error deleting '%s/%s'"
-                               % (self.container, remote_filename))
-
-    def delete(self, filename_list):
-        for file in filename_list:
-            self.delete_one(file)
-            log.Debug("Deleted '%s/%s'" % (self.container, file))
-
-    @retry
-    def _query_file_info(self, filename, raise_errors=False):
-        from cloudfiles.errors import NoSuchObject
-        try:
-            sobject = self.container.get_object(filename)
-            return {'size': sobject.size}
-        except NoSuchObject:
-            return {'size': -1}
-        except Exception as e:
-            log.Warn("Error querying '%s/%s': %s"
-                     "" % (self.container,
-                           filename,
-                           str(e)))
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
-
-duplicity.backend.register_backend("cf+http", CloudFilesBackend)
+        # Cloud Files will return a max of 10,000 objects.  We have
+        # to make multiple requests to get them all.
+        objs = self.container.list_objects()
+        keys = objs
+        while len(objs) == 10000:
+            objs = self.container.list_objects(marker=keys[-1])
+            keys += objs
+        return keys
+
+    def _delete(self, filename):
+        self.container.delete_object(filename)
+
+    def _query(self, filename):
+        sobject = self.container.get_object(filename)
+        return {'size': sobject.size}
 
=== modified file 'duplicity/backends/_cf_pyrax.py'
--- duplicity/backends/_cf_pyrax.py	2014-04-17 22:03:10 +0000
+++ duplicity/backends/_cf_pyrax.py	2014-04-28 02:49:55 +0000
@@ -19,14 +19,11 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-import time
 
 import duplicity.backend
-from duplicity import globals
 from duplicity import log
-from duplicity.errors import *  # @UnusedWildImport
-from duplicity.util import exception_traceback
-from duplicity.backend import retry
+from duplicity.errors import BackendException
+
 
 class PyraxBackend(duplicity.backend.Backend):
     """
@@ -69,126 +66,39 @@
 
         self.client_exc = pyrax.exceptions.ClientException
         self.nso_exc = pyrax.exceptions.NoSuchObject
-        self.cloudfiles = pyrax.cloudfiles
         self.container = pyrax.cloudfiles.create_container(container)
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
-            try:
-                self.container.upload_file(source_path.name, remote_filename)
-                return
-            except self.client_exc as error:
-                log.Warn("Upload of '%s' failed (attempt %d): pyrax returned: %s %s"
-                         % (remote_filename, n, error.__class__.__name__, error.message))
-            except Exception as e:
-                log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up uploading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error uploading '%s'" % remote_filename)
-
-    def get(self, remote_filename, local_path):
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
-            try:
-                sobject = self.container.get_object(remote_filename)
-                f = open(local_path.name, 'w')
-                f.write(sobject.get())
-                local_path.setdata()
-                return
-            except self.nso_exc:
-                return
-            except self.client_exc as resperr:
-                log.Warn("Download of '%s' failed (attempt %s): pyrax returned: %s %s"
-                         % (remote_filename, n, resperr.__class__.__name__, resperr.message))
-            except Exception as e:
-                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up downloading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error downloading '%s/%s'"
-                               % (self.container, remote_filename))
+    def _error_code(self, operation, e):
+        if isinstance(e, self.nso_exc):
+            return log.ErrorCode.backend_not_found
+        elif isinstance(e, self.client_exc):
+            if e.status == 404:
+                return log.ErrorCode.backend_not_found
+        elif hasattr(e, 'http_status'):
+            if e.http_status == 404:
+                return log.ErrorCode.backend_not_found
+
+    def _put(self, source_path, remote_filename):
+        self.container.upload_file(source_path.name, remote_filename)
+
+    def _get(self, remote_filename, local_path):
+        sobject = self.container.get_object(remote_filename)
+        with open(local_path.name, 'wb') as f:
+            f.write(sobject.get())
 
     def _list(self):
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Listing '%s'" % (self.container))
-            try:
-                # Cloud Files will return a max of 10,000 objects.  We have
-                # to make multiple requests to get them all.
-                objs = self.container.get_object_names()
-                keys = objs
-                while len(objs) == 10000:
-                    objs = self.container.get_object_names(marker = keys[-1])
-                    keys += objs
-                return keys
-            except self.client_exc as resperr:
-                log.Warn("Listing of '%s' failed (attempt %s): pyrax returned: %s %s"
-                         % (self.container, n, resperr.__class__.__name__, resperr.message))
-            except Exception as e:
-                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
-                         % (self.container, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up listing of '%s' after %s attempts"
-                 % (self.container, globals.num_retries))
-        raise BackendException("Error listing '%s'"
-                               % (self.container))
-
-    def delete_one(self, remote_filename):
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
-            try:
-                self.container.delete_object(remote_filename)
-                return
-            except self.client_exc as resperr:
-                if n > 1 and resperr.status == 404:
-                    # We failed on a timeout, but delete succeeded on the server
-                    log.Warn("Delete of '%s' missing after retry - must have succeded earler" % remote_filename)
-                    return
-                log.Warn("Delete of '%s' failed (attempt %s): pyrax returned: %s %s"
-                         % (remote_filename, n, resperr.__class__.__name__, resperr.message))
-            except Exception as e:
-                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up deleting '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error deleting '%s/%s'"
-                               % (self.container, remote_filename))
-
-    def delete(self, filename_list):
-        for file_ in filename_list:
-            self.delete_one(file_)
-            log.Debug("Deleted '%s/%s'" % (self.container, file_))
-
-    @retry
-    def _query_file_info(self, filename, raise_errors = False):
-        try:
-            sobject = self.container.get_object(filename)
-            return {'size': sobject.total_bytes}
-        except self.nso_exc:
-            return {'size': -1}
-        except Exception as e:
-            log.Warn("Error querying '%s/%s': %s"
-                     "" % (self.container,
-                           filename,
-                           str(e)))
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
-
-duplicity.backend.register_backend("cf+http", PyraxBackend)
+        # Cloud Files will return a max of 10,000 objects.  We have
+        # to make multiple requests to get them all.
+        objs = self.container.get_object_names()
+        keys = objs
+        while len(objs) == 10000:
+            objs = self.container.get_object_names(marker = keys[-1])
+            keys += objs
+        return keys
+
+    def _delete(self, filename):
+        self.container.delete_object(filename)
+
+    def _query(self, filename):
+        sobject = self.container.get_object(filename)
+        return {'size': sobject.total_bytes}
 
=== modified file 'duplicity/backends/_ssh_paramiko.py'
--- duplicity/backends/_ssh_paramiko.py	2014-04-17 20:50:57 +0000
+++ duplicity/backends/_ssh_paramiko.py	2014-04-28 02:49:55 +0000
@@ -28,7 +28,6 @@
 import os
 import errno
 import sys
-import time
 import getpass
 import logging
 from binascii import hexlify
@@ -36,7 +35,7 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import *
+from duplicity.errors import BackendException
 
 read_blocksize=65635  # for doing scp retrievals, where we need to read ourselves
 
@@ -232,7 +231,6 @@
         except Exception as e:
             raise BackendException("sftp negotiation failed: %s" % e)
 
-
         # move to the appropriate directory, possibly after creating it and its parents
         dirs = self.remote_dir.split(os.sep)
         if len(dirs) > 0:
@@ -257,157 +255,91 @@
         except Exception as e:
             raise BackendException("sftp chdir to %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
 
-    def put(self, source_path, remote_filename = None):
-        """transfers a single file to the remote side.
-        In scp mode unavoidable quoting issues will make this fail if the remote directory or file name
+    def _put(self, source_path, remote_filename):
+        if globals.use_scp:
+            f=file(source_path.name,'rb')
1642 | 263 | contain single quotes.""" | 261 | try: |
1643 | 264 | if not remote_filename: | 262 | chan=self.client.get_transport().open_session() |
1644 | 265 | remote_filename = source_path.get_filename() | 263 | chan.settimeout(globals.timeout) |
1645 | 266 | 264 | chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory | |
1646 | 267 | for n in range(1, globals.num_retries+1): | 265 | except Exception as e: |
1647 | 268 | if n > 1: | 266 | raise BackendException("scp execution failed: %s" % e) |
1648 | 269 | # sleep before retry | 267 | # scp protocol: one 0x0 after startup, one after the Create meta, one after saving |
1649 | 270 | time.sleep(self.retry_delay) | 268 | # if there's a problem: 0x1 or 0x02 and some error text |
1650 | 271 | try: | 269 | response=chan.recv(1) |
1651 | 272 | if (globals.use_scp): | 270 | if (response!="\0"): |
1652 | 273 | f=file(source_path.name,'rb') | 271 | raise BackendException("scp remote error: %s" % chan.recv(-1)) |
1653 | 274 | try: | 272 | fstat=os.stat(source_path.name) |
1654 | 275 | chan=self.client.get_transport().open_session() | 273 | chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename)) |
1655 | 276 | chan.settimeout(globals.timeout) | 274 | response=chan.recv(1) |
1656 | 277 | chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory | 275 | if (response!="\0"): |
1657 | 278 | except Exception as e: | 276 | raise BackendException("scp remote error: %s" % chan.recv(-1)) |
1658 | 279 | raise BackendException("scp execution failed: %s" % e) | 277 | chan.sendall(f.read()+'\0') |
1659 | 280 | # scp protocol: one 0x0 after startup, one after the Create meta, one after saving | 278 | f.close() |
1660 | 281 | # if there's a problem: 0x1 or 0x02 and some error text | 279 | response=chan.recv(1) |
1661 | 282 | response=chan.recv(1) | 280 | if (response!="\0"): |
1662 | 283 | if (response!="\0"): | 281 | raise BackendException("scp remote error: %s" % chan.recv(-1)) |
1663 | 284 | raise BackendException("scp remote error: %s" % chan.recv(-1)) | 282 | chan.close() |
1664 | 285 | fstat=os.stat(source_path.name) | 283 | else: |
1665 | 286 | chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename)) | 284 | self.sftp.put(source_path.name,remote_filename) |
1666 | 287 | response=chan.recv(1) | 285 | |
1667 | 288 | if (response!="\0"): | 286 | def _get(self, remote_filename, local_path): |
1668 | 289 | raise BackendException("scp remote error: %s" % chan.recv(-1)) | 287 | if globals.use_scp: |
1669 | 290 | chan.sendall(f.read()+'\0') | 288 | try: |
1670 | 291 | f.close() | 289 | chan=self.client.get_transport().open_session() |
1671 | 292 | response=chan.recv(1) | 290 | chan.settimeout(globals.timeout) |
1672 | 293 | if (response!="\0"): | 291 | chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename)) |
1673 | 294 | raise BackendException("scp remote error: %s" % chan.recv(-1)) | 292 | except Exception as e: |
1674 | 295 | chan.close() | 293 | raise BackendException("scp execution failed: %s" % e) |
1675 | 296 | return | 294 | |
1676 | 297 | else: | 295 | chan.send('\0') # overall ready indicator |
1677 | 298 | try: | 296 | msg=chan.recv(-1) |
1678 | 299 | self.sftp.put(source_path.name,remote_filename) | 297 | m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg) |
1679 | 300 | return | 298 | if (m==None or m.group(3)!=remote_filename): |
1680 | 301 | except Exception as e: | 299 | raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg)) |
1681 | 302 | raise BackendException("sftp put of %s (as %s) failed: %s" % (source_path.name,remote_filename,e)) | 300 | chan.recv(1) # dispose of the newline trailing the C message |
1682 | 303 | except Exception as e: | 301 | |
1683 | 304 | log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay)) | 302 | size=int(m.group(2)) |
1684 | 305 | raise BackendException("Giving up trying to upload '%s' after %d attempts" % (remote_filename,n)) | 303 | togo=size |
1685 | 306 | 304 | f=file(local_path.name,'wb') | |
1686 | 307 | 305 | chan.send('\0') # ready for data | |
1687 | 308 | def get(self, remote_filename, local_path): | 306 | try: |
1688 | 309 | """retrieves a single file from the remote side. | 307 | while togo>0: |
1689 | 310 | In scp mode unavoidable quoting issues will make this fail if the remote directory or file names | 308 | if togo>read_blocksize: |
1690 | 311 | contain single quotes.""" | 309 | blocksize = read_blocksize |
1691 | 312 | 310 | else: | |
1692 | 313 | for n in range(1, globals.num_retries+1): | 311 | blocksize = togo |
1693 | 314 | if n > 1: | 312 | buff=chan.recv(blocksize) |
1694 | 315 | # sleep before retry | 313 | f.write(buff) |
1695 | 316 | time.sleep(self.retry_delay) | 314 | togo-=len(buff) |
1696 | 317 | try: | 315 | except Exception as e: |
1697 | 318 | if (globals.use_scp): | 316 | raise BackendException("scp get %s failed: %s" % (remote_filename,e)) |
1698 | 319 | try: | 317 | |
1699 | 320 | chan=self.client.get_transport().open_session() | 318 | msg=chan.recv(1) # check the final status |
1700 | 321 | chan.settimeout(globals.timeout) | 319 | if msg!='\0': |
1701 | 322 | chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename)) | 320 | raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1))) |
1702 | 323 | except Exception as e: | 321 | f.close() |
1703 | 324 | raise BackendException("scp execution failed: %s" % e) | 322 | chan.send('\0') # send final done indicator |
1704 | 325 | 323 | chan.close() | |
1705 | 326 | chan.send('\0') # overall ready indicator | 324 | else: |
1706 | 327 | msg=chan.recv(-1) | 325 | self.sftp.get(remote_filename,local_path.name) |
1601 | 328 | m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg) | ||
1602 | 329 | if (m==None or m.group(3)!=remote_filename): | ||
1603 | 330 | raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg)) | ||
1604 | 331 | chan.recv(1) # dispose of the newline trailing the C message | ||
1605 | 332 | |||
1606 | 333 | size=int(m.group(2)) | ||
1607 | 334 | togo=size | ||
1608 | 335 | f=file(local_path.name,'wb') | ||
1609 | 336 | chan.send('\0') # ready for data | ||
1610 | 337 | try: | ||
1611 | 338 | while togo>0: | ||
1612 | 339 | if togo>read_blocksize: | ||
1613 | 340 | blocksize = read_blocksize | ||
1614 | 341 | else: | ||
1615 | 342 | blocksize = togo | ||
1616 | 343 | buff=chan.recv(blocksize) | ||
1617 | 344 | f.write(buff) | ||
1618 | 345 | togo-=len(buff) | ||
1619 | 346 | except Exception as e: | ||
1620 | 347 | raise BackendException("scp get %s failed: %s" % (remote_filename,e)) | ||
1621 | 348 | |||
1622 | 349 | msg=chan.recv(1) # check the final status | ||
1623 | 350 | if msg!='\0': | ||
1624 | 351 | raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1))) | ||
1625 | 352 | f.close() | ||
1626 | 353 | chan.send('\0') # send final done indicator | ||
1627 | 354 | chan.close() | ||
1628 | 355 | return | ||
1629 | 356 | else: | ||
1630 | 357 | try: | ||
1631 | 358 | self.sftp.get(remote_filename,local_path.name) | ||
1632 | 359 | return | ||
1633 | 360 | except Exception as e: | ||
1634 | 361 | raise BackendException("sftp get of %s (to %s) failed: %s" % (remote_filename,local_path.name,e)) | ||
1635 | 362 | local_path.setdata() | ||
1636 | 363 | except Exception as e: | ||
1637 | 364 | log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay)) | ||
1638 | 365 | raise BackendException("Giving up trying to download '%s' after %d attempts" % (remote_filename,n)) | ||
1707 | 366 | 326 | ||
1708 | 367 | def _list(self): | 327 | def _list(self): |
1752 | 368 | """lists the contents of the one-and-only duplicity dir on the remote side. | 328 | # In scp mode unavoidable quoting issues will make this fail if the |
1753 | 369 | In scp mode unavoidable quoting issues will make this fail if the directory name | 329 | # directory name contains single quotes. |
1754 | 370 | contains single quotes.""" | 330 | if globals.use_scp: |
1755 | 371 | for n in range(1, globals.num_retries+1): | 331 | output = self.runremote("ls -1 '%s'" % self.remote_dir, False, "scp dir listing ") |
1756 | 372 | if n > 1: | 332 | return output.splitlines() |
1757 | 373 | # sleep before retry | 333 | else: |
1758 | 374 | time.sleep(self.retry_delay) | 334 | return self.sftp.listdir() |
1759 | 375 | try: | 335 | |
1760 | 376 | if (globals.use_scp): | 336 | def _delete(self, filename): |
1761 | 377 | output=self.runremote("ls -1 '%s'" % self.remote_dir,False,"scp dir listing ") | 337 | # In scp mode unavoidable quoting issues will cause failures if |
1762 | 378 | return output.splitlines() | 338 | # filenames containing single quotes are encountered. |
1763 | 379 | else: | 339 | if globals.use_scp: |
1764 | 380 | try: | 340 | self.runremote("rm '%s/%s'" % (self.remote_dir, filename), False, "scp rm ") |
1765 | 381 | return self.sftp.listdir() | 341 | else: |
1766 | 382 | except Exception as e: | 342 | self.sftp.remove(filename) |
1724 | 383 | raise BackendException("sftp listing of %s failed: %s" % (self.sftp.getcwd(),e)) | ||
1725 | 384 | except Exception as e: | ||
1726 | 385 | log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay)) | ||
1727 | 386 | raise BackendException("Giving up trying to list '%s' after %d attempts" % (self.remote_dir,n)) | ||
1728 | 387 | |||
1729 | 388 | def delete(self, filename_list): | ||
1730 | 389 | """deletes all files in the list on the remote side. In scp mode unavoidable quoting issues | ||
1731 | 390 | will cause failures if filenames containing single quotes are encountered.""" | ||
1732 | 391 | for fn in filename_list: | ||
1733 | 392 | # Try to delete each file several times before giving up completely. | ||
1734 | 393 | for n in range(1, globals.num_retries+1): | ||
1735 | 394 | try: | ||
1736 | 395 | if (globals.use_scp): | ||
1737 | 396 | self.runremote("rm '%s/%s'" % (self.remote_dir,fn),False,"scp rm ") | ||
1738 | 397 | else: | ||
1739 | 398 | try: | ||
1740 | 399 | self.sftp.remove(fn) | ||
1741 | 400 | except Exception as e: | ||
1742 | 401 | raise BackendException("sftp rm %s failed: %s" % (fn,e)) | ||
1743 | 402 | |||
1744 | 403 | # If we get here, we deleted this file successfully. Move on to the next one. | ||
1745 | 404 | break | ||
1746 | 405 | except Exception as e: | ||
1747 | 406 | if n == globals.num_retries: | ||
1748 | 407 | log.FatalError(str(e), log.ErrorCode.backend_error) | ||
1749 | 408 | else: | ||
1750 | 409 | log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay)) | ||
1751 | 410 | time.sleep(self.retry_delay) | ||
1767 | 411 | 343 | ||
1768 | 412 | def runremote(self,cmd,ignoreexitcode=False,errorprefix=""): | 344 | def runremote(self,cmd,ignoreexitcode=False,errorprefix=""): |
1769 | 413 | """small convenience function that opens a shell channel, runs remote command and returns | 345 | """small convenience function that opens a shell channel, runs remote command and returns |
1770 | @@ -438,7 +370,3 @@ | |||
1771 | 438 | raise BackendException("could not load '%s', maybe corrupt?" % (file)) | 370 | raise BackendException("could not load '%s', maybe corrupt?" % (file)) |
1772 | 439 | 371 | ||
1773 | 440 | return sshconfig.lookup(host) | 372 | return sshconfig.lookup(host) |
1774 | 441 | |||
1775 | 442 | duplicity.backend.register_backend("sftp", SSHParamikoBackend) | ||
1776 | 443 | duplicity.backend.register_backend("scp", SSHParamikoBackend) | ||
1777 | 444 | duplicity.backend.register_backend("ssh", SSHParamikoBackend) | ||
1778 | 445 | 373 | ||
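The paramiko backend above drives scp's wire protocol by hand: in sink mode (`scp -t`) the sender emits a `C<mode> <size> <name>\n` control message, and each step is acknowledged by a single `\0` byte (or `\x01`/`\x02` plus error text on failure). A standalone sketch of building and parsing that control message — no SSH connection involved; the regex mirrors the one used in `_get`:

```python
import re

# Same pattern the backend's _get uses to parse the C message:
# four octal mode digits, a decimal size, then the filename.
C_MSG_RE = re.compile(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$")

def build_c_message(st_mode, size, name):
    """Build the scp 'Create file' control message, as _put does:
    the last four octal digits of st_mode, the size, the name."""
    return 'C%s %d %s\n' % (oct(st_mode)[-4:], size, name)

def parse_c_message(msg):
    """Parse a C message into (mode, size, name); None if malformed."""
    m = C_MSG_RE.match(msg.rstrip('\n'))
    if m is None:
        return None
    return (int(m.group(1), 8), int(m.group(2)), m.group(3))
```

Note that `oct(st_mode)[-4:]` relies on `st_mode` carrying the file-type bits (e.g. `0o100644` for a regular file, as `os.stat` returns), so the slice yields the permission digits.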
1779 | === modified file 'duplicity/backends/_ssh_pexpect.py' | |||
1780 | --- duplicity/backends/_ssh_pexpect.py 2014-04-25 23:20:12 +0000 | |||
1781 | +++ duplicity/backends/_ssh_pexpect.py 2014-04-28 02:49:55 +0000 | |||
1782 | @@ -24,18 +24,20 @@ | |||
1783 | 24 | # have the same syntax. Also these strings will be executed by the | 24 | # have the same syntax. Also these strings will be executed by the |
1784 | 25 | # shell, so shouldn't have strange characters in them. | 25 | # shell, so shouldn't have strange characters in them. |
1785 | 26 | 26 | ||
1786 | 27 | from future_builtins import map | ||
1787 | 28 | |||
1788 | 27 | import re | 29 | import re |
1789 | 28 | import string | 30 | import string |
1790 | 29 | import time | ||
1791 | 30 | import os | 31 | import os |
1792 | 31 | 32 | ||
1793 | 32 | import duplicity.backend | 33 | import duplicity.backend |
1794 | 33 | from duplicity import globals | 34 | from duplicity import globals |
1795 | 34 | from duplicity import log | 35 | from duplicity import log |
1797 | 35 | from duplicity.errors import * #@UnusedWildImport | 36 | from duplicity.errors import BackendException |
1798 | 36 | 37 | ||
1799 | 37 | class SSHPExpectBackend(duplicity.backend.Backend): | 38 | class SSHPExpectBackend(duplicity.backend.Backend): |
1801 | 38 | """This backend copies files using scp. List not supported""" | 39 | """This backend copies files using scp. List not supported. Filenames |
1802 | 40 | should not need any quoting or this will break.""" | ||
1803 | 39 | def __init__(self, parsed_url): | 41 | def __init__(self, parsed_url): |
1804 | 40 | """scpBackend initializer""" | 42 | """scpBackend initializer""" |
1805 | 41 | duplicity.backend.Backend.__init__(self, parsed_url) | 43 | duplicity.backend.Backend.__init__(self, parsed_url) |
1806 | @@ -76,74 +78,67 @@ | |||
1807 | 76 | def run_scp_command(self, commandline): | 78 | def run_scp_command(self, commandline): |
1808 | 77 | """ Run an scp command, responding to password prompts """ | 79 | """ Run an scp command, responding to password prompts """ |
1809 | 78 | import pexpect | 80 | import pexpect |
1878 | 79 | for n in range(1, globals.num_retries+1): | 81 | log.Info("Running '%s'" % commandline) |
1879 | 80 | if n > 1: | 82 | child = pexpect.spawn(commandline, timeout = None) |
1880 | 81 | # sleep before retry | 83 | if globals.ssh_askpass: |
1881 | 82 | time.sleep(self.retry_delay) | 84 | state = "authorizing" |
1882 | 83 | log.Info("Running '%s' (attempt #%d)" % (commandline, n)) | 85 | else: |
1883 | 84 | child = pexpect.spawn(commandline, timeout = None) | 86 | state = "copying" |
1884 | 85 | if globals.ssh_askpass: | 87 | while 1: |
1885 | 86 | state = "authorizing" | 88 | if state == "authorizing": |
1886 | 87 | else: | 89 | match = child.expect([pexpect.EOF, |
1887 | 88 | state = "copying" | 90 | "(?i)timeout, server not responding", |
1888 | 89 | while 1: | 91 | "(?i)pass(word|phrase .*):", |
1889 | 90 | if state == "authorizing": | 92 | "(?i)permission denied", |
1890 | 91 | match = child.expect([pexpect.EOF, | 93 | "authenticity"]) |
1891 | 92 | "(?i)timeout, server not responding", | 94 | log.Debug("State = %s, Before = '%s'" % (state, child.before.strip())) |
1892 | 93 | "(?i)pass(word|phrase .*):", | 95 | if match == 0: |
1893 | 94 | "(?i)permission denied", | 96 | log.Warn("Failed to authenticate") |
1894 | 95 | "authenticity"]) | 97 | break |
1895 | 96 | log.Debug("State = %s, Before = '%s'" % (state, child.before.strip())) | 98 | elif match == 1: |
1896 | 97 | if match == 0: | 99 | log.Warn("Timeout waiting to authenticate") |
1897 | 98 | log.Warn("Failed to authenticate") | 100 | break |
1898 | 99 | break | 101 | elif match == 2: |
1899 | 100 | elif match == 1: | 102 | child.sendline(self.password) |
1900 | 101 | log.Warn("Timeout waiting to authenticate") | 103 | state = "copying" |
1901 | 102 | break | 104 | elif match == 3: |
1902 | 103 | elif match == 2: | 105 | log.Warn("Invalid SSH password") |
1903 | 104 | child.sendline(self.password) | 106 | break |
1904 | 105 | state = "copying" | 107 | elif match == 4: |
1905 | 106 | elif match == 3: | 108 | log.Warn("Remote host authentication failed (missing known_hosts entry?)") |
1906 | 107 | log.Warn("Invalid SSH password") | 109 | break |
1907 | 108 | break | 110 | elif state == "copying": |
1908 | 109 | elif match == 4: | 111 | match = child.expect([pexpect.EOF, |
1909 | 110 | log.Warn("Remote host authentication failed (missing known_hosts entry?)") | 112 | "(?i)timeout, server not responding", |
1910 | 111 | break | 113 | "stalled", |
1911 | 112 | elif state == "copying": | 114 | "authenticity", |
1912 | 113 | match = child.expect([pexpect.EOF, | 115 | "ETA"]) |
1913 | 114 | "(?i)timeout, server not responding", | 116 | log.Debug("State = %s, Before = '%s'" % (state, child.before.strip())) |
1914 | 115 | "stalled", | 117 | if match == 0: |
1915 | 116 | "authenticity", | 118 | break |
1916 | 117 | "ETA"]) | 119 | elif match == 1: |
1917 | 118 | log.Debug("State = %s, Before = '%s'" % (state, child.before.strip())) | 120 | log.Warn("Timeout waiting for response") |
1918 | 119 | if match == 0: | 121 | break |
1919 | 120 | break | 122 | elif match == 2: |
1920 | 121 | elif match == 1: | 123 | state = "stalled" |
1921 | 122 | log.Warn("Timeout waiting for response") | 124 | elif match == 3: |
1922 | 123 | break | 125 | log.Warn("Remote host authentication failed (missing known_hosts entry?)") |
1923 | 124 | elif match == 2: | 126 | break |
1924 | 125 | state = "stalled" | 127 | elif state == "stalled": |
1925 | 126 | elif match == 3: | 128 | match = child.expect([pexpect.EOF, |
1926 | 127 | log.Warn("Remote host authentication failed (missing known_hosts entry?)") | 129 | "(?i)timeout, server not responding", |
1927 | 128 | break | 130 | "ETA"]) |
1928 | 129 | elif state == "stalled": | 131 | log.Debug("State = %s, Before = '%s'" % (state, child.before.strip())) |
1929 | 130 | match = child.expect([pexpect.EOF, | 132 | if match == 0: |
1930 | 131 | "(?i)timeout, server not responding", | 133 | break |
1931 | 132 | "ETA"]) | 134 | elif match == 1: |
1932 | 133 | log.Debug("State = %s, Before = '%s'" % (state, child.before.strip())) | 135 | log.Warn("Stalled for too long, aborted copy") |
1933 | 134 | if match == 0: | 136 | break |
1934 | 135 | break | 137 | elif match == 2: |
1935 | 136 | elif match == 1: | 138 | state = "copying" |
1936 | 137 | log.Warn("Stalled for too long, aborted copy") | 139 | child.close(force = True) |
1937 | 138 | break | 140 | if child.exitstatus != 0: |
1938 | 139 | elif match == 2: | 141 | raise BackendException("Error running '%s'" % commandline) |
1871 | 140 | state = "copying" | ||
1872 | 141 | child.close(force = True) | ||
1873 | 142 | if child.exitstatus == 0: | ||
1874 | 143 | return | ||
1875 | 144 | log.Warn("Running '%s' failed (attempt #%d)" % (commandline, n)) | ||
1876 | 145 | log.Warn("Giving up trying to execute '%s' after %d attempts" % (commandline, globals.num_retries)) | ||
1877 | 146 | raise BackendException("Error running '%s'" % commandline) | ||
1939 | 147 | 142 | ||
1940 | 148 | def run_sftp_command(self, commandline, commands): | 143 | def run_sftp_command(self, commandline, commands): |
1941 | 149 | """ Run an sftp command, responding to password prompts, passing commands from list """ | 144 | """ Run an sftp command, responding to password prompts, passing commands from list """ |
1942 | @@ -160,76 +155,69 @@ | |||
1943 | 160 | "Couldn't delete file", | 155 | "Couldn't delete file", |
1944 | 161 | "open(.*): Failure"] | 156 | "open(.*): Failure"] |
1945 | 162 | max_response_len = max([len(p) for p in responses[1:]]) | 157 | max_response_len = max([len(p) for p in responses[1:]]) |
1995 | 163 | for n in range(1, globals.num_retries+1): | 158 | log.Info("Running '%s'" % (commandline)) |
1996 | 164 | if n > 1: | 159 | child = pexpect.spawn(commandline, timeout = None, maxread=maxread) |
1997 | 165 | # sleep before retry | 160 | cmdloc = 0 |
1998 | 166 | time.sleep(self.retry_delay) | 161 | passprompt = 0 |
1999 | 167 | log.Info("Running '%s' (attempt #%d)" % (commandline, n)) | 162 | while 1: |
2000 | 168 | child = pexpect.spawn(commandline, timeout = None, maxread=maxread) | 163 | msg = "" |
2001 | 169 | cmdloc = 0 | 164 | match = child.expect(responses, |
2002 | 170 | passprompt = 0 | 165 | searchwindowsize=maxread+max_response_len) |
2003 | 171 | while 1: | 166 | log.Debug("State = sftp, Before = '%s'" % (child.before.strip())) |
2004 | 172 | msg = "" | 167 | if match == 0: |
2005 | 173 | match = child.expect(responses, | 168 | break |
2006 | 174 | searchwindowsize=maxread+max_response_len) | 169 | elif match == 1: |
2007 | 175 | log.Debug("State = sftp, Before = '%s'" % (child.before.strip())) | 170 | msg = "Timeout waiting for response" |
2008 | 176 | if match == 0: | 171 | break |
2009 | 177 | break | 172 | if match == 2: |
2010 | 178 | elif match == 1: | 173 | if cmdloc < len(commands): |
2011 | 179 | msg = "Timeout waiting for response" | 174 | command = commands[cmdloc] |
2012 | 180 | break | 175 | log.Info("sftp command: '%s'" % (command,)) |
2013 | 181 | if match == 2: | 176 | child.sendline(command) |
2014 | 182 | if cmdloc < len(commands): | 177 | cmdloc += 1 |
2015 | 183 | command = commands[cmdloc] | 178 | else: |
2016 | 184 | log.Info("sftp command: '%s'" % (command,)) | 179 | command = 'quit' |
2017 | 185 | child.sendline(command) | 180 | child.sendline(command) |
2018 | 186 | cmdloc += 1 | 181 | res = child.before |
2019 | 187 | else: | 182 | elif match == 3: |
2020 | 188 | command = 'quit' | 183 | passprompt += 1 |
2021 | 189 | child.sendline(command) | 184 | child.sendline(self.password) |
2022 | 190 | res = child.before | 185 | if (passprompt>1): |
2023 | 191 | elif match == 3: | 186 | raise BackendException("Invalid SSH password.") |
2024 | 192 | passprompt += 1 | 187 | elif match == 4: |
2025 | 193 | child.sendline(self.password) | 188 | if not child.before.strip().startswith("mkdir"): |
2026 | 194 | if (passprompt>1): | 189 | msg = "Permission denied" |
2027 | 195 | raise BackendException("Invalid SSH password.") | 190 | break |
2028 | 196 | elif match == 4: | 191 | elif match == 5: |
2029 | 197 | if not child.before.strip().startswith("mkdir"): | 192 | msg = "Host key authenticity could not be verified (missing known_hosts entry?)" |
2030 | 198 | msg = "Permission denied" | 193 | break |
2031 | 199 | break | 194 | elif match == 6: |
2032 | 200 | elif match == 5: | 195 | if not child.before.strip().startswith("rm"): |
2033 | 201 | msg = "Host key authenticity could not be verified (missing known_hosts entry?)" | 196 | msg = "Remote file or directory does not exist in command='%s'" % (commandline,) |
2034 | 202 | break | 197 | break |
2035 | 203 | elif match == 6: | 198 | elif match == 7: |
2036 | 204 | if not child.before.strip().startswith("rm"): | 199 | if not child.before.strip().startswith("Removing"): |
1988 | 205 | msg = "Remote file or directory does not exist in command='%s'" % (commandline,) | ||
1989 | 206 | break | ||
1990 | 207 | elif match == 7: | ||
1991 | 208 | if not child.before.strip().startswith("Removing"): | ||
1992 | 209 | msg = "Could not delete file in command='%s'" % (commandline,) | ||
1993 | 210 | break; | ||
1994 | 211 | elif match == 8: | ||
2037 | 212 | msg = "Could not delete file in command='%s'" % (commandline,) | 200 | msg = "Could not delete file in command='%s'" % (commandline,) |
2047 | 213 | break | 201 | break; |
2048 | 214 | elif match == 9: | 202 | elif match == 8: |
2049 | 215 | msg = "Could not open file in command='%s'" % (commandline,) | 203 | msg = "Could not delete file in command='%s'" % (commandline,) |
2050 | 216 | break | 204 | break |
2051 | 217 | child.close(force = True) | 205 | elif match == 9: |
2052 | 218 | if child.exitstatus == 0: | 206 | msg = "Could not open file in command='%s'" % (commandline,) |
2053 | 219 | return res | 207 | break |
2054 | 220 | log.Warn("Running '%s' with commands:\n %s\n failed (attempt #%d): %s" % (commandline, "\n ".join(commands), n, msg)) | 208 | child.close(force = True) |
2055 | 221 | raise BackendException("Giving up trying to execute '%s' with commands:\n %s\n after %d attempts" % (commandline, "\n ".join(commands), globals.num_retries)) | 209 | if child.exitstatus == 0: |
2056 | 210 | return res | ||
2057 | 211 | else: | ||
2058 | 212 | raise BackendException("Error running '%s': %s" % (commandline, msg)) | ||
2059 | 222 | 213 | ||
2061 | 223 | def put(self, source_path, remote_filename = None): | 214 | def _put(self, source_path, remote_filename): |
2062 | 224 | if globals.use_scp: | 215 | if globals.use_scp: |
2064 | 225 | self.put_scp(source_path, remote_filename = remote_filename) | 216 | self.put_scp(source_path, remote_filename) |
2065 | 226 | else: | 217 | else: |
2067 | 227 | self.put_sftp(source_path, remote_filename = remote_filename) | 218 | self.put_sftp(source_path, remote_filename) |
2068 | 228 | 219 | ||
2073 | 229 | def put_sftp(self, source_path, remote_filename = None): | 220 | def put_sftp(self, source_path, remote_filename): |
2070 | 230 | """Use sftp to copy source_dir/filename to remote computer""" | ||
2071 | 231 | if not remote_filename: | ||
2072 | 232 | remote_filename = source_path.get_filename() | ||
2074 | 233 | commands = ["put \"%s\" \"%s.%s.part\"" % | 221 | commands = ["put \"%s\" \"%s.%s.part\"" % |
2075 | 234 | (source_path.name, self.remote_prefix, remote_filename), | 222 | (source_path.name, self.remote_prefix, remote_filename), |
2076 | 235 | "rename \"%s.%s.part\" \"%s%s\"" % | 223 | "rename \"%s.%s.part\" \"%s%s\"" % |
2077 | @@ -239,53 +227,36 @@ | |||
2078 | 239 | self.host_string)) | 227 | self.host_string)) |
2079 | 240 | self.run_sftp_command(commandline, commands) | 228 | self.run_sftp_command(commandline, commands) |
2080 | 241 | 229 | ||
2085 | 242 | def put_scp(self, source_path, remote_filename = None): | 230 | def put_scp(self, source_path, remote_filename): |
2082 | 243 | """Use scp to copy source_dir/filename to remote computer""" | ||
2083 | 244 | if not remote_filename: | ||
2084 | 245 | remote_filename = source_path.get_filename() | ||
2086 | 246 | commandline = "%s %s %s %s:%s%s" % \ | 231 | commandline = "%s %s %s %s:%s%s" % \ |
2087 | 247 | (self.scp_command, globals.ssh_options, source_path.name, self.host_string, | 232 | (self.scp_command, globals.ssh_options, source_path.name, self.host_string, |
2088 | 248 | self.remote_prefix, remote_filename) | 233 | self.remote_prefix, remote_filename) |
2089 | 249 | self.run_scp_command(commandline) | 234 | self.run_scp_command(commandline) |
2090 | 250 | 235 | ||
2092 | 251 | def get(self, remote_filename, local_path): | 236 | def _get(self, remote_filename, local_path): |
2093 | 252 | if globals.use_scp: | 237 | if globals.use_scp: |
2094 | 253 | self.get_scp(remote_filename, local_path) | 238 | self.get_scp(remote_filename, local_path) |
2095 | 254 | else: | 239 | else: |
2096 | 255 | self.get_sftp(remote_filename, local_path) | 240 | self.get_sftp(remote_filename, local_path) |
2097 | 256 | 241 | ||
2098 | 257 | def get_sftp(self, remote_filename, local_path): | 242 | def get_sftp(self, remote_filename, local_path): |
2099 | 258 | """Use sftp to get a remote file""" | ||
2100 | 259 | commands = ["get \"%s%s\" \"%s\"" % | 243 | commands = ["get \"%s%s\" \"%s\"" % |
2101 | 260 | (self.remote_prefix, remote_filename, local_path.name)] | 244 | (self.remote_prefix, remote_filename, local_path.name)] |
2102 | 261 | commandline = ("%s %s %s" % (self.sftp_command, | 245 | commandline = ("%s %s %s" % (self.sftp_command, |
2103 | 262 | globals.ssh_options, | 246 | globals.ssh_options, |
2104 | 263 | self.host_string)) | 247 | self.host_string)) |
2105 | 264 | self.run_sftp_command(commandline, commands) | 248 | self.run_sftp_command(commandline, commands) |
2106 | 265 | local_path.setdata() | ||
2107 | 266 | if not local_path.exists(): | ||
2108 | 267 | raise BackendException("File %s not found locally after get " | ||
2109 | 268 | "from backend" % local_path.name) | ||
2110 | 269 | 249 | ||
2111 | 270 | def get_scp(self, remote_filename, local_path): | 250 | def get_scp(self, remote_filename, local_path): |
2112 | 271 | """Use scp to get a remote file""" | ||
2113 | 272 | commandline = "%s %s %s:%s%s %s" % \ | 251 | commandline = "%s %s %s:%s%s %s" % \ |
2114 | 273 | (self.scp_command, globals.ssh_options, self.host_string, self.remote_prefix, | 252 | (self.scp_command, globals.ssh_options, self.host_string, self.remote_prefix, |
2115 | 274 | remote_filename, local_path.name) | 253 | remote_filename, local_path.name) |
2116 | 275 | self.run_scp_command(commandline) | 254 | self.run_scp_command(commandline) |
2117 | 276 | local_path.setdata() | ||
2118 | 277 | if not local_path.exists(): | ||
2119 | 278 | raise BackendException("File %s not found locally after get " | ||
2120 | 279 | "from backend" % local_path.name) | ||
2121 | 280 | 255 | ||
2122 | 281 | def _list(self): | 256 | def _list(self): |
2130 | 282 | """ | 257 | # Note that this command can get confused when dealing with |
2131 | 283 | List files available for scp | 258 | # files with newlines in them, as the embedded newlines cannot |
2132 | 284 | 259 | # be distinguished from the file boundaries. | |
2126 | 285 | Note that this command can get confused when dealing with | ||
2127 | 286 | files with newlines in them, as the embedded newlines cannot | ||
2128 | 287 | be distinguished from the file boundaries. | ||
2129 | 288 | """ | ||
2133 | 289 | dirs = self.remote_dir.split(os.sep) | 260 | dirs = self.remote_dir.split(os.sep) |
2134 | 290 | if len(dirs) > 0: | 261 | if len(dirs) > 0: |
2135 | 291 | if not dirs[0] : | 262 | if not dirs[0] : |
2136 | @@ -304,16 +275,8 @@ | |||
2137 | 304 | 275 | ||
2138 | 305 | return [x for x in map(string.strip, l) if x] | 276 | return [x for x in map(string.strip, l) if x] |
2139 | 306 | 277 | ||
2144 | 307 | def delete(self, filename_list): | 278 | def _delete(self, filename): |
2141 | 308 | """ | ||
2142 | 309 | Runs sftp rm to delete files. Files must not require quoting. | ||
2143 | 310 | """ | ||
2145 | 311 | commands = ["cd \"%s\"" % (self.remote_dir,)] | 279 | commands = ["cd \"%s\"" % (self.remote_dir,)] |
2148 | 312 | for fn in filename_list: | 280 | commands.append("rm \"%s\"" % filename) |
2147 | 313 | commands.append("rm \"%s\"" % fn) | ||
2149 | 314 | commandline = ("%s %s %s" % (self.sftp_command, globals.ssh_options, self.host_string)) | 281 | commandline = ("%s %s %s" % (self.sftp_command, globals.ssh_options, self.host_string)) |
2150 | 315 | self.run_sftp_command(commandline, commands) | 282 | self.run_sftp_command(commandline, commands) |
2151 | 316 | |||
2152 | 317 | duplicity.backend.register_backend("ssh", SSHPExpectBackend) | ||
2153 | 318 | duplicity.backend.register_backend("scp", SSHPExpectBackend) | ||
2154 | 319 | duplicity.backend.register_backend("sftp", SSHPExpectBackend) | ||
2155 | 320 | 283 | ||
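The ssh_pexpect hunk above is one instance of the branch's recurring refactor: `delete(self, filename_list)` becomes `_delete(self, filename)`, with the loop over the list moving out of each backend and into shared machinery in `duplicity/backend.py`. A minimal standalone sketch of that contract, assuming a wrapper class that owns iteration (the `BackendWrapper`/`FakeBackend` names are illustrative, not duplicity's):

```python
# Sketch of the unified backend contract: concrete backends implement
# single-file primitives (_delete, _list, ...) and a generic wrapper
# supplies the list loop (and, in duplicity itself, the retry logic).

class BackendWrapper:
    def __init__(self, backend):
        self.backend = backend

    def delete(self, filename_list):
        # The wrapper owns iteration; each concrete backend only needs
        # to know how to remove one file.
        for filename in filename_list:
            self.backend._delete(filename)

class FakeBackend:
    def __init__(self):
        self.files = {'a': 1, 'b': 2, 'c': 3}

    def _delete(self, filename):
        del self.files[filename]

    def _list(self):
        return sorted(self.files)

wrapper = BackendWrapper(FakeBackend())
wrapper.delete(['a', 'c'])
print(wrapper.backend._list())  # → ['b']
```

This is why each per-backend diff in this proposal shrinks: the boilerplate loop, the docstring, and the error handling appear once in the wrapper instead of in every backend.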
2156 | === modified file 'duplicity/backends/botobackend.py' | |||
2157 | --- duplicity/backends/botobackend.py 2014-04-17 21:54:04 +0000 | |||
2158 | +++ duplicity/backends/botobackend.py 2014-04-28 02:49:55 +0000 | |||
2159 | @@ -22,14 +22,12 @@ | |||
2160 | 22 | 22 | ||
2161 | 23 | import duplicity.backend | 23 | import duplicity.backend |
2162 | 24 | from duplicity import globals | 24 | from duplicity import globals |
2163 | 25 | from ._boto_multi import BotoBackend as BotoMultiUploadBackend | ||
2164 | 26 | from ._boto_single import BotoBackend as BotoSingleUploadBackend | ||
2165 | 27 | 25 | ||
2166 | 28 | if globals.s3_use_multiprocessing: | 26 | if globals.s3_use_multiprocessing: |
2170 | 29 | duplicity.backend.register_backend("gs", BotoMultiUploadBackend) | 27 | from ._boto_multi import BotoBackend |
2168 | 30 | duplicity.backend.register_backend("s3", BotoMultiUploadBackend) | ||
2169 | 31 | duplicity.backend.register_backend("s3+http", BotoMultiUploadBackend) | ||
2171 | 32 | else: | 28 | else: |
2175 | 33 | duplicity.backend.register_backend("gs", BotoSingleUploadBackend) | 29 | from ._boto_single import BotoBackend |
2176 | 34 | duplicity.backend.register_backend("s3", BotoSingleUploadBackend) | 30 | |
2177 | 35 | duplicity.backend.register_backend("s3+http", BotoSingleUploadBackend) | 31 | duplicity.backend.register_backend("gs", BotoBackend) |
2178 | 32 | duplicity.backend.register_backend("s3", BotoBackend) | ||
2179 | 33 | duplicity.backend.register_backend("s3+http", BotoBackend) | ||
2180 | 36 | 34 | ||
2181 | === modified file 'duplicity/backends/cfbackend.py' | |||
2182 | --- duplicity/backends/cfbackend.py 2014-04-17 21:54:04 +0000 | |||
2183 | +++ duplicity/backends/cfbackend.py 2014-04-28 02:49:55 +0000 | |||
2184 | @@ -18,10 +18,13 @@ | |||
2185 | 18 | # along with duplicity; if not, write to the Free Software Foundation, | 18 | # along with duplicity; if not, write to the Free Software Foundation, |
2186 | 19 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 19 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
2187 | 20 | 20 | ||
2188 | 21 | import duplicity.backend | ||
2189 | 21 | from duplicity import globals | 22 | from duplicity import globals |
2190 | 22 | 23 | ||
2191 | 23 | if (globals.cf_backend and | 24 | if (globals.cf_backend and |
2192 | 24 | globals.cf_backend.lower().strip() == 'pyrax'): | 25 | globals.cf_backend.lower().strip() == 'pyrax'): |
2194 | 25 | from . import _cf_pyrax | 26 | from ._cf_pyrax import PyraxBackend as CFBackend |
2195 | 26 | else: | 27 | else: |
2197 | 27 | from . import _cf_cloudfiles | 28 | from ._cf_cloudfiles import CloudFilesBackend as CFBackend |
2198 | 29 | |||
2199 | 30 | duplicity.backend.register_backend("cf+http", CFBackend) | ||
2200 | 28 | 31 | ||
2201 | === modified file 'duplicity/backends/dpbxbackend.py' | |||
2202 | --- duplicity/backends/dpbxbackend.py 2014-04-17 21:49:37 +0000 | |||
2203 | +++ duplicity/backends/dpbxbackend.py 2014-04-28 02:49:55 +0000 | |||
2204 | @@ -32,14 +32,10 @@ | |||
2205 | 32 | from functools import reduce | 32 | from functools import reduce |
2206 | 33 | 33 | ||
2207 | 34 | import traceback, StringIO | 34 | import traceback, StringIO |
2208 | 35 | from exceptions import Exception | ||
2209 | 36 | 35 | ||
2210 | 37 | import duplicity.backend | 36 | import duplicity.backend |
2211 | 38 | from duplicity import globals | ||
2212 | 39 | from duplicity import log | 37 | from duplicity import log |
2216 | 40 | from duplicity.errors import * | 38 | from duplicity.errors import BackendException |
2214 | 41 | from duplicity import tempdir | ||
2215 | 42 | from duplicity.backend import retry_fatal | ||
2217 | 43 | 39 | ||
2218 | 44 | 40 | ||
2219 | 45 | # This application key is registered in my name (jno at pisem dot net). | 41 | # This application key is registered in my name (jno at pisem dot net). |
2220 | @@ -76,14 +72,14 @@ | |||
2221 | 76 | def wrapper(self, *args): | 72 | def wrapper(self, *args): |
2222 | 77 | from dropbox import rest | 73 | from dropbox import rest |
2223 | 78 | if login_required and not self.sess.is_linked(): | 74 | if login_required and not self.sess.is_linked(): |
2225 | 79 | log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin) | 75 | raise BackendException("dpbx Cannot login: check your credentials", log.ErrorCode.dpbx_nologin) |
2226 | 80 | return | 76 | return |
2227 | 81 | 77 | ||
2228 | 82 | try: | 78 | try: |
2229 | 83 | return f(self, *args) | 79 | return f(self, *args) |
2230 | 84 | except TypeError as e: | 80 | except TypeError as e: |
2231 | 85 | log_exception(e) | 81 | log_exception(e) |
2233 | 86 | log.FatalError('dpbx type error "%s"' % (e,), log.ErrorCode.backend_code_error) | 82 | raise BackendException('dpbx type error "%s"' % (e,)) |
2234 | 87 | except rest.ErrorResponse as e: | 83 | except rest.ErrorResponse as e: |
2235 | 88 | msg = e.user_error_msg or str(e) | 84 | msg = e.user_error_msg or str(e) |
2236 | 89 | log.Error('dpbx error: %s' % (msg,), log.ErrorCode.backend_command_error) | 85 | log.Error('dpbx error: %s' % (msg,), log.ErrorCode.backend_command_error) |
2237 | @@ -165,25 +161,22 @@ | |||
2238 | 165 | if not self.sess.is_linked(): # stil not logged in | 161 | if not self.sess.is_linked(): # stil not logged in |
2239 | 166 | log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin) | 162 | log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin) |
2240 | 167 | 163 | ||
2242 | 168 | @retry_fatal | 164 | def _error_code(self, operation, e): |
2243 | 165 | from dropbox import rest | ||
2244 | 166 | if isinstance(e, rest.ErrorResponse): | ||
2245 | 167 | if e.status == 404: | ||
2246 | 168 | return log.ErrorCode.backend_not_found | ||
2247 | 169 | |||
2248 | 169 | @command() | 170 | @command() |
2254 | 170 | def put(self, source_path, remote_filename = None): | 171 | def _put(self, source_path, remote_filename): |
2250 | 171 | """Transfer source_path to remote_filename""" | ||
2251 | 172 | if not remote_filename: | ||
2252 | 173 | remote_filename = source_path.get_filename() | ||
2253 | 174 | |||
2255 | 175 | remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')) | 172 | remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')) |
2256 | 176 | remote_path = os.path.join(remote_dir, remote_filename).rstrip() | 173 | remote_path = os.path.join(remote_dir, remote_filename).rstrip() |
2257 | 177 | |||
2258 | 178 | from_file = open(source_path.name, "rb") | 174 | from_file = open(source_path.name, "rb") |
2259 | 179 | |||
2260 | 180 | resp = self.api_client.put_file(remote_path, from_file) | 175 | resp = self.api_client.put_file(remote_path, from_file) |
2261 | 181 | log.Debug( 'dpbx,put(%s,%s): %s'%(source_path.name, remote_path, resp)) | 176 | log.Debug( 'dpbx,put(%s,%s): %s'%(source_path.name, remote_path, resp)) |
2262 | 182 | 177 | ||
2263 | 183 | @retry_fatal | ||
2264 | 184 | @command() | 178 | @command() |
2267 | 185 | def get(self, remote_filename, local_path): | 179 | def _get(self, remote_filename, local_path): |
2266 | 186 | """Get remote filename, saving it to local_path""" | ||
2268 | 187 | remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip() | 180 | remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip() |
2269 | 188 | 181 | ||
2270 | 189 | to_file = open( local_path.name, 'wb' ) | 182 | to_file = open( local_path.name, 'wb' ) |
2271 | @@ -196,10 +189,8 @@ | |||
2272 | 196 | 189 | ||
2273 | 197 | local_path.setdata() | 190 | local_path.setdata() |
2274 | 198 | 191 | ||
2275 | 199 | @retry_fatal | ||
2276 | 200 | @command() | 192 | @command() |
2279 | 201 | def _list(self,none=None): | 193 | def _list(self): |
2278 | 202 | """List files in directory""" | ||
2280 | 203 | # Do a long listing to avoid connection reset | 194 | # Do a long listing to avoid connection reset |
2281 | 204 | remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip() | 195 | remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip() |
2282 | 205 | resp = self.api_client.metadata(remote_dir) | 196 | resp = self.api_client.metadata(remote_dir) |
2283 | @@ -214,21 +205,15 @@ | |||
2284 | 214 | l.append(name.encode(encoding)) | 205 | l.append(name.encode(encoding)) |
2285 | 215 | return l | 206 | return l |
2286 | 216 | 207 | ||
2287 | 217 | @retry_fatal | ||
2288 | 218 | @command() | 208 | @command() |
2294 | 219 | def delete(self, filename_list): | 209 | def _delete(self, filename): |
2290 | 220 | """Delete files in filename_list""" | ||
2291 | 221 | if not filename_list : | ||
2292 | 222 | log.Debug('dpbx.delete(): no op') | ||
2293 | 223 | return | ||
2295 | 224 | remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip() | 210 | remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip() |
2300 | 225 | for filename in filename_list: | 211 | remote_name = os.path.join( remote_dir, filename ) |
2301 | 226 | remote_name = os.path.join( remote_dir, filename ) | 212 | resp = self.api_client.file_delete( remote_name ) |
2302 | 227 | resp = self.api_client.file_delete( remote_name ) | 213 | log.Debug('dpbx.delete(%s): %s'%(remote_name,resp)) |
2299 | 228 | log.Debug('dpbx.delete(%s): %s'%(remote_name,resp)) | ||
2303 | 229 | 214 | ||
2304 | 230 | @command() | 215 | @command() |
2306 | 231 | def close(self): | 216 | def _close(self): |
2307 | 232 | """close backend session? no! just "flush" the data""" | 217 | """close backend session? no! just "flush" the data""" |
2308 | 233 | info = self.api_client.account_info() | 218 | info = self.api_client.account_info() |
2309 | 234 | log.Debug('dpbx.close():') | 219 | log.Debug('dpbx.close():') |
2310 | 235 | 220 | ||
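The new `_error_code(self, operation, e)` hook in the dpbx hunk replaces scattered `@retry_fatal` decorators: the backend only classifies a library exception, and the shared layer decides whether to retry or abort. A standalone sketch of that classification, with a stand-in `FakeErrorResponse` in place of `dropbox.rest.ErrorResponse` and string constants in place of `log.ErrorCode` members:

```python
# Sketch of the _error_code hook: map a backend library's exception to a
# generic error code; anything unrecognized falls through to a default.

class ErrorCode:
    backend_not_found = 'backend_not_found'
    backend_error = 'backend_error'

class FakeErrorResponse(Exception):
    """Stand-in for dropbox.rest.ErrorResponse (carries an HTTP status)."""
    def __init__(self, status):
        self.status = status

def error_code(operation, e):
    # Only a 404 gets a specific code, mirroring the hunk above; every
    # other exception keeps the generic backend_error classification.
    if isinstance(e, FakeErrorResponse) and e.status == 404:
        return ErrorCode.backend_not_found
    return ErrorCode.backend_error

print(error_code('get', FakeErrorResponse(404)))  # → backend_not_found
print(error_code('get', RuntimeError('boom')))    # → backend_error
```

The design choice is that backends stay free of control flow: they never call `log.FatalError` or retry themselves, they just translate exceptions.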
2311 | === modified file 'duplicity/backends/ftpbackend.py' | |||
2312 | --- duplicity/backends/ftpbackend.py 2014-04-25 23:20:12 +0000 | |||
2313 | +++ duplicity/backends/ftpbackend.py 2014-04-28 02:49:55 +0000 | |||
2314 | @@ -25,7 +25,6 @@ | |||
2315 | 25 | import duplicity.backend | 25 | import duplicity.backend |
2316 | 26 | from duplicity import globals | 26 | from duplicity import globals |
2317 | 27 | from duplicity import log | 27 | from duplicity import log |
2318 | 28 | from duplicity.errors import * #@UnusedWildImport | ||
2319 | 29 | from duplicity import tempdir | 28 | from duplicity import tempdir |
2320 | 30 | 29 | ||
2321 | 31 | class FTPBackend(duplicity.backend.Backend): | 30 | class FTPBackend(duplicity.backend.Backend): |
2322 | @@ -65,7 +64,7 @@ | |||
2323 | 65 | # This squelches the "file not found" result from ncftpls when | 64 | # This squelches the "file not found" result from ncftpls when |
2324 | 66 | # the ftp backend looks for a collection that does not exist. | 65 | # the ftp backend looks for a collection that does not exist. |
2325 | 67 | # version 3.2.2 has error code 5, 1280 is some legacy value | 66 | # version 3.2.2 has error code 5, 1280 is some legacy value |
2327 | 68 | self.popen_persist_breaks[ 'ncftpls' ] = [ 5, 1280 ] | 67 | self.popen_breaks[ 'ncftpls' ] = [ 5, 1280 ] |
2328 | 69 | 68 | ||
2329 | 70 | # Use an explicit directory name. | 69 | # Use an explicit directory name. |
2330 | 71 | if self.url_string[-1] != '/': | 70 | if self.url_string[-1] != '/': |
2331 | @@ -88,36 +87,28 @@ | |||
2332 | 88 | if parsed_url.port != None and parsed_url.port != 21: | 87 | if parsed_url.port != None and parsed_url.port != 21: |
2333 | 89 | self.flags += " -P '%s'" % (parsed_url.port) | 88 | self.flags += " -P '%s'" % (parsed_url.port) |
2334 | 90 | 89 | ||
2339 | 91 | def put(self, source_path, remote_filename = None): | 90 | def _put(self, source_path, remote_filename): |
2336 | 92 | """Transfer source_path to remote_filename""" | ||
2337 | 93 | if not remote_filename: | ||
2338 | 94 | remote_filename = source_path.get_filename() | ||
2340 | 95 | remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip() | 91 | remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip() |
2341 | 96 | commandline = "ncftpput %s -m -V -C '%s' '%s'" % \ | 92 | commandline = "ncftpput %s -m -V -C '%s' '%s'" % \ |
2342 | 97 | (self.flags, source_path.name, remote_path) | 93 | (self.flags, source_path.name, remote_path) |
2344 | 98 | self.run_command_persist(commandline) | 94 | self.subprocess_popen(commandline) |
2345 | 99 | 95 | ||
2348 | 100 | def get(self, remote_filename, local_path): | 96 | def _get(self, remote_filename, local_path): |
2347 | 101 | """Get remote filename, saving it to local_path""" | ||
2349 | 102 | remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip() | 97 | remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip() |
2350 | 103 | commandline = "ncftpget %s -V -C '%s' '%s' '%s'" % \ | 98 | commandline = "ncftpget %s -V -C '%s' '%s' '%s'" % \ |
2351 | 104 | (self.flags, self.parsed_url.hostname, remote_path.lstrip('/'), local_path.name) | 99 | (self.flags, self.parsed_url.hostname, remote_path.lstrip('/'), local_path.name) |
2354 | 105 | self.run_command_persist(commandline) | 100 | self.subprocess_popen(commandline) |
2353 | 106 | local_path.setdata() | ||
2355 | 107 | 101 | ||
2356 | 108 | def _list(self): | 102 | def _list(self): |
2357 | 109 | """List files in directory""" | ||
2358 | 110 | # Do a long listing to avoid connection reset | 103 | # Do a long listing to avoid connection reset |
2359 | 111 | commandline = "ncftpls %s -l '%s'" % (self.flags, self.url_string) | 104 | commandline = "ncftpls %s -l '%s'" % (self.flags, self.url_string) |
2361 | 112 | l = self.popen_persist(commandline).split('\n') | 105 | _, l, _ = self.subprocess_popen(commandline) |
2362 | 113 | # Look for our files as the last element of a long list line | 106 | # Look for our files as the last element of a long list line |
2364 | 114 | return [x.split()[-1] for x in l if x and not x.startswith("total ")] | 107 | return [x.split()[-1] for x in l.split('\n') if x and not x.startswith("total ")] |
2365 | 115 | 108 | ||
2372 | 116 | def delete(self, filename_list): | 109 | def _delete(self, filename): |
2373 | 117 | """Delete files in filename_list""" | 110 | commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \ |
2374 | 118 | for filename in filename_list: | 111 | (self.flags, filename, self.url_string) |
2375 | 119 | commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \ | 112 | self.subprocess_popen(commandline) |
2370 | 120 | (self.flags, filename, self.url_string) | ||
2371 | 121 | self.popen_persist(commandline) | ||
2376 | 122 | 113 | ||
2377 | 123 | duplicity.backend.register_backend("ftp", FTPBackend) | 114 | duplicity.backend.register_backend("ftp", FTPBackend) |
2378 | 124 | 115 | ||
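The rewritten `_list` above parses `ncftpls -l` long-listing output by taking the last whitespace-separated field of each line and dropping the `total ...` summary line. The comprehension can be checked in isolation on sample listing text (the listing below is made up, in the usual `ls -l` shape):

```python
# Standalone check of the _list comprehension from the ftp backend hunk:
# keep the last field of each long-listing line, skip blanks and the
# "total ..." summary.  Note this parse would misread filenames that
# themselves contain whitespace.

listing = """total 12
-rw-r--r--   1 user  group  1024 Apr 28 02:49 duplicity-full.manifest
-rw-r--r--   1 user  group  4096 Apr 28 02:49 duplicity-full.vol1.difftar.gpg
"""

names = [x.split()[-1] for x in listing.split('\n')
         if x and not x.startswith("total ")]
print(names)
# → ['duplicity-full.manifest', 'duplicity-full.vol1.difftar.gpg']
```

The same "last field of a long listing" idiom appears in the ftps backend below, minus the `total` filter, since lftp's `ls` output differs slightly.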
2379 | === modified file 'duplicity/backends/ftpsbackend.py' | |||
2380 | --- duplicity/backends/ftpsbackend.py 2014-04-25 23:20:12 +0000 | |||
2381 | +++ duplicity/backends/ftpsbackend.py 2014-04-28 02:49:55 +0000 | |||
2382 | @@ -28,7 +28,6 @@ | |||
2383 | 28 | import duplicity.backend | 28 | import duplicity.backend |
2384 | 29 | from duplicity import globals | 29 | from duplicity import globals |
2385 | 30 | from duplicity import log | 30 | from duplicity import log |
2386 | 31 | from duplicity.errors import * | ||
2387 | 32 | from duplicity import tempdir | 31 | from duplicity import tempdir |
2388 | 33 | 32 | ||
2389 | 34 | class FTPSBackend(duplicity.backend.Backend): | 33 | class FTPSBackend(duplicity.backend.Backend): |
2390 | @@ -85,42 +84,29 @@ | |||
2391 | 85 | os.write(self.tempfile, "user %s %s\n" % (self.parsed_url.username, self.password)) | 84 | os.write(self.tempfile, "user %s %s\n" % (self.parsed_url.username, self.password)) |
2392 | 86 | os.close(self.tempfile) | 85 | os.close(self.tempfile) |
2393 | 87 | 86 | ||
2400 | 88 | self.flags = "-f %s" % self.tempname | 87 | def _put(self, source_path, remote_filename): |
2395 | 89 | |||
2396 | 90 | def put(self, source_path, remote_filename = None): | ||
2397 | 91 | """Transfer source_path to remote_filename""" | ||
2398 | 92 | if not remote_filename: | ||
2399 | 93 | remote_filename = source_path.get_filename() | ||
2401 | 94 | remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip() | 88 | remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip() |
2402 | 95 | commandline = "lftp -c 'source %s;put \'%s\' -o \'%s\''" % \ | 89 | commandline = "lftp -c 'source %s;put \'%s\' -o \'%s\''" % \ |
2403 | 96 | (self.tempname, source_path.name, remote_path) | 90 | (self.tempname, source_path.name, remote_path) |
2405 | 97 | l = self.run_command_persist(commandline) | 91 | self.subprocess_popen(commandline) |
2406 | 98 | 92 | ||
2409 | 99 | def get(self, remote_filename, local_path): | 93 | def _get(self, remote_filename, local_path): |
2408 | 100 | """Get remote filename, saving it to local_path""" | ||
2410 | 101 | remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip() | 94 | remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip() |
2411 | 102 | commandline = "lftp -c 'source %s;get %s -o %s'" % \ | 95 | commandline = "lftp -c 'source %s;get %s -o %s'" % \ |
2412 | 103 | (self.tempname, remote_path.lstrip('/'), local_path.name) | 96 | (self.tempname, remote_path.lstrip('/'), local_path.name) |
2415 | 104 | self.run_command_persist(commandline) | 97 | self.subprocess_popen(commandline) |
2414 | 105 | local_path.setdata() | ||
2416 | 106 | 98 | ||
2417 | 107 | def _list(self): | 99 | def _list(self): |
2418 | 108 | """List files in directory""" | ||
2419 | 109 | # Do a long listing to avoid connection reset | 100 | # Do a long listing to avoid connection reset |
2420 | 110 | remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip() | 101 | remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip() |
2421 | 111 | commandline = "lftp -c 'source %s;ls \'%s\''" % (self.tempname, remote_dir) | 102 | commandline = "lftp -c 'source %s;ls \'%s\''" % (self.tempname, remote_dir) |
2423 | 112 | l = self.popen_persist(commandline).split('\n') | 103 | _, l, _ = self.subprocess_popen(commandline) |
2424 | 113 | # Look for our files as the last element of a long list line | 104 | # Look for our files as the last element of a long list line |
2426 | 114 | return [x.split()[-1] for x in l if x] | 105 | return [x.split()[-1] for x in l.split('\n') if x] |
2427 | 115 | 106 | ||
2437 | 116 | def delete(self, filename_list): | 107 | def _delete(self, filename): |
2438 | 117 | """Delete files in filename_list""" | 108 | remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip() |
2439 | 118 | filelist = "" | 109 | commandline = "lftp -c 'source %s;cd \'%s\';rm \'%s\''" % (self.tempname, remote_dir, filename) |
2440 | 119 | for filename in filename_list: | 110 | self.subprocess_popen(commandline) |
2432 | 120 | filelist += "\'%s\' " % filename | ||
2433 | 121 | if filelist.rstrip(): | ||
2434 | 122 | remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip() | ||
2435 | 123 | commandline = "lftp -c 'source %s;cd \'%s\';rm %s'" % (self.tempname, remote_dir, filelist.rstrip()) | ||
2436 | 124 | self.popen_persist(commandline) | ||
2441 | 125 | 111 | ||
2442 | 126 | duplicity.backend.register_backend("ftps", FTPSBackend) | 112 | duplicity.backend.register_backend("ftps", FTPSBackend) |
2443 | 127 | 113 | ||
2444 | === modified file 'duplicity/backends/gdocsbackend.py' | |||
2445 | --- duplicity/backends/gdocsbackend.py 2014-04-17 20:50:57 +0000 | |||
2446 | +++ duplicity/backends/gdocsbackend.py 2014-04-28 02:49:55 +0000 | |||
2447 | @@ -23,9 +23,7 @@ | |||
2448 | 23 | import urllib | 23 | import urllib |
2449 | 24 | 24 | ||
2450 | 25 | import duplicity.backend | 25 | import duplicity.backend |
2454 | 26 | from duplicity.backend import retry | 26 | from duplicity.errors import BackendException |
2452 | 27 | from duplicity import log | ||
2453 | 28 | from duplicity.errors import * #@UnusedWildImport | ||
2455 | 29 | 27 | ||
2456 | 30 | 28 | ||
2457 | 31 | class GDocsBackend(duplicity.backend.Backend): | 29 | class GDocsBackend(duplicity.backend.Backend): |
2458 | @@ -53,14 +51,14 @@ | |||
2459 | 53 | self.client = gdata.docs.client.DocsClient(source='duplicity $version') | 51 | self.client = gdata.docs.client.DocsClient(source='duplicity $version') |
2460 | 54 | self.client.ssl = True | 52 | self.client.ssl = True |
2461 | 55 | self.client.http_client.debug = False | 53 | self.client.http_client.debug = False |
2463 | 56 | self.__authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password()) | 54 | self._authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password()) |
2464 | 57 | 55 | ||
2465 | 58 | # Fetch destination folder entry (and crete hierarchy if required). | 56 | # Fetch destination folder entry (and crete hierarchy if required). |
2466 | 59 | folder_names = string.split(parsed_url.path[1:], '/') | 57 | folder_names = string.split(parsed_url.path[1:], '/') |
2467 | 60 | parent_folder = None | 58 | parent_folder = None |
2468 | 61 | parent_folder_id = GDocsBackend.ROOT_FOLDER_ID | 59 | parent_folder_id = GDocsBackend.ROOT_FOLDER_ID |
2469 | 62 | for folder_name in folder_names: | 60 | for folder_name in folder_names: |
2471 | 63 | entries = self.__fetch_entries(parent_folder_id, 'folder', folder_name) | 61 | entries = self._fetch_entries(parent_folder_id, 'folder', folder_name) |
2472 | 64 | if entries is not None: | 62 | if entries is not None: |
2473 | 65 | if len(entries) == 1: | 63 | if len(entries) == 1: |
2474 | 66 | parent_folder = entries[0] | 64 | parent_folder = entries[0] |
2475 | @@ -77,106 +75,54 @@ | |||
2476 | 77 | raise BackendException("Error while fetching destination folder '%s'." % folder_name) | 75 | raise BackendException("Error while fetching destination folder '%s'." % folder_name) |
2477 | 78 | self.folder = parent_folder | 76 | self.folder = parent_folder |
2478 | 79 | 77 | ||
2579 | 80 | @retry | 78 | def _put(self, source_path, remote_filename): |
2580 | 81 | def put(self, source_path, remote_filename=None, raise_errors=False): | 79 | self._delete(remote_filename) |
2581 | 82 | """Transfer source_path to remote_filename""" | 80 | |
2582 | 83 | # Default remote file name. | 81 | # Set uploader instance. Note that resumable uploads are required in order to |
2583 | 84 | if not remote_filename: | 82 | # enable uploads for all file types. |
2584 | 85 | remote_filename = source_path.get_filename() | 83 | # (see http://googleappsdeveloper.blogspot.com/2011/05/upload-all-file-types-to-any-google.html) |
2585 | 86 | 84 | file = source_path.open() | |
2586 | 87 | # Upload! | 85 | uploader = gdata.client.ResumableUploader( |
2587 | 88 | try: | 86 | self.client, file, GDocsBackend.BACKUP_DOCUMENT_TYPE, os.path.getsize(file.name), |
2588 | 89 | # If remote file already exists in destination folder, remove it. | 87 | chunk_size=gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE, |
2589 | 90 | entries = self.__fetch_entries(self.folder.resource_id.text, | 88 | desired_class=gdata.docs.data.Resource) |
2590 | 91 | GDocsBackend.BACKUP_DOCUMENT_TYPE, | 89 | if uploader: |
2591 | 92 | remote_filename) | 90 | # Chunked upload. |
2592 | 93 | for entry in entries: | 91 | entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename)) |
2593 | 94 | self.client.delete(entry.get_edit_link().href + '?delete=true', force=True) | 92 | uri = self.folder.get_resumable_create_media_link().href + '?convert=false' |
2594 | 95 | 93 | entry = uploader.UploadFile(uri, entry=entry) | |
2595 | 96 | # Set uploader instance. Note that resumable uploads are required in order to | 94 | if not entry: |
2596 | 97 | # enable uploads for all file types. | 95 | raise BackendException("Failed to upload file '%s' to remote folder '%s'" |
2597 | 98 | # (see http://googleappsdeveloper.blogspot.com/2011/05/upload-all-file-types-to-any-google.html) | 96 | % (source_path.get_filename(), self.folder.title.text)) |
2598 | 99 | file = source_path.open() | 97 | else: |
2599 | 100 | uploader = gdata.client.ResumableUploader( | 98 | raise BackendException("Failed to initialize upload of file '%s' to remote folder '%s'" |
2600 | 101 | self.client, file, GDocsBackend.BACKUP_DOCUMENT_TYPE, os.path.getsize(file.name), | 99 | % (source_path.get_filename(), self.folder.title.text)) |
2601 | 102 | chunk_size=gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE, | 100 | assert not file.close() |
2602 | 103 | desired_class=gdata.docs.data.Resource) | 101 | |
2603 | 104 | if uploader: | 102 | def _get(self, remote_filename, local_path): |
2604 | 105 | # Chunked upload. | 103 | entries = self._fetch_entries(self.folder.resource_id.text, |
2605 | 106 | entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename)) | 104 | GDocsBackend.BACKUP_DOCUMENT_TYPE, |
2606 | 107 | uri = self.folder.get_resumable_create_media_link().href + '?convert=false' | 105 | remote_filename) |
2607 | 108 | entry = uploader.UploadFile(uri, entry=entry) | 106 | if len(entries) == 1: |
2608 | 109 | if not entry: | 107 | entry = entries[0] |
2609 | 110 | self.__handle_error("Failed to upload file '%s' to remote folder '%s'" | 108 | self.client.DownloadResource(entry, local_path.name) |
2610 | 111 | % (source_path.get_filename(), self.folder.title.text), raise_errors) | 109 | else: |
2611 | 112 | else: | 110 | raise BackendException("Failed to find file '%s' in remote folder '%s'" |
2612 | 113 | self.__handle_error("Failed to initialize upload of file '%s' to remote folder '%s'" | 111 | % (remote_filename, self.folder.title.text)) |
2613 | 114 | % (source_path.get_filename(), self.folder.title.text), raise_errors) | 112 | |
2614 | 115 | assert not file.close() | 113 | def _list(self): |
2615 | 116 | except Exception as e: | 114 | entries = self._fetch_entries(self.folder.resource_id.text, |
2616 | 117 | self.__handle_error("Failed to upload file '%s' to remote folder '%s': %s" | 115 | GDocsBackend.BACKUP_DOCUMENT_TYPE) |
2617 | 118 | % (source_path.get_filename(), self.folder.title.text, str(e)), raise_errors) | 116 | return [entry.title.text for entry in entries] |
2618 | 119 | 117 | ||
2619 | 120 | @retry | 118 | def _delete(self, filename): |
2620 | 121 | def get(self, remote_filename, local_path, raise_errors=False): | 119 | entries = self._fetch_entries(self.folder.resource_id.text, |
2621 | 122 | """Get remote filename, saving it to local_path""" | 120 | GDocsBackend.BACKUP_DOCUMENT_TYPE, |
2622 | 123 | try: | 121 | filename) |
2623 | 124 | entries = self.__fetch_entries(self.folder.resource_id.text, | 122 | for entry in entries: |
2624 | 125 | GDocsBackend.BACKUP_DOCUMENT_TYPE, | 123 | self.client.delete(entry.get_edit_link().href + '?delete=true', force=True) |
2625 | 126 | remote_filename) | 124 | |
2626 | 127 | if len(entries) == 1: | 125 | def _authorize(self, email, password, captcha_token=None, captcha_response=None): |
2527 | 128 | entry = entries[0] | ||
2528 | 129 | self.client.DownloadResource(entry, local_path.name) | ||
2529 | 130 | local_path.setdata() | ||
2530 | 131 | return | ||
2531 | 132 | else: | ||
2532 | 133 | self.__handle_error("Failed to find file '%s' in remote folder '%s'" | ||
2533 | 134 | % (remote_filename, self.folder.title.text), raise_errors) | ||
2534 | 135 | except Exception as e: | ||
2535 | 136 | self.__handle_error("Failed to download file '%s' in remote folder '%s': %s" | ||
2536 | 137 | % (remote_filename, self.folder.title.text, str(e)), raise_errors) | ||
2537 | 138 | |||
2538 | 139 | @retry | ||
2539 | 140 | def _list(self, raise_errors=False): | ||
2540 | 141 | """List files in folder""" | ||
2541 | 142 | try: | ||
2542 | 143 | entries = self.__fetch_entries(self.folder.resource_id.text, | ||
2543 | 144 | GDocsBackend.BACKUP_DOCUMENT_TYPE) | ||
2544 | 145 | return [entry.title.text for entry in entries] | ||
2545 | 146 | except Exception as e: | ||
2546 | 147 | self.__handle_error("Failed to fetch list of files in remote folder '%s': %s" | ||
2547 | 148 | % (self.folder.title.text, str(e)), raise_errors) | ||
2548 | 149 | |||
2549 | 150 | @retry | ||
2550 | 151 | def delete(self, filename_list, raise_errors=False): | ||
2551 | 152 | """Delete files in filename_list""" | ||
2552 | 153 | for filename in filename_list: | ||
2553 | 154 | try: | ||
2554 | 155 | entries = self.__fetch_entries(self.folder.resource_id.text, | ||
2555 | 156 | GDocsBackend.BACKUP_DOCUMENT_TYPE, | ||
2556 | 157 | filename) | ||
2557 | 158 | if len(entries) > 0: | ||
2558 | 159 | success = True | ||
2559 | 160 | for entry in entries: | ||
2560 | 161 | if not self.client.delete(entry.get_edit_link().href + '?delete=true', force=True): | ||
2561 | 162 | success = False | ||
2562 | 163 | if not success: | ||
2563 | 164 | self.__handle_error("Failed to remove file '%s' in remote folder '%s'" | ||
2564 | 165 | % (filename, self.folder.title.text), raise_errors) | ||
2565 | 166 | else: | ||
2566 | 167 | log.Warn("Failed to fetch file '%s' in remote folder '%s'" | ||
2567 | 168 | % (filename, self.folder.title.text)) | ||
2568 | 169 | except Exception as e: | ||
2569 | 170 | self.__handle_error("Failed to remove file '%s' in remote folder '%s': %s" | ||
2570 | 171 | % (filename, self.folder.title.text, str(e)), raise_errors) | ||
2571 | 172 | |||
2572 | 173 | def __handle_error(self, message, raise_errors=True): | ||
2573 | 174 | if raise_errors: | ||
2574 | 175 | raise BackendException(message) | ||
2575 | 176 | else: | ||
2576 | 177 | log.FatalError(message, log.ErrorCode.backend_error) | ||
2577 | 178 | |||
2578 | 179 | def __authorize(self, email, password, captcha_token=None, captcha_response=None): | ||
2627 | 180 | try: | 126 | try: |
2628 | 181 | self.client.client_login(email, | 127 | self.client.client_login(email, |
2629 | 182 | password, | 128 | password, |
2630 | @@ -189,17 +135,15 @@ | |||
2631 | 189 | answer = None | 135 | answer = None |
2632 | 190 | while not answer: | 136 | while not answer: |
2633 | 191 | answer = raw_input('Answer to the challenge? ') | 137 | answer = raw_input('Answer to the challenge? ') |
2635 | 192 | self.__authorize(email, password, challenge.captcha_token, answer) | 138 | self._authorize(email, password, challenge.captcha_token, answer) |
2636 | 193 | except gdata.client.BadAuthentication: | 139 | except gdata.client.BadAuthentication: |
2644 | 194 | self.__handle_error('Invalid user credentials given. Be aware that accounts ' | 140 | raise BackendException('Invalid user credentials given. Be aware that accounts ' |
2645 | 195 | 'that use 2-step verification require creating an application specific ' | 141 | 'that use 2-step verification require creating an application specific ' |
2646 | 196 | 'access code for using this Duplicity backend. Follow the instrucction in ' | 142 | 'access code for using this Duplicity backend. Follow the instruction in ' |
2647 | 197 | 'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 ' | 143 | 'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 ' |
2648 | 198 | 'and create your application-specific password to run duplicity backups.') | 144 | 'and create your application-specific password to run duplicity backups.') |
2642 | 199 | except Exception as e: | ||
2643 | 200 | self.__handle_error('Error while authenticating client: %s.' % str(e)) | ||
2649 | 201 | 145 | ||
2651 | 202 | def __fetch_entries(self, folder_id, type, title=None): | 146 | def _fetch_entries(self, folder_id, type, title=None): |
2652 | 203 | # Build URI. | 147 | # Build URI. |
2653 | 204 | uri = '/feeds/default/private/full/%s/contents' % folder_id | 148 | uri = '/feeds/default/private/full/%s/contents' % folder_id |
2654 | 205 | if type == 'folder': | 149 | if type == 'folder': |
2655 | @@ -211,34 +155,31 @@ | |||
2656 | 211 | if title: | 155 | if title: |
2657 | 212 | uri += '&title=' + urllib.quote(title) + '&title-exact=true' | 156 | uri += '&title=' + urllib.quote(title) + '&title-exact=true' |
2658 | 213 | 157 | ||
2688 | 214 | try: | 158 | # Fetch entries. |
2689 | 215 | # Fetch entries. | 159 | entries = self.client.get_all_resources(uri=uri) |
2690 | 216 | entries = self.client.get_all_resources(uri=uri) | 160 | |
2691 | 217 | 161 | # When filtering by entry title, API is returning (don't know why) documents in other | |
2692 | 218 | # When filtering by entry title, API is returning (don't know why) documents in other | 162 | # folders (apart from folder_id) matching the title, so some extra filtering is required. |
2693 | 219 | # folders (apart from folder_id) matching the title, so some extra filtering is required. | 163 | if title: |
2694 | 220 | if title: | 164 | result = [] |
2695 | 221 | result = [] | 165 | for entry in entries: |
2696 | 222 | for entry in entries: | 166 | resource_type = entry.get_resource_type() |
2697 | 223 | resource_type = entry.get_resource_type() | 167 | if (not type) \ |
2698 | 224 | if (not type) \ | 168 | or (type == 'folder' and resource_type == 'folder') \ |
2699 | 225 | or (type == 'folder' and resource_type == 'folder') \ | 169 | or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'): |
2700 | 226 | or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'): | 170 | |
2701 | 227 | 171 | if folder_id != GDocsBackend.ROOT_FOLDER_ID: | |
2702 | 228 | if folder_id != GDocsBackend.ROOT_FOLDER_ID: | 172 | for link in entry.in_collections(): |
2703 | 229 | for link in entry.in_collections(): | 173 | folder_entry = self.client.get_entry(link.href, None, None, |
2704 | 230 | folder_entry = self.client.get_entry(link.href, None, None, | 174 | desired_class=gdata.docs.data.Resource) |
2705 | 231 | desired_class=gdata.docs.data.Resource) | 175 | if folder_entry and (folder_entry.resource_id.text == folder_id): |
2706 | 232 | if folder_entry and (folder_entry.resource_id.text == folder_id): | 176 | result.append(entry) |
2707 | 233 | result.append(entry) | 177 | elif len(entry.in_collections()) == 0: |
2708 | 234 | elif len(entry.in_collections()) == 0: | 178 | result.append(entry) |
2709 | 235 | result.append(entry) | 179 | else: |
2710 | 236 | else: | 180 | result = entries |
2711 | 237 | result = entries | 181 | |
2712 | 238 | 182 | # Done! | |
2713 | 239 | # Done! | 183 | return result |
2685 | 240 | return result | ||
2686 | 241 | except Exception as e: | ||
2687 | 242 | self.__handle_error('Error while fetching remote entries: %s.' % str(e)) | ||
2714 | 243 | 184 | ||
2715 | 244 | duplicity.backend.register_backend('gdocs', GDocsBackend) | 185 | duplicity.backend.register_backend('gdocs', GDocsBackend) |
2716 | 245 | 186 | ||
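The gdocs hunk above keeps the comment explaining why `_fetch_entries` must re-filter the API results: a title query returns matching documents from *other* folders, so entries are checked against the requested type and parent folder. A minimal sketch of that filtering, using hypothetical stand-in entry objects (the real code uses `gdata.docs.data.Resource`; `FakeEntry` and its fields are illustrative only):

```python
class FakeEntry:
    """Illustrative stand-in for a gdata resource entry (not the real API)."""
    def __init__(self, name, resource_type, parents):
        self.name = name
        self._type = resource_type
        self._parents = parents  # ids of the folders containing this entry

    def get_resource_type(self):
        return self._type

    def in_collections(self):
        return self._parents

ROOT_FOLDER_ID = 'folder%3Aroot'  # assumed value, mirrors GDocsBackend.ROOT_FOLDER_ID

def filter_entries(entries, folder_id, type=None):
    """Mirror the extra filtering in _fetch_entries: keep only entries of
    the requested type that actually live in folder_id."""
    result = []
    for entry in entries:
        resource_type = entry.get_resource_type()
        # Type must match: 'folder' selects folders, any other type selects
        # non-folders, and type=None accepts everything.
        if type and (type == 'folder') != (resource_type == 'folder'):
            continue
        if folder_id != ROOT_FOLDER_ID:
            # Keep the entry only if one of its parent folders is folder_id.
            if folder_id in entry.in_collections():
                result.append(entry)
        elif not entry.in_collections():
            # Root-level entries are those with no parent collections.
            result.append(entry)
    return result
```

The real implementation has to resolve each parent link with `client.get_entry` and compare `resource_id.text`; the sketch collapses that to an id lookup to show the shape of the filter.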
2717 | === modified file 'duplicity/backends/giobackend.py' | |||
2718 | --- duplicity/backends/giobackend.py 2014-04-17 20:50:57 +0000 | |||
2719 | +++ duplicity/backends/giobackend.py 2014-04-28 02:49:55 +0000 | |||
2720 | @@ -19,18 +19,12 @@ | |||
2721 | 19 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 19 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
2722 | 20 | 20 | ||
2723 | 21 | import os | 21 | import os |
2724 | 22 | import types | ||
2725 | 23 | import subprocess | 22 | import subprocess |
2726 | 24 | import atexit | 23 | import atexit |
2727 | 25 | import signal | 24 | import signal |
2728 | 26 | from gi.repository import Gio #@UnresolvedImport | ||
2729 | 27 | from gi.repository import GLib #@UnresolvedImport | ||
2730 | 28 | 25 | ||
2731 | 29 | import duplicity.backend | 26 | import duplicity.backend |
2732 | 30 | from duplicity.backend import retry | ||
2733 | 31 | from duplicity import log | 27 | from duplicity import log |
2734 | 32 | from duplicity import util | ||
2735 | 33 | from duplicity.errors import * #@UnusedWildImport | ||
2736 | 34 | 28 | ||
2737 | 35 | def ensure_dbus(): | 29 | def ensure_dbus(): |
2738 | 36 | # GIO requires a dbus session bus which can start the gvfs daemons | 30 | # GIO requires a dbus session bus which can start the gvfs daemons |
2739 | @@ -46,36 +40,39 @@ | |||
2740 | 46 | atexit.register(os.kill, int(parts[1]), signal.SIGTERM) | 40 | atexit.register(os.kill, int(parts[1]), signal.SIGTERM) |
2741 | 47 | os.environ[parts[0]] = parts[1] | 41 | os.environ[parts[0]] = parts[1] |
2742 | 48 | 42 | ||
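The tail of `ensure_dbus()` shown above consumes `dbus-launch` output: each line is `KEY=VALUE`, every pair is exported into the environment, and the bus PID is additionally scheduled for `SIGTERM` at exit so the spawned daemon does not outlive duplicity. A sketch of that parsing, with the cleanup hook injectable so it can be exercised without actually killing anything (the function name is illustrative, not part of the branch):

```python
import atexit
import os
import signal

def apply_dbus_launch_output(output, register_kill=atexit.register):
    """Parse `dbus-launch` style KEY=VALUE output: export every pair into
    the environment and schedule SIGTERM for the session-bus PID at exit."""
    for line in output.splitlines():
        parts = line.split('=', 1)
        if len(parts) != 2:
            continue
        if parts[0] == 'DBUS_SESSION_BUS_PID':
            # Clean up the daemon we started when the process exits.
            register_kill(os.kill, int(parts[1]), signal.SIGTERM)
        os.environ[parts[0]] = parts[1]
```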
2743 | 49 | class DupMountOperation(Gio.MountOperation): | ||
2744 | 50 | """A simple MountOperation that grabs the password from the environment | ||
2745 | 51 | or the user. | ||
2746 | 52 | """ | ||
2747 | 53 | def __init__(self, backend): | ||
2748 | 54 | Gio.MountOperation.__init__(self) | ||
2749 | 55 | self.backend = backend | ||
2750 | 56 | self.connect('ask-password', self.ask_password_cb) | ||
2751 | 57 | self.connect('ask-question', self.ask_question_cb) | ||
2752 | 58 | |||
2753 | 59 | def ask_password_cb(self, *args, **kwargs): | ||
2754 | 60 | self.set_password(self.backend.get_password()) | ||
2755 | 61 | self.reply(Gio.MountOperationResult.HANDLED) | ||
2756 | 62 | |||
2757 | 63 | def ask_question_cb(self, *args, **kwargs): | ||
2758 | 64 | # Obviously just always answering with the first choice is a naive | ||
2759 | 65 | # approach. But there's no easy way to allow for answering questions | ||
2760 | 66 | # in duplicity's typical run-from-cron mode with environment variables. | ||
2761 | 67 | # And only a couple gvfs backends ask questions: 'sftp' does about | ||
2762 | 68 | # new hosts and 'afc' does if the device is locked. 0 should be a | ||
2763 | 69 | # safe choice. | ||
2764 | 70 | self.set_choice(0) | ||
2765 | 71 | self.reply(Gio.MountOperationResult.HANDLED) | ||
2766 | 72 | |||
2767 | 73 | class GIOBackend(duplicity.backend.Backend): | 43 | class GIOBackend(duplicity.backend.Backend): |
2768 | 74 | """Use this backend when saving to a GIO URL. | 44 | """Use this backend when saving to a GIO URL. |
2769 | 75 | This is a bit of a meta-backend, in that it can handle multiple schemas. | 45 | This is a bit of a meta-backend, in that it can handle multiple schemas. |
2770 | 76 | URLs look like schema://user@server/path. | 46 | URLs look like schema://user@server/path. |
2771 | 77 | """ | 47 | """ |
2772 | 78 | def __init__(self, parsed_url): | 48 | def __init__(self, parsed_url): |
2773 | 49 | from gi.repository import Gio #@UnresolvedImport | ||
2774 | 50 | from gi.repository import GLib #@UnresolvedImport | ||
2775 | 51 | |||
2776 | 52 | class DupMountOperation(Gio.MountOperation): | ||
2777 | 53 | """A simple MountOperation that grabs the password from the environment | ||
2778 | 54 | or the user. | ||
2779 | 55 | """ | ||
2780 | 56 | def __init__(self, backend): | ||
2781 | 57 | Gio.MountOperation.__init__(self) | ||
2782 | 58 | self.backend = backend | ||
2783 | 59 | self.connect('ask-password', self.ask_password_cb) | ||
2784 | 60 | self.connect('ask-question', self.ask_question_cb) | ||
2785 | 61 | |||
2786 | 62 | def ask_password_cb(self, *args, **kwargs): | ||
2787 | 63 | self.set_password(self.backend.get_password()) | ||
2788 | 64 | self.reply(Gio.MountOperationResult.HANDLED) | ||
2789 | 65 | |||
2790 | 66 | def ask_question_cb(self, *args, **kwargs): | ||
2791 | 67 | # Obviously just always answering with the first choice is a naive | ||
2792 | 68 | # approach. But there's no easy way to allow for answering questions | ||
2793 | 69 | # in duplicity's typical run-from-cron mode with environment variables. | ||
2794 | 70 | # And only a couple gvfs backends ask questions: 'sftp' does about | ||
2795 | 71 | # new hosts and 'afc' does if the device is locked. 0 should be a | ||
2796 | 72 | # safe choice. | ||
2797 | 73 | self.set_choice(0) | ||
2798 | 74 | self.reply(Gio.MountOperationResult.HANDLED) | ||
2799 | 75 | |||
2800 | 79 | duplicity.backend.Backend.__init__(self, parsed_url) | 76 | duplicity.backend.Backend.__init__(self, parsed_url) |
2801 | 80 | 77 | ||
2802 | 81 | ensure_dbus() | 78 | ensure_dbus() |
2803 | @@ -86,8 +83,8 @@ | |||
2804 | 86 | op = DupMountOperation(self) | 83 | op = DupMountOperation(self) |
2805 | 87 | loop = GLib.MainLoop() | 84 | loop = GLib.MainLoop() |
2806 | 88 | self.remote_file.mount_enclosing_volume(Gio.MountMountFlags.NONE, | 85 | self.remote_file.mount_enclosing_volume(Gio.MountMountFlags.NONE, |
2809 | 89 | op, None, self.done_with_mount, | 86 | op, None, |
2810 | 90 | loop) | 87 | self.__done_with_mount, loop) |
2811 | 91 | loop.run() # halt program until we're done mounting | 88 | loop.run() # halt program until we're done mounting |
2812 | 92 | 89 | ||
2813 | 93 | # Now make the directory if it doesn't exist | 90 | # Now make the directory if it doesn't exist |
2814 | @@ -97,7 +94,9 @@ | |||
2815 | 97 | if e.code != Gio.IOErrorEnum.EXISTS: | 94 | if e.code != Gio.IOErrorEnum.EXISTS: |
2816 | 98 | raise | 95 | raise |
2817 | 99 | 96 | ||
2819 | 100 | def done_with_mount(self, fileobj, result, loop): | 97 | def __done_with_mount(self, fileobj, result, loop): |
2820 | 98 | from gi.repository import Gio #@UnresolvedImport | ||
2821 | 99 | from gi.repository import GLib #@UnresolvedImport | ||
2822 | 101 | try: | 100 | try: |
2823 | 102 | fileobj.mount_enclosing_volume_finish(result) | 101 | fileobj.mount_enclosing_volume_finish(result) |
2824 | 103 | except GLib.GError as e: | 102 | except GLib.GError as e: |
2825 | @@ -107,97 +106,63 @@ | |||
2826 | 107 | % str(e), log.ErrorCode.connection_failed) | 106 | % str(e), log.ErrorCode.connection_failed) |
2827 | 108 | loop.quit() | 107 | loop.quit() |
2828 | 109 | 108 | ||
2833 | 110 | def handle_error(self, raise_error, e, op, file1=None, file2=None): | 109 | def __copy_progress(self, *args, **kwargs): |
2834 | 111 | if raise_error: | 110 | pass |
2835 | 112 | raise e | 111 | |
2836 | 113 | code = log.ErrorCode.backend_error | 112 | def __copy_file(self, source, target): |
2837 | 113 | from gi.repository import Gio #@UnresolvedImport | ||
2838 | 114 | source.copy(target, | ||
2839 | 115 | Gio.FileCopyFlags.OVERWRITE | Gio.FileCopyFlags.NOFOLLOW_SYMLINKS, | ||
2840 | 116 | None, self.__copy_progress, None) | ||
2841 | 117 | |||
2842 | 118 | def _error_code(self, operation, e): | ||
2843 | 119 | from gi.repository import Gio #@UnresolvedImport | ||
2844 | 120 | from gi.repository import GLib #@UnresolvedImport | ||
2845 | 114 | if isinstance(e, GLib.GError): | 121 | if isinstance(e, GLib.GError): |
2848 | 115 | if e.code == Gio.IOErrorEnum.PERMISSION_DENIED: | 122 | if e.code == Gio.IOErrorEnum.FAILED and operation == 'delete': |
2849 | 116 | code = log.ErrorCode.backend_permission_denied | 123 | # Sometimes delete will return a generic failure on a file not |
2850 | 124 | # found (notably the FTP does that) | ||
2851 | 125 | return log.ErrorCode.backend_not_found | ||
2852 | 126 | elif e.code == Gio.IOErrorEnum.PERMISSION_DENIED: | ||
2853 | 127 | return log.ErrorCode.backend_permission_denied | ||
2854 | 117 | elif e.code == Gio.IOErrorEnum.NOT_FOUND: | 128 | elif e.code == Gio.IOErrorEnum.NOT_FOUND: |
2856 | 118 | code = log.ErrorCode.backend_not_found | 129 | return log.ErrorCode.backend_not_found |
2857 | 119 | elif e.code == Gio.IOErrorEnum.NO_SPACE: | 130 | elif e.code == Gio.IOErrorEnum.NO_SPACE: |
2881 | 120 | code = log.ErrorCode.backend_no_space | 131 | return log.ErrorCode.backend_no_space |
2882 | 121 | extra = ' '.join([util.escape(x) for x in [file1, file2] if x]) | 132 | |
2883 | 122 | extra = ' '.join([op, extra]) | 133 | def _put(self, source_path, remote_filename): |
2884 | 123 | log.FatalError(str(e), code, extra) | 134 | from gi.repository import Gio #@UnresolvedImport |
2862 | 124 | |||
2863 | 125 | def copy_progress(self, *args, **kwargs): | ||
2864 | 126 | pass | ||
2865 | 127 | |||
2866 | 128 | @retry | ||
2867 | 129 | def copy_file(self, op, source, target, raise_errors=False): | ||
2868 | 130 | log.Info(_("Writing %s") % target.get_parse_name()) | ||
2869 | 131 | try: | ||
2870 | 132 | source.copy(target, | ||
2871 | 133 | Gio.FileCopyFlags.OVERWRITE | Gio.FileCopyFlags.NOFOLLOW_SYMLINKS, | ||
2872 | 134 | None, self.copy_progress, None) | ||
2873 | 135 | except Exception as e: | ||
2874 | 136 | self.handle_error(raise_errors, e, op, source.get_parse_name(), | ||
2875 | 137 | target.get_parse_name()) | ||
2876 | 138 | |||
2877 | 139 | def put(self, source_path, remote_filename = None): | ||
2878 | 140 | """Copy file to remote""" | ||
2879 | 141 | if not remote_filename: | ||
2880 | 142 | remote_filename = source_path.get_filename() | ||
2885 | 143 | source_file = Gio.File.new_for_path(source_path.name) | 135 | source_file = Gio.File.new_for_path(source_path.name) |
2886 | 144 | target_file = self.remote_file.get_child(remote_filename) | 136 | target_file = self.remote_file.get_child(remote_filename) |
2888 | 145 | self.copy_file('put', source_file, target_file) | 137 | self.__copy_file(source_file, target_file) |
2889 | 146 | 138 | ||
2892 | 147 | def get(self, filename, local_path): | 139 | def _get(self, filename, local_path): |
2893 | 148 | """Get file and put in local_path (Path object)""" | 140 | from gi.repository import Gio #@UnresolvedImport |
2894 | 149 | source_file = self.remote_file.get_child(filename) | 141 | source_file = self.remote_file.get_child(filename) |
2895 | 150 | target_file = Gio.File.new_for_path(local_path.name) | 142 | target_file = Gio.File.new_for_path(local_path.name) |
2898 | 151 | self.copy_file('get', source_file, target_file) | 143 | self.__copy_file(source_file, target_file) |
2897 | 152 | local_path.setdata() | ||
2899 | 153 | 144 | ||
2903 | 154 | @retry | 145 | def _list(self): |
2904 | 155 | def _list(self, raise_errors=False): | 146 | from gi.repository import Gio #@UnresolvedImport |
2902 | 156 | """List files in that directory""" | ||
2905 | 157 | files = [] | 147 | files = [] |
2910 | 158 | try: | 148 | enum = self.remote_file.enumerate_children(Gio.FILE_ATTRIBUTE_STANDARD_NAME, |
2911 | 159 | enum = self.remote_file.enumerate_children(Gio.FILE_ATTRIBUTE_STANDARD_NAME, | 149 | Gio.FileQueryInfoFlags.NOFOLLOW_SYMLINKS, |
2912 | 160 | Gio.FileQueryInfoFlags.NOFOLLOW_SYMLINKS, | 150 | None) |
2913 | 161 | None) | 151 | info = enum.next_file(None) |
2914 | 152 | while info: | ||
2915 | 153 | files.append(info.get_name()) | ||
2916 | 162 | info = enum.next_file(None) | 154 | info = enum.next_file(None) |
2917 | 163 | while info: | ||
2918 | 164 | files.append(info.get_name()) | ||
2919 | 165 | info = enum.next_file(None) | ||
2920 | 166 | except Exception as e: | ||
2921 | 167 | self.handle_error(raise_errors, e, 'list', | ||
2922 | 168 | self.remote_file.get_parse_name()) | ||
2923 | 169 | return files | 155 | return files |
2924 | 170 | 156 | ||
2958 | 171 | @retry | 157 | def _delete(self, filename): |
2959 | 172 | def delete(self, filename_list, raise_errors=False): | 158 | target_file = self.remote_file.get_child(filename) |
2960 | 173 | """Delete all files in filename list""" | 159 | target_file.delete(None) |
2961 | 174 | assert type(filename_list) is not types.StringType | 160 | |
2962 | 175 | for filename in filename_list: | 161 | def _query(self, filename): |
2963 | 176 | target_file = self.remote_file.get_child(filename) | 162 | from gi.repository import Gio #@UnresolvedImport |
2964 | 177 | try: | 163 | target_file = self.remote_file.get_child(filename) |
2965 | 178 | target_file.delete(None) | 164 | info = target_file.query_info(Gio.FILE_ATTRIBUTE_STANDARD_SIZE, |
2966 | 179 | except Exception as e: | 165 | Gio.FileQueryInfoFlags.NONE, None) |
2967 | 180 | if isinstance(e, GLib.GError): | 166 | return {'size': info.get_size()} |
2968 | 181 | if e.code == Gio.IOErrorEnum.NOT_FOUND: | 167 | |
2969 | 182 | continue | 168 | duplicity.backend.register_backend_prefix('gio', GIOBackend) |
2937 | 183 | self.handle_error(raise_errors, e, 'delete', | ||
2938 | 184 | target_file.get_parse_name()) | ||
2939 | 185 | return | ||
2940 | 186 | |||
2941 | 187 | @retry | ||
2942 | 188 | def _query_file_info(self, filename, raise_errors=False): | ||
2943 | 189 | """Query attributes on filename""" | ||
2944 | 190 | target_file = self.remote_file.get_child(filename) | ||
2945 | 191 | attrs = Gio.FILE_ATTRIBUTE_STANDARD_SIZE | ||
2946 | 192 | try: | ||
2947 | 193 | info = target_file.query_info(attrs, Gio.FileQueryInfoFlags.NONE, | ||
2948 | 194 | None) | ||
2949 | 195 | return {'size': info.get_size()} | ||
2950 | 196 | except Exception as e: | ||
2951 | 197 | if isinstance(e, GLib.GError): | ||
2952 | 198 | if e.code == Gio.IOErrorEnum.NOT_FOUND: | ||
2953 | 199 | return {'size': -1} # early exit, no need to retry | ||
2954 | 200 | if raise_errors: | ||
2955 | 201 | raise e | ||
2956 | 202 | else: | ||
2957 | 203 | return {'size': None} | ||
2970 | 204 | 169 | ||
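The new `_error_code` hook above is the interesting part of the giobackend rewrite: instead of each method calling `handle_error` itself, the backend only translates a `GLib.GError` into a generic duplicity error code, including the quirk that gvfs FTP reports a plain `FAILED` when deleting a missing file. A pure-Python sketch of that mapping, with stand-in enums since the real values come from `Gio.IOErrorEnum` and `log.ErrorCode` (the numeric values here are made up):

```python
class IOErrorEnum:
    """Stand-in for Gio.IOErrorEnum; values are illustrative."""
    FAILED = 0
    NOT_FOUND = 1
    NO_SPACE = 12
    PERMISSION_DENIED = 14

class ErrorCode:
    """Stand-in for duplicity's log.ErrorCode; values are illustrative."""
    backend_not_found = 51
    backend_permission_denied = 52
    backend_no_space = 53

def gio_error_code(operation, code):
    """Mirror of GIOBackend._error_code: map a GError code, plus the
    operation that raised it, to a generic backend error code."""
    if code == IOErrorEnum.FAILED and operation == 'delete':
        # Some gvfs backends (notably FTP) return a generic failure when
        # asked to delete a file that does not exist.
        return ErrorCode.backend_not_found
    elif code == IOErrorEnum.PERMISSION_DENIED:
        return ErrorCode.backend_permission_denied
    elif code == IOErrorEnum.NOT_FOUND:
        return ErrorCode.backend_not_found
    elif code == IOErrorEnum.NO_SPACE:
        return ErrorCode.backend_no_space
    return None  # unknown codes fall through to the caller's default handling
```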
2971 | === modified file 'duplicity/backends/hsibackend.py' | |||
2972 | --- duplicity/backends/hsibackend.py 2014-04-25 23:20:12 +0000 | |||
2973 | +++ duplicity/backends/hsibackend.py 2014-04-28 02:49:55 +0000 | |||
2974 | @@ -20,9 +20,7 @@ | |||
2975 | 20 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 20 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
2976 | 21 | 21 | ||
2977 | 22 | import os | 22 | import os |
2978 | 23 | |||
2979 | 24 | import duplicity.backend | 23 | import duplicity.backend |
2980 | 25 | from duplicity.errors import * #@UnusedWildImport | ||
2981 | 26 | 24 | ||
2982 | 27 | hsi_command = "hsi" | 25 | hsi_command = "hsi" |
2983 | 28 | class HSIBackend(duplicity.backend.Backend): | 26 | class HSIBackend(duplicity.backend.Backend): |
2984 | @@ -35,35 +33,23 @@ | |||
2985 | 35 | else: | 33 | else: |
2986 | 36 | self.remote_prefix = "" | 34 | self.remote_prefix = "" |
2987 | 37 | 35 | ||
2991 | 38 | def put(self, source_path, remote_filename = None): | 36 | def _put(self, source_path, remote_filename): |
2989 | 39 | if not remote_filename: | ||
2990 | 40 | remote_filename = source_path.get_filename() | ||
2992 | 41 | commandline = '%s "put %s : %s%s"' % (hsi_command,source_path.name,self.remote_prefix,remote_filename) | 37 | commandline = '%s "put %s : %s%s"' % (hsi_command,source_path.name,self.remote_prefix,remote_filename) |
2997 | 42 | try: | 38 | self.subprocess_popen(commandline) |
2994 | 43 | self.run_command(commandline) | ||
2995 | 44 | except Exception: | ||
2996 | 45 | print commandline | ||
2998 | 46 | 39 | ||
3000 | 47 | def get(self, remote_filename, local_path): | 40 | def _get(self, remote_filename, local_path): |
3001 | 48 | commandline = '%s "get %s : %s%s"' % (hsi_command, local_path.name, self.remote_prefix, remote_filename) | 41 | commandline = '%s "get %s : %s%s"' % (hsi_command, local_path.name, self.remote_prefix, remote_filename) |
3006 | 49 | self.run_command(commandline) | 42 | self.subprocess_popen(commandline) |
3003 | 50 | local_path.setdata() | ||
3004 | 51 | if not local_path.exists(): | ||
3005 | 52 | raise BackendException("File %s not found" % local_path.name) | ||
3007 | 53 | 43 | ||
3009 | 54 | def list(self): | 44 | def _list(self): |
3010 | 55 | commandline = '%s "ls -l %s"' % (hsi_command, self.remote_dir) | 45 | commandline = '%s "ls -l %s"' % (hsi_command, self.remote_dir) |
3011 | 56 | l = os.popen3(commandline)[2].readlines()[3:] | 46 | l = os.popen3(commandline)[2].readlines()[3:] |
3012 | 57 | for i in range(0,len(l)): | 47 | for i in range(0,len(l)): |
3013 | 58 | l[i] = l[i].split()[-1] | 48 | l[i] = l[i].split()[-1] |
3014 | 59 | return [x for x in l if x] | 49 | return [x for x in l if x] |
3015 | 60 | 50 | ||
3021 | 61 | def delete(self, filename_list): | 51 | def _delete(self, filename): |
3022 | 62 | assert len(filename_list) > 0 | 52 | commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, filename) |
3023 | 63 | for fn in filename_list: | 53 | self.subprocess_popen(commandline) |
3019 | 64 | commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, fn) | ||
3020 | 65 | self.run_command(commandline) | ||
3024 | 66 | 54 | ||
3025 | 67 | duplicity.backend.register_backend("hsi", HSIBackend) | 55 | duplicity.backend.register_backend("hsi", HSIBackend) |
3026 | 68 | |||
3027 | 69 | |||
3028 | 70 | 56 | ||
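The `_list` method kept above parses `hsi "ls -l"` output the same way before and after the refactor: the listing arrives on stderr, the first three lines are banner chatter, and the file name is the last whitespace-separated field of each remaining row. A sketch of that parsing as a standalone function (the name is illustrative; the real code indexes `split()[-1]` directly and filters empties afterwards):

```python
def parse_hsi_listing(stderr_lines):
    """Extract file names from `hsi "ls -l"` stderr output: skip the
    three banner lines, then take the last field of each listing row."""
    names = []
    for line in stderr_lines[3:]:
        fields = line.split()
        if fields:  # guard against blank rows
            names.append(fields[-1])
    return names
```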
3029 | === modified file 'duplicity/backends/imapbackend.py' | |||
3030 | --- duplicity/backends/imapbackend.py 2014-04-17 22:03:10 +0000 | |||
3031 | +++ duplicity/backends/imapbackend.py 2014-04-28 02:49:55 +0000 | |||
3032 | @@ -44,7 +44,7 @@ | |||
3033 | 44 | (self.__class__.__name__, parsed_url.scheme, parsed_url.hostname, parsed_url.username)) | 44 | (self.__class__.__name__, parsed_url.scheme, parsed_url.hostname, parsed_url.username)) |
3034 | 45 | 45 | ||
3035 | 46 | # Store url for reconnection on error | 46 | # Store url for reconnection on error |
3037 | 47 | self._url = parsed_url | 47 | self.url = parsed_url |
3038 | 48 | 48 | ||
3039 | 49 | # Set the username | 49 | # Set the username |
3040 | 50 | if ( parsed_url.username is None ): | 50 | if ( parsed_url.username is None ): |
3041 | @@ -61,12 +61,12 @@ | |||
3042 | 61 | else: | 61 | else: |
3043 | 62 | password = parsed_url.password | 62 | password = parsed_url.password |
3044 | 63 | 63 | ||
3048 | 64 | self._username = username | 64 | self.username = username |
3049 | 65 | self._password = password | 65 | self.password = password |
3050 | 66 | self._resetConnection() | 66 | self.resetConnection() |
3051 | 67 | 67 | ||
3054 | 68 | def _resetConnection(self): | 68 | def resetConnection(self): |
3055 | 69 | parsed_url = self._url | 69 | parsed_url = self.url |
3056 | 70 | try: | 70 | try: |
3057 | 71 | imap_server = os.environ['IMAP_SERVER'] | 71 | imap_server = os.environ['IMAP_SERVER'] |
3058 | 72 | except KeyError: | 72 | except KeyError: |
3059 | @@ -74,32 +74,32 @@ | |||
3060 | 74 | 74 | ||
3061 | 75 | # Try to close the connection cleanly | 75 | # Try to close the connection cleanly |
3062 | 76 | try: | 76 | try: |
3064 | 77 | self._conn.close() | 77 | self.conn.close() |
3065 | 78 | except Exception: | 78 | except Exception: |
3066 | 79 | pass | 79 | pass |
3067 | 80 | 80 | ||
3068 | 81 | if (parsed_url.scheme == "imap"): | 81 | if (parsed_url.scheme == "imap"): |
3069 | 82 | cl = imaplib.IMAP4 | 82 | cl = imaplib.IMAP4 |
3071 | 83 | self._conn = cl(imap_server, 143) | 83 | self.conn = cl(imap_server, 143) |
3072 | 84 | elif (parsed_url.scheme == "imaps"): | 84 | elif (parsed_url.scheme == "imaps"): |
3073 | 85 | cl = imaplib.IMAP4_SSL | 85 | cl = imaplib.IMAP4_SSL |
3075 | 86 | self._conn = cl(imap_server, 993) | 86 | self.conn = cl(imap_server, 993) |
3076 | 87 | 87 | ||
3077 | 88 | log.Debug("Type of imap class: %s" % (cl.__name__)) | 88 | log.Debug("Type of imap class: %s" % (cl.__name__)) |
3078 | 89 | self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1) | 89 | self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1) |
3079 | 90 | 90 | ||
3080 | 91 | # Login | 91 | # Login |
3081 | 92 | if (not(globals.imap_full_address)): | 92 | if (not(globals.imap_full_address)): |
3084 | 93 | self._conn.login(self._username, self._password) | 93 | self.conn.login(self.username, self.password) |
3085 | 94 | self._conn.select(globals.imap_mailbox) | 94 | self.conn.select(globals.imap_mailbox) |
3086 | 95 | log.Info("IMAP connected") | 95 | log.Info("IMAP connected") |
3087 | 96 | else: | 96 | else: |
3090 | 97 | self._conn.login(self._username + "@" + parsed_url.hostname, self._password) | 97 | self.conn.login(self.username + "@" + parsed_url.hostname, self.password) |
3091 | 98 | self._conn.select(globals.imap_mailbox) | 98 | self.conn.select(globals.imap_mailbox) |
3092 | 99 | log.Info("IMAP connected") | 99 | log.Info("IMAP connected") |
3093 | 100 | 100 | ||
3094 | 101 | 101 | ||
3096 | 102 | def _prepareBody(self,f,rname): | 102 | def prepareBody(self,f,rname): |
3097 | 103 | mp = email.MIMEMultipart.MIMEMultipart() | 103 | mp = email.MIMEMultipart.MIMEMultipart() |
3098 | 104 | 104 | ||
3099 | 105 | # I am going to use the remote_dir as the From address so that | 105 | # I am going to use the remote_dir as the From address so that |
3100 | @@ -117,9 +117,7 @@ | |||
3101 | 117 | 117 | ||
3102 | 118 | return mp.as_string() | 118 | return mp.as_string() |
3103 | 119 | 119 | ||
3107 | 120 | def put(self, source_path, remote_filename = None): | 120 | def _put(self, source_path, remote_filename): |
3105 | 121 | if not remote_filename: | ||
3106 | 122 | remote_filename = source_path.get_filename() | ||
3108 | 123 | f=source_path.open("rb") | 121 | f=source_path.open("rb") |
3109 | 124 | allowedTimeout = globals.timeout | 122 | allowedTimeout = globals.timeout |
3110 | 125 | if (allowedTimeout == 0): | 123 | if (allowedTimeout == 0): |
3111 | @@ -127,12 +125,12 @@ | |||
3112 | 127 | allowedTimeout = 2880 | 125 | allowedTimeout = 2880 |
3113 | 128 | while allowedTimeout > 0: | 126 | while allowedTimeout > 0: |
3114 | 129 | try: | 127 | try: |
3117 | 130 | self._conn.select(remote_filename) | 128 | self.conn.select(remote_filename) |
3118 | 131 | body=self._prepareBody(f,remote_filename) | 129 | body=self.prepareBody(f,remote_filename) |
3119 | 132 | # If we don't select the IMAP folder before | 130 | # If we don't select the IMAP folder before |
3120 | 133 | # append, the message goes into the INBOX. | 131 | # append, the message goes into the INBOX. |
3123 | 134 | self._conn.select(globals.imap_mailbox) | 132 | self.conn.select(globals.imap_mailbox) |
3124 | 135 | self._conn.append(globals.imap_mailbox, None, None, body) | 133 | self.conn.append(globals.imap_mailbox, None, None, body) |
3125 | 136 | break | 134 | break |
3126 | 137 | except (imaplib.IMAP4.abort, socket.error, socket.sslerror): | 135 | except (imaplib.IMAP4.abort, socket.error, socket.sslerror): |
3127 | 138 | allowedTimeout -= 1 | 136 | allowedTimeout -= 1 |
3128 | @@ -140,7 +138,7 @@ | |||
3129 | 140 | time.sleep(30) | 138 | time.sleep(30) |
3130 | 141 | while allowedTimeout > 0: | 139 | while allowedTimeout > 0: |
3131 | 142 | try: | 140 | try: |
3133 | 143 | self._resetConnection() | 141 | self.resetConnection() |
3134 | 144 | break | 142 | break |
3135 | 145 | except (imaplib.IMAP4.abort, socket.error, socket.sslerror): | 143 | except (imaplib.IMAP4.abort, socket.error, socket.sslerror): |
3136 | 146 | allowedTimeout -= 1 | 144 | allowedTimeout -= 1 |
3137 | @@ -149,15 +147,15 @@ | |||
3138 | 149 | 147 | ||
3139 | 150 | log.Info("IMAP mail with '%s' subject stored" % remote_filename) | 148 | log.Info("IMAP mail with '%s' subject stored" % remote_filename) |
3140 | 151 | 149 | ||
3142 | 152 | def get(self, remote_filename, local_path): | 150 | def _get(self, remote_filename, local_path): |
3143 | 153 | allowedTimeout = globals.timeout | 151 | allowedTimeout = globals.timeout |
3144 | 154 | if (allowedTimeout == 0): | 152 | if (allowedTimeout == 0): |
3145 | 155 | # Allow a total timeout of 1 day | 153 | # Allow a total timeout of 1 day |
3146 | 156 | allowedTimeout = 2880 | 154 | allowedTimeout = 2880 |
3147 | 157 | while allowedTimeout > 0: | 155 | while allowedTimeout > 0: |
3148 | 158 | try: | 156 | try: |
3151 | 159 | self._conn.select(globals.imap_mailbox) | 157 | self.conn.select(globals.imap_mailbox) |
3152 | 160 | (result,list) = self._conn.search(None, 'Subject', remote_filename) | 158 | (result,list) = self.conn.search(None, 'Subject', remote_filename) |
3153 | 161 | if result != "OK": | 159 | if result != "OK": |
3154 | 162 | raise Exception(list[0]) | 160 | raise Exception(list[0]) |
3155 | 163 | 161 | ||
3156 | @@ -165,7 +163,7 @@ | |||
3157 | 165 | if list[0] == '': | 163 | if list[0] == '': |
3158 | 166 | raise Exception("no mail with subject %s") | 164 | raise Exception("no mail with subject %s") |
3159 | 167 | 165 | ||
3161 | 168 | (result,list) = self._conn.fetch(list[0],"(RFC822)") | 166 | (result,list) = self.conn.fetch(list[0],"(RFC822)") |
3162 | 169 | 167 | ||
3163 | 170 | if result != "OK": | 168 | if result != "OK": |
3164 | 171 | raise Exception(list[0]) | 169 | raise Exception(list[0]) |
3165 | @@ -185,7 +183,7 @@ | |||
3166 | 185 | time.sleep(30) | 183 | time.sleep(30) |
3167 | 186 | while allowedTimeout > 0: | 184 | while allowedTimeout > 0: |
3168 | 187 | try: | 185 | try: |
3170 | 188 | self._resetConnection() | 186 | self.resetConnection() |
3171 | 189 | break | 187 | break |
3172 | 190 | except (imaplib.IMAP4.abort, socket.error, socket.sslerror): | 188 | except (imaplib.IMAP4.abort, socket.error, socket.sslerror): |
3173 | 191 | allowedTimeout -= 1 | 189 | allowedTimeout -= 1 |
3174 | @@ -199,7 +197,7 @@ | |||
3175 | 199 | 197 | ||
3176 | 200 | def _list(self): | 198 | def _list(self): |
3177 | 201 | ret = [] | 199 | ret = [] |
3179 | 202 | (result,list) = self._conn.select(globals.imap_mailbox) | 200 | (result,list) = self.conn.select(globals.imap_mailbox) |
3180 | 203 | if result != "OK": | 201 | if result != "OK": |
3181 | 204 | raise BackendException(list[0]) | 202 | raise BackendException(list[0]) |
3182 | 205 | 203 | ||
3183 | @@ -207,14 +205,14 @@ | |||
3184 | 207 | # address | 205 | # address |
3185 | 208 | 206 | ||
3186 | 209 | # Search returns an error if you haven't selected an IMAP folder. | 207 | # Search returns an error if you haven't selected an IMAP folder. |
3188 | 210 | (result,list) = self._conn.search(None, 'FROM', self.remote_dir) | 208 | (result,list) = self.conn.search(None, 'FROM', self.remote_dir) |
3189 | 211 | if result!="OK": | 209 | if result!="OK": |
3190 | 212 | raise Exception(list[0]) | 210 | raise Exception(list[0]) |
3191 | 213 | if list[0]=='': | 211 | if list[0]=='': |
3192 | 214 | return ret | 212 | return ret |
3193 | 215 | nums=list[0].split(" ") | 213 | nums=list[0].split(" ") |
3194 | 216 | set="%s:%s"%(nums[0],nums[-1]) | 214 | set="%s:%s"%(nums[0],nums[-1]) |
3196 | 217 | (result,list) = self._conn.fetch(set,"(BODY[HEADER])") | 215 | (result,list) = self.conn.fetch(set,"(BODY[HEADER])") |
3197 | 218 | if result!="OK": | 216 | if result!="OK": |
3198 | 219 | raise Exception(list[0]) | 217 | raise Exception(list[0]) |
3199 | 220 | 218 | ||
3200 | @@ -232,34 +230,32 @@ | |||
3201 | 232 | log.Info("IMAP LIST: %s %s" % (subj,header_from)) | 230 | log.Info("IMAP LIST: %s %s" % (subj,header_from)) |
3202 | 233 | return ret | 231 | return ret |
3203 | 234 | 232 | ||
3205 | 235 | def _imapf(self,fun,*args): | 233 | def imapf(self,fun,*args): |
3206 | 236 | (ret,list)=fun(*args) | 234 | (ret,list)=fun(*args) |
3207 | 237 | if ret != "OK": | 235 | if ret != "OK": |
3208 | 238 | raise Exception(list[0]) | 236 | raise Exception(list[0]) |
3209 | 239 | return list | 237 | return list |
3210 | 240 | 238 | ||
3219 | 241 | def _delete_single_mail(self,i): | 239 | def delete_single_mail(self,i): |
3220 | 242 | self._imapf(self._conn.store,i,"+FLAGS",'\\DELETED') | 240 | self.imapf(self.conn.store,i,"+FLAGS",'\\DELETED') |
3221 | 243 | 241 | ||
3222 | 244 | def _expunge(self): | 242 | def expunge(self): |
3223 | 245 | list=self._imapf(self._conn.expunge) | 243 | list=self.imapf(self.conn.expunge) |
3224 | 246 | 244 | ||
3225 | 247 | def delete(self, filename_list): | 245 | def _delete_list(self, filename_list): |
3218 | 248 | assert len(filename_list) > 0 | ||
3226 | 249 | for filename in filename_list: | 246 | for filename in filename_list: |
3228 | 250 | list = self._imapf(self._conn.search,None,"(SUBJECT %s)"%filename) | 247 | list = self.imapf(self.conn.search,None,"(SUBJECT %s)"%filename) |
3229 | 251 | list = list[0].split() | 248 | list = list[0].split() |
3235 | 252 | if len(list)==0 or list[0]=="":raise Exception("no such mail with subject '%s'"%filename) | 249 | if len(list) > 0 and list[0] != "": |
3236 | 253 | self._delete_single_mail(list[0]) | 250 | self.delete_single_mail(list[0]) |
3237 | 254 | log.Notice("marked %s to be deleted" % filename) | 251 | log.Notice("marked %s to be deleted" % filename) |
3238 | 255 | self._expunge() | 252 | self.expunge() |
3239 | 256 | log.Notice("IMAP expunged %s files" % len(list)) | 253 | log.Notice("IMAP expunged %s files" % len(filename_list)) |
3240 | 257 | 254 | ||
3245 | 258 | def close(self): | 255 | def _close(self): |
3246 | 259 | self._conn.select(globals.imap_mailbox) | 256 | self.conn.select(globals.imap_mailbox) |
3247 | 260 | self._conn.close() | 257 | self.conn.close() |
3248 | 261 | self._conn.logout() | 258 | self.conn.logout() |
3249 | 262 | 259 | ||
3250 | 263 | duplicity.backend.register_backend("imap", ImapBackend); | 260 | duplicity.backend.register_backend("imap", ImapBackend); |
3251 | 264 | duplicity.backend.register_backend("imaps", ImapBackend); | 261 | duplicity.backend.register_backend("imaps", ImapBackend); |
3252 | 265 | |||
3253 | 266 | 262 | ||
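The imapbackend changes above hinge on a small status-checking helper (`imapf`): imaplib-style methods return a `(status, data)` pair, and the helper raises on anything other than `"OK"` so call sites don't have to check the tuple themselves. A minimal runnable sketch of that pattern, with hypothetical stub functions standing in for real `imaplib.IMAP4` methods:

```python
# Sketch of the imapf()-style helper used by the IMAP backend. The two
# fake_* functions below are stand-ins for imaplib.IMAP4 methods, not
# real duplicity code.

def imapf(fun, *args):
    """Call an imaplib-style method and raise unless it reports OK."""
    ret, data = fun(*args)
    if ret != "OK":
        raise Exception(data[0])
    return data

def fake_search(charset, query):
    # stands in for imaplib.IMAP4.search, which returns (status, [ids])
    return ("OK", ["1 2 3"])

def fake_expunge():
    # stands in for imaplib.IMAP4.expunge failing
    return ("NO", ["EXPUNGE failed"])

nums = imapf(fake_search, None, "(SUBJECT duplicity-full)")[0].split()
try:
    imapf(fake_expunge)
    error = None
except Exception as e:
    error = str(e)
```

This keeps each backend method to one line per IMAP call, which is what makes the `_delete_list`/`_close` bodies in the diff so short.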
3254 | === modified file 'duplicity/backends/localbackend.py' | |||
3255 | --- duplicity/backends/localbackend.py 2014-04-17 20:50:57 +0000 | |||
3256 | +++ duplicity/backends/localbackend.py 2014-04-28 02:49:55 +0000 | |||
3257 | @@ -20,14 +20,11 @@ | |||
3258 | 20 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 20 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
3259 | 21 | 21 | ||
3260 | 22 | import os | 22 | import os |
3261 | 23 | import types | ||
3262 | 24 | import errno | ||
3263 | 25 | 23 | ||
3264 | 26 | import duplicity.backend | 24 | import duplicity.backend |
3265 | 27 | from duplicity import log | 25 | from duplicity import log |
3266 | 28 | from duplicity import path | 26 | from duplicity import path |
3269 | 29 | from duplicity import util | 27 | from duplicity.errors import BackendException |
3268 | 30 | from duplicity.errors import * #@UnusedWildImport | ||
3270 | 31 | 28 | ||
3271 | 32 | 29 | ||
3272 | 33 | class LocalBackend(duplicity.backend.Backend): | 30 | class LocalBackend(duplicity.backend.Backend): |
3273 | @@ -43,90 +40,37 @@ | |||
3274 | 43 | if not parsed_url.path.startswith('//'): | 40 | if not parsed_url.path.startswith('//'): |
3275 | 44 | raise BackendException("Bad file:// path syntax.") | 41 | raise BackendException("Bad file:// path syntax.") |
3276 | 45 | self.remote_pathdir = path.Path(parsed_url.path[2:]) | 42 | self.remote_pathdir = path.Path(parsed_url.path[2:]) |
3324 | 46 | 43 | try: | |
3325 | 47 | def handle_error(self, e, op, file1 = None, file2 = None): | 44 | os.makedirs(self.remote_pathdir.base) |
3326 | 48 | code = log.ErrorCode.backend_error | 45 | except Exception: |
3327 | 49 | if hasattr(e, 'errno'): | 46 | pass |
3328 | 50 | if e.errno == errno.EACCES: | 47 | |
3329 | 51 | code = log.ErrorCode.backend_permission_denied | 48 | def _move(self, source_path, remote_filename): |
3330 | 52 | elif e.errno == errno.ENOENT: | 49 | target_path = self.remote_pathdir.append(remote_filename) |
3331 | 53 | code = log.ErrorCode.backend_not_found | 50 | try: |
3332 | 54 | elif e.errno == errno.ENOSPC: | 51 | source_path.rename(target_path) |
3333 | 55 | code = log.ErrorCode.backend_no_space | 52 | return True |
3334 | 56 | extra = ' '.join([util.escape(x) for x in [file1, file2] if x]) | 53 | except OSError: |
3335 | 57 | extra = ' '.join([op, extra]) | 54 | return False |
3336 | 58 | if op != 'delete' and op != 'query': | 55 | |
3337 | 59 | log.FatalError(str(e), code, extra) | 56 | def _put(self, source_path, remote_filename): |
3338 | 60 | else: | 57 | target_path = self.remote_pathdir.append(remote_filename) |
3339 | 61 | log.Warn(str(e), code, extra) | 58 | target_path.writefileobj(source_path.open("rb")) |
3340 | 62 | 59 | ||
3341 | 63 | def move(self, source_path, remote_filename = None): | 60 | def _get(self, filename, local_path): |
3295 | 64 | self.put(source_path, remote_filename, rename_instead = True) | ||
3296 | 65 | |||
3297 | 66 | def put(self, source_path, remote_filename = None, rename_instead = False): | ||
3298 | 67 | if not remote_filename: | ||
3299 | 68 | remote_filename = source_path.get_filename() | ||
3300 | 69 | target_path = self.remote_pathdir.append(remote_filename) | ||
3301 | 70 | log.Info("Writing %s" % target_path.name) | ||
3302 | 71 | """Try renaming first (if allowed to), copying if doesn't work""" | ||
3303 | 72 | if rename_instead: | ||
3304 | 73 | try: | ||
3305 | 74 | source_path.rename(target_path) | ||
3306 | 75 | except OSError: | ||
3307 | 76 | pass | ||
3308 | 77 | except Exception as e: | ||
3309 | 78 | self.handle_error(e, 'put', source_path.name, target_path.name) | ||
3310 | 79 | else: | ||
3311 | 80 | return | ||
3312 | 81 | try: | ||
3313 | 82 | target_path.writefileobj(source_path.open("rb")) | ||
3314 | 83 | except Exception as e: | ||
3315 | 84 | self.handle_error(e, 'put', source_path.name, target_path.name) | ||
3316 | 85 | |||
3317 | 86 | """If we get here, renaming failed previously""" | ||
3318 | 87 | if rename_instead: | ||
3319 | 88 | """We need to simulate its behaviour""" | ||
3320 | 89 | source_path.delete() | ||
3321 | 90 | |||
3322 | 91 | def get(self, filename, local_path): | ||
3323 | 92 | """Get file and put in local_path (Path object)""" | ||
3342 | 93 | source_path = self.remote_pathdir.append(filename) | 61 | source_path = self.remote_pathdir.append(filename) |
3347 | 94 | try: | 62 | local_path.writefileobj(source_path.open("rb")) |
3344 | 95 | local_path.writefileobj(source_path.open("rb")) | ||
3345 | 96 | except Exception as e: | ||
3346 | 97 | self.handle_error(e, 'get', source_path.name, local_path.name) | ||
3348 | 98 | 63 | ||
3349 | 99 | def _list(self): | 64 | def _list(self): |
3381 | 100 | """List files in that directory""" | 65 | return self.remote_pathdir.listdir() |
3382 | 101 | try: | 66 | |
3383 | 102 | os.makedirs(self.remote_pathdir.base) | 67 | def _delete(self, filename): |
3384 | 103 | except Exception: | 68 | self.remote_pathdir.append(filename).delete() |
3385 | 104 | pass | 69 | |
3386 | 105 | try: | 70 | def _query(self, filename): |
3387 | 106 | return self.remote_pathdir.listdir() | 71 | target_file = self.remote_pathdir.append(filename) |
3388 | 107 | except Exception as e: | 72 | target_file.setdata() |
3389 | 108 | self.handle_error(e, 'list', self.remote_pathdir.name) | 73 | size = target_file.getsize() if target_file.exists() else -1 |
3390 | 109 | 74 | return {'size': size} | |
3360 | 110 | def delete(self, filename_list): | ||
3361 | 111 | """Delete all files in filename list""" | ||
3362 | 112 | assert type(filename_list) is not types.StringType | ||
3363 | 113 | for filename in filename_list: | ||
3364 | 114 | try: | ||
3365 | 115 | self.remote_pathdir.append(filename).delete() | ||
3366 | 116 | except Exception as e: | ||
3367 | 117 | self.handle_error(e, 'delete', self.remote_pathdir.append(filename).name) | ||
3368 | 118 | |||
3369 | 119 | def _query_file_info(self, filename): | ||
3370 | 120 | """Query attributes on filename""" | ||
3371 | 121 | try: | ||
3372 | 122 | target_file = self.remote_pathdir.append(filename) | ||
3373 | 123 | if not os.path.exists(target_file.name): | ||
3374 | 124 | return {'size': -1} | ||
3375 | 125 | target_file.setdata() | ||
3376 | 126 | size = target_file.getsize() | ||
3377 | 127 | return {'size': size} | ||
3378 | 128 | except Exception as e: | ||
3379 | 129 | self.handle_error(e, 'query', target_file.name) | ||
3380 | 130 | return {'size': None} | ||
3391 | 131 | 75 | ||
3392 | 132 | duplicity.backend.register_backend("file", LocalBackend) | 76 | duplicity.backend.register_backend("file", LocalBackend) |
3393 | 133 | 77 | ||
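The new `LocalBackend._query()` above replaces the old `_query_file_info` and its error handling with a simple contract: return `{'size': n}` for an existing file, `{'size': -1}` when the file is missing, and let the unified backend layer handle exceptions. A sketch of that contract using plain `os.path` calls in place of duplicity's `path.Path` helper:

```python
# Sketch of the _query() size contract from the local backend diff.
# os.path stands in for duplicity's path.Path here.
import os
import tempfile

def query(remote_dir, filename):
    """Return {'size': n} for an existing file, {'size': -1} otherwise."""
    target = os.path.join(remote_dir, filename)
    if not os.path.exists(target):
        return {'size': -1}
    return {'size': os.path.getsize(target)}

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'duplicity-full.vol1'), 'wb') as f:
    f.write(b'x' * 10)

present = query(tmp, 'duplicity-full.vol1')
missing = query(tmp, 'no-such-volume')
```

The `-1` sentinel is what lets callers distinguish "file absent" from "query failed" without try/except at every call site.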
3394 | === modified file 'duplicity/backends/megabackend.py' | |||
3395 | --- duplicity/backends/megabackend.py 2014-04-17 20:50:57 +0000 | |||
3396 | +++ duplicity/backends/megabackend.py 2014-04-28 02:49:55 +0000 | |||
3397 | @@ -22,9 +22,8 @@ | |||
3398 | 22 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 22 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
3399 | 23 | 23 | ||
3400 | 24 | import duplicity.backend | 24 | import duplicity.backend |
3401 | 25 | from duplicity.backend import retry | ||
3402 | 26 | from duplicity import log | 25 | from duplicity import log |
3404 | 27 | from duplicity.errors import * #@UnusedWildImport | 26 | from duplicity.errors import BackendException |
3405 | 28 | 27 | ||
3406 | 29 | 28 | ||
3407 | 30 | class MegaBackend(duplicity.backend.Backend): | 29 | class MegaBackend(duplicity.backend.Backend): |
3408 | @@ -63,113 +62,64 @@ | |||
3409 | 63 | 62 | ||
3410 | 64 | self.folder = parent_folder | 63 | self.folder = parent_folder |
3411 | 65 | 64 | ||
3481 | 66 | @retry | 65 | def _put(self, source_path, remote_filename): |
3482 | 67 | def put(self, source_path, remote_filename=None, raise_errors=False): | 66 | try: |
3483 | 68 | """Transfer source_path to remote_filename""" | 67 | self._delete(remote_filename) |
3484 | 69 | # Default remote file name. | 68 | except Exception: |
3485 | 70 | if not remote_filename: | 69 | pass |
3486 | 71 | remote_filename = source_path.get_filename() | 70 | self.client.upload(source_path.get_canonical(), self.folder, dest_filename=remote_filename) |
3487 | 72 | 71 | ||
3488 | 73 | try: | 72 | def _get(self, remote_filename, local_path): |
3489 | 74 | # If remote file already exists in destination folder, remove it. | 73 | files = self.client.get_files() |
3490 | 75 | files = self.client.get_files() | 74 | entries = self.__filter_entries(files, self.folder, remote_filename, 'file') |
3491 | 76 | entries = self.__filter_entries(files, self.folder, remote_filename, 'file') | 75 | if len(entries): |
3492 | 77 | 76 | # get first matching remote file | |
3493 | 78 | for entry in entries: | 77 | entry = entries.keys()[0] |
3494 | 79 | self.client.delete(entry) | 78 | self.client.download((entry, entries[entry]), dest_filename=local_path.name) |
3495 | 80 | 79 | else: | |
3496 | 81 | self.client.upload(source_path.get_canonical(), self.folder, dest_filename=remote_filename) | 80 | raise BackendException("Failed to find file '%s' in remote folder '%s'" |
3497 | 82 | 81 | % (remote_filename, self.__get_node_name(self.folder)), | |
3498 | 83 | except Exception as e: | 82 | code=log.ErrorCode.backend_not_found) |
3499 | 84 | self.__handle_error("Failed to upload file '%s' to remote folder '%s': %s" | 83 | |
3500 | 85 | % (source_path.get_canonical(), self.__get_node_name(self.folder), str(e)), raise_errors) | 84 | def _list(self): |
3501 | 86 | 85 | entries = self.client.get_files_in_node(self.folder) | |
3502 | 87 | @retry | 86 | return [self.client.get_name_from_file({entry:entries[entry]}) for entry in entries] |
3503 | 88 | def get(self, remote_filename, local_path, raise_errors=False): | 87 | |
3504 | 89 | """Get remote filename, saving it to local_path""" | 88 | def _delete(self, filename): |
3505 | 90 | try: | 89 | files = self.client.get_files() |
3506 | 91 | files = self.client.get_files() | 90 | entries = self.__filter_entries(files, self.folder, filename, 'file') |
3507 | 92 | entries = self.__filter_entries(files, self.folder, remote_filename, 'file') | 91 | if len(entries): |
3508 | 93 | 92 | self.client.destroy(entries.keys()[0]) | |
3509 | 94 | if len(entries): | 93 | else: |
3510 | 95 | # get first matching remote file | 94 | raise BackendException("Failed to find file '%s' in remote folder '%s'" |
3511 | 96 | entry = entries.keys()[0] | 95 | % (filename, self.__get_node_name(self.folder)), |
3512 | 97 | self.client.download((entry, entries[entry]), dest_filename=local_path.name) | 96 | code=log.ErrorCode.backend_not_found) |
3444 | 98 | local_path.setdata() | ||
3445 | 99 | return | ||
3446 | 100 | else: | ||
3447 | 101 | self.__handle_error("Failed to find file '%s' in remote folder '%s'" | ||
3448 | 102 | % (remote_filename, self.__get_node_name(self.folder)), raise_errors) | ||
3449 | 103 | except Exception as e: | ||
3450 | 104 | self.__handle_error("Failed to download file '%s' in remote folder '%s': %s" | ||
3451 | 105 | % (remote_filename, self.__get_node_name(self.folder), str(e)), raise_errors) | ||
3452 | 106 | |||
3453 | 107 | @retry | ||
3454 | 108 | def _list(self, raise_errors=False): | ||
3455 | 109 | """List files in folder""" | ||
3456 | 110 | try: | ||
3457 | 111 | entries = self.client.get_files_in_node(self.folder) | ||
3458 | 112 | return [ self.client.get_name_from_file({entry:entries[entry]}) for entry in entries] | ||
3459 | 113 | except Exception as e: | ||
3460 | 114 | self.__handle_error("Failed to fetch list of files in remote folder '%s': %s" | ||
3461 | 115 | % (self.__get_node_name(self.folder), str(e)), raise_errors) | ||
3462 | 116 | |||
3463 | 117 | @retry | ||
3464 | 118 | def delete(self, filename_list, raise_errors=False): | ||
3465 | 119 | """Delete files in filename_list""" | ||
3466 | 120 | files = self.client.get_files() | ||
3467 | 121 | for filename in filename_list: | ||
3468 | 122 | entries = self.__filter_entries(files, self.folder, filename) | ||
3469 | 123 | try: | ||
3470 | 124 | if len(entries) > 0: | ||
3471 | 125 | for entry in entries: | ||
3472 | 126 | if self.client.destroy(entry): | ||
3473 | 127 | self.__handle_error("Failed to remove file '%s' in remote folder '%s'" | ||
3474 | 128 | % (filename, self.__get_node_name(self.folder)), raise_errors) | ||
3475 | 129 | else: | ||
3476 | 130 | log.Warn("Failed to fetch file '%s' in remote folder '%s'" | ||
3477 | 131 | % (filename, self.__get_node_name(self.folder))) | ||
3478 | 132 | except Exception as e: | ||
3479 | 133 | self.__handle_error("Failed to remove file '%s' in remote folder '%s': %s" | ||
3480 | 134 | % (filename, self.__get_node_name(self.folder), str(e)), raise_errors) | ||
3513 | 135 | 97 | ||
3514 | 136 | def __get_node_name(self, handle): | 98 | def __get_node_name(self, handle): |
3515 | 137 | """get node name from public handle""" | 99 | """get node name from public handle""" |
3516 | 138 | files = self.client.get_files() | 100 | files = self.client.get_files() |
3517 | 139 | return self.client.get_name_from_file({handle:files[handle]}) | 101 | return self.client.get_name_from_file({handle:files[handle]}) |
3518 | 140 | |||
3519 | 141 | def __handle_error(self, message, raise_errors=True): | ||
3520 | 142 | if raise_errors: | ||
3521 | 143 | raise BackendException(message) | ||
3522 | 144 | else: | ||
3523 | 145 | log.FatalError(message, log.ErrorCode.backend_error) | ||
3524 | 146 | 102 | ||
3525 | 147 | def __authorize(self, email, password): | 103 | def __authorize(self, email, password): |
3530 | 148 | try: | 104 | self.client.login(email, password) |
3527 | 149 | self.client.login(email, password) | ||
3528 | 150 | except Exception as e: | ||
3529 | 151 | self.__handle_error('Error while authenticating client: %s.' % str(e)) | ||
3531 | 152 | 105 | ||
3532 | 153 | def __filter_entries(self, entries, parent_id=None, title=None, type=None): | 106 | def __filter_entries(self, entries, parent_id=None, title=None, type=None): |
3533 | 154 | result = {} | 107 | result = {} |
3534 | 155 | type_map = { 'folder': 1, 'file': 0 } | 108 | type_map = { 'folder': 1, 'file': 0 } |
3535 | 156 | 109 | ||
3553 | 157 | try: | 110 | for k, v in entries.items(): |
3554 | 158 | for k, v in entries.items(): | 111 | try: |
3555 | 159 | try: | 112 | if parent_id != None: |
3556 | 160 | if parent_id != None: | 113 | assert(v['p'] == parent_id) |
3557 | 161 | assert(v['p'] == parent_id) | 114 | if title != None: |
3558 | 162 | if title != None: | 115 | assert(v['a']['n'] == title) |
3559 | 163 | assert(v['a']['n'] == title) | 116 | if type != None: |
3560 | 164 | if type != None: | 117 | assert(v['t'] == type_map[type]) |
3561 | 165 | assert(v['t'] == type_map[type]) | 118 | except AssertionError: |
3562 | 166 | except AssertionError: | 119 | continue |
3563 | 167 | continue | 120 | |
3564 | 168 | 121 | result.update({k:v}) | |
3565 | 169 | result.update({k:v}) | 122 | |
3566 | 170 | 123 | return result | |
3550 | 171 | return result | ||
3551 | 172 | except Exception as e: | ||
3552 | 173 | self.__handle_error('Error while fetching remote entries: %s.' % str(e)) | ||
3567 | 174 | 124 | ||
3568 | 175 | duplicity.backend.register_backend('mega', MegaBackend) | 125 | duplicity.backend.register_backend('mega', MegaBackend) |
3569 | 176 | 126 | ||
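The `__filter_entries()` kept in the mega backend above matches Mega node records against an optional parent handle, title, and type, using `AssertionError` as a cheap "skip this entry" signal. A self-contained sketch with made-up sample entries (`'p'` = parent handle, `['a']['n']` = name, `'t'` = node type, mirroring the shapes the diff assumes):

```python
# Sketch of the assert-based entry filter from the mega backend diff.
# The sample entries dict is invented to illustrate the record shape.

def filter_entries(entries, parent_id=None, title=None, type=None):
    result = {}
    type_map = {'folder': 1, 'file': 0}
    for k, v in entries.items():
        try:
            if parent_id is not None:
                assert v['p'] == parent_id
            if title is not None:
                assert v['a']['n'] == title
            if type is not None:
                assert v['t'] == type_map[type]
        except AssertionError:
            # any failed criterion skips this entry
            continue
        result[k] = v
    return result

entries = {
    'h1': {'p': 'root', 'a': {'n': 'backup.vol1'}, 't': 0},
    'h2': {'p': 'root', 'a': {'n': 'photos'}, 't': 1},
    'h3': {'p': 'other', 'a': {'n': 'backup.vol1'}, 't': 0},
}
matches = filter_entries(entries, parent_id='root',
                         title='backup.vol1', type='file')
```

Note the per-entry try/except: moving it inside the loop (as the diff does) means one malformed entry no longer aborts the whole listing.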
3570 | === renamed file 'duplicity/backends/~par2wrapperbackend.py' => 'duplicity/backends/par2backend.py' | |||
3571 | --- duplicity/backends/~par2wrapperbackend.py 2014-04-17 19:53:30 +0000 | |||
3572 | +++ duplicity/backends/par2backend.py 2014-04-28 02:49:55 +0000 | |||
3573 | @@ -16,14 +16,16 @@ | |||
3574 | 16 | # along with duplicity; if not, write to the Free Software Foundation, | 16 | # along with duplicity; if not, write to the Free Software Foundation, |
3575 | 17 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 17 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
3576 | 18 | 18 | ||
3577 | 19 | from future_builtins import filter | ||
3578 | 20 | |||
3579 | 19 | import os | 21 | import os |
3580 | 20 | import re | 22 | import re |
3581 | 21 | from duplicity import backend | 23 | from duplicity import backend |
3583 | 22 | from duplicity.errors import UnsupportedBackendScheme, BackendException | 24 | from duplicity.errors import BackendException |
3584 | 23 | from duplicity import log | 25 | from duplicity import log |
3585 | 24 | from duplicity import globals | 26 | from duplicity import globals |
3586 | 25 | 27 | ||
3588 | 26 | class Par2WrapperBackend(backend.Backend): | 28 | class Par2Backend(backend.Backend): |
3589 | 27 | """This backend wraps around other backends and creates Par2 recovery files | 29 | """This backend wraps around other backends and creates Par2 recovery files |
3590 | 28 | before the file and the Par2 files are transferred with the wrapped backend. | 30 | before the file and the Par2 files are transferred with the wrapped backend. |
3591 | 29 | 31 | ||
3592 | @@ -37,13 +39,15 @@ | |||
3593 | 37 | except AttributeError: | 39 | except AttributeError: |
3594 | 38 | self.redundancy = 10 | 40 | self.redundancy = 10 |
3595 | 39 | 41 | ||
3603 | 40 | try: | 42 | self.wrapped_backend = backend.get_backend_object(parsed_url.url_string) |
3604 | 41 | url_string = self.parsed_url.url_string.lstrip('par2+') | 43 | |
3605 | 42 | self.wrapped_backend = backend.get_backend(url_string) | 44 | for attr in ['_get', '_put', '_list', '_delete', '_delete_list', |
3606 | 43 | except: | 45 | '_query', '_query_list', '_retry_cleanup', '_error_code', |
3607 | 44 | raise UnsupportedBackendScheme(self.parsed_url.url_string) | 46 | '_move', '_close']: |
3608 | 45 | 47 | if hasattr(self.wrapped_backend, attr): | |
3609 | 46 | def put(self, source_path, remote_filename = None): | 48 | setattr(self, attr, getattr(self, attr[1:])) |
3610 | 49 | |||
3611 | 50 | def transfer(self, method, source_path, remote_filename): | ||
3612 | 47 | """create Par2 files and transfer the given file and the Par2 files | 51 | """create Par2 files and transfer the given file and the Par2 files |
3613 | 48 | with the wrapped backend. | 52 | with the wrapped backend. |
3614 | 49 | 53 | ||
3615 | @@ -52,13 +56,14 @@ | |||
3616 | 52 | the source_path with remote_filename into this. | 56 | the source_path with remote_filename into this. |
3617 | 53 | """ | 57 | """ |
3618 | 54 | import pexpect | 58 | import pexpect |
3619 | 55 | if remote_filename is None: | ||
3620 | 56 | remote_filename = source_path.get_filename() | ||
3621 | 57 | 59 | ||
3622 | 58 | par2temp = source_path.get_temp_in_same_dir() | 60 | par2temp = source_path.get_temp_in_same_dir() |
3623 | 59 | par2temp.mkdir() | 61 | par2temp.mkdir() |
3624 | 60 | source_symlink = par2temp.append(remote_filename) | 62 | source_symlink = par2temp.append(remote_filename) |
3626 | 61 | os.symlink(source_path.get_canonical(), source_symlink.get_canonical()) | 63 | source_target = source_path.get_canonical() |
3627 | 64 | if not os.path.isabs(source_target): | ||
3628 | 65 | source_target = os.path.join(os.getcwd(), source_target) | ||
3629 | 66 | os.symlink(source_target, source_symlink.get_canonical()) | ||
3630 | 62 | source_symlink.setdata() | 67 | source_symlink.setdata() |
3631 | 63 | 68 | ||
3632 | 64 | log.Info("Create Par2 recovery files") | 69 | log.Info("Create Par2 recovery files") |
3633 | @@ -70,16 +75,17 @@ | |||
3634 | 70 | for file in par2temp.listdir(): | 75 | for file in par2temp.listdir(): |
3635 | 71 | files_to_transfer.append(par2temp.append(file)) | 76 | files_to_transfer.append(par2temp.append(file)) |
3636 | 72 | 77 | ||
3638 | 73 | ret = self.wrapped_backend.put(source_path, remote_filename) | 78 | method(source_path, remote_filename) |
3639 | 74 | for file in files_to_transfer: | 79 | for file in files_to_transfer: |
3641 | 75 | self.wrapped_backend.put(file, file.get_filename()) | 80 | method(file, file.get_filename()) |
3642 | 76 | 81 | ||
3643 | 77 | par2temp.deltree() | 82 | par2temp.deltree() |
3649 | 78 | return ret | 83 | |
3650 | 79 | 84 | def put(self, local, remote): | |
3651 | 80 | def move(self, source_path, remote_filename = None): | 85 | self.transfer(self.wrapped_backend._put, local, remote) |
3652 | 81 | self.put(source_path, remote_filename) | 86 | |
3653 | 82 | source_path.delete() | 87 | def move(self, local, remote): |
3654 | 88 | self.transfer(self.wrapped_backend._move, local, remote) | ||
3655 | 83 | 89 | ||
3656 | 84 | def get(self, remote_filename, local_path): | 90 | def get(self, remote_filename, local_path): |
3657 | 85 | """transfer remote_filename and the related .par2 file into | 91 | """transfer remote_filename and the related .par2 file into |
3658 | @@ -94,22 +100,23 @@ | |||
3659 | 94 | par2temp.mkdir() | 100 | par2temp.mkdir() |
3660 | 95 | local_path_temp = par2temp.append(remote_filename) | 101 | local_path_temp = par2temp.append(remote_filename) |
3661 | 96 | 102 | ||
3663 | 97 | ret = self.wrapped_backend.get(remote_filename, local_path_temp) | 103 | self.wrapped_backend._get(remote_filename, local_path_temp) |
3664 | 98 | 104 | ||
3665 | 99 | try: | 105 | try: |
3666 | 100 | par2file = par2temp.append(remote_filename + '.par2') | 106 | par2file = par2temp.append(remote_filename + '.par2') |
3668 | 101 | self.wrapped_backend.get(par2file.get_filename(), par2file) | 107 | self.wrapped_backend._get(par2file.get_filename(), par2file) |
3669 | 102 | 108 | ||
3670 | 103 | par2verify = 'par2 v -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical()) | 109 | par2verify = 'par2 v -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical()) |
3671 | 104 | out, returncode = pexpect.run(par2verify, -1, True) | 110 | out, returncode = pexpect.run(par2verify, -1, True) |
3672 | 105 | 111 | ||
3673 | 106 | if returncode: | 112 | if returncode: |
3674 | 107 | log.Warn("File is corrupt. Try to repair %s" % remote_filename) | 113 | log.Warn("File is corrupt. Try to repair %s" % remote_filename) |
3676 | 108 | par2volumes = self.list(re.compile(r'%s\.vol[\d+]*\.par2' % remote_filename)) | 114 | par2volumes = filter(re.compile(r'%s\.vol[\d+]*\.par2' % remote_filename).match, |
3677 | 115 | self.wrapped_backend._list()) |
3678 | 109 | 116 | ||
3679 | 110 | for filename in par2volumes: | 117 | for filename in par2volumes: |
3680 | 111 | file = par2temp.append(filename) | 118 | file = par2temp.append(filename) |
3682 | 112 | self.wrapped_backend.get(filename, file) | 119 | self.wrapped_backend._get(filename, file) |
3683 | 113 | 120 | ||
3684 | 114 | par2repair = 'par2 r -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical()) | 121 | par2repair = 'par2 r -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical()) |
3685 | 115 | out, returncode = pexpect.run(par2repair, -1, True) | 122 | out, returncode = pexpect.run(par2repair, -1, True) |
3686 | @@ -124,25 +131,23 @@ | |||
3687 | 124 | finally: | 131 | finally: |
3688 | 125 | local_path_temp.rename(local_path) | 132 | local_path_temp.rename(local_path) |
3689 | 126 | par2temp.deltree() | 133 | par2temp.deltree() |
3690 | 127 | return ret | ||
3691 | 128 | 134 | ||
3695 | 129 | def list(self, filter = re.compile(r'(?!.*\.par2$)')): | 135 | def delete(self, filename): |
3696 | 130 | """default filter all files that ends with ".par" | 136 | """delete given filename and its .par2 files |
3694 | 131 | filter can be a re.compile instance or False for all remote files | ||
3697 | 132 | """ | 137 | """ |
3708 | 133 | list = self.wrapped_backend.list() | 138 | self.wrapped_backend._delete(filename) |
3709 | 134 | if not filter: | 139 | |
3710 | 135 | return list | 140 | remote_list = self.list() |
3711 | 136 | filtered_list = [] | 141 | filename_list = [filename] |
3712 | 137 | for item in list: | 142 | c = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename) |
3713 | 138 | if filter.match(item): | 143 | for remote_filename in remote_list: |
3714 | 139 | filtered_list.append(item) | 144 | if c.match(remote_filename): |
3715 | 140 | return filtered_list | 145 | self.wrapped_backend._delete(remote_filename) |
3716 | 141 | 146 | ||
3717 | 142 | def delete(self, filename_list): | 147 | def delete_list(self, filename_list): |
3718 | 143 | """delete given filename_list and all .par2 files that belong to them | 148 | """delete given filename_list and all .par2 files that belong to them |
3719 | 144 | """ | 149 | """ |
3721 | 145 | remote_list = self.list(False) | 150 | remote_list = self.list() |
3722 | 146 | 151 | ||
3723 | 147 | for filename in filename_list[:]: | 152 | for filename in filename_list[:]: |
3724 | 148 | c = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename) | 153 | c = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename) |
3725 | @@ -150,46 +155,25 @@ | |||
3726 | 150 | if c.match(remote_filename): | 155 | if c.match(remote_filename): |
3727 | 151 | filename_list.append(remote_filename) | 156 | filename_list.append(remote_filename) |
3728 | 152 | 157 | ||
3761 | 153 | return self.wrapped_backend.delete(filename_list) | 158 | return self.wrapped_backend._delete_list(filename_list) |
3762 | 154 | 159 | ||
3763 | 155 | """just return the output of coresponding wrapped backend | 160 | |
3764 | 156 | for all other functions | 161 | def list(self): |
3765 | 157 | """ | 162 | return self.wrapped_backend._list() |
3766 | 158 | def query_info(self, filename_list, raise_errors=True): | 163 | |
3767 | 159 | return self.wrapped_backend.query_info(filename_list, raise_errors) | 164 | def retry_cleanup(self): |
3768 | 160 | 165 | self.wrapped_backend._retry_cleanup() | |
3769 | 161 | def get_password(self): | 166 | |
3770 | 162 | return self.wrapped_backend.get_password() | 167 | def error_code(self, operation, e): |
3771 | 163 | 168 | return self.wrapped_backend._error_code(operation, e) | |
3772 | 164 | def munge_password(self, commandline): | 169 | |
3773 | 165 | return self.wrapped_backend.munge_password(commandline) | 170 | def query(self, filename): |
3774 | 166 | 171 | return self.wrapped_backend._query(filename) | |
3775 | 167 | def run_command(self, commandline): | 172 | |
3776 | 168 | return self.wrapped_backend.run_command(commandline) | 173 | def query_list(self, filename_list): |
3777 | 169 | def run_command_persist(self, commandline): | 174 | return self.wrapped_backend._query_list(filename_list) |
3746 | 170 | return self.wrapped_backend.run_command_persist(commandline) | ||
3747 | 171 | |||
3748 | 172 | def popen(self, commandline): | ||
3749 | 173 | return self.wrapped_backend.popen(commandline) | ||
3750 | 174 | def popen_persist(self, commandline): | ||
3751 | 175 | return self.wrapped_backend.popen_persist(commandline) | ||
3752 | 176 | |||
3753 | 177 | def _subprocess_popen(self, commandline): | ||
3754 | 178 | return self.wrapped_backend._subprocess_popen(commandline) | ||
3755 | 179 | |||
3756 | 180 | def subprocess_popen(self, commandline): | ||
3757 | 181 | return self.wrapped_backend.subprocess_popen(commandline) | ||
3758 | 182 | |||
3759 | 183 | def subprocess_popen_persist(self, commandline): | ||
3760 | 184 | return self.wrapped_backend.subprocess_popen_persist(commandline) | ||
3778 | 185 | 175 | ||
3779 | 186 | def close(self): | 176 | def close(self): |
3789 | 187 | return self.wrapped_backend.close() | 177 | self.wrapped_backend._close() |
3790 | 188 | 178 | ||
3791 | 189 | """register this backend with leading "par2+" for all already known backends | 179 | backend.register_backend_prefix('par2', Par2Backend) |
3783 | 190 | |||
3784 | 191 | files must be sorted in duplicity.backend.import_backends to catch | ||
3785 | 192 | all supported backends | ||
3786 | 193 | """ | ||
3787 | 194 | for item in backend._backends.keys(): | ||
3788 | 195 | backend.register_backend('par2+' + item, Par2WrapperBackend) | ||
3792 | 196 | 180 | ||
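The most interesting line in the new `Par2Backend.__init__` above is the delegation loop: the wrapper only advertises an optional operation (say `_move`) when the wrapped backend implements it, by binding its own public method under the underscore name. A distilled sketch of that trick with two hypothetical toy backends:

```python
# Sketch of the conditional-delegation loop from Par2Backend.__init__.
# Wrapped and WrappedNoMove are toy stand-ins for real backends.

class Wrapped:
    def _put(self, name):
        return 'put:' + name
    def _move(self, name):
        return 'move:' + name

class WrappedNoMove:
    def _put(self, name):
        return 'put:' + name

class Wrapper:
    def __init__(self, wrapped):
        self.wrapped = wrapped
        for attr in ['_put', '_move']:
            if hasattr(wrapped, attr):
                # expose self._move only if the wrapped backend has it,
                # bound to the wrapper's own public move()
                setattr(self, attr, getattr(self, attr[1:]))
    def put(self, name):
        return self.wrapped._put(name)
    def move(self, name):
        return self.wrapped._move(name)

full = Wrapper(Wrapped())
partial = Wrapper(WrappedNoMove())
has_move = hasattr(partial, '_move')
```

This matters because the unified `duplicity.backend` layer probes for `_move`, `_delete_list`, etc., and falls back to generic implementations when they are absent; the wrapper must mirror exactly the optional surface of whatever it wraps.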
3793 | === modified file 'duplicity/backends/rsyncbackend.py' | |||
3794 | --- duplicity/backends/rsyncbackend.py 2014-04-25 23:20:12 +0000 | |||
3795 | +++ duplicity/backends/rsyncbackend.py 2014-04-28 02:49:55 +0000 | |||
3796 | @@ -23,7 +23,7 @@ | |||
3797 | 23 | import tempfile | 23 | import tempfile |
3798 | 24 | 24 | ||
3799 | 25 | import duplicity.backend | 25 | import duplicity.backend |
3801 | 26 | from duplicity.errors import * #@UnusedWildImport | 26 | from duplicity.errors import InvalidBackendURL |
3802 | 27 | from duplicity import globals, tempdir, util | 27 | from duplicity import globals, tempdir, util |
3803 | 28 | 28 | ||
3804 | 29 | class RsyncBackend(duplicity.backend.Backend): | 29 | class RsyncBackend(duplicity.backend.Backend): |
3805 | @@ -58,12 +58,13 @@ | |||
3806 | 58 | if port: | 58 | if port: |
3807 | 59 | port = " --port=%s" % port | 59 | port = " --port=%s" % port |
3808 | 60 | else: | 60 | else: |
3809 | 61 | host_string = host + ":" if host else "" | ||
3810 | 61 | if parsed_url.path.startswith("//"): | 62 | if parsed_url.path.startswith("//"): |
3811 | 62 | # its an absolute path | 63 | # its an absolute path |
3813 | 63 | self.url_string = "%s:/%s" % (host, parsed_url.path.lstrip('/')) | 64 | self.url_string = "%s/%s" % (host_string, parsed_url.path.lstrip('/')) |
3814 | 64 | else: | 65 | else: |
3815 | 65 | # its a relative path | 66 | # its a relative path |
3817 | 66 | self.url_string = "%s:%s" % (host, parsed_url.path.lstrip('/')) | 67 | self.url_string = "%s%s" % (host_string, parsed_url.path.lstrip('/')) |
3818 | 67 | if parsed_url.port: | 68 | if parsed_url.port: |
3819 | 68 | port = " -p %s" % parsed_url.port | 69 | port = " -p %s" % parsed_url.port |
3820 | 69 | # add trailing slash if missing | 70 | # add trailing slash if missing |
3821 | @@ -105,29 +106,17 @@ | |||
3822 | 105 | raise InvalidBackendURL("Could not determine rsync path: %s" | 106 | raise InvalidBackendURL("Could not determine rsync path: %s" |
3823 | 106 | "" % self.munge_password( url ) ) | 107 | "" % self.munge_password( url ) ) |
3824 | 107 | 108 | ||
3833 | 108 | def run_command(self, commandline): | 109 | def _put(self, source_path, remote_filename): |
3826 | 109 | result, stdout, stderr = self.subprocess_popen_persist(commandline) | ||
3827 | 110 | return result, stdout | ||
3828 | 111 | |||
3829 | 112 | def put(self, source_path, remote_filename = None): | ||
3830 | 113 | """Use rsync to copy source_dir/filename to remote computer""" | ||
3831 | 114 | if not remote_filename: | ||
3832 | 115 | remote_filename = source_path.get_filename() | ||
3834 | 116 | remote_path = os.path.join(self.url_string, remote_filename) | 110 | remote_path = os.path.join(self.url_string, remote_filename) |
3835 | 117 | commandline = "%s %s %s" % (self.cmd, source_path.name, remote_path) | 111 | commandline = "%s %s %s" % (self.cmd, source_path.name, remote_path) |
3837 | 118 | self.run_command(commandline) | 112 | self.subprocess_popen(commandline) |
3838 | 119 | 113 | ||
3841 | 120 | def get(self, remote_filename, local_path): | 114 | def _get(self, remote_filename, local_path): |
3840 | 121 | """Use rsync to get a remote file""" | ||
3842 | 122 | remote_path = os.path.join (self.url_string, remote_filename) | 115 | remote_path = os.path.join (self.url_string, remote_filename) |
3843 | 123 | commandline = "%s %s %s" % (self.cmd, remote_path, local_path.name) | 116 | commandline = "%s %s %s" % (self.cmd, remote_path, local_path.name) |
3848 | 124 | self.run_command(commandline) | 117 | self.subprocess_popen(commandline) |
3845 | 125 | local_path.setdata() | ||
3846 | 126 | if not local_path.exists(): | ||
3847 | 127 | raise BackendException("File %s not found" % local_path.name) | ||
3849 | 128 | 118 | ||
3852 | 129 | def list(self): | 119 | def _list(self): |
3851 | 130 | """List files""" | ||
3853 | 131 | def split (str): | 120 | def split (str): |
3854 | 132 | line = str.split () | 121 | line = str.split () |
3855 | 133 | if len (line) > 4 and line[4] != '.': | 122 | if len (line) > 4 and line[4] != '.': |
3856 | @@ -135,20 +124,17 @@ | |||
3857 | 135 | else: | 124 | else: |
3858 | 136 | return None | 125 | return None |
3859 | 137 | commandline = "%s %s" % (self.cmd, self.url_string) | 126 | commandline = "%s %s" % (self.cmd, self.url_string) |
3861 | 138 | result, stdout = self.run_command(commandline) | 127 | result, stdout, stderr = self.subprocess_popen(commandline) |
3862 | 139 | return [x for x in map (split, stdout.split('\n')) if x] | 128 | return [x for x in map (split, stdout.split('\n')) if x] |
3863 | 140 | 129 | ||
3866 | 141 | def delete(self, filename_list): | 130 | def _delete_list(self, filename_list): |
3865 | 142 | """Delete files.""" | ||
3867 | 143 | delete_list = filename_list | 131 | delete_list = filename_list |
3868 | 144 | dont_delete_list = [] | 132 | dont_delete_list = [] |
3870 | 145 | for file in self.list (): | 133 | for file in self._list (): |
3871 | 146 | if file in delete_list: | 134 | if file in delete_list: |
3872 | 147 | delete_list.remove (file) | 135 | delete_list.remove (file) |
3873 | 148 | else: | 136 | else: |
3874 | 149 | dont_delete_list.append (file) | 137 | dont_delete_list.append (file) |
3875 | 150 | if len (delete_list) > 0: | ||
3876 | 151 | raise BackendException("Files %s not found" % str (delete_list)) | ||
3877 | 152 | 138 | ||
3878 | 153 | dir = tempfile.mkdtemp() | 139 | dir = tempfile.mkdtemp() |
3879 | 154 | exclude, exclude_name = tempdir.default().mkstemp_file() | 140 | exclude, exclude_name = tempdir.default().mkstemp_file() |
3880 | @@ -162,7 +148,7 @@ | |||
3881 | 162 | exclude.close() | 148 | exclude.close() |
3882 | 163 | commandline = ("%s --recursive --delete --exclude-from=%s %s/ %s" % | 149 | commandline = ("%s --recursive --delete --exclude-from=%s %s/ %s" % |
3883 | 164 | (self.cmd, exclude_name, dir, self.url_string)) | 150 | (self.cmd, exclude_name, dir, self.url_string)) |
3885 | 165 | self.run_command(commandline) | 151 | self.subprocess_popen(commandline) |
3886 | 166 | for file in to_delete: | 152 | for file in to_delete: |
3887 | 167 | util.ignore_missing(os.unlink, file) | 153 | util.ignore_missing(os.unlink, file) |
3888 | 168 | os.rmdir (dir) | 154 | os.rmdir (dir) |
3889 | 169 | 155 | ||
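The rsync changes above are typical of the whole branch: retry loops, default-filename handling, and "file not found" checks move out of each backend and into duplicity/backend.py, leaving backends to implement bare `_put`/`_get`/`_list` methods. A minimal self-contained sketch of that division of labor (class and method names here are illustrative only, not duplicity's actual internals):

```python
# Sketch: the base class owns retries and error reporting; a concrete
# backend only implements the bare _put operation.
class BackendError(Exception):
    pass

class Backend:
    MAX_RETRIES = 3

    def put(self, source, name):
        # Public entry point: wraps the backend's raw _put in retry logic.
        last_error = None
        for attempt in range(self.MAX_RETRIES):
            try:
                return self._put(source, name)
            except Exception as e:
                last_error = e
        raise BackendError("giving up after %d attempts: %s"
                           % (self.MAX_RETRIES, last_error))

class FlakyBackend(Backend):
    """Toy backend that fails once, then succeeds."""
    def __init__(self):
        self.calls = 0
        self.store = {}

    def _put(self, source, name):
        self.calls += 1
        if self.calls < 2:
            raise IOError("transient failure")
        self.store[name] = source

b = FlakyBackend()
b.put(b"data", "vol1.difftar")
print(sorted(b.store))   # ['vol1.difftar']
```

Because the wrapper lives in one place, a fix to retry behavior (or, as elsewhere in this branch, to password prompting or filename quoting) lands in every backend at once.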
3890 | === modified file 'duplicity/backends/sshbackend.py' | |||
3891 | --- duplicity/backends/sshbackend.py 2014-04-17 21:54:04 +0000 | |||
3892 | +++ duplicity/backends/sshbackend.py 2014-04-28 02:49:55 +0000 | |||
3893 | @@ -18,6 +18,7 @@ | |||
3894 | 18 | # along with duplicity; if not, write to the Free Software Foundation, | 18 | # along with duplicity; if not, write to the Free Software Foundation, |
3895 | 19 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 19 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
3896 | 20 | 20 | ||
3897 | 21 | import duplicity.backend | ||
3898 | 21 | from duplicity import globals, log | 22 | from duplicity import globals, log |
3899 | 22 | 23 | ||
3900 | 23 | def warn_option(option, optionvar): | 24 | def warn_option(option, optionvar): |
3901 | @@ -26,11 +27,15 @@ | |||
3902 | 26 | 27 | ||
3903 | 27 | if (globals.ssh_backend and | 28 | if (globals.ssh_backend and |
3904 | 28 | globals.ssh_backend.lower().strip() == 'pexpect'): | 29 | globals.ssh_backend.lower().strip() == 'pexpect'): |
3906 | 29 | from . import _ssh_pexpect | 30 | from ._ssh_pexpect import SSHPExpectBackend as SSHBackend |
3907 | 30 | else: | 31 | else: |
3908 | 31 | # take user by the hand to prevent typo driven bug reports | 32 | # take user by the hand to prevent typo driven bug reports |
3909 | 32 | if globals.ssh_backend.lower().strip() != 'paramiko': | 33 | if globals.ssh_backend.lower().strip() != 'paramiko': |
3910 | 33 | log.Warn(_("Warning: Selected ssh backend '%s' is neither 'paramiko nor 'pexpect'. Will use default paramiko instead.") % globals.ssh_backend) | 34 | log.Warn(_("Warning: Selected ssh backend '%s' is neither 'paramiko nor 'pexpect'. Will use default paramiko instead.") % globals.ssh_backend) |
3911 | 34 | warn_option("--scp-command", globals.scp_command) | 35 | warn_option("--scp-command", globals.scp_command) |
3912 | 35 | warn_option("--sftp-command", globals.sftp_command) | 36 | warn_option("--sftp-command", globals.sftp_command) |
3914 | 36 | from . import _ssh_paramiko | 37 | from ._ssh_paramiko import SSHParamikoBackend as SSHBackend |
3915 | 38 | |||
3916 | 39 | duplicity.backend.register_backend("sftp", SSHBackend) | ||
3917 | 40 | duplicity.backend.register_backend("scp", SSHBackend) | ||
3918 | 41 | duplicity.backend.register_backend("ssh", SSHBackend) | ||
3919 | 37 | 42 | ||
3920 | === modified file 'duplicity/backends/swiftbackend.py' | |||
3921 | --- duplicity/backends/swiftbackend.py 2014-04-17 22:03:10 +0000 | |||
3922 | +++ duplicity/backends/swiftbackend.py 2014-04-28 02:49:55 +0000 | |||
3923 | @@ -19,14 +19,11 @@ | |||
3924 | 19 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA | 19 | # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA |
3925 | 20 | 20 | ||
3926 | 21 | import os | 21 | import os |
3927 | 22 | import time | ||
3928 | 23 | 22 | ||
3929 | 24 | import duplicity.backend | 23 | import duplicity.backend |
3930 | 25 | from duplicity import globals | ||
3931 | 26 | from duplicity import log | 24 | from duplicity import log |
3935 | 27 | from duplicity.errors import * #@UnusedWildImport | 25 | from duplicity.errors import BackendException |
3936 | 28 | from duplicity.util import exception_traceback | 26 | |
3934 | 29 | from duplicity.backend import retry | ||
3937 | 30 | 27 | ||
3938 | 31 | class SwiftBackend(duplicity.backend.Backend): | 28 | class SwiftBackend(duplicity.backend.Backend): |
3939 | 32 | """ | 29 | """ |
3940 | @@ -82,121 +79,30 @@ | |||
3941 | 82 | % (e.__class__.__name__, str(e)), | 79 | % (e.__class__.__name__, str(e)), |
3942 | 83 | log.ErrorCode.connection_failed) | 80 | log.ErrorCode.connection_failed) |
3943 | 84 | 81 | ||
3992 | 85 | def put(self, source_path, remote_filename = None): | 82 | def _error_code(self, operation, e): |
3993 | 86 | if not remote_filename: | 83 | if isinstance(e, self.resp_exc): |
3994 | 87 | remote_filename = source_path.get_filename() | 84 | if e.http_status == 404: |
3995 | 88 | 85 | return log.ErrorCode.backend_not_found | |
3996 | 89 | for n in range(1, globals.num_retries+1): | 86 | |
3997 | 90 | log.Info("Uploading '%s/%s' " % (self.container, remote_filename)) | 87 | def _put(self, source_path, remote_filename): |
3998 | 91 | try: | 88 | self.conn.put_object(self.container, remote_filename, |
3999 | 92 | self.conn.put_object(self.container, | 89 | file(source_path.name)) |
4000 | 93 | remote_filename, | 90 | |
4001 | 94 | file(source_path.name)) | 91 | def _get(self, remote_filename, local_path): |
4002 | 95 | return | 92 | headers, body = self.conn.get_object(self.container, remote_filename) |
4003 | 96 | except self.resp_exc as error: | 93 | with open(local_path.name, 'wb') as f: |
4004 | 97 | log.Warn("Upload of '%s' failed (attempt %d): Swift server returned: %s %s" | 94 | for chunk in body: |
4005 | 98 | % (remote_filename, n, error.http_status, error.message)) | 95 | f.write(chunk) |
3958 | 99 | except Exception as e: | ||
3959 | 100 | log.Warn("Upload of '%s' failed (attempt %s): %s: %s" | ||
3960 | 101 | % (remote_filename, n, e.__class__.__name__, str(e))) | ||
3961 | 102 | log.Debug("Backtrace of previous error: %s" | ||
3962 | 103 | % exception_traceback()) | ||
3963 | 104 | time.sleep(30) | ||
3964 | 105 | log.Warn("Giving up uploading '%s' after %s attempts" | ||
3965 | 106 | % (remote_filename, globals.num_retries)) | ||
3966 | 107 | raise BackendException("Error uploading '%s'" % remote_filename) | ||
3967 | 108 | |||
3968 | 109 | def get(self, remote_filename, local_path): | ||
3969 | 110 | for n in range(1, globals.num_retries+1): | ||
3970 | 111 | log.Info("Downloading '%s/%s'" % (self.container, remote_filename)) | ||
3971 | 112 | try: | ||
3972 | 113 | headers, body = self.conn.get_object(self.container, | ||
3973 | 114 | remote_filename) | ||
3974 | 115 | f = open(local_path.name, 'w') | ||
3975 | 116 | for chunk in body: | ||
3976 | 117 | f.write(chunk) | ||
3977 | 118 | local_path.setdata() | ||
3978 | 119 | return | ||
3979 | 120 | except self.resp_exc as resperr: | ||
3980 | 121 | log.Warn("Download of '%s' failed (attempt %s): Swift server returned: %s %s" | ||
3981 | 122 | % (remote_filename, n, resperr.http_status, resperr.message)) | ||
3982 | 123 | except Exception as e: | ||
3983 | 124 | log.Warn("Download of '%s' failed (attempt %s): %s: %s" | ||
3984 | 125 | % (remote_filename, n, e.__class__.__name__, str(e))) | ||
3985 | 126 | log.Debug("Backtrace of previous error: %s" | ||
3986 | 127 | % exception_traceback()) | ||
3987 | 128 | time.sleep(30) | ||
3988 | 129 | log.Warn("Giving up downloading '%s' after %s attempts" | ||
3989 | 130 | % (remote_filename, globals.num_retries)) | ||
3990 | 131 | raise BackendException("Error downloading '%s/%s'" | ||
3991 | 132 | % (self.container, remote_filename)) | ||
4006 | 133 | 96 | ||
4007 | 134 | def _list(self): | 97 | def _list(self): |
4074 | 135 | for n in range(1, globals.num_retries+1): | 98 | headers, objs = self.conn.get_container(self.container) |
4075 | 136 | log.Info("Listing '%s'" % (self.container)) | 99 | return [ o['name'] for o in objs ] |
4076 | 137 | try: | 100 | |
4077 | 138 | # Cloud Files will return a max of 10,000 objects. We have | 101 | def _delete(self, filename): |
4078 | 139 | # to make multiple requests to get them all. | 102 | self.conn.delete_object(self.container, filename) |
4079 | 140 | headers, objs = self.conn.get_container(self.container) | 103 | |
4080 | 141 | return [ o['name'] for o in objs ] | 104 | def _query(self, filename): |
4081 | 142 | except self.resp_exc as resperr: | 105 | sobject = self.conn.head_object(self.container, filename) |
4082 | 143 | log.Warn("Listing of '%s' failed (attempt %s): Swift server returned: %s %s" | 106 | return {'size': int(sobject['content-length'])} |
4017 | 144 | % (self.container, n, resperr.http_status, resperr.message)) | ||
4018 | 145 | except Exception as e: | ||
4019 | 146 | log.Warn("Listing of '%s' failed (attempt %s): %s: %s" | ||
4020 | 147 | % (self.container, n, e.__class__.__name__, str(e))) | ||
4021 | 148 | log.Debug("Backtrace of previous error: %s" | ||
4022 | 149 | % exception_traceback()) | ||
4023 | 150 | time.sleep(30) | ||
4024 | 151 | log.Warn("Giving up listing of '%s' after %s attempts" | ||
4025 | 152 | % (self.container, globals.num_retries)) | ||
4026 | 153 | raise BackendException("Error listing '%s'" | ||
4027 | 154 | % (self.container)) | ||
4028 | 155 | |||
4029 | 156 | def delete_one(self, remote_filename): | ||
4030 | 157 | for n in range(1, globals.num_retries+1): | ||
4031 | 158 | log.Info("Deleting '%s/%s'" % (self.container, remote_filename)) | ||
4032 | 159 | try: | ||
4033 | 160 | self.conn.delete_object(self.container, remote_filename) | ||
4034 | 161 | return | ||
4035 | 162 | except self.resp_exc as resperr: | ||
4036 | 163 | if n > 1 and resperr.http_status == 404: | ||
4037 | 164 | # We failed on a timeout, but delete succeeded on the server | ||
4038 | 165 | log.Warn("Delete of '%s' missing after retry - must have succeded earlier" % remote_filename ) | ||
4039 | 166 | return | ||
4040 | 167 | log.Warn("Delete of '%s' failed (attempt %s): Swift server returned: %s %s" | ||
4041 | 168 | % (remote_filename, n, resperr.http_status, resperr.message)) | ||
4042 | 169 | except Exception as e: | ||
4043 | 170 | log.Warn("Delete of '%s' failed (attempt %s): %s: %s" | ||
4044 | 171 | % (remote_filename, n, e.__class__.__name__, str(e))) | ||
4045 | 172 | log.Debug("Backtrace of previous error: %s" | ||
4046 | 173 | % exception_traceback()) | ||
4047 | 174 | time.sleep(30) | ||
4048 | 175 | log.Warn("Giving up deleting '%s' after %s attempts" | ||
4049 | 176 | % (remote_filename, globals.num_retries)) | ||
4050 | 177 | raise BackendException("Error deleting '%s/%s'" | ||
4051 | 178 | % (self.container, remote_filename)) | ||
4052 | 179 | |||
4053 | 180 | def delete(self, filename_list): | ||
4054 | 181 | for file in filename_list: | ||
4055 | 182 | self.delete_one(file) | ||
4056 | 183 | log.Debug("Deleted '%s/%s'" % (self.container, file)) | ||
4057 | 184 | |||
4058 | 185 | @retry | ||
4059 | 186 | def _query_file_info(self, filename, raise_errors=False): | ||
4060 | 187 | try: | ||
4061 | 188 | sobject = self.conn.head_object(self.container, filename) | ||
4062 | 189 | return {'size': int(sobject['content-length'])} | ||
4063 | 190 | except self.resp_exc: | ||
4064 | 191 | return {'size': -1} | ||
4065 | 192 | except Exception as e: | ||
4066 | 193 | log.Warn("Error querying '%s/%s': %s" | ||
4067 | 194 | "" % (self.container, | ||
4068 | 195 | filename, | ||
4069 | 196 | str(e))) | ||
4070 | 197 | if raise_errors: | ||
4071 | 198 | raise e | ||
4072 | 199 | else: | ||
4073 | 200 | return {'size': None} | ||
4083 | 201 | 107 | ||
4084 | 202 | duplicity.backend.register_backend("swift", SwiftBackend) | 108 | duplicity.backend.register_backend("swift", SwiftBackend) |
4085 | 203 | 109 | ||
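The swift rewrite shows the other half of the split: instead of a hand-rolled 30-line retry loop per operation, the backend exposes an `_error_code()` hook that classifies its library's exceptions, and the shared code decides whether to retry, log, or give up. A self-contained sketch of that classification hook (`ClientException` stands in for the swift client's response exception; the names are illustrative):

```python
# Sketch of the _error_code() pattern: map library-specific exceptions
# onto generic codes that the shared backend machinery understands.
class ErrorCode:
    backend_not_found = "backend_not_found"
    backend_error = "backend_error"

class ClientException(Exception):
    """Stand-in for the swift client's HTTP response exception."""
    def __init__(self, http_status):
        self.http_status = http_status

def error_code(e):
    # A 404 from the server means the object is missing; anything else
    # is reported as a generic backend error.
    if isinstance(e, ClientException) and e.http_status == 404:
        return ErrorCode.backend_not_found
    return ErrorCode.backend_error

print(error_code(ClientException(404)))    # backend_not_found
print(error_code(RuntimeError("boom")))    # backend_error
```

This is also what makes the old `delete_one()` special case ("delete succeeded on the server after a timeout") unnecessary: a 404 on delete is classified once, centrally, rather than per backend.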
4086 | === modified file 'duplicity/backends/tahoebackend.py' | |||
4087 | --- duplicity/backends/tahoebackend.py 2013-12-27 06:39:00 +0000 | |||
4088 | +++ duplicity/backends/tahoebackend.py 2014-04-28 02:49:55 +0000 | |||
4089 | @@ -20,9 +20,8 @@ | |||
4090 | 20 | 20 | ||
4091 | 21 | import duplicity.backend | 21 | import duplicity.backend |
4092 | 22 | from duplicity import log | 22 | from duplicity import log |
4094 | 23 | from duplicity.errors import * #@UnusedWildImport | 23 | from duplicity.errors import BackendException |
4095 | 24 | 24 | ||
4096 | 25 | from commands import getstatusoutput | ||
4097 | 26 | 25 | ||
4098 | 27 | class TAHOEBackend(duplicity.backend.Backend): | 26 | class TAHOEBackend(duplicity.backend.Backend): |
4099 | 28 | """ | 27 | """ |
4100 | @@ -36,10 +35,8 @@ | |||
4101 | 36 | 35 | ||
4102 | 37 | self.alias = url[0] | 36 | self.alias = url[0] |
4103 | 38 | 37 | ||
4105 | 39 | if len(url) > 2: | 38 | if len(url) > 1: |
4106 | 40 | self.directory = "/".join(url[1:]) | 39 | self.directory = "/".join(url[1:]) |
4107 | 41 | elif len(url) == 2: | ||
4108 | 42 | self.directory = url[1] | ||
4109 | 43 | else: | 40 | else: |
4110 | 44 | self.directory = "" | 41 | self.directory = "" |
4111 | 45 | 42 | ||
4112 | @@ -59,28 +56,20 @@ | |||
4113 | 59 | 56 | ||
4114 | 60 | def run(self, *args): | 57 | def run(self, *args): |
4115 | 61 | cmd = " ".join(args) | 58 | cmd = " ".join(args) |
4125 | 62 | log.Debug("tahoe execute: %s" % cmd) | 59 | _, output, _ = self.subprocess_popen(cmd) |
4126 | 63 | (status, output) = getstatusoutput(cmd) | 60 | return output |
4127 | 64 | 61 | ||
4128 | 65 | if status != 0: | 62 | def _put(self, source_path, remote_filename): |
4120 | 66 | raise BackendException("Error running %s" % cmd) | ||
4121 | 67 | else: | ||
4122 | 68 | return output | ||
4123 | 69 | |||
4124 | 70 | def put(self, source_path, remote_filename=None): | ||
4129 | 71 | self.run("tahoe", "cp", source_path.name, self.get_remote_path(remote_filename)) | 63 | self.run("tahoe", "cp", source_path.name, self.get_remote_path(remote_filename)) |
4130 | 72 | 64 | ||
4132 | 73 | def get(self, remote_filename, local_path): | 65 | def _get(self, remote_filename, local_path): |
4133 | 74 | self.run("tahoe", "cp", self.get_remote_path(remote_filename), local_path.name) | 66 | self.run("tahoe", "cp", self.get_remote_path(remote_filename), local_path.name) |
4134 | 75 | local_path.setdata() | ||
4135 | 76 | 67 | ||
4136 | 77 | def _list(self): | 68 | def _list(self): |
4139 | 78 | log.Debug("tahoe: List") | 69 | output = self.run("tahoe", "ls", self.get_remote_path()) |
4140 | 79 | return self.run("tahoe", "ls", self.get_remote_path()).split('\n') | 70 | return output.split('\n') if output else [] |
4141 | 80 | 71 | ||
4146 | 81 | def delete(self, filename_list): | 72 | def _delete(self, filename): |
4147 | 82 | log.Debug("tahoe: delete(%s)" % filename_list) | 73 | self.run("tahoe", "rm", self.get_remote_path(filename)) |
4144 | 83 | for filename in filename_list: | ||
4145 | 84 | self.run("tahoe", "rm", self.get_remote_path(filename)) | ||
4148 | 85 | 74 | ||
4149 | 86 | duplicity.backend.register_backend("tahoe", TAHOEBackend) | 75 | duplicity.backend.register_backend("tahoe", TAHOEBackend) |
4150 | 87 | 76 | ||
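In the tahoe backend, the `getstatusoutput()` dance is replaced by the base class's `subprocess_popen()`, which already raises on a nonzero exit status. A standalone approximation using only the stdlib (`subprocess.run` stands in for `subprocess_popen`; the command is a placeholder, not a real tahoe invocation):

```python
import subprocess
import sys

def run(*args):
    # check=True raises CalledProcessError on nonzero exit, so the old
    # "if status != 0: raise BackendException(...)" branch disappears.
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout

# Roughly what the new _list() does with "tahoe ls" output:
output = run(sys.executable, "-c", "print('vol1'); print('vol2')")
files = output.splitlines() if output else []
print(files)   # ['vol1', 'vol2']
```

Note the `if output else []` guard carried over from the diff: splitting an empty string would otherwise yield `['']`, a phantom entry in the file list.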
4151 | === modified file 'duplicity/backends/webdavbackend.py' | |||
4152 | --- duplicity/backends/webdavbackend.py 2014-04-25 15:03:00 +0000 | |||
4153 | +++ duplicity/backends/webdavbackend.py 2014-04-28 02:49:55 +0000 | |||
4154 | @@ -32,8 +32,7 @@ | |||
4155 | 32 | import duplicity.backend | 32 | import duplicity.backend |
4156 | 33 | from duplicity import globals | 33 | from duplicity import globals |
4157 | 34 | from duplicity import log | 34 | from duplicity import log |
4160 | 35 | from duplicity.errors import * #@UnusedWildImport | 35 | from duplicity.errors import BackendException, FatalBackendException |
4159 | 36 | from duplicity.backend import retry_fatal | ||
4161 | 37 | 36 | ||
4162 | 38 | class CustomMethodRequest(urllib2.Request): | 37 | class CustomMethodRequest(urllib2.Request): |
4163 | 39 | """ | 38 | """ |
4164 | @@ -54,7 +53,7 @@ | |||
4165 | 54 | global socket, ssl | 53 | global socket, ssl |
4166 | 55 | import socket, ssl | 54 | import socket, ssl |
4167 | 56 | except ImportError: | 55 | except ImportError: |
4169 | 57 | raise FatalBackendError("Missing socket or ssl libraries.") | 56 | raise FatalBackendException("Missing socket or ssl libraries.") |
4170 | 58 | 57 | ||
4171 | 59 | httplib.HTTPSConnection.__init__(self, *args, **kwargs) | 58 | httplib.HTTPSConnection.__init__(self, *args, **kwargs) |
4172 | 60 | 59 | ||
4173 | @@ -71,21 +70,21 @@ | |||
4174 | 71 | break | 70 | break |
4175 | 72 | # still no cacert file, inform user | 71 | # still no cacert file, inform user |
4176 | 73 | if not self.cacert_file: | 72 | if not self.cacert_file: |
4178 | 74 | raise FatalBackendError("""For certificate verification a cacert database file is needed in one of these locations: %s | 73 | raise FatalBackendException("""For certificate verification a cacert database file is needed in one of these locations: %s |
4179 | 75 | Hints: | 74 | Hints: |
4180 | 76 | Consult the man page, chapter 'SSL Certificate Verification'. | 75 | Consult the man page, chapter 'SSL Certificate Verification'. |
4181 | 77 | Consider using the options --ssl-cacert-file, --ssl-no-check-certificate .""" % ", ".join(cacert_candidates) ) | 76 | Consider using the options --ssl-cacert-file, --ssl-no-check-certificate .""" % ", ".join(cacert_candidates) ) |
4182 | 78 | # check if file is accessible (libssl errors are not very detailed) | 77 | # check if file is accessible (libssl errors are not very detailed) |
4183 | 79 | if not os.access(self.cacert_file, os.R_OK): | 78 | if not os.access(self.cacert_file, os.R_OK): |
4185 | 80 | raise FatalBackendError("Cacert database file '%s' is not readable." % cacert_file) | 79 | raise FatalBackendException("Cacert database file '%s' is not readable." % cacert_file) |
4186 | 81 | 80 | ||
4187 | 82 | def connect(self): | 81 | def connect(self): |
4188 | 83 | # create new socket | 82 | # create new socket |
4189 | 84 | sock = socket.create_connection((self.host, self.port), | 83 | sock = socket.create_connection((self.host, self.port), |
4190 | 85 | self.timeout) | 84 | self.timeout) |
4192 | 86 | if self._tunnel_host: | 85 | if self.tunnel_host: |
4193 | 87 | self.sock = sock | 86 | self.sock = sock |
4195 | 88 | self._tunnel() | 87 | self.tunnel() |
4196 | 89 | 88 | ||
4197 | 90 | # wrap the socket in ssl using verification | 89 | # wrap the socket in ssl using verification |
4198 | 91 | self.sock = ssl.wrap_socket(sock, | 90 | self.sock = ssl.wrap_socket(sock, |
4199 | @@ -126,7 +125,7 @@ | |||
4200 | 126 | 125 | ||
4201 | 127 | self.username = parsed_url.username | 126 | self.username = parsed_url.username |
4202 | 128 | self.password = self.get_password() | 127 | self.password = self.get_password() |
4204 | 129 | self.directory = self._sanitize_path(parsed_url.path) | 128 | self.directory = self.sanitize_path(parsed_url.path) |
4205 | 130 | 129 | ||
4206 | 131 | log.Info("Using WebDAV protocol %s" % (globals.webdav_proto,)) | 130 | log.Info("Using WebDAV protocol %s" % (globals.webdav_proto,)) |
4207 | 132 | log.Info("Using WebDAV host %s port %s" % (parsed_url.hostname, parsed_url.port)) | 131 | log.Info("Using WebDAV host %s port %s" % (parsed_url.hostname, parsed_url.port)) |
4208 | @@ -134,30 +133,33 @@ | |||
4209 | 134 | 133 | ||
4210 | 135 | self.conn = None | 134 | self.conn = None |
4211 | 136 | 135 | ||
4213 | 137 | def _sanitize_path(self,path): | 136 | def sanitize_path(self,path): |
4214 | 138 | if path: | 137 | if path: |
4215 | 139 | foldpath = re.compile('/+') | 138 | foldpath = re.compile('/+') |
4216 | 140 | return foldpath.sub('/', path + '/' ) | 139 | return foldpath.sub('/', path + '/' ) |
4217 | 141 | else: | 140 | else: |
4218 | 142 | return '/' | 141 | return '/' |
4219 | 143 | 142 | ||
4221 | 144 | def _getText(self,nodelist): | 143 | def getText(self,nodelist): |
4222 | 145 | rc = "" | 144 | rc = "" |
4223 | 146 | for node in nodelist: | 145 | for node in nodelist: |
4224 | 147 | if node.nodeType == node.TEXT_NODE: | 146 | if node.nodeType == node.TEXT_NODE: |
4225 | 148 | rc = rc + node.data | 147 | rc = rc + node.data |
4226 | 149 | return rc | 148 | return rc |
4227 | 150 | 149 | ||
4229 | 151 | def _connect(self, forced=False): | 150 | def _retry_cleanup(self): |
4230 | 151 | self.connect(forced=True) | ||
4231 | 152 | |||
4232 | 153 | def connect(self, forced=False): | ||
4233 | 152 | """ | 154 | """ |
4234 | 153 | Connect or re-connect to the server, updates self.conn | 155 | Connect or re-connect to the server, updates self.conn |
4235 | 154 | # reconnect on errors as a precaution, there are errors e.g. | 156 | # reconnect on errors as a precaution, there are errors e.g. |
4236 | 155 | # "[Errno 32] Broken pipe" or SSl errors that render the connection unusable | 157 | # "[Errno 32] Broken pipe" or SSl errors that render the connection unusable |
4237 | 156 | """ | 158 | """ |
4239 | 157 | if self.retry_count<=1 and self.conn \ | 159 | if not forced and self.conn \ |
4240 | 158 | and self.conn.host == self.parsed_url.hostname: return | 160 | and self.conn.host == self.parsed_url.hostname: return |
4241 | 159 | 161 | ||
4243 | 160 | log.Info("WebDAV create connection on '%s' (retry %s) " % (self.parsed_url.hostname,self.retry_count) ) | 162 | log.Info("WebDAV create connection on '%s'" % (self.parsed_url.hostname)) |
4244 | 161 | if self.conn: self.conn.close() | 163 | if self.conn: self.conn.close() |
4245 | 162 | # http schemes needed for redirect urls from servers | 164 | # http schemes needed for redirect urls from servers |
4246 | 163 | if self.parsed_url.scheme in ['webdav','http']: | 165 | if self.parsed_url.scheme in ['webdav','http']: |
4247 | @@ -168,9 +170,9 @@ | |||
4248 | 168 | else: | 170 | else: |
4249 | 169 | self.conn = VerifiedHTTPSConnection(self.parsed_url.hostname, self.parsed_url.port) | 171 | self.conn = VerifiedHTTPSConnection(self.parsed_url.hostname, self.parsed_url.port) |
4250 | 170 | else: | 172 | else: |
4252 | 171 | raise FatalBackendError("WebDAV Unknown URI scheme: %s" % (self.parsed_url.scheme)) | 173 | raise FatalBackendException("WebDAV Unknown URI scheme: %s" % (self.parsed_url.scheme)) |
4253 | 172 | 174 | ||
4255 | 173 | def close(self): | 175 | def _close(self): |
4256 | 174 | self.conn.close() | 176 | self.conn.close() |
4257 | 175 | 177 | ||
4258 | 176 | def request(self, method, path, data=None, redirected=0): | 178 | def request(self, method, path, data=None, redirected=0): |
4259 | @@ -178,7 +180,7 @@ | |||
4260 | 178 | Wraps the connection.request method to retry once if authentication is | 180 | Wraps the connection.request method to retry once if authentication is |
4261 | 179 | required | 181 | required |
4262 | 180 | """ | 182 | """ |
4264 | 181 | self._connect() | 183 | self.connect() |
4265 | 182 | 184 | ||
4266 | 183 | quoted_path = urllib.quote(path,"/:~") | 185 | quoted_path = urllib.quote(path,"/:~") |
4267 | 184 | 186 | ||
4268 | @@ -197,12 +199,12 @@ | |||
4269 | 197 | if redirect_url: | 199 | if redirect_url: |
4270 | 198 | log.Notice("WebDAV redirect to: %s " % urllib.unquote(redirect_url) ) | 200 | log.Notice("WebDAV redirect to: %s " % urllib.unquote(redirect_url) ) |
4271 | 199 | if redirected > 10: | 201 | if redirected > 10: |
4273 | 200 | raise FatalBackendError("WebDAV redirected 10 times. Giving up.") | 202 | raise FatalBackendException("WebDAV redirected 10 times. Giving up.") |
4274 | 201 | self.parsed_url = duplicity.backend.ParsedUrl(redirect_url) | 203 | self.parsed_url = duplicity.backend.ParsedUrl(redirect_url) |
4276 | 202 | self.directory = self._sanitize_path(self.parsed_url.path) | 204 | self.directory = self.sanitize_path(self.parsed_url.path) |
4277 | 203 | return self.request(method,self.directory,data,redirected+1) | 205 | return self.request(method,self.directory,data,redirected+1) |
4278 | 204 | else: | 206 | else: |
4280 | 205 | raise FatalBackendError("WebDAV missing location header in redirect response.") | 207 | raise FatalBackendException("WebDAV missing location header in redirect response.") |
4281 | 206 | elif response.status == 401: | 208 | elif response.status == 401: |
4282 | 207 | response.close() | 209 | response.close() |
4283 | 208 | self.headers['Authorization'] = self.get_authorization(response, quoted_path) | 210 | self.headers['Authorization'] = self.get_authorization(response, quoted_path) |
4284 | @@ -261,10 +263,7 @@ | |||
4285 | 261 | auth_string = self.digest_auth_handler.get_authorization(dummy_req, self.digest_challenge) | 263 | auth_string = self.digest_auth_handler.get_authorization(dummy_req, self.digest_challenge) |
4286 | 262 | return 'Digest %s' % auth_string | 264 | return 'Digest %s' % auth_string |
4287 | 263 | 265 | ||
4288 | 264 | @retry_fatal | ||
4289 | 265 | def _list(self): | 266 | def _list(self): |
4290 | 266 | """List files in directory""" | ||
4291 | 267 | log.Info("Listing directory %s on WebDAV server" % (self.directory,)) | ||
4292 | 268 | response = None | 267 | response = None |
4293 | 269 | try: | 268 | try: |
4294 | 270 | self.headers['Depth'] = "1" | 269 | self.headers['Depth'] = "1" |
4295 | @@ -289,7 +288,7 @@ | |||
4296 | 289 | dom = xml.dom.minidom.parseString(document) | 288 | dom = xml.dom.minidom.parseString(document) |
4297 | 290 | result = [] | 289 | result = [] |
4298 | 291 | for href in dom.getElementsByTagName('d:href') + dom.getElementsByTagName('D:href'): | 290 | for href in dom.getElementsByTagName('d:href') + dom.getElementsByTagName('D:href'): |
4300 | 292 | filename = self.__taste_href(href) | 291 | filename = self.taste_href(href) |
4301 | 293 | if filename: | 292 | if filename: |
4302 | 294 | result.append(filename) | 293 | result.append(filename) |
4303 | 295 | return result | 294 | return result |
4304 | @@ -308,7 +307,7 @@ | |||
4305 | 308 | for i in range(1,len(dirs)): | 307 | for i in range(1,len(dirs)): |
4306 | 309 | d="/".join(dirs[0:i+1])+"/" | 308 | d="/".join(dirs[0:i+1])+"/" |
4307 | 310 | 309 | ||
4309 | 311 | self.close() # or we get previous request's data or exception | 310 | self._close() # or we get previous request's data or exception |
4310 | 312 | self.headers['Depth'] = "1" | 311 | self.headers['Depth'] = "1" |
4311 | 313 | response = self.request("PROPFIND", d) | 312 | response = self.request("PROPFIND", d) |
4312 | 314 | del self.headers['Depth'] | 313 | del self.headers['Depth'] |
4313 | @@ -317,21 +316,21 @@ | |||
4314 | 317 | 316 | ||
4315 | 318 | if response.status == 404: | 317 | if response.status == 404: |
4316 | 319 | log.Info("Creating missing directory %s" % d) | 318 | log.Info("Creating missing directory %s" % d) |
4318 | 320 | self.close() # or we get previous request's data or exception | 319 | self._close() # or we get previous request's data or exception |
4319 | 321 | 320 | ||
4320 | 322 | res = self.request("MKCOL", d) | 321 | res = self.request("MKCOL", d) |
4321 | 323 | if res.status != 201: | 322 | if res.status != 201: |
4322 | 324 | raise BackendException("WebDAV MKCOL %s failed: %s %s" % (d,res.status,res.reason)) | 323 | raise BackendException("WebDAV MKCOL %s failed: %s %s" % (d,res.status,res.reason)) |
4324 | 325 | self.close() | 324 | self._close() |
4325 | 326 | 325 | ||
4327 | 327 | def __taste_href(self, href): | 326 | def taste_href(self, href): |
4328 | 328 | """ | 327 | """ |
        Internal helper to taste the given href node and, if
        it is a duplicity file, collect it as a result file.

        @return: A matching filename, or None if the href did not match.
        """
-        raw_filename = self._getText(href.childNodes).strip()
+        raw_filename = self.getText(href.childNodes).strip()
         parsed_url = urlparse.urlparse(urllib.unquote(raw_filename))
         filename = parsed_url.path
         log.Debug("webdav path decoding and translation: "
@@ -361,11 +360,8 @@
         else:
             return None
 
-    @retry_fatal
-    def get(self, remote_filename, local_path):
-        """Get remote filename, saving it to local_path"""
+    def _get(self, remote_filename, local_path):
         url = self.directory + remote_filename
-        log.Info("Retrieving %s from WebDAV server" % (url ,))
         response = None
         try:
             target_file = local_path.open("wb")
@@ -376,7 +372,6 @@
                 #import hashlib
                 #log.Info("WebDAV GOT %s bytes with md5=%s" % (len(data),hashlib.md5(data).hexdigest()) )
                 assert not target_file.close()
-                local_path.setdata()
                 response.close()
             else:
                 status = response.status
@@ -388,13 +383,8 @@
         finally:
             if response: response.close()
 
-    @retry_fatal
-    def put(self, source_path, remote_filename = None):
-        """Transfer source_path to remote_filename"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         url = self.directory + remote_filename
-        log.Info("Saving %s on WebDAV server" % (url ,))
         response = None
         try:
             source_file = source_path.open("rb")
@@ -412,27 +402,23 @@
         finally:
             if response: response.close()
 
-    @retry_fatal
-    def delete(self, filename_list):
-        """Delete files in filename_list"""
-        for filename in filename_list:
-            url = self.directory + filename
-            log.Info("Deleting %s from WebDAV server" % (url ,))
-            response = None
-            try:
-                response = self.request("DELETE", url)
-                if response.status in [200, 204]:
-                    response.read()
-                    response.close()
-                else:
-                    status = response.status
-                    reason = response.reason
-                    response.close()
-                    raise BackendException("Bad status code %s reason %s." % (status,reason))
-            except Exception as e:
-                raise e
-            finally:
-                if response: response.close()
+    def _delete(self, filename):
+        url = self.directory + filename
+        response = None
+        try:
+            response = self.request("DELETE", url)
+            if response.status in [200, 204]:
+                response.read()
+                response.close()
+            else:
+                status = response.status
+                reason = response.reason
+                response.close()
+                raise BackendException("Bad status code %s reason %s." % (status,reason))
+        except Exception as e:
+            raise e
+        finally:
+            if response: response.close()
 
 duplicity.backend.register_backend("webdav", WebDAVBackend)
 duplicity.backend.register_backend("webdavs", WebDAVBackend)
 
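The WebDAV changes above follow the unification pattern of this branch: backends drop their per-method `@retry_fatal` decorators, logging, and list loops, and expose single-file `_get`/`_put`/`_delete` hooks that a shared wrapper in `duplicity/backend.py` drives. A minimal standalone sketch of that dispatch pattern (not duplicity's actual code; `BackendWrapper` and `FlakyBackend` are illustrative names):

```python
# Sketch: backends implement single-file _get/_put/_delete hooks; a shared
# wrapper supplies retries and list iteration once, replacing per-backend
# @retry_fatal decorators and delete() loops.
class BackendWrapper(object):
    def __init__(self, backend, retries=3):
        self.backend = backend
        self.retries = retries

    def _retry(self, fn, *args):
        # retry transient failures; re-raise once attempts are exhausted
        for attempt in range(1, self.retries + 1):
            try:
                return fn(*args)
            except Exception:
                if attempt == self.retries:
                    raise

    def put(self, source_path, remote_filename=None):
        # default-filename handling lives here, not in every backend
        self._retry(self.backend._put, source_path,
                    remote_filename or source_path)

    def delete(self, filename_list):
        # list iteration lives here; backends delete one file at a time
        for filename in filename_list:
            self._retry(self.backend._delete, filename)

class FlakyBackend(object):
    """Toy backend whose first delete fails once, to show the retry."""
    def __init__(self):
        self.deleted = []
        self.failed = False

    def _put(self, source_path, remote_filename):
        self.stored = (source_path, remote_filename)

    def _delete(self, filename):
        if not self.failed:
            self.failed = True
            raise IOError("transient network error")
        self.deleted.append(filename)

wrapper = BackendWrapper(FlakyBackend())
wrapper.delete(["duplicity-full.manifest", "duplicity-full.vol1.difftar"])
```

This is why the new `_delete` above takes one `filename` rather than a `filename_list`, and why its `except`/`raise` no longer needs retry awareness.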
=== modified file 'duplicity/commandline.py'
--- duplicity/commandline.py 2014-04-25 23:20:12 +0000
+++ duplicity/commandline.py 2014-04-28 02:49:55 +0000
@@ -210,13 +210,6 @@
     global select_opts, select_files, full_backup
     global list_current, collection_status, cleanup, remove_time, verify
 
-    def use_gio(*args):
-        try:
-            import duplicity.backends.giobackend
-            backend.force_backend(duplicity.backends.giobackend.GIOBackend)
-        except ImportError:
-            log.FatalError(_("Unable to load gio backend: %s") % str(sys.exc_info()[1]), log.ErrorCode.gio_not_available)
-
     def set_log_fd(fd):
         if fd < 1:
             raise optparse.OptionValueError("log-fd must be greater than zero.")
@@ -365,7 +358,9 @@
     # the time specified
     parser.add_option("--full-if-older-than", type = "time", dest = "full_force_time", metavar = _("time"))
 
-    parser.add_option("--gio", action = "callback", callback = use_gio)
+    parser.add_option("--gio", action = "callback", dest = "use_gio",
+                      callback = lambda o, s, v, p: (setattr(p.values, o.dest, True),
+                                                     old_fn_deprecation(s)))
 
     parser.add_option("--gpg-options", action = "extend", metavar = _("options"))
 
@@ -521,8 +516,8 @@
     # sftp command to use (ssh pexpect backend)
     parser.add_option("--sftp-command", metavar = _("command"))
 
-    # sftp command to use (ssh pexpect backend)
-    parser.add_option("--cf-command", metavar = _("command"))
+    # allow the user to switch cloudfiles backend
+    parser.add_option("--cf-backend", metavar = _("pyrax|cloudfiles"))
 
     # If set, use short (< 30 char) filenames for all the remote files.
     parser.add_option("--short-filenames", action = "callback",
 
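The new `--gio` handling replaces a callback that forced a backend class with one that merely records a flag (plus a deprecation warning), leaving backend selection to `duplicity/backend.py`. A runnable sketch of that optparse callback idiom, with `deprecation` as a stand-in for duplicity's `old_fn_deprecation` helper:

```python
# Sketch of an optparse action="callback" option whose lambda sets a boolean
# on parser.values, so later code checks a plain flag (cf. globals.use_gio)
# instead of the callback forcing a backend directly.
import optparse

def deprecation(flag):
    # stand-in for duplicity's old_fn_deprecation warning helper
    print("Warning: %s is deprecated" % flag)

parser = optparse.OptionParser()
# callback signature: (option, opt_str, value, parser)
parser.add_option("--gio", action="callback", dest="use_gio",
                  callback=lambda o, s, v, p: (setattr(p.values, o.dest, True),
                                               deprecation(s)))
options, args = parser.parse_args(["--gio"])
```

After parsing, `options.use_gio` is `True` only if the flag was given; with no arguments it stays at its default of `None`.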
=== modified file 'duplicity/errors.py'
--- duplicity/errors.py 2013-01-10 19:04:39 +0000
+++ duplicity/errors.py 2014-04-28 02:49:55 +0000
@@ -23,6 +23,8 @@
 Error/exception classes that do not fit naturally anywhere else.
 """
 
+from duplicity import log
+
 class DuplicityError(Exception):
     pass
 
@@ -68,9 +70,11 @@
     """
     Raised to indicate a backend specific problem.
     """
-    pass
+    def __init__(self, msg, code=log.ErrorCode.backend_error):
+        super(BackendException, self).__init__(msg)
+        self.code = code
 
-class FatalBackendError(DuplicityError):
+class FatalBackendException(BackendException):
     """
     Raised to indicate a backend failed fatally.
     """
 
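The errors.py change gives `BackendException` an error code and makes the renamed `FatalBackendException` a subclass of it, so a single `except BackendException` now also catches fatal backend failures. A self-contained sketch of the new exception shape (the `ErrorCode` value here is illustrative, not duplicity's actual `log.ErrorCode.backend_error` number):

```python
# Minimal sketch of the reworked exception hierarchy: BackendException
# carries a code (defaulting to a generic backend error), and
# FatalBackendException subclasses it instead of DuplicityError.
class ErrorCode(object):
    backend_error = 50  # illustrative stand-in for log.ErrorCode.backend_error

class BackendException(Exception):
    def __init__(self, msg, code=ErrorCode.backend_error):
        super(BackendException, self).__init__(msg)
        self.code = code

class FatalBackendException(BackendException):
    """Raised to indicate a backend failed fatally."""

# a fatal failure is now caught by the generic backend handler
try:
    raise FatalBackendException("login failed")
except BackendException as e:
    caught = e
```

The `code` attribute lets the top-level error handler exit with a backend-specific status without parsing the message.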
=== modified file 'duplicity/globals.py'
--- duplicity/globals.py 2014-04-17 21:49:37 +0000
+++ duplicity/globals.py 2014-04-28 02:49:55 +0000
@@ -284,3 +284,6 @@
 
 # Level of Redundancy in % for Par2 files
 par2_redundancy = 10
+
+# Whether to enable gio backend
+use_gio = False
 
=== modified file 'po/duplicity.pot'
--- po/duplicity.pot 2014-04-25 15:03:00 +0000
+++ po/duplicity.pot 2014-04-28 02:49:55 +0000
@@ -8,7 +8,7 @@
 msgstr ""
 "Project-Id-Version: PACKAGE VERSION\n"
 "Report-Msgid-Bugs-To: Kenneth Loafman <kenneth@loafman.com>\n"
-"POT-Creation-Date: 2014-04-21 11:04-0500\n"
+"POT-Creation-Date: 2014-04-27 22:39-0400\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
@@ -69,243 +69,243 @@
 "Continuing restart on file %s."
 msgstr ""
 
-#: ../bin/duplicity:299
+#: ../bin/duplicity:297
 #, python-format
 msgid "File %s was corrupted during upload."
 msgstr ""
 
-#: ../bin/duplicity:333
+#: ../bin/duplicity:331
 msgid ""
 "Restarting backup, but current encryption settings do not match original "
 "settings"
 msgstr ""
 
-#: ../bin/duplicity:356
+#: ../bin/duplicity:354
 #, python-format
 msgid "Restarting after volume %s, file %s, block %s"
 msgstr ""
 
-#: ../bin/duplicity:423
+#: ../bin/duplicity:421
 #, python-format
 msgid "Processed volume %d"
 msgstr ""
 
-#: ../bin/duplicity:572
+#: ../bin/duplicity:570
 msgid ""
 "Fatal Error: Unable to start incremental backup. Old signatures not found "
 "and incremental specified"
 msgstr ""
 
-#: ../bin/duplicity:576
+#: ../bin/duplicity:574
 msgid "No signatures found, switching to full backup."
 msgstr ""
 
-#: ../bin/duplicity:590
+#: ../bin/duplicity:588
 msgid "Backup Statistics"
 msgstr ""
 
-#: ../bin/duplicity:695
+#: ../bin/duplicity:693
 #, python-format
 msgid "%s not found in archive, no files restored."
 msgstr ""
 
-#: ../bin/duplicity:699
+#: ../bin/duplicity:697
 msgid "No files found in archive - nothing restored."
 msgstr ""
 
-#: ../bin/duplicity:732
+#: ../bin/duplicity:730
 #, python-format
 msgid "Processed volume %d of %d"
 msgstr ""
 
+#: ../bin/duplicity:764
+#, python-format
+msgid "Invalid data - %s hash mismatch for file:"
+msgstr ""
+
 #: ../bin/duplicity:766
 #, python-format
-msgid "Invalid data - %s hash mismatch for file:"
-msgstr ""
-
-#: ../bin/duplicity:768
-#, python-format
 msgid "Calculated hash: %s"
 msgstr ""
 
-#: ../bin/duplicity:769
+#: ../bin/duplicity:767
 #, python-format
 msgid "Manifest hash: %s"
 msgstr ""
 
-#: ../bin/duplicity:807
+#: ../bin/duplicity:805
 #, python-format
 msgid "Volume was signed by key %s, not %s"
 msgstr ""
 
-#: ../bin/duplicity:837
+#: ../bin/duplicity:835
 #, python-format
 msgid "Verify complete: %s, %s."
 msgstr ""
 
-#: ../bin/duplicity:838
+#: ../bin/duplicity:836
 #, python-format
 msgid "%d file compared"
 msgid_plural "%d files compared"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:840
+#: ../bin/duplicity:838
 #, python-format
 msgid "%d difference found"
 msgid_plural "%d differences found"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:859
+#: ../bin/duplicity:857
 msgid "No extraneous files found, nothing deleted in cleanup."
 msgstr ""
 
-#: ../bin/duplicity:864
+#: ../bin/duplicity:862
 msgid "Deleting this file from backend:"
 msgid_plural "Deleting these files from backend:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:876
+#: ../bin/duplicity:874
 msgid "Found the following file to delete:"
 msgid_plural "Found the following files to delete:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:880
+#: ../bin/duplicity:878
 msgid "Run duplicity again with the --force option to actually delete."
 msgstr ""
 
+#: ../bin/duplicity:921
+msgid "There are backup set(s) at time(s):"
+msgstr ""
+
 #: ../bin/duplicity:923
-msgid "There are backup set(s) at time(s):"
-msgstr ""
-
-#: ../bin/duplicity:925
 msgid "Which can't be deleted because newer sets depend on them."
 msgstr ""
 
-#: ../bin/duplicity:929
+#: ../bin/duplicity:927
 msgid ""
 "Current active backup chain is older than specified time. However, it will "
 "not be deleted. To remove all your backups, manually purge the repository."
 msgstr ""
 
-#: ../bin/duplicity:935
+#: ../bin/duplicity:933
 msgid "No old backup sets found, nothing deleted."
 msgstr ""
 
-#: ../bin/duplicity:938
+#: ../bin/duplicity:936
 msgid "Deleting backup chain at time:"
 msgid_plural "Deleting backup chains at times:"
 msgstr[0] ""
 msgstr[1] ""
 
+#: ../bin/duplicity:947
+#, python-format
+msgid "Deleting incremental signature chain %s"
+msgstr ""
+
 #: ../bin/duplicity:949
 #, python-format
-msgid "Deleting incremental signature chain %s"
-msgstr ""
-
-#: ../bin/duplicity:951
-#, python-format
 msgid "Deleting incremental backup chain %s"
 msgstr ""
 
+#: ../bin/duplicity:952
+#, python-format
+msgid "Deleting complete signature chain %s"
+msgstr ""
+
 #: ../bin/duplicity:954
 #, python-format
-msgid "Deleting complete signature chain %s"
-msgstr ""
-
-#: ../bin/duplicity:956
-#, python-format
 msgid "Deleting complete backup chain %s"
 msgstr ""
 
-#: ../bin/duplicity:962
+#: ../bin/duplicity:960
 msgid "Found old backup chain at the following time:"
 msgid_plural "Found old backup chains at the following times:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:966
+#: ../bin/duplicity:964
 msgid "Rerun command with --force option to actually delete."
 msgstr ""
 
-#: ../bin/duplicity:1043
+#: ../bin/duplicity:1041
 #, python-format
 msgid "Deleting local %s (not authoritative at backend)."
 msgstr ""
 
-#: ../bin/duplicity:1047
+#: ../bin/duplicity:1045
 #, python-format
 msgid "Unable to delete %s: %s"
 msgstr ""
 
-#: ../bin/duplicity:1075 ../duplicity/dup_temp.py:263
+#: ../bin/duplicity:1073 ../duplicity/dup_temp.py:263
 #, python-format
 msgid "Failed to read %s: %s"
 msgstr ""
 
-#: ../bin/duplicity:1089
+#: ../bin/duplicity:1087
 #, python-format
 msgid "Copying %s to local cache."
 msgstr ""
 
-#: ../bin/duplicity:1137
+#: ../bin/duplicity:1135
 msgid "Local and Remote metadata are synchronized, no sync needed."
 msgstr ""
 
-#: ../bin/duplicity:1142
+#: ../bin/duplicity:1140
 msgid "Synchronizing remote metadata to local cache..."
 msgstr ""
 
-#: ../bin/duplicity:1157
+#: ../bin/duplicity:1155
 msgid "Sync would copy the following from remote to local:"
 msgstr ""
 
-#: ../bin/duplicity:1160
+#: ../bin/duplicity:1158
 msgid "Sync would remove the following spurious local files:"
 msgstr ""
 
-#: ../bin/duplicity:1203
+#: ../bin/duplicity:1201
 msgid "Unable to get free space on temp."
 msgstr ""
 
-#: ../bin/duplicity:1211
+#: ../bin/duplicity:1209
 #, python-format
 msgid "Temp space has %d available, backup needs approx %d."
 msgstr ""
 
-#: ../bin/duplicity:1214
+#: ../bin/duplicity:1212
 #, python-format
 msgid "Temp has %d available, backup will use approx %d."
 msgstr ""
 
-#: ../bin/duplicity:1222
+#: ../bin/duplicity:1220
 msgid "Unable to get max open files."
 msgstr ""
 
-#: ../bin/duplicity:1226
+#: ../bin/duplicity:1224
 #, python-format
 msgid ""
 "Max open files of %s is too low, should be >= 1024.\n"
 "Use 'ulimit -n 1024' or higher to correct.\n"
 msgstr ""
 
-#: ../bin/duplicity:1275
+#: ../bin/duplicity:1273
 msgid ""
 "RESTART: The first volume failed to upload before termination.\n"
 "         Restart is impossible...starting backup from beginning."
 msgstr ""
 
-#: ../bin/duplicity:1281
+#: ../bin/duplicity:1279
 #, python-format
 msgid ""
 "RESTART: Volumes %d to %d failed to upload before termination.\n"
 "         Restarting backup at volume %d."
 msgstr ""
 
-#: ../bin/duplicity:1288
+#: ../bin/duplicity:1286
 #, python-format
 msgid ""
 "RESTART: Impossible backup state: manifest has %d vols, remote has %d vols.\n"
@@ -314,7 +314,7 @@
 "         backup then restart the backup from the beginning."
 msgstr ""
 
-#: ../bin/duplicity:1310
+#: ../bin/duplicity:1308
 msgid ""
 "\n"
 "PYTHONOPTIMIZE in the environment causes duplicity to fail to\n"
@@ -324,54 +324,54 @@
 "See https://bugs.launchpad.net/duplicity/+bug/931175\n"
 msgstr ""
 
-#: ../bin/duplicity:1401
+#: ../bin/duplicity:1399
 #, python-format
 msgid "Last %s backup left a partial set, restarting."
 msgstr ""
 
-#: ../bin/duplicity:1405
+#: ../bin/duplicity:1403
 #, python-format
 msgid "Cleaning up previous partial %s backup set, restarting."
 msgstr ""
 
+#: ../bin/duplicity:1414
+msgid "Last full backup date:"
+msgstr ""
+
 #: ../bin/duplicity:1416
-msgid "Last full backup date:"
+msgid "Last full backup date: none"
 msgstr ""
 
 #: ../bin/duplicity:1418
-msgid "Last full backup date: none"
-msgstr ""
-
-#: ../bin/duplicity:1420
 msgid "Last full backup is too old, forcing full backup"
 msgstr ""
 
-#: ../bin/duplicity:1463
+#: ../bin/duplicity:1461
 msgid ""
 "When using symmetric encryption, the signing passphrase must equal the "
 "encryption passphrase."
 msgstr ""
 
-#: ../bin/duplicity:1516
+#: ../bin/duplicity:1514
 msgid "INT intercepted...exiting."
 msgstr ""
 
-#: ../bin/duplicity:1524
+#: ../bin/duplicity:1522
 #, python-format
 msgid "GPG error detail: %s"
 msgstr ""
 
-#: ../bin/duplicity:1534
+#: ../bin/duplicity:1532
 #, python-format
 msgid "User error detail: %s"
 msgstr ""
 
-#: ../bin/duplicity:1544
+#: ../bin/duplicity:1542
 #, python-format
 msgid "Backend error detail: %s"
 msgstr ""
 
-#: ../bin/rdiffdir:56 ../duplicity/commandline.py:238
+#: ../bin/rdiffdir:56 ../duplicity/commandline.py:233
 #, python-format
 msgid "Error opening file %s"
 msgstr ""
@@ -381,33 +381,33 @@
 msgid "File %s already exists, will not overwrite."
 msgstr ""
 
-#: ../duplicity/selection.py:119
+#: ../duplicity/selection.py:121
 #, python-format
 msgid "Skipping socket %s"
 msgstr ""
 
-#: ../duplicity/selection.py:123
+#: ../duplicity/selection.py:125
 #, python-format
 msgid "Error initializing file %s"
 msgstr ""
 
-#: ../duplicity/selection.py:127 ../duplicity/selection.py:148
+#: ../duplicity/selection.py:129 ../duplicity/selection.py:150
 #, python-format
 msgid "Error accessing possibly locked file %s"
 msgstr ""
 
-#: ../duplicity/selection.py:163
+#: ../duplicity/selection.py:165
 #, python-format
 msgid "Warning: base %s doesn't exist, continuing"
 msgstr ""
 
-#: ../duplicity/selection.py:166 ../duplicity/selection.py:184
-#: ../duplicity/selection.py:187
+#: ../duplicity/selection.py:168 ../duplicity/selection.py:186
+#: ../duplicity/selection.py:189
 #, python-format
 msgid "Selecting %s"
 msgstr ""
 
-#: ../duplicity/selection.py:268
+#: ../duplicity/selection.py:270
 #, python-format
 msgid ""
 "Fatal Error: The file specification\n"
@@ -418,14 +418,14 @@
 "pattern (such as '**') which matches the base directory."
 msgstr ""
 
-#: ../duplicity/selection.py:276
+#: ../duplicity/selection.py:278
 #, python-format
 msgid ""
 "Fatal Error while processing expression\n"
 "%s"
 msgstr ""
 
-#: ../duplicity/selection.py:286
+#: ../duplicity/selection.py:288
 #, python-format
 msgid ""
 "Last selection expression:\n"
@@ -435,49 +435,49 @@
 "probably isn't what you meant."
 msgstr ""
 
-#: ../duplicity/selection.py:311
+#: ../duplicity/selection.py:313
 #, python-format
 msgid "Reading filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:314
+#: ../duplicity/selection.py:316
 #, python-format
 msgid "Sorting filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:341
+#: ../duplicity/selection.py:343
 #, python-format
 msgid ""
 "Warning: file specification '%s' in filelist %s\n"
 "doesn't start with correct prefix %s. Ignoring."
 msgstr ""
 
-#: ../duplicity/selection.py:345
+#: ../duplicity/selection.py:347
 msgid "Future prefix errors will not be logged."
 msgstr ""
 
-#: ../duplicity/selection.py:361
+#: ../duplicity/selection.py:363
 #, python-format
 msgid "Error closing filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:428
+#: ../duplicity/selection.py:430
4986 | 465 | #, python-format | 465 | #, python-format |
4987 | 466 | msgid "Reading globbing filelist %s" | 466 | msgid "Reading globbing filelist %s" |
4988 | 467 | msgstr "" | 467 | msgstr "" |
4989 | 468 | 468 | ||
4991 | 469 | #: ../duplicity/selection.py:461 | 469 | #: ../duplicity/selection.py:463 |
4992 | 470 | #, python-format | 470 | #, python-format |
4993 | 471 | msgid "Error compiling regular expression %s" | 471 | msgid "Error compiling regular expression %s" |
4994 | 472 | msgstr "" | 472 | msgstr "" |
4995 | 473 | 473 | ||
4997 | 474 | #: ../duplicity/selection.py:477 | 474 | #: ../duplicity/selection.py:479 |
4998 | 475 | msgid "" | 475 | msgid "" |
4999 | 476 | "Warning: exclude-device-files is not the first selector.\n" | 476 | "Warning: exclude-device-files is not the first selector.\n" |
5000 | 477 | "This may not be what you intended" | 477 | "This may not be what you intended" |
A+ for effort ;) .. we should probably make that a 0.7 release, stating that there might be some hiccups (bugs) due to the major code revision.
two more comments below..
On 28.04.2014 04:55, Michael Terry wrote:

SNIP

>
>
> === modified file 'bin/duplicity'
> --- bin/duplicity 2014-04-19 19:54:54 +0000
> +++ bin/duplicity 2014-04-28 02:49:55 +0000
> @@ -289,8 +289,6 @@
>
>  def validate_block(orig_size, dest_filename):
>      info = backend.query_info([dest_filename])[dest_filename]
> -    if 'size' not in info:
> -        return  # backend didn't know how to query size

This could raise a KeyError. Can you really guarantee the key will be there?

>      size = info['size']
>      if size is None:
>          return  # error querying file
>
> === modified file 'duplicity/backend.py'
> --- duplicity/backend.py 2014-04-25 23:53:46 +0000
> +++ duplicity/backend.py 2014-04-28 02:49:55 +0000
> @@ -24,6 +24,7 @@
SNIP
>
> # These URL schemes have a backend with a notion of an RFC "network location".
> # The 'file' and 's3+http' schemes should not be in this list.
> @@ -69,7 +74,6 @@
> uses_netloc = ['ftp',
> 'ftps',
> 'hsi',
> - 'rsync',
> 's3',
> 'scp', 'ssh', 'sftp',
> 'webdav', 'webdavs',
any reason why you removed rsync here?
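For background on what membership in that list means: schemes with an RFC "network location" carry the user/host/port between the `//` and the path. A generic illustration using Python's standard `urlparse` (this shows stdlib behavior only, not duplicity's own URL handling, and the URL is a made-up example):

```python
from urllib.parse import urlparse

# A netloc-style URL puts user, host, and port between '//' and the path;
# urlparse exposes these via the netloc component and its accessors.
u = urlparse("rsync://user@host:873/module/path")
assert u.scheme == "rsync"
assert u.netloc == "user@host:873"
assert u.hostname == "host"
assert u.port == 873
assert u.path == "/module/path"
```

Whether rsync URLs still get this treatment after the change depends on how the rsync backend itself parses its target; the snippet only shows what a netloc-style URL looks like.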
..ede
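To make the earlier point about the dict lookup in `validate_block` concrete: a defensive variant could use `dict.get` so that a backend omitting the `'size'` key degrades into "size unknown" instead of raising a KeyError. This is a hypothetical sketch, not code from the branch; the explicit `backend` parameter and the assumed `query_info` return shape (`{filename: {'size': ...}}`) are illustrative assumptions:

```python
def validate_block(orig_size, dest_filename, backend):
    # Assumed shape: query_info returns {filename: {'size': ...}}.
    # .get() avoids a KeyError if the filename or the 'size' key is missing.
    info = backend.query_info([dest_filename]).get(dest_filename, {})
    size = info.get('size')
    if size is None:
        return  # backend could not report a size; skip validation
    if size != orig_size:
        raise ValueError("File %s size mismatch: %s != %s"
                         % (dest_filename, size, orig_size))
```

With this shape, a backend that cannot report sizes simply skips the check, which preserves the intent of the removed `if 'size' not in info` guard.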