Merge lp:~mterry/duplicity/backend-unification into lp:duplicity/0.6

Proposed by Michael Terry
Status: Merged
Merged at revision: 981
Proposed branch: lp:~mterry/duplicity/backend-unification
Merge into: lp:duplicity/0.6
Diff against target: 7230 lines (+2212/-2635)
43 files modified
bin/duplicity (+0/-2)
duplicity/backend.py (+293/-279)
duplicity/backends/README (+79/-0)
duplicity/backends/_boto_multi.py (+2/-2)
duplicity/backends/_boto_single.py (+77/-186)
duplicity/backends/_cf_cloudfiles.py (+33/-124)
duplicity/backends/_cf_pyrax.py (+34/-124)
duplicity/backends/_ssh_paramiko.py (+84/-156)
duplicity/backends/_ssh_pexpect.py (+131/-168)
duplicity/backends/botobackend.py (+6/-8)
duplicity/backends/cfbackend.py (+5/-2)
duplicity/backends/dpbxbackend.py (+17/-32)
duplicity/backends/ftpbackend.py (+11/-20)
duplicity/backends/ftpsbackend.py (+10/-24)
duplicity/backends/gdocsbackend.py (+84/-143)
duplicity/backends/giobackend.py (+79/-114)
duplicity/backends/hsibackend.py (+8/-22)
duplicity/backends/imapbackend.py (+46/-50)
duplicity/backends/localbackend.py (+30/-86)
duplicity/backends/megabackend.py (+48/-98)
duplicity/backends/par2backend.py (+63/-79)
duplicity/backends/rsyncbackend.py (+13/-27)
duplicity/backends/sshbackend.py (+7/-2)
duplicity/backends/swiftbackend.py (+25/-119)
duplicity/backends/tahoebackend.py (+11/-22)
duplicity/backends/webdavbackend.py (+46/-60)
duplicity/commandline.py (+5/-10)
duplicity/errors.py (+6/-2)
duplicity/globals.py (+3/-0)
po/duplicity.pot (+270/-293)
testing/__init__.py (+5/-1)
testing/functional/__init__.py (+5/-3)
testing/functional/test_badupload.py (+1/-1)
testing/manual/backendtest.py (+208/-290)
testing/manual/config.py.tmpl (+18/-86)
testing/overrides/bin/hsi (+16/-0)
testing/overrides/bin/lftp (+23/-0)
testing/overrides/bin/ncftpget (+6/-0)
testing/overrides/bin/ncftpls (+19/-0)
testing/overrides/bin/ncftpput (+6/-0)
testing/overrides/bin/tahoe (+10/-0)
testing/unit/test_backend.py (+128/-0)
testing/unit/test_backend_instance.py (+241/-0)
To merge this branch: bzr merge lp:~mterry/duplicity/backend-unification
Reviewer: duplicity-team (Pending)
Review via email: mp+216764@code.launchpad.net

Description of the change

Reorganize and simplify backend code. Specifically:

- Formalize the expected API between backends and duplicity. See the new file duplicity/backends/README for the instructions I've given authors.

- Add some tests for our backend wrapper class as well as some tests for individual backends. For several backends that have some commands do all the heavy lifting (hsi, tahoe, ftp), I've added fake little mock commands so that we can test them locally. This doesn't truly test our integration with those commands, but at least lets us test the backend glue code itself.

- Removed a lot of duplicate and unused code which backends were using (or not using). This branch drops 700 lines of code (~20%) in duplicity/backends!

- Simplified expectations of backends. Our wrapper code now does all the retrying, and all the exception handling. Backends can 'fire and forget' trusting our wrappers to give the user a reasonable error message. Obviously, backends can also add more details and make nicer error messages. But they don't *have* to.

- Separate out the backend classes from our wrapper class. Now there is no possibility of namespace collision. All our API methods use one underscore. Anything else (zero or two underscores) is for the backend class's use.

- Added the concept of a 'backend prefix' which is used by par2 and gio backends to provide generic support for "schema+" in urls -- like par2+ or gio+. I've since marked the '--gio' flag as deprecated, in favor of 'gio+'. Now you can even nest such backends like par2+gio+file://blah/blah.

- The switch that controls which cloudfiles backend is used had a typo. I fixed this, but I'm not sure I should have? If we haven't had complaints, maybe we can just drop the old backend.

- I manually tested all the backends we have (except hsi and tahoe -- but those are simple wrappers around commands and I did test those via mocks per above). I also added a bunch more manual backend tests to ./testing/manual/backendtest.py, which can now be run to test all the targets you have configured in config.py, or you can pass it a URL which it will use for testing (useful for backend authors).
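The nested "backend prefix" idea (par2+gio+file://...) can be sketched roughly like this. This is a hypothetical helper with made-up names, not the actual duplicity code; the real logic lives in duplicity/backend.py:

```python
# Rough sketch of nested "schema+" prefix resolution. Names here
# (_backend_prefixes, split_prefixes) are illustrative only.

_backend_prefixes = {'par2': object, 'gio': object}  # stand-in factories

def split_prefixes(url):
    """Peel registered prefixes off a URL like 'par2+gio+file:///backups',
    returning the prefix chain and the innermost URL."""
    chain = []
    while '+' in url:
        scheme, rest = url.split('+', 1)
        if scheme not in _backend_prefixes:
            break  # e.g. 's3+http' is a plain scheme, not a prefix
        chain.append(scheme)
        url = rest
    return chain, url

print(split_prefixes('par2+gio+file:///backups'))
# -> (['par2', 'gio'], 'file:///backups')
```

Each prefix in the chain wraps the backend built from the remainder, which is why prefixes nest arbitrarily deep.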

Revision history for this message
edso (ed.so) wrote :

A+ for effort ;) .. we should probably make that a 0.7 release, stating that there might be some hiccups (bugs) due to major code revision.
two more comments below..

On 28.04.2014 04:55, Michael Terry wrote:
SNIP
>
>
> === modified file 'bin/duplicity'
> --- bin/duplicity 2014-04-19 19:54:54 +0000
> +++ bin/duplicity 2014-04-28 02:49:55 +0000
> @@ -289,8 +289,6 @@
>
> def validate_block(orig_size, dest_filename):
> info = backend.query_info([dest_filename])[dest_filename]
> - if 'size' not in info:
> - return # backend didn't know how to query size

this could raise a key not found error. can you really guarantee the key will be there?

> size = info['size']
> if size is None:
> return # error querying file
>
> === modified file 'duplicity/backend.py'
> --- duplicity/backend.py 2014-04-25 23:53:46 +0000
> +++ duplicity/backend.py 2014-04-28 02:49:55 +0000
> @@ -24,6 +24,7 @@
SNIP
>
> # These URL schemes have a backend with a notion of an RFC "network location".
> # The 'file' and 's3+http' schemes should not be in this list.
> @@ -69,7 +74,6 @@
> uses_netloc = ['ftp',
> 'ftps',
> 'hsi',
> - 'rsync',
> 's3',
> 'scp', 'ssh', 'sftp',
> 'webdav', 'webdavs',

any reason why you removed rsync here?

..ede

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

ede, I don't see a problem with a missing key.

As to 0.7 -- I'm thinking that some of the latest changes, not bug fixes,
should be removed from 0.6 and applied to 0.7. We could make the next
release, 0.6.24, the last of the 0.6 series. We probably need to start an
email thread to discuss this, but I think it's time.


Revision history for this message
edso (ed.so) wrote :

On 28.04.2014 13:21, Kenneth Loafman wrote:
> ede, I don't see a problem with a missing key.

        size = info['size']
will throw an error if the dictionary misses the key. we had that earlier. but MT will probably know if he hacked the info routines safely enough to guarantee its existence.

> As to 0.7 -- I'm thinking that some of the latest changes, not bug fixes,
> should be removed from 0.6 and applied to 0.7. We could make the next
> release, 0.6.24, the last of the 0.6 series. We probably need to start an
> email thread to discuss this, but I think it's time.

that's an alternative route to go, although i am not aware of major bugs since 0.6.23 ..ede


Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

Sorry, was looking above the comment, not below. Not enough coffee!

There are a number of bugs/additions in the current CHANGELOG slated for 0.6.24. Looking at them again, this change is the trigger for 0.7 startup.

Mike, I really like tox. It's really easy to merge changes and retest.

Revision history for this message
Michael Terry (mterry) wrote :

> this could raise a key not found error. can you really guarantee the key will be there?

Yes, because now query_info calls the backend query function, then makes sure that 'size' exists for each filename passed. This was part of the "allow backends to be dumb" strategy -- if they skip a filename or don't provide 'size', the rest of duplicity doesn't have to care. Guarantees come from the wrapper class, not the backends.
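This wrapper-level guarantee can be illustrated with a small sketch (hypothetical function names, not the actual duplicity implementation): whatever the backend's query method returns, the wrapper fills in 'size' for every requested filename, so callers never see a missing key.

```python
# Sketch of the "guarantees come from the wrapper" idea. query_info and
# dumb_query are illustrative names, not duplicity's real API.

def query_info(backend_query, filenames):
    """Call the backend's own query function, then normalize the result so
    every requested filename has a 'size' key (None means "error querying")."""
    try:
        info = backend_query(filenames)
    except Exception:
        info = {}
    for fn in filenames:
        if fn not in info or 'size' not in info[fn]:
            info.setdefault(fn, {})['size'] = None
    return info

def dumb_query(names):
    # A "dumb" backend: skips vol2 entirely and forgets 'size' for vol1.
    return {'vol1.difftar': {}}

result = query_info(dumb_query, ['vol1.difftar', 'vol2.difftar'])
print(result)  # both files now carry a 'size' key, set to None
```

With this normalization in place, `size = info['size']` in bin/duplicity can never raise a KeyError, which is why the `if 'size' not in info` guard could be dropped.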

> any reason why you removed rsync here?

Yeah, I changed the rsync backend to support relative/local urls. This was so we could test it as part of our normal automated tests (without needing a server). The list I removed 'rsync' from was a list of backends that require a hostname.
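The distinction can be shown with a tiny sketch (hypothetical helper; the real parsing lives in duplicity.backend.ParsedUrl): schemes in uses_netloc must carry a hostname, while everything else may point at a plain or relative path.

```python
# Illustration of what membership in uses_netloc means (assumption drawn
# from the discussion above, not the actual duplicity parsing code).

uses_netloc = ['ftp', 'ftps', 'hsi', 's3',
               'scp', 'ssh', 'sftp',
               'webdav', 'webdavs']  # 'rsync' removed by this branch

def hostname_required(url_string):
    scheme = url_string.split('://', 1)[0]
    return scheme in uses_netloc

print(hostname_required('sftp://example.com/dir'))      # True
print(hostname_required('rsync://some/relative/path'))  # False
```

With 'rsync' out of the list, the first path component of an rsync URL no longer has to be a hostname, which is what makes local test targets possible.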

Revision history for this message
edso (ed.so) wrote :

well done.. thx ede

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

Either of you see a show-stopper to keep 0.6.24 from going out as-is? We
have an impressive list of changes already.

I've resurrected the old 0.7 series, merged in the current 0.6, and will
merge this set of changes in later. Does everyone agree that this is a
good place to make the switch to 0.7? Do we want to make 0.7 the current
focus of development after the release of 0.6.24?


Revision history for this message
Michael Terry (mterry) wrote :

Well, two separate decisions: release 0.6.24 or not and whether we want to go to 0.7.

But I don't have a reason to wait on releasing 0.6.24.

And as for 0.7, I'm fine with that too. Just a number. :)

But this branch itself feels like not a very momentous change. By which I mean, end users shouldn't care about this at all. Doesn't fix any big bugs (except maybe the --cf-backend argument) and doesn't add any new features.

The bump to require python 2.6 feels like a more natural jumping off point, but as edso noted, 0.6.23 already has 2.6isms in it. So not really a change there either.

So... :shrug:

Revision history for this message
edso (ed.so) wrote :

main concern for me is to have a "stable" version, to point users to if they stumble over a showstopper after Mike's reworks. that can be 0.6.24
i wonder if it'd make sense to also exclude all the other modifications Mike made in his current run (since drop-pexpect), as they may also contain some side effects that may be better kept in 0.7dev.

considering the major backend differences of Mike's latest commit i agree to make a 0.7 the new devel focus.

..ede


Revision history for this message
edso (ed.so) wrote :


it's just the sheer amount of modified code that worries me a bit. as i wrote in the other email

also,
1. old backend code e.g. released by somebody privately will not work anymore when dropped into duplicity/backends
2. true, officially switching to 2.6 might be another point, somebody might want to remove the 2.6'isms from the 0.6 branch if they really need duplicity on some oldold distro. supposing we also keep the 2.6 changes only in 0.7 branch.

..ede

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

Hmmm, this is getting complicated, but I see your points.

We can test against 2.4 and 2.5 using the answers here:
http://askubuntu.com/questions/125342/how-can-i-install-python-2-6-on-12-04
(the deadsnakes repositories, love the name). So, if we wanted, we could
move the modernizations over to 0.7 and fix 0.6.24 to work on older
Pythons.

That seems like a cleaner and more rational break to me. Thoughts?


Revision history for this message
edso (ed.so) wrote :

exactly my point, from a stability standpoint though :) ..ede


Revision history for this message
Michael Terry (mterry) wrote :

> it's just the sheer amount of modified code that worries me a bit.
> as i wrote in the other email

Fair. I tried to thoroughly test, but I'm human. :)

> 1. old backend code e.g. released by somebody privately will not work
> anymore when dropped into duplicity/backends

This is a good point. I'm curious how many people do that; I can easily imagine it happening.

As for your comments, Ken:

> So, if we wanted, we could move the modernizations over to 0.7 and
> fix 0.6.24 to work on older Pythons.

As much as I hate giving any thoughts to py2.4/2.5, this is probably a nice thing to do. We clearly have interested users still on RHEL5, from my last survey on the mailing list.

If I have time in the next week, I can do this. Else, if you tackle this, note that you shouldn't do the testing on a recently-released distro. Because recent versions of tox/setuptools/virtualenv all dropped support for <2.6. Maybe try Ubuntu 12.04? I haven't tried an older distro myself yet, but ideally you'd be able to add "py24,py25" to the tox.ini file and just run tox to confirm that we work with older pythons. If that doesn't work, you'd have to go to an earlier release of your distro of choice.
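The tox change being suggested is just an envlist addition; a hypothetical fragment (the project's actual tox.ini layout may differ):

```ini
# tox.ini (illustrative fragment, assumed layout)
[tox]
envlist = py24,py25,py26,py27

[testenv]
commands = python setup.py test
```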

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

Luckily, I'm still on 12.04. I'll test and see if 2.4/2.5 work.


Preview Diff

=== modified file 'bin/duplicity'
--- bin/duplicity 2014-04-19 19:54:54 +0000
+++ bin/duplicity 2014-04-28 02:49:55 +0000
@@ -289,8 +289,6 @@

 def validate_block(orig_size, dest_filename):
 info = backend.query_info([dest_filename])[dest_filename]
- if 'size' not in info:
- return # backend didn't know how to query size
 size = info['size']
 if size is None:
 return # error querying file

=== modified file 'duplicity/backend.py'
--- duplicity/backend.py 2014-04-25 23:53:46 +0000
+++ duplicity/backend.py 2014-04-28 02:49:55 +0000
@@ -24,6 +24,7 @@
 intended to be used by the backends themselves.
 """

+import errno
 import os
 import sys
 import socket
@@ -31,6 +32,7 @@
 import re
 import getpass
 import gettext
+import types
 import urllib
 import urlparse

@@ -38,11 +40,14 @@
 from duplicity import file_naming
 from duplicity import globals
 from duplicity import log
+from duplicity import path
 from duplicity import progress
+from duplicity import util

 from duplicity.util import exception_traceback

-from duplicity.errors import BackendException, FatalBackendError
+from duplicity.errors import BackendException
+from duplicity.errors import FatalBackendException
 from duplicity.errors import TemporaryLoadException
 from duplicity.errors import ConflictingScheme
 from duplicity.errors import InvalidBackendURL
@@ -54,8 +59,8 @@
 # todo: this should really NOT be done here
 socket.setdefaulttimeout(globals.timeout)

-_forced_backend = None
 _backends = {}
+_backend_prefixes = {}

 # These URL schemes have a backend with a notion of an RFC "network location".
 # The 'file' and 's3+http' schemes should not be in this list.
@@ -69,7 +74,6 @@
 uses_netloc = ['ftp',
 'ftps',
 'hsi',
- 'rsync',
 's3',
 'scp', 'ssh', 'sftp',
 'webdav', 'webdavs',
@@ -96,8 +100,6 @@
 if fn.endswith("backend.py"):
 fn = fn[:-3]
 imp = "duplicity.backends.%s" % (fn,)
- # ignore gio as it is explicitly loaded in commandline.parse_cmdline_options()
- if fn == "giobackend": continue
 try:
 __import__(imp)
 res = "Succeeded"
@@ -110,14 +112,6 @@
 continue


-def force_backend(backend):
- """
- Forces the use of a particular backend, regardless of schema
- """
- global _forced_backend
- _forced_backend = backend
-

 def register_backend(scheme, backend_factory):
 """
 Register a given backend factory responsible for URL:s with the
@@ -144,6 +138,32 @@
 _backends[scheme] = backend_factory


+def register_backend_prefix(scheme, backend_factory):
+ """
+ Register a given backend factory responsible for URL:s with the
+ given scheme prefix.
+
+ The backend must be a callable which, when called with a URL as
+ the single parameter, returns an object implementing the backend
+ protocol (i.e., a subclass of Backend).
+
+ Typically the callable will be the Backend subclass itself.
+
+ This function is not thread-safe and is intended to be called
+ during module importation or start-up.
+ """
+ global _backend_prefixes
+
+ assert callable(backend_factory), "backend factory must be callable"
+
+ if scheme in _backend_prefixes:
+ raise ConflictingScheme("the prefix %s already has a backend "
+ "associated with it"
+ "" % (scheme,))
+
+ _backend_prefixes[scheme] = backend_factory
+
+
 def is_backend_url(url_string):
 """
 @return Whether the given string looks like a backend URL.
@@ -157,9 +177,9 @@
 return False


-def get_backend(url_string):
+def get_backend_object(url_string):
 """
- Instantiate a backend suitable for the given URL, or return None
+ Find the right backend class instance for the given URL, or return None
 if the given string looks like a local path rather than a URL.

 Raise InvalidBackendURL if the URL is not a valid URL.
@@ -167,22 +187,44 @@
 if not is_backend_url(url_string):
 return None

+ global _backends, _backend_prefixes
+
 pu = ParsedUrl(url_string)
-
- # Implicit local path
 assert pu.scheme, "should be a backend url according to is_backend_url"

- global _backends, _forced_backend
-
- if _forced_backend:
- return _forced_backend(pu)
- elif not pu.scheme in _backends:
- raise UnsupportedBackendScheme(url_string)
- else:
- try:
- return _backends[pu.scheme](pu)
- except ImportError:
- raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1]))
+ factory = None
+
+ for prefix in _backend_prefixes:
+ if url_string.startswith(prefix + '+'):
+ factory = _backend_prefixes[prefix]
+ pu = ParsedUrl(url_string.lstrip(prefix + '+'))
+ break
+
+ if factory is None:
+ if not pu.scheme in _backends:
+ raise UnsupportedBackendScheme(url_string)
+ else:
+ factory = _backends[pu.scheme]
+
+ try:
+ return factory(pu)
+ except ImportError:
+ raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1]))
+
+
+def get_backend(url_string):
+ """
+ Instantiate a backend suitable for the given URL, or return None
+ if the given string looks like a local path rather than a URL.
+
+ Raise InvalidBackendURL if the URL is not a valid URL.
+ """
+ if globals.use_gio:
+ url_string = 'gio+' + url_string
+ obj = get_backend_object(url_string)
+ if obj:
+ obj = BackendWrapper(obj)
+ return obj


 class ParsedUrl:
@@ -296,165 +338,74 @@
 # Replace the full network location with the stripped copy.
 return parsed_url.geturl().replace(parsed_url.netloc, straight_netloc, 1)
-
-
-# Decorator for backend operation functions to simplify writing one that
-# retries. Make sure to add a keyword argument 'raise_errors' to your function
-# and if it is true, raise an exception on an error. If false, fatal-log it.
-def retry(fn):
- def iterate(*args):
- for n in range(1, globals.num_retries):
- try:
- kwargs = {"raise_errors" : True}
- return fn(*args, **kwargs)
- except Exception as e:
- log.Warn(_("Attempt %s failed: %s: %s")
- % (n, e.__class__.__name__, str(e)))
- log.Debug(_("Backtrace of previous error: %s")
- % exception_traceback())
- if isinstance(e, TemporaryLoadException):
- time.sleep(30) # wait longer before trying again
- else:
- time.sleep(10) # wait a bit before trying again
- # Now try one last time, but fatal-log instead of raising errors
- kwargs = {"raise_errors" : False}
- return fn(*args, **kwargs)
- return iterate
-
-# same as above, a bit dumber and always dies fatally if last trial fails
-# hence no need for the raise_errors var ;), we really catch everything here
-# as we don't know what the underlying code comes up with and we really *do*
-# want to retry globals.num_retries times under all circumstances
-def retry_fatal(fn):
- def _retry_fatal(self, *args):
- try:
- n = 0
- for n in range(1, globals.num_retries):
+def _get_code_from_exception(backend, operation, e):
+ if isinstance(e, BackendException) and e.code != log.ErrorCode.backend_error:
+ return e.code
+ elif hasattr(backend, '_error_code'):
+ return backend._error_code(operation, e) or log.ErrorCode.backend_error
+ elif hasattr(e, 'errno'):
+ # A few backends return such errors (local, paramiko, etc)
+ if e.errno == errno.EACCES:
+ return log.ErrorCode.backend_permission_denied
+ elif e.errno == errno.ENOENT:
+ return log.ErrorCode.backend_not_found
+ elif e.errno == errno.ENOSPC:
+ return log.ErrorCode.backend_no_space
+ return log.ErrorCode.backend_error
+
+def retry(operation, fatal=True):
+ # Decorators with arguments introduce a new level of indirection. So we
+ # have to return a decorator function (which itself returns a function!)
+ def outer_retry(fn):
+ def inner_retry(self, *args):
+ for n in range(1, globals.num_retries + 1):
 try:
- self.retry_count = n
 return fn(self, *args)
- except FatalBackendError as e:
+ except FatalBackendException as e:
 # die on fatal errors
 raise e
 except Exception as e:
 # retry on anything else
- log.Warn(_("Attempt %s failed. %s: %s")
- % (n, e.__class__.__name__, str(e)))
 log.Debug(_("Backtrace of previous error: %s")
 % exception_traceback())
- time.sleep(10) # wait a bit before trying again
- # final trial, die on exception
- self.retry_count = n+1
- return fn(self, *args)
- except Exception as e:
- log.Debug(_("Backtrace of previous error: %s")
- % exception_traceback())
- log.FatalError(_("Giving up after %s attempts. %s: %s")
- % (self.retry_count, e.__class__.__name__, str(e)),
- log.ErrorCode.backend_error)
- self.retry_count = 0
-
- return _retry_fatal
+ at_end = n == globals.num_retries
+ code = _get_code_from_exception(self.backend, operation, e)
+ if code == log.ErrorCode.backend_not_found:
+ # If we tried to do something, but the file just isn't there,
+ # no need to retry.
+ at_end = True
+ if at_end and fatal:
+ def make_filename(f):
+ if isinstance(f, path.ROPath):
+ return util.escape(f.name)
+ else:
+ return util.escape(f)
+ extra = ' '.join([operation] + [make_filename(x) for x in args if x])
+ log.FatalError(_("Giving up after %s attempts. %s: %s")
+ % (n, e.__class__.__name__,
+ str(e)), code=code, extra=extra)
+ else:
+ log.Warn(_("Attempt %s failed. %s: %s")
+ % (n, e.__class__.__name__, str(e)))
+ if not at_end:
+ if isinstance(e, TemporaryLoadException):
+ time.sleep(90) # wait longer before trying again
+ else:
+ time.sleep(30) # wait a bit before trying again
302+ if hasattr(self.backend, '_retry_cleanup'):
303+ self.backend._retry_cleanup()
304+
305+ return inner_retry
306+ return outer_retry
307+
308
309 class Backend(object):
310 """
311- Represents a generic duplicity backend, capable of storing and
312- retrieving files.
313-
314- Concrete sub-classes are expected to implement:
315-
316- - put
317- - get
318- - list
319- - delete
320- - close (if needed)
321-
322- Optional:
323-
324- - move
325+ See README in backends directory for information on how to write a backend.
326 """
327-
328 def __init__(self, parsed_url):
329 self.parsed_url = parsed_url
330
331- def put(self, source_path, remote_filename = None):
332- """
333- Transfer source_path (Path object) to remote_filename (string)
334-
335- If remote_filename is None, get the filename from the last
336- path component of pathname.
337- """
338- raise NotImplementedError()
339-
340- def move(self, source_path, remote_filename = None):
341- """
342- Move source_path (Path object) to remote_filename (string)
343-
344- Same as put(), but unlinks source_path in the process. This allows the
345- local backend to do this more efficiently using rename.
346- """
347- self.put(source_path, remote_filename)
348- source_path.delete()
349-
350- def get(self, remote_filename, local_path):
351- """Retrieve remote_filename and place in local_path"""
352- raise NotImplementedError()
353-
354- def list(self):
355- """
356- Return list of filenames (byte strings) present in backend
357- """
358- def tobytes(filename):
359- "Convert a (maybe unicode) filename to bytes"
360- if isinstance(filename, unicode):
361- # There shouldn't be any encoding errors for files we care
362- # about, since duplicity filenames are ascii. But user files
363- # may be in the same directory. So just replace characters.
364- return filename.encode(sys.getfilesystemencoding(), 'replace')
365- else:
366- return filename
367-
368- if hasattr(self, '_list'):
369- # Make sure that duplicity internals only ever see byte strings
370- # for filenames, no matter what the backend thinks it is talking.
371- return [tobytes(x) for x in self._list()]
372- else:
373- raise NotImplementedError()
374-
375- def delete(self, filename_list):
376- """
377- Delete each filename in filename_list, in order if possible.
378- """
379- raise NotImplementedError()
380-
381- # Should never cause FatalError.
382- # Returns a dictionary of dictionaries. The outer dictionary maps
383- # filenames to metadata dictionaries. Supported metadata are:
384- #
385- # 'size': if >= 0, size of file
386- # if -1, file is not found
387- # if None, error querying file
388- #
389- # Returned dictionary is guaranteed to contain a metadata dictionary for
390- # each filename, but not all metadata are guaranteed to be present.
391- def query_info(self, filename_list, raise_errors=True):
392- """
393- Return metadata about each filename in filename_list
394- """
395- info = {}
396- if hasattr(self, '_query_list_info'):
397- info = self._query_list_info(filename_list)
398- elif hasattr(self, '_query_file_info'):
399- for filename in filename_list:
400- info[filename] = self._query_file_info(filename)
401-
402- # Fill out any missing entries (may happen if backend has no support
403- # or its query_list support is lazy)
404- for filename in filename_list:
405- if filename not in info:
406- info[filename] = {}
407-
408- return info
409-
410 """ use getpass by default, inherited backends may overwrite this behaviour """
411 use_getpass = True
412
413@@ -493,27 +444,7 @@
414 else:
415 return commandline
416
417- """
418- DEPRECATED:
419- run_command(_persist) - legacy wrappers for subprocess_popen(_persist)
420- """
421- def run_command(self, commandline):
422- return self.subprocess_popen(commandline)
423- def run_command_persist(self, commandline):
424- return self.subprocess_popen_persist(commandline)
425-
426- """
427- DEPRECATED:
428- popen(_persist) - legacy wrappers for subprocess_popen(_persist)
429- """
430- def popen(self, commandline):
431- result, stdout, stderr = self.subprocess_popen(commandline)
432- return stdout
433- def popen_persist(self, commandline):
434- result, stdout, stderr = self.subprocess_popen_persist(commandline)
435- return stdout
436-
437- def _subprocess_popen(self, commandline):
438+ def __subprocess_popen(self, commandline):
439 """
440 For internal use.
441 Execute the given command line, interpreted as a shell command.
442@@ -525,6 +456,10 @@
443
444 return p.returncode, stdout, stderr
445
446+ """ a dictionary of exit codes to ignore per command; syntax is
447+ { 'command' : [ code1, code2 ], ... } -- see ftpbackend for an example """
448+ popen_breaks = {}
449+
450 def subprocess_popen(self, commandline):
451 """
452 Execute the given command line with error check.
453@@ -534,54 +469,179 @@
454 """
455 private = self.munge_password(commandline)
456 log.Info(_("Reading results of '%s'") % private)
457- result, stdout, stderr = self._subprocess_popen(commandline)
458+ result, stdout, stderr = self.__subprocess_popen(commandline)
459 if result != 0:
460- raise BackendException("Error running '%s'" % private)
461- return result, stdout, stderr
462-
463- """ a dictionary for persist breaking exceptions, syntax is
464- { 'command' : [ code1, code2 ], ... } see ftpbackend for an example """
465- popen_persist_breaks = {}
466-
467- def subprocess_popen_persist(self, commandline):
468- """
469- Execute the given command line with error check.
470- Retries globals.num_retries times with 30s delay.
471- Returns int Exitcode, string StdOut, string StdErr
472-
473- Raise a BackendException on failure.
474- """
475- private = self.munge_password(commandline)
476-
477- for n in range(1, globals.num_retries+1):
478- # sleep before retry
479- if n > 1:
480- time.sleep(30)
481- log.Info(_("Reading results of '%s'") % private)
482- result, stdout, stderr = self._subprocess_popen(commandline)
483- if result == 0:
484- return result, stdout, stderr
485-
486 try:
487 m = re.search("^\s*([\S]+)", commandline)
488 cmd = m.group(1)
489- ignores = self.popen_persist_breaks[ cmd ]
490+ ignores = self.popen_breaks[ cmd ]
491 ignores.index(result)
492 """ ignore a predefined set of error codes """
493 return 0, '', ''
494 except (KeyError, ValueError):
495- pass
496-
497- log.Warn(ngettext("Running '%s' failed with code %d (attempt #%d)",
498- "Running '%s' failed with code %d (attempt #%d)", n) %
499- (private, result, n))
500- if stdout or stderr:
501- log.Warn(_("Error is:\n%s") % stderr + (stderr and stdout and "\n") + stdout)
502-
503- log.Warn(ngettext("Giving up trying to execute '%s' after %d attempt",
504- "Giving up trying to execute '%s' after %d attempts",
505- globals.num_retries) % (private, globals.num_retries))
506- raise BackendException("Error running '%s'" % private)
507+ raise BackendException("Error running '%s': returned %d, with output:\n%s" %
508+ (private, result, stdout + '\n' + stderr))
509+ return result, stdout, stderr
510+
511+
512+class BackendWrapper(object):
513+ """
514+ Represents a generic duplicity backend, capable of storing and
515+ retrieving files.
516+ """
517+
518+ def __init__(self, backend):
519+ self.backend = backend
520+
521+ def __do_put(self, source_path, remote_filename):
522+ if hasattr(self.backend, '_put'):
523+ log.Info(_("Writing %s") % remote_filename)
524+ self.backend._put(source_path, remote_filename)
525+ else:
526+ raise NotImplementedError()
527+
528+ @retry('put', fatal=True)
529+ def put(self, source_path, remote_filename=None):
530+ """
531+ Transfer source_path (Path object) to remote_filename (string)
532+
533+ If remote_filename is None, get the filename from the last
534+ path component of pathname.
535+ """
536+ if not remote_filename:
537+ remote_filename = source_path.get_filename()
538+ self.__do_put(source_path, remote_filename)
539+
540+ @retry('move', fatal=True)
541+ def move(self, source_path, remote_filename=None):
542+ """
543+ Move source_path (Path object) to remote_filename (string)
544+
545+ Same as put(), but unlinks source_path in the process. This allows the
546+ local backend to do this more efficiently using rename.
547+ """
548+ if not remote_filename:
549+ remote_filename = source_path.get_filename()
550+ if hasattr(self.backend, '_move'):
551+ if self.backend._move(source_path, remote_filename) is not False:
552+ source_path.setdata()
553+ return
554+ self.__do_put(source_path, remote_filename)
555+ source_path.delete()
556+
557+ @retry('get', fatal=True)
558+ def get(self, remote_filename, local_path):
559+ """Retrieve remote_filename and place in local_path"""
560+ if hasattr(self.backend, '_get'):
561+ self.backend._get(remote_filename, local_path)
562+ if not local_path.exists():
563+ raise BackendException(_("File %s not found locally after get "
564+ "from backend") % util.ufn(local_path.name))
565+ local_path.setdata()
566+ else:
567+ raise NotImplementedError()
568+
569+ @retry('list', fatal=True)
570+ def list(self):
571+ """
572+ Return list of filenames (byte strings) present in backend
573+ """
574+ def tobytes(filename):
575+ "Convert a (maybe unicode) filename to bytes"
576+ if isinstance(filename, unicode):
577+ # There shouldn't be any encoding errors for files we care
578+ # about, since duplicity filenames are ascii. But user files
579+ # may be in the same directory. So just replace characters.
580+ return filename.encode(sys.getfilesystemencoding(), 'replace')
581+ else:
582+ return filename
583+
584+ if hasattr(self.backend, '_list'):
585+ # Make sure that duplicity internals only ever see byte strings
586+ # for filenames, no matter what the backend thinks it is returning.
587+ return [tobytes(x) for x in self.backend._list()]
588+ else:
589+ raise NotImplementedError()
590+
591+ def delete(self, filename_list):
592+ """
593+ Delete each filename in filename_list, in order if possible.
594+ """
595+ assert type(filename_list) is not types.StringType
596+ if hasattr(self.backend, '_delete_list'):
597+ self._do_delete_list(filename_list)
598+ elif hasattr(self.backend, '_delete'):
599+ for filename in filename_list:
600+ self._do_delete(filename)
601+ else:
602+ raise NotImplementedError()
603+
604+ @retry('delete', fatal=False)
605+ def _do_delete_list(self, filename_list):
606+ self.backend._delete_list(filename_list)
607+
608+ @retry('delete', fatal=False)
609+ def _do_delete(self, filename):
610+ self.backend._delete(filename)
611+
612+ # Should never cause FatalError.
613+ # Returns a dictionary of dictionaries. The outer dictionary maps
614+ # filenames to metadata dictionaries. Supported metadata are:
615+ #
616+ # 'size': if >= 0, size of file
617+ # if -1, file is not found
618+ # if None, error querying file
619+ #
620+ # Returned dictionary is guaranteed to contain a metadata dictionary for
621+ # each filename, and all metadata are guaranteed to be present.
622+ def query_info(self, filename_list):
623+ """
624+ Return metadata about each filename in filename_list
625+ """
626+ info = {}
627+ if hasattr(self.backend, '_query_list'):
628+ info = self._do_query_list(filename_list)
629+ if info is None:
630+ info = {}
631+ elif hasattr(self.backend, '_query'):
632+ for filename in filename_list:
633+ info[filename] = self._do_query(filename)
634+
635+ # Fill out any missing entries (may happen if backend has no support
636+ # or its query_list support is lazy)
637+ for filename in filename_list:
638+ if filename not in info or info[filename] is None:
639+ info[filename] = {}
640+ for metadata in ['size']:
641+ info[filename].setdefault(metadata, None)
642+
643+ return info
644+
645+ @retry('query', fatal=False)
646+ def _do_query_list(self, filename_list):
647+ info = self.backend._query_list(filename_list)
648+ if info is None:
649+ info = {}
650+ return info
651+
652+ @retry('query', fatal=False)
653+ def _do_query(self, filename):
654+ try:
655+ return self.backend._query(filename)
656+ except Exception as e:
657+ code = _get_code_from_exception(self.backend, 'query', e)
658+ if code == log.ErrorCode.backend_not_found:
659+ return {'size': -1}
660+ else:
661+ raise e
662+
663+ def close(self):
664+ """
665+ Close the backend, releasing any resources held and
666+ invalidating any file objects obtained from the backend.
667+ """
668+ if hasattr(self.backend, '_close'):
669+ self.backend._close()
670
671 def get_fileobj_read(self, filename, parseresults = None):
672 """
673@@ -598,37 +658,6 @@
674 tdp.setdata()
675 return tdp.filtered_open_with_delete("rb")
676
677- def get_fileobj_write(self, filename,
678- parseresults = None,
679- sizelist = None):
680- """
681- Return fileobj opened for writing, which will cause the file
682- to be written to the backend on close().
683-
684- The file will be encoded as specified in parseresults (or as
685- read from the filename), and stored in a temp file until it
686- can be copied over and deleted.
687-
688- If sizelist is not None, it should be set to an empty list.
689- The number of bytes will be inserted into the list.
690- """
691- if not parseresults:
692- parseresults = file_naming.parse(filename)
693- assert parseresults, u"Filename %s not correctly parsed" % util.ufn(filename)
694- tdp = dup_temp.new_tempduppath(parseresults)
695-
696- def close_file_hook():
697- """This is called when returned fileobj is closed"""
698- self.put(tdp, filename)
699- if sizelist is not None:
700- tdp.setdata()
701- sizelist.append(tdp.getsize())
702- tdp.delete()
703-
704- fh = dup_temp.FileobjHooked(tdp.filtered_open("wb"))
705- fh.addhook(close_file_hook)
706- return fh
707-
708 def get_data(self, filename, parseresults = None):
709 """
710 Retrieve a file from backend, process it, return contents.
711@@ -637,18 +666,3 @@
712 buf = fin.read()
713 assert not fin.close()
714 return buf
715-
716- def put_data(self, buffer, filename, parseresults = None):
717- """
718- Put buffer into filename on backend after processing.
719- """
720- fout = self.get_fileobj_write(filename, parseresults)
721- fout.write(buffer)
722- assert not fout.close()
723-
724- def close(self):
725- """
726- Close the backend, releasing any resources held and
727- invalidating any file objects obtained from the backend.
728- """
729- pass
730
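The new `retry` decorator above takes arguments, so it nests one level deeper than a plain decorator, as its comment notes. The pattern can be sketched standalone as below; `NUM_RETRIES` and `FlakyBackend` are illustrative stand-ins, not duplicity names, and the real wrapper additionally maps exceptions to error codes and calls `_retry_cleanup`:

```python
import time

NUM_RETRIES = 3  # stand-in for globals.num_retries

def retry(operation, fatal=True):
    # A decorator with arguments returns a decorator, which in turn
    # returns the wrapper that actually runs the retry loop.
    def outer_retry(fn):
        def inner_retry(self, *args):
            for n in range(1, NUM_RETRIES + 1):
                try:
                    return fn(self, *args)
                except Exception:
                    if n == NUM_RETRIES and fatal:
                        # last attempt: give up loudly
                        raise
                    time.sleep(0)  # the real code waits 30s/90s here
        return inner_retry
    return outer_retry

class FlakyBackend(object):
    """Fails twice, then succeeds, to exercise the retry loop."""
    def __init__(self):
        self.calls = 0

    @retry('get', fatal=True)
    def get(self, name):
        self.calls += 1
        if self.calls < 3:
            raise IOError("transient failure")
        return "contents of %s" % name
```

With three attempts allowed, the two transient failures are absorbed and the third call returns normally.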
731=== added file 'duplicity/backends/README'
732--- duplicity/backends/README 1970-01-01 00:00:00 +0000
733+++ duplicity/backends/README 2014-04-28 02:49:55 +0000
734@@ -0,0 +1,79 @@
735+= How to write a backend, in five easy steps! =
736+
737+There are five main methods you want to implement:
738+
739+__init__ - Initial setup
740+_get
741+ - Get one file
742+ - Retried if an exception is thrown
743+_put
744+ - Upload one file
745+ - Retried if an exception is thrown
746+_list
747+ - List all files in the backend
748+ - Return a list of filenames
749+ - Retried if an exception is thrown
750+_delete
751+ - Delete one file
752+ - Retried if an exception is thrown
753+
754+There are other methods you may optionally implement:
755+
756+_delete_list
757+ - Delete list of files
757+ - This is used in preference to _delete if defined
759+ - Must gracefully handle individual file errors itself
760+ - Retried if an exception is thrown
761+_query
762+ - Query metadata of one file
763+ - Return a dict with a 'size' key, and a file size value (-1 for not found)
764+ - Retried if an exception is thrown
765+_query_list
766+ - Query metadata of a list of files
767+ - Return a dict of filenames mapping to a dict with a 'size' key,
768+ and a file size value (-1 for not found)
769+ - This is used in preference to _query if defined
770+ - Must gracefully handle individual file errors itself
771+ - Retried if an exception is thrown
772+_retry_cleanup
773+ - If the backend wants to do any bookkeeping or connection resetting between
774+ retries, do it here.
775+_error_code
776+ - Passed an exception thrown by your backend, return a log.ErrorCode that
777+ corresponds to that exception
778+_move
779+ - If your backend can more optimally move a local file into its backend,
780+ implement this. If it's not implemented or returns False, _put will be
781+ called instead (and duplicity will delete the source file after).
782+ - Retried if an exception is thrown
783+_close
784+ - If your backend needs to clean up after itself, do that here.
785+
786+== Subclassing ==
787+
788+Always subclass from duplicity.backend.Backend
789+
790+== Registering ==
791+
792+You can register your class as a single backend like so:
793+
794+duplicity.backend.register_backend("foo", FooBackend)
795+
796+This will allow a URL like so: foo://hostname/path
797+
798+Or you can register your class as a meta backend like so:
799+duplicity.backend.register_backend_prefix("bar", BarBackend)
800+
801+Which will allow a URL like so: bar+foo://hostname/path and your class will
802+be passed the inner URL to either interpret how you like or create a new
803+inner backend instance with duplicity.backend.get_backend_object(url).
804+
805+== Naming ==
806+
807+Any method that duplicity calls will start with one underscore. Please use
808+zero or two underscores in your method names to avoid conflicts.
809+
810+== Testing ==
811+
812+Use "./testing/manual/backendtest.py foo://hostname/path" to test your new
813+backend. It will load your backend from your current branch.
814
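The five required methods in the README can be sketched as a toy backend. The class below is hypothetical and stores files in a plain dict so the sketch runs standalone; a real backend would subclass duplicity.backend.Backend, take real Path objects, and register itself with register_backend as described above:

```python
class MemoryBackend(object):
    """Hypothetical in-memory backend following the README contract."""
    def __init__(self, parsed_url=None):
        self.parsed_url = parsed_url
        self.files = {}  # filename -> contents

    def _put(self, source_path, remote_filename):
        # source_path is a duplicity Path in the real API; a string here
        self.files[remote_filename] = source_path

    def _get(self, remote_filename, local_path):
        # the real API writes into local_path; we just return contents
        return self.files[remote_filename]

    def _list(self):
        return list(self.files.keys())

    def _delete(self, filename):
        del self.files[filename]

    def _query(self, filename):
        # optional: -1 means "not found", per the README
        if filename not in self.files:
            return {'size': -1}
        return {'size': len(self.files[filename])}

# In a real module:
# duplicity.backend.register_backend("memory", MemoryBackend)
```

Because the wrapper drives everything through these underscore methods, the backend itself never has to worry about retries, logging, or filename encoding.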
815=== modified file 'duplicity/backends/_boto_multi.py'
816--- duplicity/backends/_boto_multi.py 2014-04-17 21:54:04 +0000
817+++ duplicity/backends/_boto_multi.py 2014-04-28 02:49:55 +0000
818@@ -98,8 +98,8 @@
819
820 self._pool = multiprocessing.Pool(processes=number_of_procs)
821
822- def close(self):
823- BotoSingleBackend.close(self)
824+ def _close(self):
825+ BotoSingleBackend._close(self)
826 log.Debug("Closing pool")
827 self._pool.terminate()
828 self._pool.join()
829
830=== modified file 'duplicity/backends/_boto_single.py'
831--- duplicity/backends/_boto_single.py 2014-04-25 23:20:12 +0000
832+++ duplicity/backends/_boto_single.py 2014-04-28 02:49:55 +0000
833@@ -25,9 +25,7 @@
834 import duplicity.backend
835 from duplicity import globals
836 from duplicity import log
837-from duplicity.errors import * #@UnusedWildImport
838-from duplicity.util import exception_traceback
839-from duplicity.backend import retry
840+from duplicity.errors import FatalBackendException, BackendException
841 from duplicity import progress
842
843 BOTO_MIN_VERSION = "2.1.1"
844@@ -163,7 +161,7 @@
845 self.resetConnection()
846 self._listed_keys = {}
847
848- def close(self):
849+ def _close(self):
850 del self._listed_keys
851 self._listed_keys = {}
852 self.bucket = None
853@@ -185,137 +183,69 @@
854 self.conn = get_connection(self.scheme, self.parsed_url, self.storage_uri)
855 self.bucket = self.conn.lookup(self.bucket_name)
856
857- def put(self, source_path, remote_filename=None):
858+ def _retry_cleanup(self):
859+ self.resetConnection()
860+
861+ def _put(self, source_path, remote_filename):
862 from boto.s3.connection import Location
863 if globals.s3_european_buckets:
864 if not globals.s3_use_new_style:
865- log.FatalError("European bucket creation was requested, but not new-style "
866- "bucket addressing (--s3-use-new-style)",
867- log.ErrorCode.s3_bucket_not_style)
868- #Network glitch may prevent first few attempts of creating/looking up a bucket
869- for n in range(1, globals.num_retries+1):
870- if self.bucket:
871- break
872- if n > 1:
873- time.sleep(30)
874- self.resetConnection()
875+ raise FatalBackendException("European bucket creation was requested, but not new-style "
876+ "bucket addressing (--s3-use-new-style)",
877+ code=log.ErrorCode.s3_bucket_not_style)
878+
879+ if self.bucket is None:
880 try:
881- try:
882- self.bucket = self.conn.get_bucket(self.bucket_name, validate=True)
883- except Exception as e:
884- if "NoSuchBucket" in str(e):
885- if globals.s3_european_buckets:
886- self.bucket = self.conn.create_bucket(self.bucket_name,
887- location=Location.EU)
888- else:
889- self.bucket = self.conn.create_bucket(self.bucket_name)
890+ self.bucket = self.conn.get_bucket(self.bucket_name, validate=True)
891+ except Exception as e:
892+ if "NoSuchBucket" in str(e):
893+ if globals.s3_european_buckets:
894+ self.bucket = self.conn.create_bucket(self.bucket_name,
895+ location=Location.EU)
896 else:
897- raise e
898- except Exception as e:
899- log.Warn("Failed to create bucket (attempt #%d) '%s' failed (reason: %s: %s)"
900- "" % (n, self.bucket_name,
901- e.__class__.__name__,
902- str(e)))
903+ self.bucket = self.conn.create_bucket(self.bucket_name)
904+ else:
905+ raise
906
907- if not remote_filename:
908- remote_filename = source_path.get_filename()
909 key = self.bucket.new_key(self.key_prefix + remote_filename)
910
911- for n in range(1, globals.num_retries+1):
912- if n > 1:
913- # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
914- time.sleep(10)
915-
916- if globals.s3_use_rrs:
917- storage_class = 'REDUCED_REDUNDANCY'
918- else:
919- storage_class = 'STANDARD'
920- log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
921- try:
922- if globals.s3_use_sse:
923- headers = {
924- 'Content-Type': 'application/octet-stream',
925- 'x-amz-storage-class': storage_class,
926- 'x-amz-server-side-encryption': 'AES256'
927- }
928- else:
929- headers = {
930- 'Content-Type': 'application/octet-stream',
931- 'x-amz-storage-class': storage_class
932- }
933-
934- upload_start = time.time()
935- self.upload(source_path.name, key, headers)
936- upload_end = time.time()
937- total_s = abs(upload_end-upload_start) or 1 # prevent a zero value!
938- rough_upload_speed = os.path.getsize(source_path.name)/total_s
939- self.resetConnection()
940- log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" % (self.straight_url, remote_filename, storage_class, rough_upload_speed))
941- return
942- except Exception as e:
943- log.Warn("Upload '%s/%s' failed (attempt #%d, reason: %s: %s)"
944- "" % (self.straight_url,
945- remote_filename,
946- n,
947- e.__class__.__name__,
948- str(e)))
949- log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
950- self.resetConnection()
951- log.Warn("Giving up trying to upload %s/%s after %d attempts" %
952- (self.straight_url, remote_filename, globals.num_retries))
953- raise BackendException("Error uploading %s/%s" % (self.straight_url, remote_filename))
954-
955- def get(self, remote_filename, local_path):
956+ if globals.s3_use_rrs:
957+ storage_class = 'REDUCED_REDUNDANCY'
958+ else:
959+ storage_class = 'STANDARD'
960+ log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
961+ if globals.s3_use_sse:
962+ headers = {
963+ 'Content-Type': 'application/octet-stream',
964+ 'x-amz-storage-class': storage_class,
965+ 'x-amz-server-side-encryption': 'AES256'
966+ }
967+ else:
968+ headers = {
969+ 'Content-Type': 'application/octet-stream',
970+ 'x-amz-storage-class': storage_class
971+ }
972+
973+ upload_start = time.time()
974+ self.upload(source_path.name, key, headers)
975+ upload_end = time.time()
976+ total_s = abs(upload_end-upload_start) or 1 # prevent a zero value!
977+ rough_upload_speed = os.path.getsize(source_path.name)/total_s
978+ log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" % (self.straight_url, remote_filename, storage_class, rough_upload_speed))
979+
980+ def _get(self, remote_filename, local_path):
981 key_name = self.key_prefix + remote_filename
982 self.pre_process_download(remote_filename, wait=True)
983 key = self._listed_keys[key_name]
984- for n in range(1, globals.num_retries+1):
985- if n > 1:
986- # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
987- time.sleep(10)
988- log.Info("Downloading %s/%s" % (self.straight_url, remote_filename))
989- try:
990- self.resetConnection()
991- key.get_contents_to_filename(local_path.name)
992- local_path.setdata()
993- return
994- except Exception as e:
995- log.Warn("Download %s/%s failed (attempt #%d, reason: %s: %s)"
996- "" % (self.straight_url,
997- remote_filename,
998- n,
999- e.__class__.__name__,
1000- str(e)), 1)
1001- log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
1002-
1003- log.Warn("Giving up trying to download %s/%s after %d attempts" %
1004- (self.straight_url, remote_filename, globals.num_retries))
1005- raise BackendException("Error downloading %s/%s" % (self.straight_url, remote_filename))
1006+ self.resetConnection()
1007+ key.get_contents_to_filename(local_path.name)
1008
1009 def _list(self):
1010 if not self.bucket:
1011 raise BackendException("No connection to backend")
1012-
1013- for n in range(1, globals.num_retries+1):
1014- if n > 1:
1015- # sleep before retry
1016- time.sleep(30)
1017- self.resetConnection()
1018- log.Info("Listing %s" % self.straight_url)
1019- try:
1020- return self._list_filenames_in_bucket()
1021- except Exception as e:
1022- log.Warn("List %s failed (attempt #%d, reason: %s: %s)"
1023- "" % (self.straight_url,
1024- n,
1025- e.__class__.__name__,
1026- str(e)), 1)
1027- log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
1028- log.Warn("Giving up trying to list %s after %d attempts" %
1029- (self.straight_url, globals.num_retries))
1030- raise BackendException("Error listng %s" % self.straight_url)
1031-
1032- def _list_filenames_in_bucket(self):
1033+ return self.list_filenames_in_bucket()
1034+
1035+ def list_filenames_in_bucket(self):
1036 # We add a 'd' to the prefix to make sure it is not null (for boto) and
1037 # to optimize the listing of our filenames, which always begin with 'd'.
1038 # This will cause a failure in the regression tests as below:
1039@@ -336,76 +266,37 @@
1040 pass
1041 return filename_list
1042
1043- def delete(self, filename_list):
1044- for filename in filename_list:
1045- self.bucket.delete_key(self.key_prefix + filename)
1046- log.Debug("Deleted %s/%s" % (self.straight_url, filename))
1047+ def _delete(self, filename):
1048+ self.bucket.delete_key(self.key_prefix + filename)
1049
1050- @retry
1051- def _query_file_info(self, filename, raise_errors=False):
1052- try:
1053- key = self.bucket.lookup(self.key_prefix + filename)
1054- if key is None:
1055- return {'size': -1}
1056- return {'size': key.size}
1057- except Exception as e:
1058- log.Warn("Query %s/%s failed: %s"
1059- "" % (self.straight_url,
1060- filename,
1061- str(e)))
1062- self.resetConnection()
1063- if raise_errors:
1064- raise e
1065- else:
1066- return {'size': None}
1067+ def _query(self, filename):
1068+ key = self.bucket.lookup(self.key_prefix + filename)
1069+ if key is None:
1070+ return {'size': -1}
1071+ return {'size': key.size}
1072
1073 def upload(self, filename, key, headers):
1074- key.set_contents_from_filename(filename, headers,
1075- cb=progress.report_transfer,
1076- num_cb=(max(2, 8 * globals.volsize / (1024 * 1024)))
1077- ) # Max num of callbacks = 8 times x megabyte
1078- key.close()
1079+ key.set_contents_from_filename(filename, headers,
1080+ cb=progress.report_transfer,
1081+ num_cb=(max(2, 8 * globals.volsize / (1024 * 1024)))
1082+ ) # Max num of callbacks = 8 times x megabyte
1083+ key.close()
1084
1085- def pre_process_download(self, files_to_download, wait=False):
1086+ def pre_process_download(self, remote_filename, wait=False):
1087 # Used primarily to move files in Glacier to S3
1088- if isinstance(files_to_download, (bytes, str, unicode)):
1089- files_to_download = [files_to_download]
1090+ key_name = self.key_prefix + remote_filename
1091+ if not self._listed_keys.get(key_name, False):
1092+ self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
1093+ key = self._listed_keys[key_name]
1094
1095- for remote_filename in files_to_download:
1096- success = False
1097- for n in range(1, globals.num_retries+1):
1098- if n > 1:
1099- # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
1100- time.sleep(10)
1101+ if key.storage_class == "GLACIER":
1102+ # We need to move the file out of glacier
1103+ if not self.bucket.get_key(key.key).ongoing_restore:
1104+ log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
1105+ key.restore(days=1) # Shouldn't need this again after 1 day
1106+ if wait:
1107+ log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
1108+ while self.bucket.get_key(key.key).ongoing_restore:
1109+ time.sleep(60)
1110 self.resetConnection()
1111- try:
1112- key_name = self.key_prefix + remote_filename
1113- if not self._listed_keys.get(key_name, False):
1114- self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
1115- key = self._listed_keys[key_name]
1116-
1117- if key.storage_class == "GLACIER":
1118- # We need to move the file out of glacier
1119- if not self.bucket.get_key(key.key).ongoing_restore:
1120- log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
1121- key.restore(days=1) # Shouldn't need this again after 1 day
1122- if wait:
1123- log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
1124- while self.bucket.get_key(key.key).ongoing_restore:
1125- time.sleep(60)
1126- self.resetConnection()
1127- log.Info("File %s was successfully restored from Glacier" % remote_filename)
1128- success = True
1129- break
1130- except Exception as e:
1131- log.Warn("Restoration from Glacier for file %s/%s failed (attempt #%d, reason: %s: %s)"
1132- "" % (self.straight_url,
1133- remote_filename,
1134- n,
1135- e.__class__.__name__,
1136- str(e)), 1)
1137- log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
1138- if not success:
1139- log.Warn("Giving up trying to restore %s/%s after %d attempts" %
1140- (self.straight_url, remote_filename, globals.num_retries))
1141- raise BackendException("Error restoring %s/%s from Glacier to S3" % (self.straight_url, remote_filename))
1142+ log.Info("File %s was successfully restored from Glacier" % remote_filename)
1143
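The hunks above show the core idea of this branch: per-backend retry loops are deleted, and each backend now implements bare `_put`/`_get`/`_list`/`_delete` primitives while shared wrapper code supplies the retry behaviour once. The sketch below is illustrative only, not duplicity's actual wrapper (names like `with_retries` and `ExampleBackend` are invented for the example):

```python
# Sketch of the unification pattern: the backend provides only the raw
# operation; a shared wrapper adds retry-with-delay around it.
import time


class BackendError(Exception):
    pass


def with_retries(func, attempts=3, delay=0.01):
    """Call func(); on failure, sleep and retry up to `attempts` times."""
    for n in range(1, attempts + 1):
        try:
            return func()
        except Exception as e:
            if n == attempts:
                raise BackendError("giving up after %d attempts: %s" % (n, e))
            time.sleep(delay)


class ExampleBackend:
    """A hypothetical backend: no retry logic of its own."""
    def __init__(self):
        self.calls = 0

    def _put(self, name):
        self.calls += 1
        if self.calls < 3:       # fail twice, succeed on the third try
            raise IOError("transient network error")
        return "stored:%s" % name

    def put(self, name):         # public entry point adds the retries
        return with_retries(lambda: self._put(name))
```

This is why the diffs below can drop hundreds of lines per backend: the `for n in range(1, globals.num_retries+1)` boilerplate moves into one place.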
1144=== modified file 'duplicity/backends/_cf_cloudfiles.py'
1145--- duplicity/backends/_cf_cloudfiles.py 2014-04-17 22:03:10 +0000
1146+++ duplicity/backends/_cf_cloudfiles.py 2014-04-28 02:49:55 +0000
1147@@ -19,14 +19,10 @@
1148 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
1149
1150 import os
1151-import time
1152
1153 import duplicity.backend
1154-from duplicity import globals
1155 from duplicity import log
1156-from duplicity.errors import * #@UnusedWildImport
1157-from duplicity.util import exception_traceback
1158-from duplicity.backend import retry
1159+from duplicity.errors import BackendException
1160
1161 class CloudFilesBackend(duplicity.backend.Backend):
1162 """
1163@@ -69,124 +65,37 @@
1164 log.ErrorCode.connection_failed)
1165 self.container = conn.create_container(container)
1166
1167- def put(self, source_path, remote_filename = None):
1168- if not remote_filename:
1169- remote_filename = source_path.get_filename()
1170-
1171- for n in range(1, globals.num_retries+1):
1172- log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
1173- try:
1174- sobject = self.container.create_object(remote_filename)
1175- sobject.load_from_filename(source_path.name)
1176- return
1177- except self.resp_exc as error:
1178- log.Warn("Upload of '%s' failed (attempt %d): CloudFiles returned: %s %s"
1179- % (remote_filename, n, error.status, error.reason))
1180- except Exception as e:
1181- log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
1182- % (remote_filename, n, e.__class__.__name__, str(e)))
1183- log.Debug("Backtrace of previous error: %s"
1184- % exception_traceback())
1185- time.sleep(30)
1186- log.Warn("Giving up uploading '%s' after %s attempts"
1187- % (remote_filename, globals.num_retries))
1188- raise BackendException("Error uploading '%s'" % remote_filename)
1189-
1190- def get(self, remote_filename, local_path):
1191- for n in range(1, globals.num_retries+1):
1192- log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
1193- try:
1194- sobject = self.container.create_object(remote_filename)
1195- f = open(local_path.name, 'w')
1196- for chunk in sobject.stream():
1197- f.write(chunk)
1198- local_path.setdata()
1199- return
1200- except self.resp_exc as resperr:
1201- log.Warn("Download of '%s' failed (attempt %s): CloudFiles returned: %s %s"
1202- % (remote_filename, n, resperr.status, resperr.reason))
1203- except Exception as e:
1204- log.Warn("Download of '%s' failed (attempt %s): %s: %s"
1205- % (remote_filename, n, e.__class__.__name__, str(e)))
1206- log.Debug("Backtrace of previous error: %s"
1207- % exception_traceback())
1208- time.sleep(30)
1209- log.Warn("Giving up downloading '%s' after %s attempts"
1210- % (remote_filename, globals.num_retries))
1211- raise BackendException("Error downloading '%s/%s'"
1212- % (self.container, remote_filename))
1213+ def _error_code(self, operation, e):
1214+ from cloudfiles.errors import NoSuchObject
1215+ if isinstance(e, NoSuchObject):
1216+ return log.ErrorCode.backend_not_found
1217+ elif isinstance(e, self.resp_exc):
1218+ if e.status == 404:
1219+ return log.ErrorCode.backend_not_found
1220+
1221+ def _put(self, source_path, remote_filename):
1222+ sobject = self.container.create_object(remote_filename)
1223+ sobject.load_from_filename(source_path.name)
1224+
1225+ def _get(self, remote_filename, local_path):
1226+ sobject = self.container.create_object(remote_filename)
1227+ with open(local_path.name, 'wb') as f:
1228+ for chunk in sobject.stream():
1229+ f.write(chunk)
1230
1231 def _list(self):
1232- for n in range(1, globals.num_retries+1):
1233- log.Info("Listing '%s'" % (self.container))
1234- try:
1235- # Cloud Files will return a max of 10,000 objects. We have
1236- # to make multiple requests to get them all.
1237- objs = self.container.list_objects()
1238- keys = objs
1239- while len(objs) == 10000:
1240- objs = self.container.list_objects(marker=keys[-1])
1241- keys += objs
1242- return keys
1243- except self.resp_exc as resperr:
1244- log.Warn("Listing of '%s' failed (attempt %s): CloudFiles returned: %s %s"
1245- % (self.container, n, resperr.status, resperr.reason))
1246- except Exception as e:
1247- log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
1248- % (self.container, n, e.__class__.__name__, str(e)))
1249- log.Debug("Backtrace of previous error: %s"
1250- % exception_traceback())
1251- time.sleep(30)
1252- log.Warn("Giving up listing of '%s' after %s attempts"
1253- % (self.container, globals.num_retries))
1254- raise BackendException("Error listing '%s'"
1255- % (self.container))
1256-
1257- def delete_one(self, remote_filename):
1258- for n in range(1, globals.num_retries+1):
1259- log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
1260- try:
1261- self.container.delete_object(remote_filename)
1262- return
1263- except self.resp_exc as resperr:
1264- if n > 1 and resperr.status == 404:
1265- # We failed on a timeout, but delete succeeded on the server
1266- log.Warn("Delete of '%s' missing after retry - must have succeded earler" % remote_filename )
1267- return
1268- log.Warn("Delete of '%s' failed (attempt %s): CloudFiles returned: %s %s"
1269- % (remote_filename, n, resperr.status, resperr.reason))
1270- except Exception as e:
1271- log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
1272- % (remote_filename, n, e.__class__.__name__, str(e)))
1273- log.Debug("Backtrace of previous error: %s"
1274- % exception_traceback())
1275- time.sleep(30)
1276- log.Warn("Giving up deleting '%s' after %s attempts"
1277- % (remote_filename, globals.num_retries))
1278- raise BackendException("Error deleting '%s/%s'"
1279- % (self.container, remote_filename))
1280-
1281- def delete(self, filename_list):
1282- for file in filename_list:
1283- self.delete_one(file)
1284- log.Debug("Deleted '%s/%s'" % (self.container, file))
1285-
1286- @retry
1287- def _query_file_info(self, filename, raise_errors=False):
1288- from cloudfiles.errors import NoSuchObject
1289- try:
1290- sobject = self.container.get_object(filename)
1291- return {'size': sobject.size}
1292- except NoSuchObject:
1293- return {'size': -1}
1294- except Exception as e:
1295- log.Warn("Error querying '%s/%s': %s"
1296- "" % (self.container,
1297- filename,
1298- str(e)))
1299- if raise_errors:
1300- raise e
1301- else:
1302- return {'size': None}
1303-
1304-duplicity.backend.register_backend("cf+http", CloudFilesBackend)
1305+ # Cloud Files will return a max of 10,000 objects. We have
1306+ # to make multiple requests to get them all.
1307+ objs = self.container.list_objects()
1308+ keys = objs
1309+ while len(objs) == 10000:
1310+ objs = self.container.list_objects(marker=keys[-1])
1311+ keys += objs
1312+ return keys
1313+
1314+ def _delete(self, filename):
1315+ self.container.delete_object(filename)
1316+
1317+ def _query(self, filename):
1318+ sobject = self.container.get_object(filename)
1319+ return {'size': sobject.size}
1320
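The surviving `_list` above keeps the marker-based pagination around Cloud Files' 10,000-object listing cap. A minimal self-contained sketch of that loop, using a fake in-memory listing and a small page size in place of the real API:

```python
# Marker pagination: re-query with the last key seen until a short page
# signals the end. PAGE stands in for Cloud Files' 10,000-object cap;
# DATA and list_objects() are stand-ins, not the real container API.
PAGE = 3
DATA = ["a", "b", "c", "d", "e"]


def list_objects(marker=None):
    """Fake container listing: up to PAGE names after `marker`."""
    start = DATA.index(marker) + 1 if marker else 0
    return DATA[start:start + PAGE]


def list_all():
    objs = list_objects()
    keys = objs
    while len(objs) == PAGE:            # a full page may mean more remain
        objs = list_objects(marker=keys[-1])
        keys += objs
    return keys
```

Note the loop only stops on a short page, so exactly one extra request is made when the total is a multiple of the page size.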
1321=== modified file 'duplicity/backends/_cf_pyrax.py'
1322--- duplicity/backends/_cf_pyrax.py 2014-04-17 22:03:10 +0000
1323+++ duplicity/backends/_cf_pyrax.py 2014-04-28 02:49:55 +0000
1324@@ -19,14 +19,11 @@
1325 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
1326
1327 import os
1328-import time
1329
1330 import duplicity.backend
1331-from duplicity import globals
1332 from duplicity import log
1333-from duplicity.errors import * # @UnusedWildImport
1334-from duplicity.util import exception_traceback
1335-from duplicity.backend import retry
1336+from duplicity.errors import BackendException
1337+
1338
1339 class PyraxBackend(duplicity.backend.Backend):
1340 """
1341@@ -69,126 +66,39 @@
1342
1343 self.client_exc = pyrax.exceptions.ClientException
1344 self.nso_exc = pyrax.exceptions.NoSuchObject
1345- self.cloudfiles = pyrax.cloudfiles
1346 self.container = pyrax.cloudfiles.create_container(container)
1347
1348- def put(self, source_path, remote_filename = None):
1349- if not remote_filename:
1350- remote_filename = source_path.get_filename()
1351-
1352- for n in range(1, globals.num_retries + 1):
1353- log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
1354- try:
1355- self.container.upload_file(source_path.name, remote_filename)
1356- return
1357- except self.client_exc as error:
1358- log.Warn("Upload of '%s' failed (attempt %d): pyrax returned: %s %s"
1359- % (remote_filename, n, error.__class__.__name__, error.message))
1360- except Exception as e:
1361- log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
1362- % (remote_filename, n, e.__class__.__name__, str(e)))
1363- log.Debug("Backtrace of previous error: %s"
1364- % exception_traceback())
1365- time.sleep(30)
1366- log.Warn("Giving up uploading '%s' after %s attempts"
1367- % (remote_filename, globals.num_retries))
1368- raise BackendException("Error uploading '%s'" % remote_filename)
1369-
1370- def get(self, remote_filename, local_path):
1371- for n in range(1, globals.num_retries + 1):
1372- log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
1373- try:
1374- sobject = self.container.get_object(remote_filename)
1375- f = open(local_path.name, 'w')
1376- f.write(sobject.get())
1377- local_path.setdata()
1378- return
1379- except self.nso_exc:
1380- return
1381- except self.client_exc as resperr:
1382- log.Warn("Download of '%s' failed (attempt %s): pyrax returned: %s %s"
1383- % (remote_filename, n, resperr.__class__.__name__, resperr.message))
1384- except Exception as e:
1385- log.Warn("Download of '%s' failed (attempt %s): %s: %s"
1386- % (remote_filename, n, e.__class__.__name__, str(e)))
1387- log.Debug("Backtrace of previous error: %s"
1388- % exception_traceback())
1389- time.sleep(30)
1390- log.Warn("Giving up downloading '%s' after %s attempts"
1391- % (remote_filename, globals.num_retries))
1392- raise BackendException("Error downloading '%s/%s'"
1393- % (self.container, remote_filename))
1394+ def _error_code(self, operation, e):
1395+ if isinstance(e, self.nso_exc):
1396+ return log.ErrorCode.backend_not_found
1397+ elif isinstance(e, self.client_exc):
1398+ if e.status == 404:
1399+ return log.ErrorCode.backend_not_found
1400+ elif hasattr(e, 'http_status'):
1401+ if e.http_status == 404:
1402+ return log.ErrorCode.backend_not_found
1403+
1404+ def _put(self, source_path, remote_filename):
1405+ self.container.upload_file(source_path.name, remote_filename)
1406+
1407+ def _get(self, remote_filename, local_path):
1408+ sobject = self.container.get_object(remote_filename)
1409+ with open(local_path.name, 'wb') as f:
1410+ f.write(sobject.get())
1411
1412 def _list(self):
1413- for n in range(1, globals.num_retries + 1):
1414- log.Info("Listing '%s'" % (self.container))
1415- try:
1416- # Cloud Files will return a max of 10,000 objects. We have
1417- # to make multiple requests to get them all.
1418- objs = self.container.get_object_names()
1419- keys = objs
1420- while len(objs) == 10000:
1421- objs = self.container.get_object_names(marker = keys[-1])
1422- keys += objs
1423- return keys
1424- except self.client_exc as resperr:
1425- log.Warn("Listing of '%s' failed (attempt %s): pyrax returned: %s %s"
1426- % (self.container, n, resperr.__class__.__name__, resperr.message))
1427- except Exception as e:
1428- log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
1429- % (self.container, n, e.__class__.__name__, str(e)))
1430- log.Debug("Backtrace of previous error: %s"
1431- % exception_traceback())
1432- time.sleep(30)
1433- log.Warn("Giving up listing of '%s' after %s attempts"
1434- % (self.container, globals.num_retries))
1435- raise BackendException("Error listing '%s'"
1436- % (self.container))
1437-
1438- def delete_one(self, remote_filename):
1439- for n in range(1, globals.num_retries + 1):
1440- log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
1441- try:
1442- self.container.delete_object(remote_filename)
1443- return
1444- except self.client_exc as resperr:
1445- if n > 1 and resperr.status == 404:
1446- # We failed on a timeout, but delete succeeded on the server
1447- log.Warn("Delete of '%s' missing after retry - must have succeded earler" % remote_filename)
1448- return
1449- log.Warn("Delete of '%s' failed (attempt %s): pyrax returned: %s %s"
1450- % (remote_filename, n, resperr.__class__.__name__, resperr.message))
1451- except Exception as e:
1452- log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
1453- % (remote_filename, n, e.__class__.__name__, str(e)))
1454- log.Debug("Backtrace of previous error: %s"
1455- % exception_traceback())
1456- time.sleep(30)
1457- log.Warn("Giving up deleting '%s' after %s attempts"
1458- % (remote_filename, globals.num_retries))
1459- raise BackendException("Error deleting '%s/%s'"
1460- % (self.container, remote_filename))
1461-
1462- def delete(self, filename_list):
1463- for file_ in filename_list:
1464- self.delete_one(file_)
1465- log.Debug("Deleted '%s/%s'" % (self.container, file_))
1466-
1467- @retry
1468- def _query_file_info(self, filename, raise_errors = False):
1469- try:
1470- sobject = self.container.get_object(filename)
1471- return {'size': sobject.total_bytes}
1472- except self.nso_exc:
1473- return {'size': -1}
1474- except Exception as e:
1475- log.Warn("Error querying '%s/%s': %s"
1476- "" % (self.container,
1477- filename,
1478- str(e)))
1479- if raise_errors:
1480- raise e
1481- else:
1482- return {'size': None}
1483-
1484-duplicity.backend.register_backend("cf+http", PyraxBackend)
1485+ # Cloud Files will return a max of 10,000 objects. We have
1486+ # to make multiple requests to get them all.
1487+ objs = self.container.get_object_names()
1488+ keys = objs
1489+ while len(objs) == 10000:
1490+ objs = self.container.get_object_names(marker = keys[-1])
1491+ keys += objs
1492+ return keys
1493+
1494+ def _delete(self, filename):
1495+ self.container.delete_object(filename)
1496+
1497+ def _query(self, filename):
1498+ sobject = self.container.get_object(filename)
1499+ return {'size': sobject.total_bytes}
1500
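Both Cloud Files backends now expose an `_error_code` hook that translates library-specific exceptions into a generic "not found" code, so the shared backend code can distinguish a missing object from a transient failure. A hedged sketch of that mapping, with stand-in exception classes rather than the real pyrax/cloudfiles types:

```python
# Stand-ins for the storage library's exceptions (illustrative only).
class NoSuchObject(Exception):
    pass


class ClientException(Exception):
    def __init__(self, status):
        self.status = status


NOT_FOUND = "backend_not_found"


def error_code(e):
    """Map a library exception to a generic code, or None if unknown."""
    if isinstance(e, NoSuchObject):
        return NOT_FOUND
    elif isinstance(e, ClientException) and e.status == 404:
        return NOT_FOUND
    return None  # unknown: let the generic retry/error path handle it
```

Returning `None` for anything unrecognized keeps the default error handling in charge, which matches how the hooks above fall through without a return value.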
1501=== modified file 'duplicity/backends/_ssh_paramiko.py'
1502--- duplicity/backends/_ssh_paramiko.py 2014-04-17 20:50:57 +0000
1503+++ duplicity/backends/_ssh_paramiko.py 2014-04-28 02:49:55 +0000
1504@@ -28,7 +28,6 @@
1505 import os
1506 import errno
1507 import sys
1508-import time
1509 import getpass
1510 import logging
1511 from binascii import hexlify
1512@@ -36,7 +35,7 @@
1513 import duplicity.backend
1514 from duplicity import globals
1515 from duplicity import log
1516-from duplicity.errors import *
1517+from duplicity.errors import BackendException
1518
1519 read_blocksize=65635 # for doing scp retrievals, where we need to read ourselves
1520
1521@@ -232,7 +231,6 @@
1522 except Exception as e:
1523 raise BackendException("sftp negotiation failed: %s" % e)
1524
1525-
1526 # move to the appropriate directory, possibly after creating it and its parents
1527 dirs = self.remote_dir.split(os.sep)
1528 if len(dirs) > 0:
1529@@ -257,157 +255,91 @@
1530 except Exception as e:
1531 raise BackendException("sftp chdir to %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
1532
1533- def put(self, source_path, remote_filename = None):
1534- """transfers a single file to the remote side.
1535- In scp mode unavoidable quoting issues will make this fail if the remote directory or file name
1536- contain single quotes."""
1537- if not remote_filename:
1538- remote_filename = source_path.get_filename()
1539-
1540- for n in range(1, globals.num_retries+1):
1541- if n > 1:
1542- # sleep before retry
1543- time.sleep(self.retry_delay)
1544- try:
1545- if (globals.use_scp):
1546- f=file(source_path.name,'rb')
1547- try:
1548- chan=self.client.get_transport().open_session()
1549- chan.settimeout(globals.timeout)
1550- chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory
1551- except Exception as e:
1552- raise BackendException("scp execution failed: %s" % e)
1553- # scp protocol: one 0x0 after startup, one after the Create meta, one after saving
1554- # if there's a problem: 0x1 or 0x02 and some error text
1555- response=chan.recv(1)
1556- if (response!="\0"):
1557- raise BackendException("scp remote error: %s" % chan.recv(-1))
1558- fstat=os.stat(source_path.name)
1559- chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename))
1560- response=chan.recv(1)
1561- if (response!="\0"):
1562- raise BackendException("scp remote error: %s" % chan.recv(-1))
1563- chan.sendall(f.read()+'\0')
1564- f.close()
1565- response=chan.recv(1)
1566- if (response!="\0"):
1567- raise BackendException("scp remote error: %s" % chan.recv(-1))
1568- chan.close()
1569- return
1570- else:
1571- try:
1572- self.sftp.put(source_path.name,remote_filename)
1573- return
1574- except Exception as e:
1575- raise BackendException("sftp put of %s (as %s) failed: %s" % (source_path.name,remote_filename,e))
1576- except Exception as e:
1577- log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
1578- raise BackendException("Giving up trying to upload '%s' after %d attempts" % (remote_filename,n))
1579-
1580-
1581- def get(self, remote_filename, local_path):
1582- """retrieves a single file from the remote side.
1583- In scp mode unavoidable quoting issues will make this fail if the remote directory or file names
1584- contain single quotes."""
1585-
1586- for n in range(1, globals.num_retries+1):
1587- if n > 1:
1588- # sleep before retry
1589- time.sleep(self.retry_delay)
1590- try:
1591- if (globals.use_scp):
1592- try:
1593- chan=self.client.get_transport().open_session()
1594- chan.settimeout(globals.timeout)
1595- chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename))
1596- except Exception as e:
1597- raise BackendException("scp execution failed: %s" % e)
1598-
1599- chan.send('\0') # overall ready indicator
1600- msg=chan.recv(-1)
1601- m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg)
1602- if (m==None or m.group(3)!=remote_filename):
1603- raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg))
1604- chan.recv(1) # dispose of the newline trailing the C message
1605-
1606- size=int(m.group(2))
1607- togo=size
1608- f=file(local_path.name,'wb')
1609- chan.send('\0') # ready for data
1610- try:
1611- while togo>0:
1612- if togo>read_blocksize:
1613- blocksize = read_blocksize
1614- else:
1615- blocksize = togo
1616- buff=chan.recv(blocksize)
1617- f.write(buff)
1618- togo-=len(buff)
1619- except Exception as e:
1620- raise BackendException("scp get %s failed: %s" % (remote_filename,e))
1621-
1622- msg=chan.recv(1) # check the final status
1623- if msg!='\0':
1624- raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1)))
1625- f.close()
1626- chan.send('\0') # send final done indicator
1627- chan.close()
1628- return
1629- else:
1630- try:
1631- self.sftp.get(remote_filename,local_path.name)
1632- return
1633- except Exception as e:
1634- raise BackendException("sftp get of %s (to %s) failed: %s" % (remote_filename,local_path.name,e))
1635- local_path.setdata()
1636- except Exception as e:
1637- log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
1638- raise BackendException("Giving up trying to download '%s' after %d attempts" % (remote_filename,n))
1639+ def _put(self, source_path, remote_filename):
1640+ if globals.use_scp:
1641+ f=file(source_path.name,'rb')
1642+ try:
1643+ chan=self.client.get_transport().open_session()
1644+ chan.settimeout(globals.timeout)
1645+ chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory
1646+ except Exception as e:
1647+ raise BackendException("scp execution failed: %s" % e)
1648+ # scp protocol: one 0x0 after startup, one after the Create meta, one after saving
1649+ # if there's a problem: 0x1 or 0x02 and some error text
1650+ response=chan.recv(1)
1651+ if (response!="\0"):
1652+ raise BackendException("scp remote error: %s" % chan.recv(-1))
1653+ fstat=os.stat(source_path.name)
1654+ chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename))
1655+ response=chan.recv(1)
1656+ if (response!="\0"):
1657+ raise BackendException("scp remote error: %s" % chan.recv(-1))
1658+ chan.sendall(f.read()+'\0')
1659+ f.close()
1660+ response=chan.recv(1)
1661+ if (response!="\0"):
1662+ raise BackendException("scp remote error: %s" % chan.recv(-1))
1663+ chan.close()
1664+ else:
1665+ self.sftp.put(source_path.name,remote_filename)
1666+
1667+ def _get(self, remote_filename, local_path):
1668+ if globals.use_scp:
1669+ try:
1670+ chan=self.client.get_transport().open_session()
1671+ chan.settimeout(globals.timeout)
1672+ chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename))
1673+ except Exception as e:
1674+ raise BackendException("scp execution failed: %s" % e)
1675+
1676+ chan.send('\0') # overall ready indicator
1677+ msg=chan.recv(-1)
1678+ m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg)
1679+ if (m==None or m.group(3)!=remote_filename):
1680+ raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg))
1681+ chan.recv(1) # dispose of the newline trailing the C message
1682+
1683+ size=int(m.group(2))
1684+ togo=size
1685+ f=file(local_path.name,'wb')
1686+ chan.send('\0') # ready for data
1687+ try:
1688+ while togo>0:
1689+ if togo>read_blocksize:
1690+ blocksize = read_blocksize
1691+ else:
1692+ blocksize = togo
1693+ buff=chan.recv(blocksize)
1694+ f.write(buff)
1695+ togo-=len(buff)
1696+ except Exception as e:
1697+ raise BackendException("scp get %s failed: %s" % (remote_filename,e))
1698+
1699+ msg=chan.recv(1) # check the final status
1700+ if msg!='\0':
1701+ raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1)))
1702+ f.close()
1703+ chan.send('\0') # send final done indicator
1704+ chan.close()
1705+ else:
1706+ self.sftp.get(remote_filename,local_path.name)
1707
1708 def _list(self):
1709- """lists the contents of the one-and-only duplicity dir on the remote side.
1710- In scp mode unavoidable quoting issues will make this fail if the directory name
1711- contains single quotes."""
1712- for n in range(1, globals.num_retries+1):
1713- if n > 1:
1714- # sleep before retry
1715- time.sleep(self.retry_delay)
1716- try:
1717- if (globals.use_scp):
1718- output=self.runremote("ls -1 '%s'" % self.remote_dir,False,"scp dir listing ")
1719- return output.splitlines()
1720- else:
1721- try:
1722- return self.sftp.listdir()
1723- except Exception as e:
1724- raise BackendException("sftp listing of %s failed: %s" % (self.sftp.getcwd(),e))
1725- except Exception as e:
1726- log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
1727- raise BackendException("Giving up trying to list '%s' after %d attempts" % (self.remote_dir,n))
1728-
1729- def delete(self, filename_list):
1730- """deletes all files in the list on the remote side. In scp mode unavoidable quoting issues
1731- will cause failures if filenames containing single quotes are encountered."""
1732- for fn in filename_list:
1733- # Try to delete each file several times before giving up completely.
1734- for n in range(1, globals.num_retries+1):
1735- try:
1736- if (globals.use_scp):
1737- self.runremote("rm '%s/%s'" % (self.remote_dir,fn),False,"scp rm ")
1738- else:
1739- try:
1740- self.sftp.remove(fn)
1741- except Exception as e:
1742- raise BackendException("sftp rm %s failed: %s" % (fn,e))
1743-
1744- # If we get here, we deleted this file successfully. Move on to the next one.
1745- break
1746- except Exception as e:
1747- if n == globals.num_retries:
1748- log.FatalError(str(e), log.ErrorCode.backend_error)
1749- else:
1750- log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
1751- time.sleep(self.retry_delay)
1752+ # In scp mode unavoidable quoting issues will make this fail if the
1753+ # directory name contains single quotes.
1754+ if globals.use_scp:
1755+ output = self.runremote("ls -1 '%s'" % self.remote_dir, False, "scp dir listing ")
1756+ return output.splitlines()
1757+ else:
1758+ return self.sftp.listdir()
1759+
1760+ def _delete(self, filename):
1761+ # In scp mode unavoidable quoting issues will cause failures if
1762+ # filenames containing single quotes are encountered.
1763+ if globals.use_scp:
1764+ self.runremote("rm '%s/%s'" % (self.remote_dir, filename), False, "scp rm ")
1765+ else:
1766+ self.sftp.remove(filename)
1767
1768 def runremote(self,cmd,ignoreexitcode=False,errorprefix=""):
1769 """small convenience function that opens a shell channel, runs remote command and returns
1770@@ -438,7 +370,3 @@
1771 raise BackendException("could not load '%s', maybe corrupt?" % (file))
1772
1773 return sshconfig.lookup(host)
1774-
1775-duplicity.backend.register_backend("sftp", SSHParamikoBackend)
1776-duplicity.backend.register_backend("scp", SSHParamikoBackend)
1777-duplicity.backend.register_backend("ssh", SSHParamikoBackend)
1778
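The paramiko `_put`/`_get` above speak the raw scp protocol: the sender emits a C-record header (`C<mode> <size> <name>\n`), then the file bytes, then a NUL, and the receiver acknowledges each step with a single NUL byte (0x01/0x02 plus text on error). A small sketch of parsing that header with the same regex the `_get` path uses; this is a toy parser only, since the real exchange runs over an SSH channel:

```python
import re


def parse_c_record(msg):
    """Parse an scp C-record header into (mode, size, name), or None."""
    m = re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$", msg)
    if m is None:
        return None
    return (m.group(1), int(m.group(2)), m.group(3))
```

The size field is what drives the `togo` read loop in `_get`, and the name check (`m.group(3) != remote_filename`) is how the backend detects a mismatched response.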
1779=== modified file 'duplicity/backends/_ssh_pexpect.py'
1780--- duplicity/backends/_ssh_pexpect.py 2014-04-25 23:20:12 +0000
1781+++ duplicity/backends/_ssh_pexpect.py 2014-04-28 02:49:55 +0000
1782@@ -24,18 +24,20 @@
1783 # have the same syntax. Also these strings will be executed by the
1784 # shell, so shouldn't have strange characters in them.
1785
1786+from future_builtins import map
1787+
1788 import re
1789 import string
1790-import time
1791 import os
1792
1793 import duplicity.backend
1794 from duplicity import globals
1795 from duplicity import log
1796-from duplicity.errors import * #@UnusedWildImport
1797+from duplicity.errors import BackendException
1798
1799 class SSHPExpectBackend(duplicity.backend.Backend):
1800- """This backend copies files using scp. List not supported"""
1801+ """This backend copies files using scp. List not supported. Filenames
1802+ should not need any quoting or this will break."""
1803 def __init__(self, parsed_url):
1804 """scpBackend initializer"""
1805 duplicity.backend.Backend.__init__(self, parsed_url)
1806@@ -76,74 +78,67 @@
1807 def run_scp_command(self, commandline):
1808 """ Run an scp command, responding to password prompts """
1809 import pexpect
1810- for n in range(1, globals.num_retries+1):
1811- if n > 1:
1812- # sleep before retry
1813- time.sleep(self.retry_delay)
1814- log.Info("Running '%s' (attempt #%d)" % (commandline, n))
1815- child = pexpect.spawn(commandline, timeout = None)
1816- if globals.ssh_askpass:
1817- state = "authorizing"
1818- else:
1819- state = "copying"
1820- while 1:
1821- if state == "authorizing":
1822- match = child.expect([pexpect.EOF,
1823- "(?i)timeout, server not responding",
1824- "(?i)pass(word|phrase .*):",
1825- "(?i)permission denied",
1826- "authenticity"])
1827- log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
1828- if match == 0:
1829- log.Warn("Failed to authenticate")
1830- break
1831- elif match == 1:
1832- log.Warn("Timeout waiting to authenticate")
1833- break
1834- elif match == 2:
1835- child.sendline(self.password)
1836- state = "copying"
1837- elif match == 3:
1838- log.Warn("Invalid SSH password")
1839- break
1840- elif match == 4:
1841- log.Warn("Remote host authentication failed (missing known_hosts entry?)")
1842- break
1843- elif state == "copying":
1844- match = child.expect([pexpect.EOF,
1845- "(?i)timeout, server not responding",
1846- "stalled",
1847- "authenticity",
1848- "ETA"])
1849- log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
1850- if match == 0:
1851- break
1852- elif match == 1:
1853- log.Warn("Timeout waiting for response")
1854- break
1855- elif match == 2:
1856- state = "stalled"
1857- elif match == 3:
1858- log.Warn("Remote host authentication failed (missing known_hosts entry?)")
1859- break
1860- elif state == "stalled":
1861- match = child.expect([pexpect.EOF,
1862- "(?i)timeout, server not responding",
1863- "ETA"])
1864- log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
1865- if match == 0:
1866- break
1867- elif match == 1:
1868- log.Warn("Stalled for too long, aborted copy")
1869- break
1870- elif match == 2:
1871- state = "copying"
1872- child.close(force = True)
1873- if child.exitstatus == 0:
1874- return
1875- log.Warn("Running '%s' failed (attempt #%d)" % (commandline, n))
1876- log.Warn("Giving up trying to execute '%s' after %d attempts" % (commandline, globals.num_retries))
1877- raise BackendException("Error running '%s'" % commandline)
1878+ log.Info("Running '%s'" % commandline)
1879+ child = pexpect.spawn(commandline, timeout = None)
1880+ if globals.ssh_askpass:
1881+ state = "authorizing"
1882+ else:
1883+ state = "copying"
1884+ while 1:
1885+ if state == "authorizing":
1886+ match = child.expect([pexpect.EOF,
1887+ "(?i)timeout, server not responding",
1888+ "(?i)pass(word|phrase .*):",
1889+ "(?i)permission denied",
1890+ "authenticity"])
1891+ log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
1892+ if match == 0:
1893+ log.Warn("Failed to authenticate")
1894+ break
1895+ elif match == 1:
1896+ log.Warn("Timeout waiting to authenticate")
1897+ break
1898+ elif match == 2:
1899+ child.sendline(self.password)
1900+ state = "copying"
1901+ elif match == 3:
1902+ log.Warn("Invalid SSH password")
1903+ break
1904+ elif match == 4:
1905+ log.Warn("Remote host authentication failed (missing known_hosts entry?)")
1906+ break
1907+ elif state == "copying":
1908+ match = child.expect([pexpect.EOF,
1909+ "(?i)timeout, server not responding",
1910+ "stalled",
1911+ "authenticity",
1912+ "ETA"])
1913+ log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
1914+ if match == 0:
1915+ break
1916+ elif match == 1:
1917+ log.Warn("Timeout waiting for response")
1918+ break
1919+ elif match == 2:
1920+ state = "stalled"
1921+ elif match == 3:
1922+ log.Warn("Remote host authentication failed (missing known_hosts entry?)")
1923+ break
1924+ elif state == "stalled":
1925+ match = child.expect([pexpect.EOF,
1926+ "(?i)timeout, server not responding",
1927+ "ETA"])
1928+ log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
1929+ if match == 0:
1930+ break
1931+ elif match == 1:
1932+ log.Warn("Stalled for too long, aborted copy")
1933+ break
1934+ elif match == 2:
1935+ state = "copying"
1936+ child.close(force = True)
1937+ if child.exitstatus != 0:
1938+ raise BackendException("Error running '%s'" % commandline)
1939
1940 def run_sftp_command(self, commandline, commands):
1941 """ Run an sftp command, responding to password prompts, passing commands from list """
1942@@ -160,76 +155,69 @@
1943 "Couldn't delete file",
1944 "open(.*): Failure"]
1945 max_response_len = max([len(p) for p in responses[1:]])
1946- for n in range(1, globals.num_retries+1):
1947- if n > 1:
1948- # sleep before retry
1949- time.sleep(self.retry_delay)
1950- log.Info("Running '%s' (attempt #%d)" % (commandline, n))
1951- child = pexpect.spawn(commandline, timeout = None, maxread=maxread)
1952- cmdloc = 0
1953- passprompt = 0
1954- while 1:
1955- msg = ""
1956- match = child.expect(responses,
1957- searchwindowsize=maxread+max_response_len)
1958- log.Debug("State = sftp, Before = '%s'" % (child.before.strip()))
1959- if match == 0:
1960- break
1961- elif match == 1:
1962- msg = "Timeout waiting for response"
1963- break
1964- if match == 2:
1965- if cmdloc < len(commands):
1966- command = commands[cmdloc]
1967- log.Info("sftp command: '%s'" % (command,))
1968- child.sendline(command)
1969- cmdloc += 1
1970- else:
1971- command = 'quit'
1972- child.sendline(command)
1973- res = child.before
1974- elif match == 3:
1975- passprompt += 1
1976- child.sendline(self.password)
1977- if (passprompt>1):
1978- raise BackendException("Invalid SSH password.")
1979- elif match == 4:
1980- if not child.before.strip().startswith("mkdir"):
1981- msg = "Permission denied"
1982- break
1983- elif match == 5:
1984- msg = "Host key authenticity could not be verified (missing known_hosts entry?)"
1985- break
1986- elif match == 6:
1987- if not child.before.strip().startswith("rm"):
1988- msg = "Remote file or directory does not exist in command='%s'" % (commandline,)
1989- break
1990- elif match == 7:
1991- if not child.before.strip().startswith("Removing"):
1992- msg = "Could not delete file in command='%s'" % (commandline,)
1993- break;
1994- elif match == 8:
1995+ log.Info("Running '%s'" % (commandline))
1996+ child = pexpect.spawn(commandline, timeout = None, maxread=maxread)
1997+ cmdloc = 0
1998+ passprompt = 0
1999+ while 1:
2000+ msg = ""
2001+ match = child.expect(responses,
2002+ searchwindowsize=maxread+max_response_len)
2003+ log.Debug("State = sftp, Before = '%s'" % (child.before.strip()))
2004+ if match == 0:
2005+ break
2006+ elif match == 1:
2007+ msg = "Timeout waiting for response"
2008+ break
2009+ if match == 2:
2010+ if cmdloc < len(commands):
2011+ command = commands[cmdloc]
2012+ log.Info("sftp command: '%s'" % (command,))
2013+ child.sendline(command)
2014+ cmdloc += 1
2015+ else:
2016+ command = 'quit'
2017+ child.sendline(command)
2018+ res = child.before
2019+ elif match == 3:
2020+ passprompt += 1
2021+ child.sendline(self.password)
2022+ if (passprompt>1):
2023+ raise BackendException("Invalid SSH password.")
2024+ elif match == 4:
2025+ if not child.before.strip().startswith("mkdir"):
2026+ msg = "Permission denied"
2027+ break
2028+ elif match == 5:
2029+ msg = "Host key authenticity could not be verified (missing known_hosts entry?)"
2030+ break
2031+ elif match == 6:
2032+ if not child.before.strip().startswith("rm"):
2033+ msg = "Remote file or directory does not exist in command='%s'" % (commandline,)
2034+ break
2035+ elif match == 7:
2036+ if not child.before.strip().startswith("Removing"):
2037 msg = "Could not delete file in command='%s'" % (commandline,)
2038- break
2039- elif match == 9:
2040- msg = "Could not open file in command='%s'" % (commandline,)
2041- break
2042- child.close(force = True)
2043- if child.exitstatus == 0:
2044- return res
2045- log.Warn("Running '%s' with commands:\n %s\n failed (attempt #%d): %s" % (commandline, "\n ".join(commands), n, msg))
2046- raise BackendException("Giving up trying to execute '%s' with commands:\n %s\n after %d attempts" % (commandline, "\n ".join(commands), globals.num_retries))
2047+ break;
2048+ elif match == 8:
2049+ msg = "Could not delete file in command='%s'" % (commandline,)
2050+ break
2051+ elif match == 9:
2052+ msg = "Could not open file in command='%s'" % (commandline,)
2053+ break
2054+ child.close(force = True)
2055+ if child.exitstatus == 0:
2056+ return res
2057+ else:
2058+ raise BackendException("Error running '%s': %s" % (commandline, msg))
2059
2060- def put(self, source_path, remote_filename = None):
2061+ def _put(self, source_path, remote_filename):
2062 if globals.use_scp:
2063- self.put_scp(source_path, remote_filename = remote_filename)
2064+ self.put_scp(source_path, remote_filename)
2065 else:
2066- self.put_sftp(source_path, remote_filename = remote_filename)
2067+ self.put_sftp(source_path, remote_filename)
2068
2069- def put_sftp(self, source_path, remote_filename = None):
2070- """Use sftp to copy source_dir/filename to remote computer"""
2071- if not remote_filename:
2072- remote_filename = source_path.get_filename()
2073+ def put_sftp(self, source_path, remote_filename):
2074 commands = ["put \"%s\" \"%s.%s.part\"" %
2075 (source_path.name, self.remote_prefix, remote_filename),
2076 "rename \"%s.%s.part\" \"%s%s\"" %
2077@@ -239,53 +227,36 @@
2078 self.host_string))
2079 self.run_sftp_command(commandline, commands)
2080
2081- def put_scp(self, source_path, remote_filename = None):
2082- """Use scp to copy source_dir/filename to remote computer"""
2083- if not remote_filename:
2084- remote_filename = source_path.get_filename()
2085+ def put_scp(self, source_path, remote_filename):
2086 commandline = "%s %s %s %s:%s%s" % \
2087 (self.scp_command, globals.ssh_options, source_path.name, self.host_string,
2088 self.remote_prefix, remote_filename)
2089 self.run_scp_command(commandline)
2090
2091- def get(self, remote_filename, local_path):
2092+ def _get(self, remote_filename, local_path):
2093 if globals.use_scp:
2094 self.get_scp(remote_filename, local_path)
2095 else:
2096 self.get_sftp(remote_filename, local_path)
2097
2098 def get_sftp(self, remote_filename, local_path):
2099- """Use sftp to get a remote file"""
2100 commands = ["get \"%s%s\" \"%s\"" %
2101 (self.remote_prefix, remote_filename, local_path.name)]
2102 commandline = ("%s %s %s" % (self.sftp_command,
2103 globals.ssh_options,
2104 self.host_string))
2105 self.run_sftp_command(commandline, commands)
2106- local_path.setdata()
2107- if not local_path.exists():
2108- raise BackendException("File %s not found locally after get "
2109- "from backend" % local_path.name)
2110
2111 def get_scp(self, remote_filename, local_path):
2112- """Use scp to get a remote file"""
2113 commandline = "%s %s %s:%s%s %s" % \
2114 (self.scp_command, globals.ssh_options, self.host_string, self.remote_prefix,
2115 remote_filename, local_path.name)
2116 self.run_scp_command(commandline)
2117- local_path.setdata()
2118- if not local_path.exists():
2119- raise BackendException("File %s not found locally after get "
2120- "from backend" % local_path.name)
2121
2122 def _list(self):
2123- """
2124- List files available for scp
2125-
2126- Note that this command can get confused when dealing with
2127- files with newlines in them, as the embedded newlines cannot
2128- be distinguished from the file boundaries.
2129- """
2130+ # Note that this command can get confused when dealing with
2131+ # files with newlines in them, as the embedded newlines cannot
2132+ # be distinguished from the file boundaries.
2133 dirs = self.remote_dir.split(os.sep)
2134 if len(dirs) > 0:
2135 if not dirs[0] :
2136@@ -304,16 +275,8 @@
2137
2138 return [x for x in map(string.strip, l) if x]
2139
2140- def delete(self, filename_list):
2141- """
2142- Runs sftp rm to delete files. Files must not require quoting.
2143- """
2144+ def _delete(self, filename):
2145 commands = ["cd \"%s\"" % (self.remote_dir,)]
2146- for fn in filename_list:
2147- commands.append("rm \"%s\"" % fn)
2148+ commands.append("rm \"%s\"" % filename)
2149 commandline = ("%s %s %s" % (self.sftp_command, globals.ssh_options, self.host_string))
2150 self.run_sftp_command(commandline, commands)
2151-
2152-duplicity.backend.register_backend("ssh", SSHPExpectBackend)
2153-duplicity.backend.register_backend("scp", SSHPExpectBackend)
2154-duplicity.backend.register_backend("sftp", SSHPExpectBackend)
2155
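Reviewer note on the `_ssh_pexpect.py` hunks above: the per-call retry loop (`for n in range(1, globals.num_retries+1)` with `time.sleep(self.retry_delay)`) is removed and the method now raises `BackendException` on failure, leaving retries to the unified wrapper in `duplicity/backend.py`. A minimal sketch of that centralized-retry idea (names and signature are simplified stand-ins, not duplicity's actual API):

```python
import time

class BackendException(Exception):
    """Stand-in for duplicity.errors.BackendException."""

def retry(op, attempts=3, delay=0):
    """Call op() until it succeeds or attempts are exhausted.

    Mirrors the shape of moving retry logic out of each backend
    method and into one shared wrapper in backend.py.
    """
    for n in range(1, attempts + 1):
        try:
            return op()
        except BackendException:
            if n == attempts:
                raise  # give up after the final attempt
            time.sleep(delay)  # back off before retrying
```

With this in the base class, each backend method only has to signal failure once, instead of duplicating the loop, the sleep, and the attempt counter as the old `run_sftp_command` did.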
2156=== modified file 'duplicity/backends/botobackend.py'
2157--- duplicity/backends/botobackend.py 2014-04-17 21:54:04 +0000
2158+++ duplicity/backends/botobackend.py 2014-04-28 02:49:55 +0000
2159@@ -22,14 +22,12 @@
2160
2161 import duplicity.backend
2162 from duplicity import globals
2163-from ._boto_multi import BotoBackend as BotoMultiUploadBackend
2164-from ._boto_single import BotoBackend as BotoSingleUploadBackend
2165
2166 if globals.s3_use_multiprocessing:
2167- duplicity.backend.register_backend("gs", BotoMultiUploadBackend)
2168- duplicity.backend.register_backend("s3", BotoMultiUploadBackend)
2169- duplicity.backend.register_backend("s3+http", BotoMultiUploadBackend)
2170+ from ._boto_multi import BotoBackend
2171 else:
2172- duplicity.backend.register_backend("gs", BotoSingleUploadBackend)
2173- duplicity.backend.register_backend("s3", BotoSingleUploadBackend)
2174- duplicity.backend.register_backend("s3+http", BotoSingleUploadBackend)
2175+ from ._boto_single import BotoBackend
2176+
2177+duplicity.backend.register_backend("gs", BotoBackend)
2178+duplicity.backend.register_backend("s3", BotoBackend)
2179+duplicity.backend.register_backend("s3+http", BotoBackend)
2180
2181=== modified file 'duplicity/backends/cfbackend.py'
2182--- duplicity/backends/cfbackend.py 2014-04-17 21:54:04 +0000
2183+++ duplicity/backends/cfbackend.py 2014-04-28 02:49:55 +0000
2184@@ -18,10 +18,13 @@
2185 # along with duplicity; if not, write to the Free Software Foundation,
2186 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
2187
2188+import duplicity.backend
2189 from duplicity import globals
2190
2191 if (globals.cf_backend and
2192 globals.cf_backend.lower().strip() == 'pyrax'):
2193- from . import _cf_pyrax
2194+ from ._cf_pyrax import PyraxBackend as CFBackend
2195 else:
2196- from . import _cf_cloudfiles
2197+ from ._cf_cloudfiles import CloudFilesBackend as CFBackend
2198+
2199+duplicity.backend.register_backend("cf+http", CFBackend)
2200
2201=== modified file 'duplicity/backends/dpbxbackend.py'
2202--- duplicity/backends/dpbxbackend.py 2014-04-17 21:49:37 +0000
2203+++ duplicity/backends/dpbxbackend.py 2014-04-28 02:49:55 +0000
2204@@ -32,14 +32,10 @@
2205 from functools import reduce
2206
2207 import traceback, StringIO
2208-from exceptions import Exception
2209
2210 import duplicity.backend
2211-from duplicity import globals
2212 from duplicity import log
2213-from duplicity.errors import *
2214-from duplicity import tempdir
2215-from duplicity.backend import retry_fatal
2216+from duplicity.errors import BackendException
2217
2218
2219 # This application key is registered in my name (jno at pisem dot net).
2220@@ -76,14 +72,14 @@
2221 def wrapper(self, *args):
2222 from dropbox import rest
2223 if login_required and not self.sess.is_linked():
2224- log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)
2225+ raise BackendException("dpbx Cannot login: check your credentials", log.ErrorCode.dpbx_nologin)
2226 return
2227
2228 try:
2229 return f(self, *args)
2230 except TypeError as e:
2231 log_exception(e)
2232- log.FatalError('dpbx type error "%s"' % (e,), log.ErrorCode.backend_code_error)
2233+ raise BackendException('dpbx type error "%s"' % (e,))
2234 except rest.ErrorResponse as e:
2235 msg = e.user_error_msg or str(e)
2236 log.Error('dpbx error: %s' % (msg,), log.ErrorCode.backend_command_error)
2237@@ -165,25 +161,22 @@
2238 if not self.sess.is_linked(): # stil not logged in
2239 log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)
2240
2241- @retry_fatal
2242+ def _error_code(self, operation, e):
2243+ from dropbox import rest
2244+ if isinstance(e, rest.ErrorResponse):
2245+ if e.status == 404:
2246+ return log.ErrorCode.backend_not_found
2247+
2248 @command()
2249- def put(self, source_path, remote_filename = None):
2250- """Transfer source_path to remote_filename"""
2251- if not remote_filename:
2252- remote_filename = source_path.get_filename()
2253-
2254+ def _put(self, source_path, remote_filename):
2255 remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/'))
2256 remote_path = os.path.join(remote_dir, remote_filename).rstrip()
2257-
2258 from_file = open(source_path.name, "rb")
2259-
2260 resp = self.api_client.put_file(remote_path, from_file)
2261 log.Debug( 'dpbx,put(%s,%s): %s'%(source_path.name, remote_path, resp))
2262
2263- @retry_fatal
2264 @command()
2265- def get(self, remote_filename, local_path):
2266- """Get remote filename, saving it to local_path"""
2267+ def _get(self, remote_filename, local_path):
2268 remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
2269
2270 to_file = open( local_path.name, 'wb' )
2271@@ -196,10 +189,8 @@
2272
2273 local_path.setdata()
2274
2275- @retry_fatal
2276 @command()
2277- def _list(self,none=None):
2278- """List files in directory"""
2279+ def _list(self):
2280 # Do a long listing to avoid connection reset
2281 remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
2282 resp = self.api_client.metadata(remote_dir)
2283@@ -214,21 +205,15 @@
2284 l.append(name.encode(encoding))
2285 return l
2286
2287- @retry_fatal
2288 @command()
2289- def delete(self, filename_list):
2290- """Delete files in filename_list"""
2291- if not filename_list :
2292- log.Debug('dpbx.delete(): no op')
2293- return
2294+ def _delete(self, filename):
2295 remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
2296- for filename in filename_list:
2297- remote_name = os.path.join( remote_dir, filename )
2298- resp = self.api_client.file_delete( remote_name )
2299- log.Debug('dpbx.delete(%s): %s'%(remote_name,resp))
2300+ remote_name = os.path.join( remote_dir, filename )
2301+ resp = self.api_client.file_delete( remote_name )
2302+ log.Debug('dpbx.delete(%s): %s'%(remote_name,resp))
2303
2304 @command()
2305- def close(self):
2306+ def _close(self):
2307 """close backend session? no! just "flush" the data"""
2308 info = self.api_client.account_info()
2309 log.Debug('dpbx.close():')
2310
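The dpbx hunks above replace `@retry_fatal` plus `log.FatalError` with plain exceptions and a new `_error_code` hook that lets the base class classify a failure (here, mapping a Dropbox 404 to `backend_not_found`). A sketch of that hook pattern, with simplified stand-in names for `ErrorCode` and the Dropbox `rest.ErrorResponse`:

```python
class ErrorCode:
    # stand-ins for duplicity.log.ErrorCode values
    backend_error = 1
    backend_not_found = 2

class NotFoundError(Exception):
    """Stand-in for dropbox rest.ErrorResponse with status 404."""
    status = 404

class Backend:
    def _error_code(self, operation, e):
        return None  # subclasses may classify exceptions

    def classify(self, operation, e):
        # base class asks the subclass first, then falls back to generic
        return self._error_code(operation, e) or ErrorCode.backend_error

class DpbxLikeBackend(Backend):
    def _error_code(self, operation, e):
        if getattr(e, "status", None) == 404:
            return ErrorCode.backend_not_found
```

This keeps backend-specific exception knowledge in the backend while the shared code decides what to do with the classification.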
2311=== modified file 'duplicity/backends/ftpbackend.py'
2312--- duplicity/backends/ftpbackend.py 2014-04-25 23:20:12 +0000
2313+++ duplicity/backends/ftpbackend.py 2014-04-28 02:49:55 +0000
2314@@ -25,7 +25,6 @@
2315 import duplicity.backend
2316 from duplicity import globals
2317 from duplicity import log
2318-from duplicity.errors import * #@UnusedWildImport
2319 from duplicity import tempdir
2320
2321 class FTPBackend(duplicity.backend.Backend):
2322@@ -65,7 +64,7 @@
2323 # This squelches the "file not found" result from ncftpls when
2324 # the ftp backend looks for a collection that does not exist.
2325 # version 3.2.2 has error code 5, 1280 is some legacy value
2326- self.popen_persist_breaks[ 'ncftpls' ] = [ 5, 1280 ]
2327+ self.popen_breaks[ 'ncftpls' ] = [ 5, 1280 ]
2328
2329 # Use an explicit directory name.
2330 if self.url_string[-1] != '/':
2331@@ -88,36 +87,28 @@
2332 if parsed_url.port != None and parsed_url.port != 21:
2333 self.flags += " -P '%s'" % (parsed_url.port)
2334
2335- def put(self, source_path, remote_filename = None):
2336- """Transfer source_path to remote_filename"""
2337- if not remote_filename:
2338- remote_filename = source_path.get_filename()
2339+ def _put(self, source_path, remote_filename):
2340 remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip()
2341 commandline = "ncftpput %s -m -V -C '%s' '%s'" % \
2342 (self.flags, source_path.name, remote_path)
2343- self.run_command_persist(commandline)
2344+ self.subprocess_popen(commandline)
2345
2346- def get(self, remote_filename, local_path):
2347- """Get remote filename, saving it to local_path"""
2348+ def _get(self, remote_filename, local_path):
2349 remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
2350 commandline = "ncftpget %s -V -C '%s' '%s' '%s'" % \
2351 (self.flags, self.parsed_url.hostname, remote_path.lstrip('/'), local_path.name)
2352- self.run_command_persist(commandline)
2353- local_path.setdata()
2354+ self.subprocess_popen(commandline)
2355
2356 def _list(self):
2357- """List files in directory"""
2358 # Do a long listing to avoid connection reset
2359 commandline = "ncftpls %s -l '%s'" % (self.flags, self.url_string)
2360- l = self.popen_persist(commandline).split('\n')
2361+ _, l, _ = self.subprocess_popen(commandline)
2362 # Look for our files as the last element of a long list line
2363- return [x.split()[-1] for x in l if x and not x.startswith("total ")]
2364+ return [x.split()[-1] for x in l.split('\n') if x and not x.startswith("total ")]
2365
2366- def delete(self, filename_list):
2367- """Delete files in filename_list"""
2368- for filename in filename_list:
2369- commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \
2370- (self.flags, filename, self.url_string)
2371- self.popen_persist(commandline)
2372+ def _delete(self, filename):
2373+ commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \
2374+ (self.flags, filename, self.url_string)
2375+ self.subprocess_popen(commandline)
2376
2377 duplicity.backend.register_backend("ftp", FTPBackend)
2378
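In the `ftpbackend.py` `_list` above, `subprocess_popen` returns a `(returncode, stdout, stderr)` tuple instead of a bare string, and the filename is taken as the last whitespace field of each long-listing line, skipping the `total` header. The parsing step in isolation:

```python
def parse_long_listing(output):
    """Extract filenames from `ls -l`-style output the way the
    ncftpls -l result is parsed above: last field of each line,
    skipping blank lines and the leading 'total' line."""
    return [line.split()[-1]
            for line in output.split('\n')
            if line and not line.startswith("total ")]
```

As the diff's own comment notes for the scp backend, this last-field split misparses filenames containing whitespace, which is acceptable for duplicity's own archive names.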
2379=== modified file 'duplicity/backends/ftpsbackend.py'
2380--- duplicity/backends/ftpsbackend.py 2014-04-25 23:20:12 +0000
2381+++ duplicity/backends/ftpsbackend.py 2014-04-28 02:49:55 +0000
2382@@ -28,7 +28,6 @@
2383 import duplicity.backend
2384 from duplicity import globals
2385 from duplicity import log
2386-from duplicity.errors import *
2387 from duplicity import tempdir
2388
2389 class FTPSBackend(duplicity.backend.Backend):
2390@@ -85,42 +84,29 @@
2391 os.write(self.tempfile, "user %s %s\n" % (self.parsed_url.username, self.password))
2392 os.close(self.tempfile)
2393
2394- self.flags = "-f %s" % self.tempname
2395-
2396- def put(self, source_path, remote_filename = None):
2397- """Transfer source_path to remote_filename"""
2398- if not remote_filename:
2399- remote_filename = source_path.get_filename()
2400+ def _put(self, source_path, remote_filename):
2401 remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip()
2402 commandline = "lftp -c 'source %s;put \'%s\' -o \'%s\''" % \
2403 (self.tempname, source_path.name, remote_path)
2404- l = self.run_command_persist(commandline)
2405+ self.subprocess_popen(commandline)
2406
2407- def get(self, remote_filename, local_path):
2408- """Get remote filename, saving it to local_path"""
2409+ def _get(self, remote_filename, local_path):
2410 remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
2411 commandline = "lftp -c 'source %s;get %s -o %s'" % \
2412 (self.tempname, remote_path.lstrip('/'), local_path.name)
2413- self.run_command_persist(commandline)
2414- local_path.setdata()
2415+ self.subprocess_popen(commandline)
2416
2417 def _list(self):
2418- """List files in directory"""
2419 # Do a long listing to avoid connection reset
2420 remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
2421 commandline = "lftp -c 'source %s;ls \'%s\''" % (self.tempname, remote_dir)
2422- l = self.popen_persist(commandline).split('\n')
2423+ _, l, _ = self.subprocess_popen(commandline)
2424 # Look for our files as the last element of a long list line
2425- return [x.split()[-1] for x in l if x]
2426+ return [x.split()[-1] for x in l.split('\n') if x]
2427
2428- def delete(self, filename_list):
2429- """Delete files in filename_list"""
2430- filelist = ""
2431- for filename in filename_list:
2432- filelist += "\'%s\' " % filename
2433- if filelist.rstrip():
2434- remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
2435- commandline = "lftp -c 'source %s;cd \'%s\';rm %s'" % (self.tempname, remote_dir, filelist.rstrip())
2436- self.popen_persist(commandline)
2437+ def _delete(self, filename):
2438+ remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
2439+ commandline = "lftp -c 'source %s;cd \'%s\';rm \'%s\''" % (self.tempname, remote_dir, filename)
2440+ self.subprocess_popen(commandline)
2441
2442 duplicity.backend.register_backend("ftps", FTPSBackend)
2443
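A pattern worth calling out in the `ftpsbackend.py` hunk above (and repeated across this branch): `delete(self, filename_list)` becomes `_delete(self, filename)`, so each backend handles one file and the unified base class owns the iteration. A minimal sketch of that division of labor (class names are illustrative):

```python
class Backend:
    def delete(self, filename_list):
        # the base class loops; backends implement single-file _delete
        for filename in filename_list:
            self._delete(filename)

class RecordingBackend(Backend):
    """Toy backend that just records which files it was asked to delete."""
    def __init__(self):
        self.deleted = []

    def _delete(self, filename):
        self.deleted.append(filename)
```

One trade-off: backends like lftp that could batch several removals into one remote command now issue one command per file, in exchange for uniform per-file error handling and retries.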
2444=== modified file 'duplicity/backends/gdocsbackend.py'
2445--- duplicity/backends/gdocsbackend.py 2014-04-17 20:50:57 +0000
2446+++ duplicity/backends/gdocsbackend.py 2014-04-28 02:49:55 +0000
2447@@ -23,9 +23,7 @@
2448 import urllib
2449
2450 import duplicity.backend
2451-from duplicity.backend import retry
2452-from duplicity import log
2453-from duplicity.errors import * #@UnusedWildImport
2454+from duplicity.errors import BackendException
2455
2456
2457 class GDocsBackend(duplicity.backend.Backend):
2458@@ -53,14 +51,14 @@
2459 self.client = gdata.docs.client.DocsClient(source='duplicity $version')
2460 self.client.ssl = True
2461 self.client.http_client.debug = False
2462- self.__authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password())
2463+ self._authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password())
2464
2465 # Fetch destination folder entry (and crete hierarchy if required).
2466 folder_names = string.split(parsed_url.path[1:], '/')
2467 parent_folder = None
2468 parent_folder_id = GDocsBackend.ROOT_FOLDER_ID
2469 for folder_name in folder_names:
2470- entries = self.__fetch_entries(parent_folder_id, 'folder', folder_name)
2471+ entries = self._fetch_entries(parent_folder_id, 'folder', folder_name)
2472 if entries is not None:
2473 if len(entries) == 1:
2474 parent_folder = entries[0]
2475@@ -77,106 +75,54 @@
2476 raise BackendException("Error while fetching destination folder '%s'." % folder_name)
2477 self.folder = parent_folder
2478
2479- @retry
2480- def put(self, source_path, remote_filename=None, raise_errors=False):
2481- """Transfer source_path to remote_filename"""
2482- # Default remote file name.
2483- if not remote_filename:
2484- remote_filename = source_path.get_filename()
2485-
2486- # Upload!
2487- try:
2488- # If remote file already exists in destination folder, remove it.
2489- entries = self.__fetch_entries(self.folder.resource_id.text,
2490- GDocsBackend.BACKUP_DOCUMENT_TYPE,
2491- remote_filename)
2492- for entry in entries:
2493- self.client.delete(entry.get_edit_link().href + '?delete=true', force=True)
2494-
2495- # Set uploader instance. Note that resumable uploads are required in order to
2496- # enable uploads for all file types.
2497- # (see http://googleappsdeveloper.blogspot.com/2011/05/upload-all-file-types-to-any-google.html)
2498- file = source_path.open()
2499- uploader = gdata.client.ResumableUploader(
2500- self.client, file, GDocsBackend.BACKUP_DOCUMENT_TYPE, os.path.getsize(file.name),
2501- chunk_size=gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE,
2502- desired_class=gdata.docs.data.Resource)
2503- if uploader:
2504- # Chunked upload.
2505- entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename))
2506- uri = self.folder.get_resumable_create_media_link().href + '?convert=false'
2507- entry = uploader.UploadFile(uri, entry=entry)
2508- if not entry:
2509- self.__handle_error("Failed to upload file '%s' to remote folder '%s'"
2510- % (source_path.get_filename(), self.folder.title.text), raise_errors)
2511- else:
2512- self.__handle_error("Failed to initialize upload of file '%s' to remote folder '%s'"
2513- % (source_path.get_filename(), self.folder.title.text), raise_errors)
2514- assert not file.close()
2515- except Exception as e:
2516- self.__handle_error("Failed to upload file '%s' to remote folder '%s': %s"
2517- % (source_path.get_filename(), self.folder.title.text, str(e)), raise_errors)
2518-
2519- @retry
2520- def get(self, remote_filename, local_path, raise_errors=False):
2521- """Get remote filename, saving it to local_path"""
2522- try:
2523- entries = self.__fetch_entries(self.folder.resource_id.text,
2524- GDocsBackend.BACKUP_DOCUMENT_TYPE,
2525- remote_filename)
2526- if len(entries) == 1:
2527- entry = entries[0]
2528- self.client.DownloadResource(entry, local_path.name)
2529- local_path.setdata()
2530- return
2531- else:
2532- self.__handle_error("Failed to find file '%s' in remote folder '%s'"
2533- % (remote_filename, self.folder.title.text), raise_errors)
2534- except Exception as e:
2535- self.__handle_error("Failed to download file '%s' in remote folder '%s': %s"
2536- % (remote_filename, self.folder.title.text, str(e)), raise_errors)
2537-
2538- @retry
2539- def _list(self, raise_errors=False):
2540- """List files in folder"""
2541- try:
2542- entries = self.__fetch_entries(self.folder.resource_id.text,
2543- GDocsBackend.BACKUP_DOCUMENT_TYPE)
2544- return [entry.title.text for entry in entries]
2545- except Exception as e:
2546- self.__handle_error("Failed to fetch list of files in remote folder '%s': %s"
2547- % (self.folder.title.text, str(e)), raise_errors)
2548-
2549- @retry
2550- def delete(self, filename_list, raise_errors=False):
2551- """Delete files in filename_list"""
2552- for filename in filename_list:
2553- try:
2554- entries = self.__fetch_entries(self.folder.resource_id.text,
2555- GDocsBackend.BACKUP_DOCUMENT_TYPE,
2556- filename)
2557- if len(entries) > 0:
2558- success = True
2559- for entry in entries:
2560- if not self.client.delete(entry.get_edit_link().href + '?delete=true', force=True):
2561- success = False
2562- if not success:
2563- self.__handle_error("Failed to remove file '%s' in remote folder '%s'"
2564- % (filename, self.folder.title.text), raise_errors)
2565- else:
2566- log.Warn("Failed to fetch file '%s' in remote folder '%s'"
2567- % (filename, self.folder.title.text))
2568- except Exception as e:
2569- self.__handle_error("Failed to remove file '%s' in remote folder '%s': %s"
2570- % (filename, self.folder.title.text, str(e)), raise_errors)
2571-
2572- def __handle_error(self, message, raise_errors=True):
2573- if raise_errors:
2574- raise BackendException(message)
2575- else:
2576- log.FatalError(message, log.ErrorCode.backend_error)
2577-
2578- def __authorize(self, email, password, captcha_token=None, captcha_response=None):
2579+ def _put(self, source_path, remote_filename):
2580+ self._delete(remote_filename)
2581+
2582+ # Set uploader instance. Note that resumable uploads are required in order to
2583+ # enable uploads for all file types.
2584+ # (see http://googleappsdeveloper.blogspot.com/2011/05/upload-all-file-types-to-any-google.html)
2585+ file = source_path.open()
2586+ uploader = gdata.client.ResumableUploader(
2587+ self.client, file, GDocsBackend.BACKUP_DOCUMENT_TYPE, os.path.getsize(file.name),
2588+ chunk_size=gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE,
2589+ desired_class=gdata.docs.data.Resource)
2590+ if uploader:
2591+ # Chunked upload.
2592+ entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename))
2593+ uri = self.folder.get_resumable_create_media_link().href + '?convert=false'
2594+ entry = uploader.UploadFile(uri, entry=entry)
2595+ if not entry:
2596+ raise BackendException("Failed to upload file '%s' to remote folder '%s'"
2597+ % (source_path.get_filename(), self.folder.title.text))
2598+ else:
2599+ raise BackendException("Failed to initialize upload of file '%s' to remote folder '%s'"
2600+ % (source_path.get_filename(), self.folder.title.text))
2601+ assert not file.close()
2602+
2603+ def _get(self, remote_filename, local_path):
2604+ entries = self._fetch_entries(self.folder.resource_id.text,
2605+ GDocsBackend.BACKUP_DOCUMENT_TYPE,
2606+ remote_filename)
2607+ if len(entries) == 1:
2608+ entry = entries[0]
2609+ self.client.DownloadResource(entry, local_path.name)
2610+ else:
2611+ raise BackendException("Failed to find file '%s' in remote folder '%s'"
2612+ % (remote_filename, self.folder.title.text))
2613+
2614+ def _list(self):
2615+ entries = self._fetch_entries(self.folder.resource_id.text,
2616+ GDocsBackend.BACKUP_DOCUMENT_TYPE)
2617+ return [entry.title.text for entry in entries]
2618+
2619+ def _delete(self, filename):
2620+ entries = self._fetch_entries(self.folder.resource_id.text,
2621+ GDocsBackend.BACKUP_DOCUMENT_TYPE,
2622+ filename)
2623+ for entry in entries:
2624+ self.client.delete(entry.get_edit_link().href + '?delete=true', force=True)
2625+
2626+ def _authorize(self, email, password, captcha_token=None, captcha_response=None):
2627 try:
2628 self.client.client_login(email,
2629 password,
2630@@ -189,17 +135,15 @@
2631 answer = None
2632 while not answer:
2633 answer = raw_input('Answer to the challenge? ')
2634- self.__authorize(email, password, challenge.captcha_token, answer)
2635+ self._authorize(email, password, challenge.captcha_token, answer)
2636 except gdata.client.BadAuthentication:
2637- self.__handle_error('Invalid user credentials given. Be aware that accounts '
2638- 'that use 2-step verification require creating an application specific '
2639- 'access code for using this Duplicity backend. Follow the instrucction in '
2640- 'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 '
2641- 'and create your application-specific password to run duplicity backups.')
2642- except Exception as e:
2643- self.__handle_error('Error while authenticating client: %s.' % str(e))
2644+ raise BackendException('Invalid user credentials given. Be aware that accounts '
2645+ 'that use 2-step verification require creating an application specific '
2646+ 'access code for using this Duplicity backend. Follow the instruction in '
2647+ 'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 '
2648+ 'and create your application-specific password to run duplicity backups.')
2649
2650- def __fetch_entries(self, folder_id, type, title=None):
2651+ def _fetch_entries(self, folder_id, type, title=None):
2652 # Build URI.
2653 uri = '/feeds/default/private/full/%s/contents' % folder_id
2654 if type == 'folder':
2655@@ -211,34 +155,31 @@
2656 if title:
2657 uri += '&title=' + urllib.quote(title) + '&title-exact=true'
2658
2659- try:
2660- # Fetch entries.
2661- entries = self.client.get_all_resources(uri=uri)
2662-
2663- # When filtering by entry title, API is returning (don't know why) documents in other
2664- # folders (apart from folder_id) matching the title, so some extra filtering is required.
2665- if title:
2666- result = []
2667- for entry in entries:
2668- resource_type = entry.get_resource_type()
2669- if (not type) \
2670- or (type == 'folder' and resource_type == 'folder') \
2671- or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'):
2672-
2673- if folder_id != GDocsBackend.ROOT_FOLDER_ID:
2674- for link in entry.in_collections():
2675- folder_entry = self.client.get_entry(link.href, None, None,
2676- desired_class=gdata.docs.data.Resource)
2677- if folder_entry and (folder_entry.resource_id.text == folder_id):
2678- result.append(entry)
2679- elif len(entry.in_collections()) == 0:
2680- result.append(entry)
2681- else:
2682- result = entries
2683-
2684- # Done!
2685- return result
2686- except Exception as e:
2687- self.__handle_error('Error while fetching remote entries: %s.' % str(e))
2688+ # Fetch entries.
2689+ entries = self.client.get_all_resources(uri=uri)
2690+
2691+ # When filtering by entry title, API is returning (don't know why) documents in other
2692+ # folders (apart from folder_id) matching the title, so some extra filtering is required.
2693+ if title:
2694+ result = []
2695+ for entry in entries:
2696+ resource_type = entry.get_resource_type()
2697+ if (not type) \
2698+ or (type == 'folder' and resource_type == 'folder') \
2699+ or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'):
2700+
2701+ if folder_id != GDocsBackend.ROOT_FOLDER_ID:
2702+ for link in entry.in_collections():
2703+ folder_entry = self.client.get_entry(link.href, None, None,
2704+ desired_class=gdata.docs.data.Resource)
2705+ if folder_entry and (folder_entry.resource_id.text == folder_id):
2706+ result.append(entry)
2707+ elif len(entry.in_collections()) == 0:
2708+ result.append(entry)
2709+ else:
2710+ result = entries
2711+
2712+ # Done!
2713+ return result
2714
2715 duplicity.backend.register_backend('gdocs', GDocsBackend)
2716
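The rewritten `GDocsBackend._put` above begins by calling `self._delete(remote_filename)`, so re-uploading a file first removes any existing remote copies rather than accumulating duplicates. A toy model of that delete-then-upload shape (the real code talks to the gdata API; this in-memory store is purely illustrative):

```python
class GDocsLikeStore:
    """In-memory stand-in showing the idempotent-put shape of
    GDocsBackend._put: drop any stale remote copy, then upload."""
    def __init__(self):
        self.files = {}

    def _delete(self, name):
        # deleting a missing file is a no-op, so _put can call it blindly
        self.files.pop(name, None)

    def _put(self, data, name):
        self._delete(name)   # remove any existing duplicate first
        self.files[name] = data
```

This makes retried uploads safe: a second `_put` of the same name overwrites rather than duplicates, which matters when the centralized retry wrapper re-runs a failed transfer.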
2717=== modified file 'duplicity/backends/giobackend.py'
2718--- duplicity/backends/giobackend.py 2014-04-17 20:50:57 +0000
2719+++ duplicity/backends/giobackend.py 2014-04-28 02:49:55 +0000
2720@@ -19,18 +19,12 @@
2721 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
2722
2723 import os
2724-import types
2725 import subprocess
2726 import atexit
2727 import signal
2728-from gi.repository import Gio #@UnresolvedImport
2729-from gi.repository import GLib #@UnresolvedImport
2730
2731 import duplicity.backend
2732-from duplicity.backend import retry
2733 from duplicity import log
2734-from duplicity import util
2735-from duplicity.errors import * #@UnusedWildImport
2736
2737 def ensure_dbus():
2738 # GIO requires a dbus session bus which can start the gvfs daemons
2739@@ -46,36 +40,39 @@
2740 atexit.register(os.kill, int(parts[1]), signal.SIGTERM)
2741 os.environ[parts[0]] = parts[1]
2742
2743-class DupMountOperation(Gio.MountOperation):
2744- """A simple MountOperation that grabs the password from the environment
2745- or the user.
2746- """
2747- def __init__(self, backend):
2748- Gio.MountOperation.__init__(self)
2749- self.backend = backend
2750- self.connect('ask-password', self.ask_password_cb)
2751- self.connect('ask-question', self.ask_question_cb)
2752-
2753- def ask_password_cb(self, *args, **kwargs):
2754- self.set_password(self.backend.get_password())
2755- self.reply(Gio.MountOperationResult.HANDLED)
2756-
2757- def ask_question_cb(self, *args, **kwargs):
2758- # Obviously just always answering with the first choice is a naive
2759- # approach. But there's no easy way to allow for answering questions
2760- # in duplicity's typical run-from-cron mode with environment variables.
2761- # And only a couple gvfs backends ask questions: 'sftp' does about
2762- # new hosts and 'afc' does if the device is locked. 0 should be a
2763- # safe choice.
2764- self.set_choice(0)
2765- self.reply(Gio.MountOperationResult.HANDLED)
2766-
2767 class GIOBackend(duplicity.backend.Backend):
2768 """Use this backend when saving to a GIO URL.
2769 This is a bit of a meta-backend, in that it can handle multiple schemas.
2770 URLs look like schema://user@server/path.
2771 """
2772 def __init__(self, parsed_url):
2773+ from gi.repository import Gio #@UnresolvedImport
2774+ from gi.repository import GLib #@UnresolvedImport
2775+
2776+ class DupMountOperation(Gio.MountOperation):
2777+ """A simple MountOperation that grabs the password from the environment
2778+ or the user.
2779+ """
2780+ def __init__(self, backend):
2781+ Gio.MountOperation.__init__(self)
2782+ self.backend = backend
2783+ self.connect('ask-password', self.ask_password_cb)
2784+ self.connect('ask-question', self.ask_question_cb)
2785+
2786+ def ask_password_cb(self, *args, **kwargs):
2787+ self.set_password(self.backend.get_password())
2788+ self.reply(Gio.MountOperationResult.HANDLED)
2789+
2790+ def ask_question_cb(self, *args, **kwargs):
2791+ # Obviously just always answering with the first choice is a naive
2792+ # approach. But there's no easy way to allow for answering questions
2793+ # in duplicity's typical run-from-cron mode with environment variables.
2794+ # And only a couple gvfs backends ask questions: 'sftp' does about
2795+ # new hosts and 'afc' does if the device is locked. 0 should be a
2796+ # safe choice.
2797+ self.set_choice(0)
2798+ self.reply(Gio.MountOperationResult.HANDLED)
2799+
2800 duplicity.backend.Backend.__init__(self, parsed_url)
2801
2802 ensure_dbus()
2803@@ -86,8 +83,8 @@
2804 op = DupMountOperation(self)
2805 loop = GLib.MainLoop()
2806 self.remote_file.mount_enclosing_volume(Gio.MountMountFlags.NONE,
2807- op, None, self.done_with_mount,
2808- loop)
2809+ op, None,
2810+ self.__done_with_mount, loop)
2811 loop.run() # halt program until we're done mounting
2812
2813 # Now make the directory if it doesn't exist
2814@@ -97,7 +94,9 @@
2815 if e.code != Gio.IOErrorEnum.EXISTS:
2816 raise
2817
2818- def done_with_mount(self, fileobj, result, loop):
2819+ def __done_with_mount(self, fileobj, result, loop):
2820+ from gi.repository import Gio #@UnresolvedImport
2821+ from gi.repository import GLib #@UnresolvedImport
2822 try:
2823 fileobj.mount_enclosing_volume_finish(result)
2824 except GLib.GError as e:
2825@@ -107,97 +106,63 @@
2826 % str(e), log.ErrorCode.connection_failed)
2827 loop.quit()
2828
2829- def handle_error(self, raise_error, e, op, file1=None, file2=None):
2830- if raise_error:
2831- raise e
2832- code = log.ErrorCode.backend_error
2833+ def __copy_progress(self, *args, **kwargs):
2834+ pass
2835+
2836+ def __copy_file(self, source, target):
2837+ from gi.repository import Gio #@UnresolvedImport
2838+ source.copy(target,
2839+ Gio.FileCopyFlags.OVERWRITE | Gio.FileCopyFlags.NOFOLLOW_SYMLINKS,
2840+ None, self.__copy_progress, None)
2841+
2842+ def _error_code(self, operation, e):
2843+ from gi.repository import Gio #@UnresolvedImport
2844+ from gi.repository import GLib #@UnresolvedImport
2845 if isinstance(e, GLib.GError):
2846- if e.code == Gio.IOErrorEnum.PERMISSION_DENIED:
2847- code = log.ErrorCode.backend_permission_denied
2848+ if e.code == Gio.IOErrorEnum.FAILED and operation == 'delete':
2849+ # Sometimes delete returns a generic failure when the file is not
2850+ # found (notably, the FTP backend does this)
2851+ return log.ErrorCode.backend_not_found
2852+ elif e.code == Gio.IOErrorEnum.PERMISSION_DENIED:
2853+ return log.ErrorCode.backend_permission_denied
2854 elif e.code == Gio.IOErrorEnum.NOT_FOUND:
2855- code = log.ErrorCode.backend_not_found
2856+ return log.ErrorCode.backend_not_found
2857 elif e.code == Gio.IOErrorEnum.NO_SPACE:
2858- code = log.ErrorCode.backend_no_space
2859- extra = ' '.join([util.escape(x) for x in [file1, file2] if x])
2860- extra = ' '.join([op, extra])
2861- log.FatalError(str(e), code, extra)
2862-
2863- def copy_progress(self, *args, **kwargs):
2864- pass
2865-
2866- @retry
2867- def copy_file(self, op, source, target, raise_errors=False):
2868- log.Info(_("Writing %s") % target.get_parse_name())
2869- try:
2870- source.copy(target,
2871- Gio.FileCopyFlags.OVERWRITE | Gio.FileCopyFlags.NOFOLLOW_SYMLINKS,
2872- None, self.copy_progress, None)
2873- except Exception as e:
2874- self.handle_error(raise_errors, e, op, source.get_parse_name(),
2875- target.get_parse_name())
2876-
2877- def put(self, source_path, remote_filename = None):
2878- """Copy file to remote"""
2879- if not remote_filename:
2880- remote_filename = source_path.get_filename()
2881+ return log.ErrorCode.backend_no_space
2882+
2883+ def _put(self, source_path, remote_filename):
2884+ from gi.repository import Gio #@UnresolvedImport
2885 source_file = Gio.File.new_for_path(source_path.name)
2886 target_file = self.remote_file.get_child(remote_filename)
2887- self.copy_file('put', source_file, target_file)
2888+ self.__copy_file(source_file, target_file)
2889
2890- def get(self, filename, local_path):
2891- """Get file and put in local_path (Path object)"""
2892+ def _get(self, filename, local_path):
2893+ from gi.repository import Gio #@UnresolvedImport
2894 source_file = self.remote_file.get_child(filename)
2895 target_file = Gio.File.new_for_path(local_path.name)
2896- self.copy_file('get', source_file, target_file)
2897- local_path.setdata()
2898+ self.__copy_file(source_file, target_file)
2899
2900- @retry
2901- def _list(self, raise_errors=False):
2902- """List files in that directory"""
2903+ def _list(self):
2904+ from gi.repository import Gio #@UnresolvedImport
2905 files = []
2906- try:
2907- enum = self.remote_file.enumerate_children(Gio.FILE_ATTRIBUTE_STANDARD_NAME,
2908- Gio.FileQueryInfoFlags.NOFOLLOW_SYMLINKS,
2909- None)
2910+ enum = self.remote_file.enumerate_children(Gio.FILE_ATTRIBUTE_STANDARD_NAME,
2911+ Gio.FileQueryInfoFlags.NOFOLLOW_SYMLINKS,
2912+ None)
2913+ info = enum.next_file(None)
2914+ while info:
2915+ files.append(info.get_name())
2916 info = enum.next_file(None)
2917- while info:
2918- files.append(info.get_name())
2919- info = enum.next_file(None)
2920- except Exception as e:
2921- self.handle_error(raise_errors, e, 'list',
2922- self.remote_file.get_parse_name())
2923 return files
2924
2925- @retry
2926- def delete(self, filename_list, raise_errors=False):
2927- """Delete all files in filename list"""
2928- assert type(filename_list) is not types.StringType
2929- for filename in filename_list:
2930- target_file = self.remote_file.get_child(filename)
2931- try:
2932- target_file.delete(None)
2933- except Exception as e:
2934- if isinstance(e, GLib.GError):
2935- if e.code == Gio.IOErrorEnum.NOT_FOUND:
2936- continue
2937- self.handle_error(raise_errors, e, 'delete',
2938- target_file.get_parse_name())
2939- return
2940-
2941- @retry
2942- def _query_file_info(self, filename, raise_errors=False):
2943- """Query attributes on filename"""
2944- target_file = self.remote_file.get_child(filename)
2945- attrs = Gio.FILE_ATTRIBUTE_STANDARD_SIZE
2946- try:
2947- info = target_file.query_info(attrs, Gio.FileQueryInfoFlags.NONE,
2948- None)
2949- return {'size': info.get_size()}
2950- except Exception as e:
2951- if isinstance(e, GLib.GError):
2952- if e.code == Gio.IOErrorEnum.NOT_FOUND:
2953- return {'size': -1} # early exit, no need to retry
2954- if raise_errors:
2955- raise e
2956- else:
2957- return {'size': None}
2958+ def _delete(self, filename):
2959+ target_file = self.remote_file.get_child(filename)
2960+ target_file.delete(None)
2961+
2962+ def _query(self, filename):
2963+ from gi.repository import Gio #@UnresolvedImport
2964+ target_file = self.remote_file.get_child(filename)
2965+ info = target_file.query_info(Gio.FILE_ATTRIBUTE_STANDARD_SIZE,
2966+ Gio.FileQueryInfoFlags.NONE, None)
2967+ return {'size': info.get_size()}
2968+
2969+duplicity.backend.register_backend_prefix('gio', GIOBackend)
2970
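The giobackend rewrite above shows the core of this branch: each backend now implements only raw `_put`/`_get`/`_list`/`_delete`/`_query` hooks plus an optional `_error_code`, while the shared base class supplies retries and error translation. A minimal sketch of that contract (class and method bodies here are illustrative, not duplicity's actual base-class implementation):

```python
class BackendBase:
    """Toy stand-in for the unified backend base class."""
    RETRIES = 3

    def put(self, source, name):
        # Public wrapper: retry the raw _put hook a fixed number of times,
        # the way the unified base class wraps each backend's raw methods.
        last_err = None
        for _ in range(self.RETRIES):
            try:
                return self._put(source, name)
            except Exception as e:
                last_err = e
        raise last_err


class DictBackend(BackendBase):
    """Toy backend storing 'files' in a dict; implements only raw hooks."""
    def __init__(self):
        self.store = {}

    def _put(self, source, name):
        self.store[name] = source

    def _list(self):
        return sorted(self.store)


backend = DictBackend()
backend.put(b"volume data", "duplicity-full.vol1.difftar")
print(backend._list())  # -> ['duplicity-full.vol1.difftar']
```

This is why the diff can delete so much per-backend code: the `@retry` decorators, `handle_error` helpers, and `raise_errors` plumbing all collapse into the base class.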
2971=== modified file 'duplicity/backends/hsibackend.py'
2972--- duplicity/backends/hsibackend.py 2014-04-25 23:20:12 +0000
2973+++ duplicity/backends/hsibackend.py 2014-04-28 02:49:55 +0000
2974@@ -20,9 +20,7 @@
2975 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
2976
2977 import os
2978-
2979 import duplicity.backend
2980-from duplicity.errors import * #@UnusedWildImport
2981
2982 hsi_command = "hsi"
2983 class HSIBackend(duplicity.backend.Backend):
2984@@ -35,35 +33,23 @@
2985 else:
2986 self.remote_prefix = ""
2987
2988- def put(self, source_path, remote_filename = None):
2989- if not remote_filename:
2990- remote_filename = source_path.get_filename()
2991+ def _put(self, source_path, remote_filename):
2992 commandline = '%s "put %s : %s%s"' % (hsi_command,source_path.name,self.remote_prefix,remote_filename)
2993- try:
2994- self.run_command(commandline)
2995- except Exception:
2996- print commandline
2997+ self.subprocess_popen(commandline)
2998
2999- def get(self, remote_filename, local_path):
3000+ def _get(self, remote_filename, local_path):
3001 commandline = '%s "get %s : %s%s"' % (hsi_command, local_path.name, self.remote_prefix, remote_filename)
3002- self.run_command(commandline)
3003- local_path.setdata()
3004- if not local_path.exists():
3005- raise BackendException("File %s not found" % local_path.name)
3006+ self.subprocess_popen(commandline)
3007
3008- def list(self):
3009+ def _list(self):
3010 commandline = '%s "ls -l %s"' % (hsi_command, self.remote_dir)
3011 l = os.popen3(commandline)[2].readlines()[3:]
3012 for i in range(0,len(l)):
3013 l[i] = l[i].split()[-1]
3014 return [x for x in l if x]
3015
3016- def delete(self, filename_list):
3017- assert len(filename_list) > 0
3018- for fn in filename_list:
3019- commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, fn)
3020- self.run_command(commandline)
3021+ def _delete(self, filename):
3022+ commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, filename)
3023+ self.subprocess_popen(commandline)
3024
3025 duplicity.backend.register_backend("hsi", HSIBackend)
3026-
3027-
3028
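The hsi changes replace hand-rolled `run_command`/`print` error handling with a single `subprocess_popen` call. A hedged sketch of what such a helper does, using only the standard library (the helper name and exception type are assumptions mirroring the diff, not duplicity's code):

```python
import shlex
import subprocess


class BackendException(Exception):
    """Raised when the external command exits nonzero."""


def run_backend_command(commandline):
    # Split the shell-style command line and raise on nonzero exit status,
    # roughly what a subprocess_popen helper is expected to do.
    result = subprocess.run(shlex.split(commandline),
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise BackendException("command failed (%d): %s"
                               % (result.returncode, commandline))
    return result.stdout


print(run_backend_command("echo hello").strip())  # -> hello
```

Raising a typed exception lets the base class decide whether to retry, rather than each backend printing and swallowing failures as the old `put` did.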
3029=== modified file 'duplicity/backends/imapbackend.py'
3030--- duplicity/backends/imapbackend.py 2014-04-17 22:03:10 +0000
3031+++ duplicity/backends/imapbackend.py 2014-04-28 02:49:55 +0000
3032@@ -44,7 +44,7 @@
3033 (self.__class__.__name__, parsed_url.scheme, parsed_url.hostname, parsed_url.username))
3034
3035 # Store url for reconnection on error
3036- self._url = parsed_url
3037+ self.url = parsed_url
3038
3039 # Set the username
3040 if ( parsed_url.username is None ):
3041@@ -61,12 +61,12 @@
3042 else:
3043 password = parsed_url.password
3044
3045- self._username = username
3046- self._password = password
3047- self._resetConnection()
3048+ self.username = username
3049+ self.password = password
3050+ self.resetConnection()
3051
3052- def _resetConnection(self):
3053- parsed_url = self._url
3054+ def resetConnection(self):
3055+ parsed_url = self.url
3056 try:
3057 imap_server = os.environ['IMAP_SERVER']
3058 except KeyError:
3059@@ -74,32 +74,32 @@
3060
3061 # Try to close the connection cleanly
3062 try:
3063- self._conn.close()
3064+ self.conn.close()
3065 except Exception:
3066 pass
3067
3068 if (parsed_url.scheme == "imap"):
3069 cl = imaplib.IMAP4
3070- self._conn = cl(imap_server, 143)
3071+ self.conn = cl(imap_server, 143)
3072 elif (parsed_url.scheme == "imaps"):
3073 cl = imaplib.IMAP4_SSL
3074- self._conn = cl(imap_server, 993)
3075+ self.conn = cl(imap_server, 993)
3076
3077 log.Debug("Type of imap class: %s" % (cl.__name__))
3078 self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1)
3079
3080 # Login
3081 if (not(globals.imap_full_address)):
3082- self._conn.login(self._username, self._password)
3083- self._conn.select(globals.imap_mailbox)
3084+ self.conn.login(self.username, self.password)
3085+ self.conn.select(globals.imap_mailbox)
3086 log.Info("IMAP connected")
3087 else:
3088- self._conn.login(self._username + "@" + parsed_url.hostname, self._password)
3089- self._conn.select(globals.imap_mailbox)
3090+ self.conn.login(self.username + "@" + parsed_url.hostname, self.password)
3091+ self.conn.select(globals.imap_mailbox)
3092 log.Info("IMAP connected")
3093
3094
3095- def _prepareBody(self,f,rname):
3096+ def prepareBody(self,f,rname):
3097 mp = email.MIMEMultipart.MIMEMultipart()
3098
3099 # I am going to use the remote_dir as the From address so that
3100@@ -117,9 +117,7 @@
3101
3102 return mp.as_string()
3103
3104- def put(self, source_path, remote_filename = None):
3105- if not remote_filename:
3106- remote_filename = source_path.get_filename()
3107+ def _put(self, source_path, remote_filename):
3108 f=source_path.open("rb")
3109 allowedTimeout = globals.timeout
3110 if (allowedTimeout == 0):
3111@@ -127,12 +125,12 @@
3112 allowedTimeout = 2880
3113 while allowedTimeout > 0:
3114 try:
3115- self._conn.select(remote_filename)
3116- body=self._prepareBody(f,remote_filename)
3117+ self.conn.select(remote_filename)
3118+ body=self.prepareBody(f,remote_filename)
3119 # If we don't select the IMAP folder before
3120 # append, the message goes into the INBOX.
3121- self._conn.select(globals.imap_mailbox)
3122- self._conn.append(globals.imap_mailbox, None, None, body)
3123+ self.conn.select(globals.imap_mailbox)
3124+ self.conn.append(globals.imap_mailbox, None, None, body)
3125 break
3126 except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
3127 allowedTimeout -= 1
3128@@ -140,7 +138,7 @@
3129 time.sleep(30)
3130 while allowedTimeout > 0:
3131 try:
3132- self._resetConnection()
3133+ self.resetConnection()
3134 break
3135 except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
3136 allowedTimeout -= 1
3137@@ -149,15 +147,15 @@
3138
3139 log.Info("IMAP mail with '%s' subject stored" % remote_filename)
3140
3141- def get(self, remote_filename, local_path):
3142+ def _get(self, remote_filename, local_path):
3143 allowedTimeout = globals.timeout
3144 if (allowedTimeout == 0):
3145 # Allow a total timeout of 1 day
3146 allowedTimeout = 2880
3147 while allowedTimeout > 0:
3148 try:
3149- self._conn.select(globals.imap_mailbox)
3150- (result,list) = self._conn.search(None, 'Subject', remote_filename)
3151+ self.conn.select(globals.imap_mailbox)
3152+ (result,list) = self.conn.search(None, 'Subject', remote_filename)
3153 if result != "OK":
3154 raise Exception(list[0])
3155
3156@@ -165,7 +163,7 @@
3157 if list[0] == '':
3158 raise Exception("no mail with subject %s")
3159
3160- (result,list) = self._conn.fetch(list[0],"(RFC822)")
3161+ (result,list) = self.conn.fetch(list[0],"(RFC822)")
3162
3163 if result != "OK":
3164 raise Exception(list[0])
3165@@ -185,7 +183,7 @@
3166 time.sleep(30)
3167 while allowedTimeout > 0:
3168 try:
3169- self._resetConnection()
3170+ self.resetConnection()
3171 break
3172 except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
3173 allowedTimeout -= 1
3174@@ -199,7 +197,7 @@
3175
3176 def _list(self):
3177 ret = []
3178- (result,list) = self._conn.select(globals.imap_mailbox)
3179+ (result,list) = self.conn.select(globals.imap_mailbox)
3180 if result != "OK":
3181 raise BackendException(list[0])
3182
3183@@ -207,14 +205,14 @@
3184 # address
3185
3186 # Search returns an error if you haven't selected an IMAP folder.
3187- (result,list) = self._conn.search(None, 'FROM', self.remote_dir)
3188+ (result,list) = self.conn.search(None, 'FROM', self.remote_dir)
3189 if result!="OK":
3190 raise Exception(list[0])
3191 if list[0]=='':
3192 return ret
3193 nums=list[0].split(" ")
3194 set="%s:%s"%(nums[0],nums[-1])
3195- (result,list) = self._conn.fetch(set,"(BODY[HEADER])")
3196+ (result,list) = self.conn.fetch(set,"(BODY[HEADER])")
3197 if result!="OK":
3198 raise Exception(list[0])
3199
3200@@ -232,34 +230,32 @@
3201 log.Info("IMAP LIST: %s %s" % (subj,header_from))
3202 return ret
3203
3204- def _imapf(self,fun,*args):
3205+ def imapf(self,fun,*args):
3206 (ret,list)=fun(*args)
3207 if ret != "OK":
3208 raise Exception(list[0])
3209 return list
3210
3211- def _delete_single_mail(self,i):
3212- self._imapf(self._conn.store,i,"+FLAGS",'\\DELETED')
3213-
3214- def _expunge(self):
3215- list=self._imapf(self._conn.expunge)
3216-
3217- def delete(self, filename_list):
3218- assert len(filename_list) > 0
3219+ def delete_single_mail(self,i):
3220+ self.imapf(self.conn.store,i,"+FLAGS",'\\DELETED')
3221+
3222+ def expunge(self):
3223+ list=self.imapf(self.conn.expunge)
3224+
3225+ def _delete_list(self, filename_list):
3226 for filename in filename_list:
3227- list = self._imapf(self._conn.search,None,"(SUBJECT %s)"%filename)
3228+ list = self.imapf(self.conn.search,None,"(SUBJECT %s)"%filename)
3229 list = list[0].split()
3230- if len(list)==0 or list[0]=="":raise Exception("no such mail with subject '%s'"%filename)
3231- self._delete_single_mail(list[0])
3232- log.Notice("marked %s to be deleted" % filename)
3233- self._expunge()
3234- log.Notice("IMAP expunged %s files" % len(list))
3235+ if len(list) > 0 and list[0] != "":
3236+ self.delete_single_mail(list[0])
3237+ log.Notice("marked %s to be deleted" % filename)
3238+ self.expunge()
3239+ log.Notice("IMAP expunged %s files" % len(filename_list))
3240
3241- def close(self):
3242- self._conn.select(globals.imap_mailbox)
3243- self._conn.close()
3244- self._conn.logout()
3245+ def _close(self):
3246+ self.conn.select(globals.imap_mailbox)
3247+ self.conn.close()
3248+ self.conn.logout()
3249
3250 duplicity.backend.register_backend("imap", ImapBackend);
3251 duplicity.backend.register_backend("imaps", ImapBackend);
3252-
3253
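The IMAP backend keeps its own reconnect-and-retry loop (`resetConnection` plus the `allowedTimeout` countdown) even under the new interface. The shape of that loop, sketched with illustrative names rather than the backend's actual ones:

```python
import time


def with_reconnect(operation, reset, attempts=3, delay=0):
    # Retry `operation`, calling `reset` between failures — the shape of
    # the IMAP backend's reconnect loop (a simplified sketch, not its code).
    for attempt in range(attempts):
        try:
            return operation()
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
            reset()


class FlakyServer:
    """Fails until reconnect() is called, like a dropped IMAP session."""
    def __init__(self):
        self.connected = False

    def fetch(self):
        if not self.connected:
            raise OSError("connection dropped")
        return "OK"

    def reconnect(self):
        self.connected = True


server = FlakyServer()
print(with_reconnect(server.fetch, server.reconnect))  # -> OK
```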
3254=== modified file 'duplicity/backends/localbackend.py'
3255--- duplicity/backends/localbackend.py 2014-04-17 20:50:57 +0000
3256+++ duplicity/backends/localbackend.py 2014-04-28 02:49:55 +0000
3257@@ -20,14 +20,11 @@
3258 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
3259
3260 import os
3261-import types
3262-import errno
3263
3264 import duplicity.backend
3265 from duplicity import log
3266 from duplicity import path
3267-from duplicity import util
3268-from duplicity.errors import * #@UnusedWildImport
3269+from duplicity.errors import BackendException
3270
3271
3272 class LocalBackend(duplicity.backend.Backend):
3273@@ -43,90 +40,37 @@
3274 if not parsed_url.path.startswith('//'):
3275 raise BackendException("Bad file:// path syntax.")
3276 self.remote_pathdir = path.Path(parsed_url.path[2:])
3277-
3278- def handle_error(self, e, op, file1 = None, file2 = None):
3279- code = log.ErrorCode.backend_error
3280- if hasattr(e, 'errno'):
3281- if e.errno == errno.EACCES:
3282- code = log.ErrorCode.backend_permission_denied
3283- elif e.errno == errno.ENOENT:
3284- code = log.ErrorCode.backend_not_found
3285- elif e.errno == errno.ENOSPC:
3286- code = log.ErrorCode.backend_no_space
3287- extra = ' '.join([util.escape(x) for x in [file1, file2] if x])
3288- extra = ' '.join([op, extra])
3289- if op != 'delete' and op != 'query':
3290- log.FatalError(str(e), code, extra)
3291- else:
3292- log.Warn(str(e), code, extra)
3293-
3294- def move(self, source_path, remote_filename = None):
3295- self.put(source_path, remote_filename, rename_instead = True)
3296-
3297- def put(self, source_path, remote_filename = None, rename_instead = False):
3298- if not remote_filename:
3299- remote_filename = source_path.get_filename()
3300- target_path = self.remote_pathdir.append(remote_filename)
3301- log.Info("Writing %s" % target_path.name)
3302- """Try renaming first (if allowed to), copying if doesn't work"""
3303- if rename_instead:
3304- try:
3305- source_path.rename(target_path)
3306- except OSError:
3307- pass
3308- except Exception as e:
3309- self.handle_error(e, 'put', source_path.name, target_path.name)
3310- else:
3311- return
3312- try:
3313- target_path.writefileobj(source_path.open("rb"))
3314- except Exception as e:
3315- self.handle_error(e, 'put', source_path.name, target_path.name)
3316-
3317- """If we get here, renaming failed previously"""
3318- if rename_instead:
3319- """We need to simulate its behaviour"""
3320- source_path.delete()
3321-
3322- def get(self, filename, local_path):
3323- """Get file and put in local_path (Path object)"""
3324+ try:
3325+ os.makedirs(self.remote_pathdir.base)
3326+ except Exception:
3327+ pass
3328+
3329+ def _move(self, source_path, remote_filename):
3330+ target_path = self.remote_pathdir.append(remote_filename)
3331+ try:
3332+ source_path.rename(target_path)
3333+ return True
3334+ except OSError:
3335+ return False
3336+
3337+ def _put(self, source_path, remote_filename):
3338+ target_path = self.remote_pathdir.append(remote_filename)
3339+ target_path.writefileobj(source_path.open("rb"))
3340+
3341+ def _get(self, filename, local_path):
3342 source_path = self.remote_pathdir.append(filename)
3343- try:
3344- local_path.writefileobj(source_path.open("rb"))
3345- except Exception as e:
3346- self.handle_error(e, 'get', source_path.name, local_path.name)
3347+ local_path.writefileobj(source_path.open("rb"))
3348
3349 def _list(self):
3350- """List files in that directory"""
3351- try:
3352- os.makedirs(self.remote_pathdir.base)
3353- except Exception:
3354- pass
3355- try:
3356- return self.remote_pathdir.listdir()
3357- except Exception as e:
3358- self.handle_error(e, 'list', self.remote_pathdir.name)
3359-
3360- def delete(self, filename_list):
3361- """Delete all files in filename list"""
3362- assert type(filename_list) is not types.StringType
3363- for filename in filename_list:
3364- try:
3365- self.remote_pathdir.append(filename).delete()
3366- except Exception as e:
3367- self.handle_error(e, 'delete', self.remote_pathdir.append(filename).name)
3368-
3369- def _query_file_info(self, filename):
3370- """Query attributes on filename"""
3371- try:
3372- target_file = self.remote_pathdir.append(filename)
3373- if not os.path.exists(target_file.name):
3374- return {'size': -1}
3375- target_file.setdata()
3376- size = target_file.getsize()
3377- return {'size': size}
3378- except Exception as e:
3379- self.handle_error(e, 'query', target_file.name)
3380- return {'size': None}
3381+ return self.remote_pathdir.listdir()
3382+
3383+ def _delete(self, filename):
3384+ self.remote_pathdir.append(filename).delete()
3385+
3386+ def _query(self, filename):
3387+ target_file = self.remote_pathdir.append(filename)
3388+ target_file.setdata()
3389+ size = target_file.getsize() if target_file.exists() else -1
3390+ return {'size': size}
3391
3392 duplicity.backend.register_backend("file", LocalBackend)
3393
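The local backend's new `_move` illustrates another piece of the unified contract: it returns `True` when an atomic rename succeeded and `False` when the caller should fall back to copy-then-delete. A self-contained sketch of that convention (the fallback logic here is an assumption about what the base class does, not duplicity's implementation):

```python
import os
import shutil
import tempfile


class LocalSketch:
    """Sketch of the _move contract: True on rename, False means fall back."""

    def __init__(self, directory):
        self.directory = directory

    def _move(self, source, name):
        try:
            os.rename(source, os.path.join(self.directory, name))
            return True
        except OSError:
            # e.g. cross-device rename (EXDEV): signal the fallback path.
            return False

    def move(self, source, name):
        if not self._move(source, name):
            # Fallback mirrors put-then-delete semantics.
            shutil.copy(source, os.path.join(self.directory, name))
            os.unlink(source)


src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "vol1")
with open(src, "w") as f:
    f.write("data")
LocalSketch(dst_dir).move(src, "vol1")
print(os.path.exists(os.path.join(dst_dir, "vol1")))  # -> True
```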
3394=== modified file 'duplicity/backends/megabackend.py'
3395--- duplicity/backends/megabackend.py 2014-04-17 20:50:57 +0000
3396+++ duplicity/backends/megabackend.py 2014-04-28 02:49:55 +0000
3397@@ -22,9 +22,8 @@
3398 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
3399
3400 import duplicity.backend
3401-from duplicity.backend import retry
3402 from duplicity import log
3403-from duplicity.errors import * #@UnusedWildImport
3404+from duplicity.errors import BackendException
3405
3406
3407 class MegaBackend(duplicity.backend.Backend):
3408@@ -63,113 +62,64 @@
3409
3410 self.folder = parent_folder
3411
3412- @retry
3413- def put(self, source_path, remote_filename=None, raise_errors=False):
3414- """Transfer source_path to remote_filename"""
3415- # Default remote file name.
3416- if not remote_filename:
3417- remote_filename = source_path.get_filename()
3418-
3419- try:
3420- # If remote file already exists in destination folder, remove it.
3421- files = self.client.get_files()
3422- entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
3423-
3424- for entry in entries:
3425- self.client.delete(entry)
3426-
3427- self.client.upload(source_path.get_canonical(), self.folder, dest_filename=remote_filename)
3428-
3429- except Exception as e:
3430- self.__handle_error("Failed to upload file '%s' to remote folder '%s': %s"
3431- % (source_path.get_canonical(), self.__get_node_name(self.folder), str(e)), raise_errors)
3432-
3433- @retry
3434- def get(self, remote_filename, local_path, raise_errors=False):
3435- """Get remote filename, saving it to local_path"""
3436- try:
3437- files = self.client.get_files()
3438- entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
3439-
3440- if len(entries):
3441- # get first matching remote file
3442- entry = entries.keys()[0]
3443- self.client.download((entry, entries[entry]), dest_filename=local_path.name)
3444- local_path.setdata()
3445- return
3446- else:
3447- self.__handle_error("Failed to find file '%s' in remote folder '%s'"
3448- % (remote_filename, self.__get_node_name(self.folder)), raise_errors)
3449- except Exception as e:
3450- self.__handle_error("Failed to download file '%s' in remote folder '%s': %s"
3451- % (remote_filename, self.__get_node_name(self.folder), str(e)), raise_errors)
3452-
3453- @retry
3454- def _list(self, raise_errors=False):
3455- """List files in folder"""
3456- try:
3457- entries = self.client.get_files_in_node(self.folder)
3458- return [ self.client.get_name_from_file({entry:entries[entry]}) for entry in entries]
3459- except Exception as e:
3460- self.__handle_error("Failed to fetch list of files in remote folder '%s': %s"
3461- % (self.__get_node_name(self.folder), str(e)), raise_errors)
3462-
3463- @retry
3464- def delete(self, filename_list, raise_errors=False):
3465- """Delete files in filename_list"""
3466- files = self.client.get_files()
3467- for filename in filename_list:
3468- entries = self.__filter_entries(files, self.folder, filename)
3469- try:
3470- if len(entries) > 0:
3471- for entry in entries:
3472- if self.client.destroy(entry):
3473- self.__handle_error("Failed to remove file '%s' in remote folder '%s'"
3474- % (filename, self.__get_node_name(self.folder)), raise_errors)
3475- else:
3476- log.Warn("Failed to fetch file '%s' in remote folder '%s'"
3477- % (filename, self.__get_node_name(self.folder)))
3478- except Exception as e:
3479- self.__handle_error("Failed to remove file '%s' in remote folder '%s': %s"
3480- % (filename, self.__get_node_name(self.folder), str(e)), raise_errors)
3481+ def _put(self, source_path, remote_filename):
3482+ try:
3483+ self._delete(remote_filename)
3484+ except Exception:
3485+ pass
3486+ self.client.upload(source_path.get_canonical(), self.folder, dest_filename=remote_filename)
3487+
3488+ def _get(self, remote_filename, local_path):
3489+ files = self.client.get_files()
3490+ entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
3491+ if len(entries):
3492+ # get first matching remote file
3493+ entry = entries.keys()[0]
3494+ self.client.download((entry, entries[entry]), dest_filename=local_path.name)
3495+ else:
3496+ raise BackendException("Failed to find file '%s' in remote folder '%s'"
3497+ % (remote_filename, self.__get_node_name(self.folder)),
3498+ code=log.ErrorCode.backend_not_found)
3499+
3500+ def _list(self):
3501+ entries = self.client.get_files_in_node(self.folder)
3502+ return [self.client.get_name_from_file({entry:entries[entry]}) for entry in entries]
3503+
3504+ def _delete(self, filename):
3505+ files = self.client.get_files()
3506+ entries = self.__filter_entries(files, self.folder, filename, 'file')
3507+ if len(entries):
3508+ self.client.destroy(entries.keys()[0])
3509+ else:
3510+ raise BackendException("Failed to find file '%s' in remote folder '%s'"
3511+ % (filename, self.__get_node_name(self.folder)),
3512+ code=log.ErrorCode.backend_not_found)
3513
3514 def __get_node_name(self, handle):
3515 """get node name from public handle"""
3516 files = self.client.get_files()
3517 return self.client.get_name_from_file({handle:files[handle]})
3518-
3519- def __handle_error(self, message, raise_errors=True):
3520- if raise_errors:
3521- raise BackendException(message)
3522- else:
3523- log.FatalError(message, log.ErrorCode.backend_error)
3524
3525 def __authorize(self, email, password):
3526- try:
3527- self.client.login(email, password)
3528- except Exception as e:
3529- self.__handle_error('Error while authenticating client: %s.' % str(e))
3530+ self.client.login(email, password)
3531
3532 def __filter_entries(self, entries, parent_id=None, title=None, type=None):
3533 result = {}
3534 type_map = { 'folder': 1, 'file': 0 }
3535
3536- try:
3537- for k, v in entries.items():
3538- try:
3539- if parent_id != None:
3540- assert(v['p'] == parent_id)
3541- if title != None:
3542- assert(v['a']['n'] == title)
3543- if type != None:
3544- assert(v['t'] == type_map[type])
3545- except AssertionError:
3546- continue
3547-
3548- result.update({k:v})
3549-
3550- return result
3551- except Exception as e:
3552- self.__handle_error('Error while fetching remote entries: %s.' % str(e))
3553+ for k, v in entries.items():
3554+ try:
3555+ if parent_id != None:
3556+ assert(v['p'] == parent_id)
3557+ if title != None:
3558+ assert(v['a']['n'] == title)
3559+ if type != None:
3560+ assert(v['t'] == type_map[type])
3561+ except AssertionError:
3562+ continue
3563+
3564+ result.update({k:v})
3565+
3566+ return result
3567
3568 duplicity.backend.register_backend('mega', MegaBackend)
3569
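The retained `__filter_entries` uses `assert` for control flow, which silently disappears under `python -O`. An equivalent filter over the same entry shape (`'p'` parent, `'a'`/`'n'` name, `'t'` type, as seen in the diff) using plain conditions; the handles and values below are made up for illustration:

```python
def filter_entries(entries, parent_id=None, title=None, type_=None):
    # Same filtering as __filter_entries above, but with explicit
    # conditions instead of assert-and-catch.
    type_map = {'folder': 1, 'file': 0}
    return {
        k: v for k, v in entries.items()
        if (parent_id is None or v['p'] == parent_id)
        and (title is None or v['a']['n'] == title)
        and (type_ is None or v['t'] == type_map[type_])
    }


entries = {
    'h1': {'p': 'root', 'a': {'n': 'backup.tar'}, 't': 0},
    'h2': {'p': 'root', 'a': {'n': 'photos'}, 't': 1},
    'h3': {'p': 'other', 'a': {'n': 'backup.tar'}, 't': 0},
}
print(sorted(filter_entries(entries, parent_id='root', type_='file')))  # -> ['h1']
```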
3570=== renamed file 'duplicity/backends/~par2wrapperbackend.py' => 'duplicity/backends/par2backend.py'
3571--- duplicity/backends/~par2wrapperbackend.py 2014-04-17 19:53:30 +0000
3572+++ duplicity/backends/par2backend.py 2014-04-28 02:49:55 +0000
3573@@ -16,14 +16,16 @@
3574 # along with duplicity; if not, write to the Free Software Foundation,
3575 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
3576
3577+from future_builtins import filter
3578+
3579 import os
3580 import re
3581 from duplicity import backend
3582-from duplicity.errors import UnsupportedBackendScheme, BackendException
3583+from duplicity.errors import BackendException
3584 from duplicity import log
3585 from duplicity import globals
3586
3587-class Par2WrapperBackend(backend.Backend):
3588+class Par2Backend(backend.Backend):
3589 """This backend wraps other backends and creates Par2 recovery files
3590 before the file and the Par2 files are transferred with the wrapped backend.
3591
3592@@ -37,13 +39,15 @@
3593 except AttributeError:
3594 self.redundancy = 10
3595
3596- try:
3597- url_string = self.parsed_url.url_string.lstrip('par2+')
3598- self.wrapped_backend = backend.get_backend(url_string)
3599- except:
3600- raise UnsupportedBackendScheme(self.parsed_url.url_string)
3601-
3602- def put(self, source_path, remote_filename = None):
3603+ self.wrapped_backend = backend.get_backend_object(parsed_url.url_string)
3604+
3605+ for attr in ['_get', '_put', '_list', '_delete', '_delete_list',
3606+ '_query', '_query_list', '_retry_cleanup', '_error_code',
3607+ '_move', '_close']:
3608+ if hasattr(self.wrapped_backend, attr):
3609+ setattr(self, attr, getattr(self, attr[1:]))
3610+
3611+ def transfer(self, method, source_path, remote_filename):
3612 """create Par2 files and transfer the given file and the Par2 files
3613 with the wrapped backend.
3614
3615@@ -52,13 +56,14 @@
3615 the source_path with remote_filename into this.
3617 """
3618 import pexpect
3619- if remote_filename is None:
3620- remote_filename = source_path.get_filename()
3621
3622 par2temp = source_path.get_temp_in_same_dir()
3623 par2temp.mkdir()
3624 source_symlink = par2temp.append(remote_filename)
3625- os.symlink(source_path.get_canonical(), source_symlink.get_canonical())
3626+ source_target = source_path.get_canonical()
3627+ if not os.path.isabs(source_target):
3628+ source_target = os.path.join(os.getcwd(), source_target)
3629+ os.symlink(source_target, source_symlink.get_canonical())
3630 source_symlink.setdata()
3631
3632 log.Info("Create Par2 recovery files")
3633@@ -70,16 +75,17 @@
3634 for file in par2temp.listdir():
3635 files_to_transfer.append(par2temp.append(file))
3636
3637- ret = self.wrapped_backend.put(source_path, remote_filename)
3638+ method(source_path, remote_filename)
3639 for file in files_to_transfer:
3640- self.wrapped_backend.put(file, file.get_filename())
3641+ method(file, file.get_filename())
3642
3643 par2temp.deltree()
3644- return ret
3645-
3646- def move(self, source_path, remote_filename = None):
3647- self.put(source_path, remote_filename)
3648- source_path.delete()
3649+
3650+ def put(self, local, remote):
3651+ self.transfer(self.wrapped_backend._put, local, remote)
3652+
3653+ def move(self, local, remote):
3654+ self.transfer(self.wrapped_backend._move, local, remote)
3655
3656 def get(self, remote_filename, local_path):
3657 """transfer remote_filename and the related .par2 file into
3658@@ -94,22 +100,23 @@
3659 par2temp.mkdir()
3660 local_path_temp = par2temp.append(remote_filename)
3661
3662- ret = self.wrapped_backend.get(remote_filename, local_path_temp)
3663+ self.wrapped_backend._get(remote_filename, local_path_temp)
3664
3665 try:
3666 par2file = par2temp.append(remote_filename + '.par2')
3667- self.wrapped_backend.get(par2file.get_filename(), par2file)
3668+ self.wrapped_backend._get(par2file.get_filename(), par2file)
3669
3670 par2verify = 'par2 v -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical())
3671 out, returncode = pexpect.run(par2verify, -1, True)
3672
3673 if returncode:
3674 log.Warn("File is corrupt. Try to repair %s" % remote_filename)
3675- par2volumes = self.list(re.compile(r'%s\.vol[\d+]*\.par2' % remote_filename))
3676+                par2volumes = filter(re.compile(r'%s\.vol[\d+]*\.par2' % remote_filename).match,
3677+                                     self.wrapped_backend._list())
3678
3679 for filename in par2volumes:
3680 file = par2temp.append(filename)
3681- self.wrapped_backend.get(filename, file)
3682+ self.wrapped_backend._get(filename, file)
3683
3684 par2repair = 'par2 r -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical())
3685 out, returncode = pexpect.run(par2repair, -1, True)
3686@@ -124,25 +131,23 @@
3687 finally:
3688 local_path_temp.rename(local_path)
3689 par2temp.deltree()
3690- return ret
3691
3692- def list(self, filter = re.compile(r'(?!.*\.par2$)')):
3693- """default filter all files that ends with ".par"
3694- filter can be a re.compile instance or False for all remote files
3695+ def delete(self, filename):
3696+ """delete given filename and its .par2 files
3697 """
3698- list = self.wrapped_backend.list()
3699- if not filter:
3700- return list
3701- filtered_list = []
3702- for item in list:
3703- if filter.match(item):
3704- filtered_list.append(item)
3705- return filtered_list
3706-
3707- def delete(self, filename_list):
3708+ self.wrapped_backend._delete(filename)
3709+
3710+ remote_list = self.list()
3711+ filename_list = [filename]
3712+ c = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename)
3713+ for remote_filename in remote_list:
3714+ if c.match(remote_filename):
3715+ self.wrapped_backend._delete(remote_filename)
3716+
3717+ def delete_list(self, filename_list):
3718 """delete given filename_list and all .par2 files that belong to them
3719 """
3720- remote_list = self.list(False)
3721+ remote_list = self.list()
3722
3723 for filename in filename_list[:]:
3724 c = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename)
3725@@ -150,46 +155,25 @@
3726 if c.match(remote_filename):
3727 filename_list.append(remote_filename)
3728
3729- return self.wrapped_backend.delete(filename_list)
3730-
3731- """just return the output of coresponding wrapped backend
3732- for all other functions
3733- """
3734- def query_info(self, filename_list, raise_errors=True):
3735- return self.wrapped_backend.query_info(filename_list, raise_errors)
3736-
3737- def get_password(self):
3738- return self.wrapped_backend.get_password()
3739-
3740- def munge_password(self, commandline):
3741- return self.wrapped_backend.munge_password(commandline)
3742-
3743- def run_command(self, commandline):
3744- return self.wrapped_backend.run_command(commandline)
3745- def run_command_persist(self, commandline):
3746- return self.wrapped_backend.run_command_persist(commandline)
3747-
3748- def popen(self, commandline):
3749- return self.wrapped_backend.popen(commandline)
3750- def popen_persist(self, commandline):
3751- return self.wrapped_backend.popen_persist(commandline)
3752-
3753- def _subprocess_popen(self, commandline):
3754- return self.wrapped_backend._subprocess_popen(commandline)
3755-
3756- def subprocess_popen(self, commandline):
3757- return self.wrapped_backend.subprocess_popen(commandline)
3758-
3759- def subprocess_popen_persist(self, commandline):
3760- return self.wrapped_backend.subprocess_popen_persist(commandline)
3761+ return self.wrapped_backend._delete_list(filename_list)
3762+
3763+
3764+ def list(self):
3765+ return self.wrapped_backend._list()
3766+
3767+ def retry_cleanup(self):
3768+ self.wrapped_backend._retry_cleanup()
3769+
3770+ def error_code(self, operation, e):
3771+ return self.wrapped_backend._error_code(operation, e)
3772+
3773+ def query(self, filename):
3774+ return self.wrapped_backend._query(filename)
3775+
3776+ def query_list(self, filename_list):
3777+        return self.wrapped_backend._query_list(filename_list)
3778
3779 def close(self):
3780- return self.wrapped_backend.close()
3781-
3782-"""register this backend with leading "par2+" for all already known backends
3783-
3784-files must be sorted in duplicity.backend.import_backends to catch
3785-all supported backends
3786-"""
3787-for item in backend._backends.keys():
3788- backend.register_backend('par2+' + item, Par2WrapperBackend)
3789+ self.wrapped_backend._close()
3790+
3791+backend.register_backend_prefix('par2', Par2Backend)
3792
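The `setattr` loop in the new `Par2Backend.__init__` exposes each optional `_`-prefixed operation of the wrapped backend through the wrapper's own public methods. A minimal standalone sketch of that dispatch pattern (the `Wrapper`/`Inner` classes are hypothetical stand-ins for illustration, not duplicity's actual classes):

```python
# Sketch of the delegation pattern used by Par2Backend: for each optional
# '_'-prefixed operation the inner backend implements, the wrapper binds its
# own public method under the private name, so the wrapper advertises exactly
# the same optional operations as the backend it wraps.

class Inner(object):
    def _get(self, name):
        return 'inner-get:%s' % name
    # note: Inner deliberately implements no _delete

class Wrapper(object):
    def __init__(self, inner):
        self.inner = inner
        for attr in ['_get', '_delete']:
            if hasattr(inner, attr):
                # expose the wrapper's public method under the private name
                setattr(self, attr, getattr(self, attr[1:]))

    def get(self, name):
        # pre/post-processing (e.g. par2 verification) would go here
        return self.inner._get(name)

    def delete(self, name):
        return self.inner._delete(name)

w = Wrapper(Inner())
print(w._get('vol1'))          # dispatches through Wrapper.get
print(hasattr(w, '_delete'))   # False: inner lacks _delete, so wrapper hides it too
```

The effect is that callers probing the wrapper with `hasattr` see the same capability set as the wrapped backend, without the wrapper having to hard-code which operations exist.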
3793=== modified file 'duplicity/backends/rsyncbackend.py'
3794--- duplicity/backends/rsyncbackend.py 2014-04-25 23:20:12 +0000
3795+++ duplicity/backends/rsyncbackend.py 2014-04-28 02:49:55 +0000
3796@@ -23,7 +23,7 @@
3797 import tempfile
3798
3799 import duplicity.backend
3800-from duplicity.errors import * #@UnusedWildImport
3801+from duplicity.errors import InvalidBackendURL
3802 from duplicity import globals, tempdir, util
3803
3804 class RsyncBackend(duplicity.backend.Backend):
3805@@ -58,12 +58,13 @@
3806 if port:
3807 port = " --port=%s" % port
3808 else:
3809+ host_string = host + ":" if host else ""
3810 if parsed_url.path.startswith("//"):
3811             # it's an absolute path
3812- self.url_string = "%s:/%s" % (host, parsed_url.path.lstrip('/'))
3813+ self.url_string = "%s/%s" % (host_string, parsed_url.path.lstrip('/'))
3814 else:
3815             # it's a relative path
3816- self.url_string = "%s:%s" % (host, parsed_url.path.lstrip('/'))
3817+ self.url_string = "%s%s" % (host_string, parsed_url.path.lstrip('/'))
3818 if parsed_url.port:
3819 port = " -p %s" % parsed_url.port
3820 # add trailing slash if missing
3821@@ -105,29 +106,17 @@
3822 raise InvalidBackendURL("Could not determine rsync path: %s"
3823 "" % self.munge_password( url ) )
3824
3825- def run_command(self, commandline):
3826- result, stdout, stderr = self.subprocess_popen_persist(commandline)
3827- return result, stdout
3828-
3829- def put(self, source_path, remote_filename = None):
3830- """Use rsync to copy source_dir/filename to remote computer"""
3831- if not remote_filename:
3832- remote_filename = source_path.get_filename()
3833+ def _put(self, source_path, remote_filename):
3834 remote_path = os.path.join(self.url_string, remote_filename)
3835 commandline = "%s %s %s" % (self.cmd, source_path.name, remote_path)
3836- self.run_command(commandline)
3837+ self.subprocess_popen(commandline)
3838
3839- def get(self, remote_filename, local_path):
3840- """Use rsync to get a remote file"""
3841+ def _get(self, remote_filename, local_path):
3842 remote_path = os.path.join (self.url_string, remote_filename)
3843 commandline = "%s %s %s" % (self.cmd, remote_path, local_path.name)
3844- self.run_command(commandline)
3845- local_path.setdata()
3846- if not local_path.exists():
3847- raise BackendException("File %s not found" % local_path.name)
3848+ self.subprocess_popen(commandline)
3849
3850- def list(self):
3851- """List files"""
3852+ def _list(self):
3853 def split (str):
3854 line = str.split ()
3855 if len (line) > 4 and line[4] != '.':
3856@@ -135,20 +124,17 @@
3857 else:
3858 return None
3859 commandline = "%s %s" % (self.cmd, self.url_string)
3860- result, stdout = self.run_command(commandline)
3861+ result, stdout, stderr = self.subprocess_popen(commandline)
3862 return [x for x in map (split, stdout.split('\n')) if x]
3863
3864- def delete(self, filename_list):
3865- """Delete files."""
3866+ def _delete_list(self, filename_list):
3867 delete_list = filename_list
3868 dont_delete_list = []
3869- for file in self.list ():
3870+ for file in self._list ():
3871 if file in delete_list:
3872 delete_list.remove (file)
3873 else:
3874 dont_delete_list.append (file)
3875- if len (delete_list) > 0:
3876- raise BackendException("Files %s not found" % str (delete_list))
3877
3878 dir = tempfile.mkdtemp()
3879 exclude, exclude_name = tempdir.default().mkstemp_file()
3880@@ -162,7 +148,7 @@
3881 exclude.close()
3882 commandline = ("%s --recursive --delete --exclude-from=%s %s/ %s" %
3883 (self.cmd, exclude_name, dir, self.url_string))
3884- self.run_command(commandline)
3885+ self.subprocess_popen(commandline)
3886 for file in to_delete:
3887 util.ignore_missing(os.unlink, file)
3888 os.rmdir (dir)
3889
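The `host_string` change in the rsync backend above only emits the `:` separator when a host is actually present, so host-less rsync URLs no longer produce a target starting with a stray colon. A standalone sketch of the corrected logic, using a hypothetical `build_target` helper rather than duplicity's URL parser:

```python
# Sketch of the corrected rsync target construction: the ':' separator is
# added only when a host is present. build_target is a hypothetical helper
# for illustration, not part of duplicity's API.

def build_target(host, path):
    host_string = host + ":" if host else ""
    if path.startswith("//"):
        # absolute path on the remote side
        return "%s/%s" % (host_string, path.lstrip('/'))
    # relative path
    return "%s%s" % (host_string, path.lstrip('/'))

print(build_target("example.com", "//var/backups"))  # example.com:/var/backups
print(build_target("example.com", "/backups"))       # example.com:backups
print(build_target("", "/backups"))                  # backups (no leading ':')
```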
3890=== modified file 'duplicity/backends/sshbackend.py'
3891--- duplicity/backends/sshbackend.py 2014-04-17 21:54:04 +0000
3892+++ duplicity/backends/sshbackend.py 2014-04-28 02:49:55 +0000
3893@@ -18,6 +18,7 @@
3894 # along with duplicity; if not, write to the Free Software Foundation,
3895 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
3896
3897+import duplicity.backend
3898 from duplicity import globals, log
3899
3900 def warn_option(option, optionvar):
3901@@ -26,11 +27,15 @@
3902
3903 if (globals.ssh_backend and
3904 globals.ssh_backend.lower().strip() == 'pexpect'):
3905- from . import _ssh_pexpect
3906+ from ._ssh_pexpect import SSHPExpectBackend as SSHBackend
3907 else:
3908 # take user by the hand to prevent typo driven bug reports
3909 if globals.ssh_backend.lower().strip() != 'paramiko':
3910         log.Warn(_("Warning: Selected ssh backend '%s' is neither 'paramiko' nor 'pexpect'. Will use default paramiko instead.") % globals.ssh_backend)
3911 warn_option("--scp-command", globals.scp_command)
3912 warn_option("--sftp-command", globals.sftp_command)
3913- from . import _ssh_paramiko
3914+ from ._ssh_paramiko import SSHParamikoBackend as SSHBackend
3915+
3916+duplicity.backend.register_backend("sftp", SSHBackend)
3917+duplicity.backend.register_backend("scp", SSHBackend)
3918+duplicity.backend.register_backend("ssh", SSHBackend)
3919
3920=== modified file 'duplicity/backends/swiftbackend.py'
3921--- duplicity/backends/swiftbackend.py 2014-04-17 22:03:10 +0000
3922+++ duplicity/backends/swiftbackend.py 2014-04-28 02:49:55 +0000
3923@@ -19,14 +19,11 @@
3924 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
3925
3926 import os
3927-import time
3928
3929 import duplicity.backend
3930-from duplicity import globals
3931 from duplicity import log
3932-from duplicity.errors import * #@UnusedWildImport
3933-from duplicity.util import exception_traceback
3934-from duplicity.backend import retry
3935+from duplicity.errors import BackendException
3936+
3937
3938 class SwiftBackend(duplicity.backend.Backend):
3939 """
3940@@ -82,121 +79,30 @@
3941 % (e.__class__.__name__, str(e)),
3942 log.ErrorCode.connection_failed)
3943
3944- def put(self, source_path, remote_filename = None):
3945- if not remote_filename:
3946- remote_filename = source_path.get_filename()
3947-
3948- for n in range(1, globals.num_retries+1):
3949- log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
3950- try:
3951- self.conn.put_object(self.container,
3952- remote_filename,
3953- file(source_path.name))
3954- return
3955- except self.resp_exc as error:
3956- log.Warn("Upload of '%s' failed (attempt %d): Swift server returned: %s %s"
3957- % (remote_filename, n, error.http_status, error.message))
3958- except Exception as e:
3959- log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
3960- % (remote_filename, n, e.__class__.__name__, str(e)))
3961- log.Debug("Backtrace of previous error: %s"
3962- % exception_traceback())
3963- time.sleep(30)
3964- log.Warn("Giving up uploading '%s' after %s attempts"
3965- % (remote_filename, globals.num_retries))
3966- raise BackendException("Error uploading '%s'" % remote_filename)
3967-
3968- def get(self, remote_filename, local_path):
3969- for n in range(1, globals.num_retries+1):
3970- log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
3971- try:
3972- headers, body = self.conn.get_object(self.container,
3973- remote_filename)
3974- f = open(local_path.name, 'w')
3975- for chunk in body:
3976- f.write(chunk)
3977- local_path.setdata()
3978- return
3979- except self.resp_exc as resperr:
3980- log.Warn("Download of '%s' failed (attempt %s): Swift server returned: %s %s"
3981- % (remote_filename, n, resperr.http_status, resperr.message))
3982- except Exception as e:
3983- log.Warn("Download of '%s' failed (attempt %s): %s: %s"
3984- % (remote_filename, n, e.__class__.__name__, str(e)))
3985- log.Debug("Backtrace of previous error: %s"
3986- % exception_traceback())
3987- time.sleep(30)
3988- log.Warn("Giving up downloading '%s' after %s attempts"
3989- % (remote_filename, globals.num_retries))
3990- raise BackendException("Error downloading '%s/%s'"
3991- % (self.container, remote_filename))
3992+ def _error_code(self, operation, e):
3993+ if isinstance(e, self.resp_exc):
3994+ if e.http_status == 404:
3995+ return log.ErrorCode.backend_not_found
3996+
3997+ def _put(self, source_path, remote_filename):
3998+ self.conn.put_object(self.container, remote_filename,
3999+ file(source_path.name))
4000+
4001+ def _get(self, remote_filename, local_path):
4002+ headers, body = self.conn.get_object(self.container, remote_filename)
4003+ with open(local_path.name, 'wb') as f:
4004+ for chunk in body:
4005+ f.write(chunk)
4006
4007 def _list(self):
4008- for n in range(1, globals.num_retries+1):
4009- log.Info("Listing '%s'" % (self.container))
4010- try:
4011- # Cloud Files will return a max of 10,000 objects. We have
4012- # to make multiple requests to get them all.
4013- headers, objs = self.conn.get_container(self.container)
4014- return [ o['name'] for o in objs ]
4015- except self.resp_exc as resperr:
4016- log.Warn("Listing of '%s' failed (attempt %s): Swift server returned: %s %s"
4017- % (self.container, n, resperr.http_status, resperr.message))
4018- except Exception as e:
4019- log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
4020- % (self.container, n, e.__class__.__name__, str(e)))
4021- log.Debug("Backtrace of previous error: %s"
4022- % exception_traceback())
4023- time.sleep(30)
4024- log.Warn("Giving up listing of '%s' after %s attempts"
4025- % (self.container, globals.num_retries))
4026- raise BackendException("Error listing '%s'"
4027- % (self.container))
4028-
4029- def delete_one(self, remote_filename):
4030- for n in range(1, globals.num_retries+1):
4031- log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
4032- try:
4033- self.conn.delete_object(self.container, remote_filename)
4034- return
4035- except self.resp_exc as resperr:
4036- if n > 1 and resperr.http_status == 404:
4037- # We failed on a timeout, but delete succeeded on the server
4038- log.Warn("Delete of '%s' missing after retry - must have succeded earlier" % remote_filename )
4039- return
4040- log.Warn("Delete of '%s' failed (attempt %s): Swift server returned: %s %s"
4041- % (remote_filename, n, resperr.http_status, resperr.message))
4042- except Exception as e:
4043- log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
4044- % (remote_filename, n, e.__class__.__name__, str(e)))
4045- log.Debug("Backtrace of previous error: %s"
4046- % exception_traceback())
4047- time.sleep(30)
4048- log.Warn("Giving up deleting '%s' after %s attempts"
4049- % (remote_filename, globals.num_retries))
4050- raise BackendException("Error deleting '%s/%s'"
4051- % (self.container, remote_filename))
4052-
4053- def delete(self, filename_list):
4054- for file in filename_list:
4055- self.delete_one(file)
4056- log.Debug("Deleted '%s/%s'" % (self.container, file))
4057-
4058- @retry
4059- def _query_file_info(self, filename, raise_errors=False):
4060- try:
4061- sobject = self.conn.head_object(self.container, filename)
4062- return {'size': int(sobject['content-length'])}
4063- except self.resp_exc:
4064- return {'size': -1}
4065- except Exception as e:
4066- log.Warn("Error querying '%s/%s': %s"
4067- "" % (self.container,
4068- filename,
4069- str(e)))
4070- if raise_errors:
4071- raise e
4072- else:
4073- return {'size': None}
4074+ headers, objs = self.conn.get_container(self.container)
4075+ return [ o['name'] for o in objs ]
4076+
4077+ def _delete(self, filename):
4078+ self.conn.delete_object(self.container, filename)
4079+
4080+ def _query(self, filename):
4081+ sobject = self.conn.head_object(self.container, filename)
4082+ return {'size': int(sobject['content-length'])}
4083
4084 duplicity.backend.register_backend("swift", SwiftBackend)
4085
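The rewritten Swift backend shows the new convention in miniature: backends implement bare `_`-prefixed primitives (`_put`, `_get`, `_list`, `_delete`, `_query`) and leave the retry loops, sleep/backoff, and logging that used to be copied into every method to the shared machinery in duplicity/backend.py. A toy sketch of how such a centralized retry wrapper can drive a primitive (all names here are illustrative, not duplicity's actual API):

```python
import functools

# Toy version of the centralized-retry idea: the backend implements only a
# bare _put that may fail transiently; a generic wrapper owns the retry
# policy, so the backend stays a thin protocol shim.

def with_retries(n):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for attempt in range(1, n + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception as e:
                    last_error = e
            raise last_error
        return wrapper
    return decorator

class FlakyBackend(object):
    def __init__(self):
        self.calls = 0
        self.stored = {}

    def _put(self, name, data):
        self.calls += 1
        if self.calls < 3:              # fail the first two attempts
            raise IOError("transient failure")
        self.stored[name] = data

backend_obj = FlakyBackend()
put = with_retries(5)(backend_obj._put)
put("vol1.difftar", b"payload")
print(backend_obj.calls)  # 3: succeeded on the third attempt
```

This is why the diff can delete so much: roughly ninety lines of per-method retry boilerplate in the Swift backend collapse into a handful of primitives once the policy lives in one place.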
4086=== modified file 'duplicity/backends/tahoebackend.py'
4087--- duplicity/backends/tahoebackend.py 2013-12-27 06:39:00 +0000
4088+++ duplicity/backends/tahoebackend.py 2014-04-28 02:49:55 +0000
4089@@ -20,9 +20,8 @@
4090
4091 import duplicity.backend
4092 from duplicity import log
4093-from duplicity.errors import * #@UnusedWildImport
4094+from duplicity.errors import BackendException
4095
4096-from commands import getstatusoutput
4097
4098 class TAHOEBackend(duplicity.backend.Backend):
4099 """
4100@@ -36,10 +35,8 @@
4101
4102 self.alias = url[0]
4103
4104- if len(url) > 2:
4105+ if len(url) > 1:
4106 self.directory = "/".join(url[1:])
4107- elif len(url) == 2:
4108- self.directory = url[1]
4109 else:
4110 self.directory = ""
4111
4112@@ -59,28 +56,20 @@
4113
4114 def run(self, *args):
4115 cmd = " ".join(args)
4116- log.Debug("tahoe execute: %s" % cmd)
4117- (status, output) = getstatusoutput(cmd)
4118-
4119- if status != 0:
4120- raise BackendException("Error running %s" % cmd)
4121- else:
4122- return output
4123-
4124- def put(self, source_path, remote_filename=None):
4125+ _, output, _ = self.subprocess_popen(cmd)
4126+ return output
4127+
4128+ def _put(self, source_path, remote_filename):
4129 self.run("tahoe", "cp", source_path.name, self.get_remote_path(remote_filename))
4130
4131- def get(self, remote_filename, local_path):
4132+ def _get(self, remote_filename, local_path):
4133 self.run("tahoe", "cp", self.get_remote_path(remote_filename), local_path.name)
4134- local_path.setdata()
4135
4136 def _list(self):
4137- log.Debug("tahoe: List")
4138- return self.run("tahoe", "ls", self.get_remote_path()).split('\n')
4139+ output = self.run("tahoe", "ls", self.get_remote_path())
4140+ return output.split('\n') if output else []
4141
4142- def delete(self, filename_list):
4143- log.Debug("tahoe: delete(%s)" % filename_list)
4144- for filename in filename_list:
4145- self.run("tahoe", "rm", self.get_remote_path(filename))
4146+ def _delete(self, filename):
4147+ self.run("tahoe", "rm", self.get_remote_path(filename))
4148
4149 duplicity.backend.register_backend("tahoe", TAHOEBackend)
4150
4151=== modified file 'duplicity/backends/webdavbackend.py'
4152--- duplicity/backends/webdavbackend.py 2014-04-25 15:03:00 +0000
4153+++ duplicity/backends/webdavbackend.py 2014-04-28 02:49:55 +0000
4154@@ -32,8 +32,7 @@
4155 import duplicity.backend
4156 from duplicity import globals
4157 from duplicity import log
4158-from duplicity.errors import * #@UnusedWildImport
4159-from duplicity.backend import retry_fatal
4160+from duplicity.errors import BackendException, FatalBackendException
4161
4162 class CustomMethodRequest(urllib2.Request):
4163 """
4164@@ -54,7 +53,7 @@
4165 global socket, ssl
4166 import socket, ssl
4167 except ImportError:
4168- raise FatalBackendError("Missing socket or ssl libraries.")
4169+ raise FatalBackendException("Missing socket or ssl libraries.")
4170
4171 httplib.HTTPSConnection.__init__(self, *args, **kwargs)
4172
4173@@ -71,21 +70,21 @@
4174 break
4175 # still no cacert file, inform user
4176 if not self.cacert_file:
4177- raise FatalBackendError("""For certificate verification a cacert database file is needed in one of these locations: %s
4178+ raise FatalBackendException("""For certificate verification a cacert database file is needed in one of these locations: %s
4179 Hints:
4180 Consult the man page, chapter 'SSL Certificate Verification'.
4181 Consider using the options --ssl-cacert-file, --ssl-no-check-certificate .""" % ", ".join(cacert_candidates) )
4182 # check if file is accessible (libssl errors are not very detailed)
4183 if not os.access(self.cacert_file, os.R_OK):
4184- raise FatalBackendError("Cacert database file '%s' is not readable." % cacert_file)
4185+ raise FatalBackendException("Cacert database file '%s' is not readable." % cacert_file)
4186
4187 def connect(self):
4188 # create new socket
4189 sock = socket.create_connection((self.host, self.port),
4190 self.timeout)
4191- if self._tunnel_host:
4192+ if self.tunnel_host:
4193 self.sock = sock
4194- self._tunnel()
4195+ self.tunnel()
4196
4197 # wrap the socket in ssl using verification
4198 self.sock = ssl.wrap_socket(sock,
4199@@ -126,7 +125,7 @@
4200
4201 self.username = parsed_url.username
4202 self.password = self.get_password()
4203- self.directory = self._sanitize_path(parsed_url.path)
4204+ self.directory = self.sanitize_path(parsed_url.path)
4205
4206 log.Info("Using WebDAV protocol %s" % (globals.webdav_proto,))
4207 log.Info("Using WebDAV host %s port %s" % (parsed_url.hostname, parsed_url.port))
4208@@ -134,30 +133,33 @@
4209
4210 self.conn = None
4211
4212- def _sanitize_path(self,path):
4213+ def sanitize_path(self,path):
4214 if path:
4215 foldpath = re.compile('/+')
4216 return foldpath.sub('/', path + '/' )
4217 else:
4218 return '/'
4219
4220- def _getText(self,nodelist):
4221+ def getText(self,nodelist):
4222 rc = ""
4223 for node in nodelist:
4224 if node.nodeType == node.TEXT_NODE:
4225 rc = rc + node.data
4226 return rc
4227
4228- def _connect(self, forced=False):
4229+ def _retry_cleanup(self):
4230+ self.connect(forced=True)
4231+
4232+ def connect(self, forced=False):
4233 """
4234 Connect or re-connect to the server, updates self.conn
4235 # reconnect on errors as a precaution, there are errors e.g.
4236 # "[Errno 32] Broken pipe" or SSl errors that render the connection unusable
4237 """
4238- if self.retry_count<=1 and self.conn \
4239+ if not forced and self.conn \
4240 and self.conn.host == self.parsed_url.hostname: return
4241
4242- log.Info("WebDAV create connection on '%s' (retry %s) " % (self.parsed_url.hostname,self.retry_count) )
4243+ log.Info("WebDAV create connection on '%s'" % (self.parsed_url.hostname))
4244 if self.conn: self.conn.close()
4245 # http schemes needed for redirect urls from servers
4246 if self.parsed_url.scheme in ['webdav','http']:
4247@@ -168,9 +170,9 @@
4248 else:
4249 self.conn = VerifiedHTTPSConnection(self.parsed_url.hostname, self.parsed_url.port)
4250 else:
4251- raise FatalBackendError("WebDAV Unknown URI scheme: %s" % (self.parsed_url.scheme))
4252+ raise FatalBackendException("WebDAV Unknown URI scheme: %s" % (self.parsed_url.scheme))
4253
4254- def close(self):
4255+ def _close(self):
4256 self.conn.close()
4257
4258 def request(self, method, path, data=None, redirected=0):
4259@@ -178,7 +180,7 @@
4260 Wraps the connection.request method to retry once if authentication is
4261 required
4262 """
4263- self._connect()
4264+ self.connect()
4265
4266 quoted_path = urllib.quote(path,"/:~")
4267
4268@@ -197,12 +199,12 @@
4269 if redirect_url:
4270 log.Notice("WebDAV redirect to: %s " % urllib.unquote(redirect_url) )
4271 if redirected > 10:
4272- raise FatalBackendError("WebDAV redirected 10 times. Giving up.")
4273+ raise FatalBackendException("WebDAV redirected 10 times. Giving up.")
4274 self.parsed_url = duplicity.backend.ParsedUrl(redirect_url)
4275- self.directory = self._sanitize_path(self.parsed_url.path)
4276+ self.directory = self.sanitize_path(self.parsed_url.path)
4277 return self.request(method,self.directory,data,redirected+1)
4278 else:
4279- raise FatalBackendError("WebDAV missing location header in redirect response.")
4280+ raise FatalBackendException("WebDAV missing location header in redirect response.")
4281 elif response.status == 401:
4282 response.close()
4283 self.headers['Authorization'] = self.get_authorization(response, quoted_path)
4284@@ -261,10 +263,7 @@
4285 auth_string = self.digest_auth_handler.get_authorization(dummy_req, self.digest_challenge)
4286 return 'Digest %s' % auth_string
4287
4288- @retry_fatal
4289 def _list(self):
4290- """List files in directory"""
4291- log.Info("Listing directory %s on WebDAV server" % (self.directory,))
4292 response = None
4293 try:
4294 self.headers['Depth'] = "1"
4295@@ -289,7 +288,7 @@
4296 dom = xml.dom.minidom.parseString(document)
4297 result = []
4298 for href in dom.getElementsByTagName('d:href') + dom.getElementsByTagName('D:href'):
4299- filename = self.__taste_href(href)
4300+ filename = self.taste_href(href)
4301 if filename:
4302 result.append(filename)
4303 return result
4304@@ -308,7 +307,7 @@
4305 for i in range(1,len(dirs)):
4306 d="/".join(dirs[0:i+1])+"/"
4307
4308- self.close() # or we get previous request's data or exception
4309+ self._close() # or we get previous request's data or exception
4310 self.headers['Depth'] = "1"
4311 response = self.request("PROPFIND", d)
4312 del self.headers['Depth']
4313@@ -317,21 +316,21 @@
4314
4315 if response.status == 404:
4316 log.Info("Creating missing directory %s" % d)
4317- self.close() # or we get previous request's data or exception
4318+ self._close() # or we get previous request's data or exception
4319
4320 res = self.request("MKCOL", d)
4321 if res.status != 201:
4322 raise BackendException("WebDAV MKCOL %s failed: %s %s" % (d,res.status,res.reason))
4323- self.close()
4324+ self._close()
4325
4326- def __taste_href(self, href):
4327+ def taste_href(self, href):
4328 """
4329 Internal helper to taste the given href node and, if
4330 it is a duplicity file, collect it as a result file.
4331
4332 @return: A matching filename, or None if the href did not match.
4333 """
4334- raw_filename = self._getText(href.childNodes).strip()
4335+ raw_filename = self.getText(href.childNodes).strip()
4336 parsed_url = urlparse.urlparse(urllib.unquote(raw_filename))
4337 filename = parsed_url.path
4338 log.Debug("webdav path decoding and translation: "
4339@@ -361,11 +360,8 @@
4340 else:
4341 return None
4342
4343- @retry_fatal
4344- def get(self, remote_filename, local_path):
4345- """Get remote filename, saving it to local_path"""
4346+ def _get(self, remote_filename, local_path):
4347 url = self.directory + remote_filename
4348- log.Info("Retrieving %s from WebDAV server" % (url ,))
4349 response = None
4350 try:
4351 target_file = local_path.open("wb")
4352@@ -376,7 +372,6 @@
4353 #import hashlib
4354 #log.Info("WebDAV GOT %s bytes with md5=%s" % (len(data),hashlib.md5(data).hexdigest()) )
4355 assert not target_file.close()
4356- local_path.setdata()
4357 response.close()
4358 else:
4359 status = response.status
4360@@ -388,13 +383,8 @@
4361 finally:
4362 if response: response.close()
4363
4364- @retry_fatal
4365- def put(self, source_path, remote_filename = None):
4366- """Transfer source_path to remote_filename"""
4367- if not remote_filename:
4368- remote_filename = source_path.get_filename()
4369+ def _put(self, source_path, remote_filename):
4370 url = self.directory + remote_filename
4371- log.Info("Saving %s on WebDAV server" % (url ,))
4372 response = None
4373 try:
4374 source_file = source_path.open("rb")
4375@@ -412,27 +402,23 @@
4376 finally:
4377 if response: response.close()
4378
4379- @retry_fatal
4380- def delete(self, filename_list):
4381- """Delete files in filename_list"""
4382- for filename in filename_list:
4383- url = self.directory + filename
4384- log.Info("Deleting %s from WebDAV server" % (url ,))
4385- response = None
4386- try:
4387- response = self.request("DELETE", url)
4388- if response.status in [200, 204]:
4389- response.read()
4390- response.close()
4391- else:
4392- status = response.status
4393- reason = response.reason
4394- response.close()
4395- raise BackendException("Bad status code %s reason %s." % (status,reason))
4396- except Exception as e:
4397- raise e
4398- finally:
4399- if response: response.close()
4400+ def _delete(self, filename):
4401+ url = self.directory + filename
4402+ response = None
4403+ try:
4404+ response = self.request("DELETE", url)
4405+ if response.status in [200, 204]:
4406+ response.read()
4407+ response.close()
4408+ else:
4409+ status = response.status
4410+ reason = response.reason
4411+ response.close()
4412+ raise BackendException("Bad status code %s reason %s." % (status,reason))
4413+ except Exception as e:
4414+ raise e
4415+ finally:
4416+ if response: response.close()
4417
4418 duplicity.backend.register_backend("webdav", WebDAVBackend)
4419 duplicity.backend.register_backend("webdavs", WebDAVBackend)
4420
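The WebDAV backend's ad-hoc "reconnect if retry_count > 1" logic becomes a `_retry_cleanup` hook that the shared retry loop can call between failed attempts. A minimal sketch of that contract (the `FakeConnBackend` and `run_with_retry` names are illustrative, not duplicity's API):

```python
# Sketch of the _retry_cleanup contract: between failed attempts, a generic
# retry loop gives the backend a chance to reset state (WebDAV uses it to
# force a reconnect). All names here are illustrative.

class FakeConnBackend(object):
    def __init__(self):
        self.connected = True
        self.attempts = 0

    def _retry_cleanup(self):
        # e.g. close and re-open a broken connection
        self.connected = True

    def _get(self, name):
        self.attempts += 1
        if self.attempts == 1:
            self.connected = False      # simulate "broken pipe"
            raise IOError("broken pipe")
        assert self.connected           # cleanup must have run
        return "data:%s" % name

def run_with_retry(backend_obj, op, *args, **kwargs):
    retries = kwargs.get('retries', 3)
    for attempt in range(retries):
        try:
            return op(*args)
        except Exception:
            if attempt == retries - 1:
                raise
            if hasattr(backend_obj, '_retry_cleanup'):
                backend_obj._retry_cleanup()

b = FakeConnBackend()
result = run_with_retry(b, b._get, "vol1")
print(result)  # data:vol1, after one cleanup/reconnect cycle
```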
4421=== modified file 'duplicity/commandline.py'
4422--- duplicity/commandline.py 2014-04-25 23:20:12 +0000
4423+++ duplicity/commandline.py 2014-04-28 02:49:55 +0000
4424@@ -210,13 +210,6 @@
4425 global select_opts, select_files, full_backup
4426 global list_current, collection_status, cleanup, remove_time, verify
4427
4428- def use_gio(*args):
4429- try:
4430- import duplicity.backends.giobackend
4431- backend.force_backend(duplicity.backends.giobackend.GIOBackend)
4432- except ImportError:
4433- log.FatalError(_("Unable to load gio backend: %s") % str(sys.exc_info()[1]), log.ErrorCode.gio_not_available)
4434-
4435 def set_log_fd(fd):
4436 if fd < 1:
4437 raise optparse.OptionValueError("log-fd must be greater than zero.")
4438@@ -365,7 +358,9 @@
4439 # the time specified
4440 parser.add_option("--full-if-older-than", type = "time", dest = "full_force_time", metavar = _("time"))
4441
4442- parser.add_option("--gio", action = "callback", callback = use_gio)
4443+ parser.add_option("--gio",action = "callback", dest = "use_gio",
4444+ callback = lambda o, s, v, p: (setattr(p.values, o.dest, True),
4445+ old_fn_deprecation(s)))
4446
4447 parser.add_option("--gpg-options", action = "extend", metavar = _("options"))
4448
4449@@ -521,8 +516,8 @@
4450 # sftp command to use (ssh pexpect backend)
4451 parser.add_option("--sftp-command", metavar = _("command"))
4452
4453- # sftp command to use (ssh pexpect backend)
4454- parser.add_option("--cf-command", metavar = _("command"))
4455+ # allow the user to switch cloudfiles backend
4456+ parser.add_option("--cf-backend", metavar = _("pyrax|cloudfiles"))
4457
4458 # If set, use short (< 30 char) filenames for all the remote files.
4459 parser.add_option("--short-filenames", action = "callback",
4460
4461=== modified file 'duplicity/errors.py'
4462--- duplicity/errors.py 2013-01-10 19:04:39 +0000
4463+++ duplicity/errors.py 2014-04-28 02:49:55 +0000
4464@@ -23,6 +23,8 @@
4465 Error/exception classes that do not fit naturally anywhere else.
4466 """
4467
4468+from duplicity import log
4469+
4470 class DuplicityError(Exception):
4471 pass
4472
4473@@ -68,9 +70,11 @@
4474 """
4475 Raised to indicate a backend specific problem.
4476 """
4477- pass
4478+ def __init__(self, msg, code=log.ErrorCode.backend_error):
4479+ super(BackendException, self).__init__(msg)
4480+ self.code = code
4481
4482-class FatalBackendError(DuplicityError):
4483+class FatalBackendException(BackendException):
4484 """
4485 Raised to indicate a backend failed fatally.
4486 """
4487
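The errors.py hunk above does two things: `BackendException` now carries an error code (defaulting to `log.ErrorCode.backend_error`), and the fatal variant is renamed and re-parented so it subclasses `BackendException` rather than `DuplicityError` directly, meaning generic `except BackendException` handlers also catch fatal failures. A self-contained sketch of the resulting hierarchy, with a hypothetical numeric constant standing in for `log.ErrorCode.backend_error`:

```python
# Hypothetical stand-in for duplicity.log.ErrorCode.backend_error.
BACKEND_ERROR = 50

class DuplicityError(Exception):
    pass

class BackendException(DuplicityError):
    """Raised to indicate a backend specific problem."""
    def __init__(self, msg, code=BACKEND_ERROR):
        super(BackendException, self).__init__(msg)
        self.code = code

class FatalBackendException(BackendException):
    """Raised to indicate a backend failed fatally."""

try:
    raise FatalBackendException("upload failed")
except BackendException as e:
    # The fatal variant is caught here too, and the code rides along.
    print(type(e).__name__, e.code)  # FatalBackendException 50
```

Letting callers attach a specific exit code to an exception, with a sensible default, avoids threading error codes through every backend call site.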
4488=== modified file 'duplicity/globals.py'
4489--- duplicity/globals.py 2014-04-17 21:49:37 +0000
4490+++ duplicity/globals.py 2014-04-28 02:49:55 +0000
4491@@ -284,3 +284,6 @@
4492
4493 # Level of Redundancy in % for Par2 files
4494 par2_redundancy = 10
4495+
4496+# Whether to enable gio backend
4497+use_gio = False
4498
4499=== modified file 'po/duplicity.pot'
4500--- po/duplicity.pot 2014-04-25 15:03:00 +0000
4501+++ po/duplicity.pot 2014-04-28 02:49:55 +0000
4502@@ -8,7 +8,7 @@
4503 msgstr ""
4504 "Project-Id-Version: PACKAGE VERSION\n"
4505 "Report-Msgid-Bugs-To: Kenneth Loafman <kenneth@loafman.com>\n"
4506-"POT-Creation-Date: 2014-04-21 11:04-0500\n"
4507+"POT-Creation-Date: 2014-04-27 22:39-0400\n"
4508 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
4509 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
4510 "Language-Team: LANGUAGE <LL@li.org>\n"
4511@@ -69,243 +69,243 @@
4512 "Continuing restart on file %s."
4513 msgstr ""
4514
4515-#: ../bin/duplicity:299
4516+#: ../bin/duplicity:297
4517 #, python-format
4518 msgid "File %s was corrupted during upload."
4519 msgstr ""
4520
4521-#: ../bin/duplicity:333
4522+#: ../bin/duplicity:331
4523 msgid ""
4524 "Restarting backup, but current encryption settings do not match original "
4525 "settings"
4526 msgstr ""
4527
4528-#: ../bin/duplicity:356
4529+#: ../bin/duplicity:354
4530 #, python-format
4531 msgid "Restarting after volume %s, file %s, block %s"
4532 msgstr ""
4533
4534-#: ../bin/duplicity:423
4535+#: ../bin/duplicity:421
4536 #, python-format
4537 msgid "Processed volume %d"
4538 msgstr ""
4539
4540-#: ../bin/duplicity:572
4541+#: ../bin/duplicity:570
4542 msgid ""
4543 "Fatal Error: Unable to start incremental backup. Old signatures not found "
4544 "and incremental specified"
4545 msgstr ""
4546
4547-#: ../bin/duplicity:576
4548+#: ../bin/duplicity:574
4549 msgid "No signatures found, switching to full backup."
4550 msgstr ""
4551
4552-#: ../bin/duplicity:590
4553+#: ../bin/duplicity:588
4554 msgid "Backup Statistics"
4555 msgstr ""
4556
4557-#: ../bin/duplicity:695
4558+#: ../bin/duplicity:693
4559 #, python-format
4560 msgid "%s not found in archive, no files restored."
4561 msgstr ""
4562
4563-#: ../bin/duplicity:699
4564+#: ../bin/duplicity:697
4565 msgid "No files found in archive - nothing restored."
4566 msgstr ""
4567
4568-#: ../bin/duplicity:732
4569+#: ../bin/duplicity:730
4570 #, python-format
4571 msgid "Processed volume %d of %d"
4572 msgstr ""
4573
4574+#: ../bin/duplicity:764
4575+#, python-format
4576+msgid "Invalid data - %s hash mismatch for file:"
4577+msgstr ""
4578+
4579 #: ../bin/duplicity:766
4580 #, python-format
4581-msgid "Invalid data - %s hash mismatch for file:"
4582-msgstr ""
4583-
4584-#: ../bin/duplicity:768
4585-#, python-format
4586 msgid "Calculated hash: %s"
4587 msgstr ""
4588
4589-#: ../bin/duplicity:769
4590+#: ../bin/duplicity:767
4591 #, python-format
4592 msgid "Manifest hash: %s"
4593 msgstr ""
4594
4595-#: ../bin/duplicity:807
4596+#: ../bin/duplicity:805
4597 #, python-format
4598 msgid "Volume was signed by key %s, not %s"
4599 msgstr ""
4600
4601-#: ../bin/duplicity:837
4602+#: ../bin/duplicity:835
4603 #, python-format
4604 msgid "Verify complete: %s, %s."
4605 msgstr ""
4606
4607-#: ../bin/duplicity:838
4608+#: ../bin/duplicity:836
4609 #, python-format
4610 msgid "%d file compared"
4611 msgid_plural "%d files compared"
4612 msgstr[0] ""
4613 msgstr[1] ""
4614
4615-#: ../bin/duplicity:840
4616+#: ../bin/duplicity:838
4617 #, python-format
4618 msgid "%d difference found"
4619 msgid_plural "%d differences found"
4620 msgstr[0] ""
4621 msgstr[1] ""
4622
4623-#: ../bin/duplicity:859
4624+#: ../bin/duplicity:857
4625 msgid "No extraneous files found, nothing deleted in cleanup."
4626 msgstr ""
4627
4628-#: ../bin/duplicity:864
4629+#: ../bin/duplicity:862
4630 msgid "Deleting this file from backend:"
4631 msgid_plural "Deleting these files from backend:"
4632 msgstr[0] ""
4633 msgstr[1] ""
4634
4635-#: ../bin/duplicity:876
4636+#: ../bin/duplicity:874
4637 msgid "Found the following file to delete:"
4638 msgid_plural "Found the following files to delete:"
4639 msgstr[0] ""
4640 msgstr[1] ""
4641
4642-#: ../bin/duplicity:880
4643+#: ../bin/duplicity:878
4644 msgid "Run duplicity again with the --force option to actually delete."
4645 msgstr ""
4646
4647+#: ../bin/duplicity:921
4648+msgid "There are backup set(s) at time(s):"
4649+msgstr ""
4650+
4651 #: ../bin/duplicity:923
4652-msgid "There are backup set(s) at time(s):"
4653-msgstr ""
4654-
4655-#: ../bin/duplicity:925
4656 msgid "Which can't be deleted because newer sets depend on them."
4657 msgstr ""
4658
4659-#: ../bin/duplicity:929
4660+#: ../bin/duplicity:927
4661 msgid ""
4662 "Current active backup chain is older than specified time. However, it will "
4663 "not be deleted. To remove all your backups, manually purge the repository."
4664 msgstr ""
4665
4666-#: ../bin/duplicity:935
4667+#: ../bin/duplicity:933
4668 msgid "No old backup sets found, nothing deleted."
4669 msgstr ""
4670
4671-#: ../bin/duplicity:938
4672+#: ../bin/duplicity:936
4673 msgid "Deleting backup chain at time:"
4674 msgid_plural "Deleting backup chains at times:"
4675 msgstr[0] ""
4676 msgstr[1] ""
4677
4678+#: ../bin/duplicity:947
4679+#, python-format
4680+msgid "Deleting incremental signature chain %s"
4681+msgstr ""
4682+
4683 #: ../bin/duplicity:949
4684 #, python-format
4685-msgid "Deleting incremental signature chain %s"
4686-msgstr ""
4687-
4688-#: ../bin/duplicity:951
4689-#, python-format
4690 msgid "Deleting incremental backup chain %s"
4691 msgstr ""
4692
4693+#: ../bin/duplicity:952
4694+#, python-format
4695+msgid "Deleting complete signature chain %s"
4696+msgstr ""
4697+
4698 #: ../bin/duplicity:954
4699 #, python-format
4700-msgid "Deleting complete signature chain %s"
4701-msgstr ""
4702-
4703-#: ../bin/duplicity:956
4704-#, python-format
4705 msgid "Deleting complete backup chain %s"
4706 msgstr ""
4707
4708-#: ../bin/duplicity:962
4709+#: ../bin/duplicity:960
4710 msgid "Found old backup chain at the following time:"
4711 msgid_plural "Found old backup chains at the following times:"
4712 msgstr[0] ""
4713 msgstr[1] ""
4714
4715-#: ../bin/duplicity:966
4716+#: ../bin/duplicity:964
4717 msgid "Rerun command with --force option to actually delete."
4718 msgstr ""
4719
4720-#: ../bin/duplicity:1043
4721+#: ../bin/duplicity:1041
4722 #, python-format
4723 msgid "Deleting local %s (not authoritative at backend)."
4724 msgstr ""
4725
4726-#: ../bin/duplicity:1047
4727+#: ../bin/duplicity:1045
4728 #, python-format
4729 msgid "Unable to delete %s: %s"
4730 msgstr ""
4731
4732-#: ../bin/duplicity:1075 ../duplicity/dup_temp.py:263
4733+#: ../bin/duplicity:1073 ../duplicity/dup_temp.py:263
4734 #, python-format
4735 msgid "Failed to read %s: %s"
4736 msgstr ""
4737
4738-#: ../bin/duplicity:1089
4739+#: ../bin/duplicity:1087
4740 #, python-format
4741 msgid "Copying %s to local cache."
4742 msgstr ""
4743
4744-#: ../bin/duplicity:1137
4745+#: ../bin/duplicity:1135
4746 msgid "Local and Remote metadata are synchronized, no sync needed."
4747 msgstr ""
4748
4749-#: ../bin/duplicity:1142
4750+#: ../bin/duplicity:1140
4751 msgid "Synchronizing remote metadata to local cache..."
4752 msgstr ""
4753
4754-#: ../bin/duplicity:1157
4755+#: ../bin/duplicity:1155
4756 msgid "Sync would copy the following from remote to local:"
4757 msgstr ""
4758
4759-#: ../bin/duplicity:1160
4760+#: ../bin/duplicity:1158
4761 msgid "Sync would remove the following spurious local files:"
4762 msgstr ""
4763
4764-#: ../bin/duplicity:1203
4765+#: ../bin/duplicity:1201
4766 msgid "Unable to get free space on temp."
4767 msgstr ""
4768
4769-#: ../bin/duplicity:1211
4770+#: ../bin/duplicity:1209
4771 #, python-format
4772 msgid "Temp space has %d available, backup needs approx %d."
4773 msgstr ""
4774
4775-#: ../bin/duplicity:1214
4776+#: ../bin/duplicity:1212
4777 #, python-format
4778 msgid "Temp has %d available, backup will use approx %d."
4779 msgstr ""
4780
4781-#: ../bin/duplicity:1222
4782+#: ../bin/duplicity:1220
4783 msgid "Unable to get max open files."
4784 msgstr ""
4785
4786-#: ../bin/duplicity:1226
4787+#: ../bin/duplicity:1224
4788 #, python-format
4789 msgid ""
4790 "Max open files of %s is too low, should be >= 1024.\n"
4791 "Use 'ulimit -n 1024' or higher to correct.\n"
4792 msgstr ""
4793
4794-#: ../bin/duplicity:1275
4795+#: ../bin/duplicity:1273
4796 msgid ""
4797 "RESTART: The first volume failed to upload before termination.\n"
4798 " Restart is impossible...starting backup from beginning."
4799 msgstr ""
4800
4801-#: ../bin/duplicity:1281
4802+#: ../bin/duplicity:1279
4803 #, python-format
4804 msgid ""
4805 "RESTART: Volumes %d to %d failed to upload before termination.\n"
4806 " Restarting backup at volume %d."
4807 msgstr ""
4808
4809-#: ../bin/duplicity:1288
4810+#: ../bin/duplicity:1286
4811 #, python-format
4812 msgid ""
4813 "RESTART: Impossible backup state: manifest has %d vols, remote has %d vols.\n"
4814@@ -314,7 +314,7 @@
4815 " backup then restart the backup from the beginning."
4816 msgstr ""
4817
4818-#: ../bin/duplicity:1310
4819+#: ../bin/duplicity:1308
4820 msgid ""
4821 "\n"
4822 "PYTHONOPTIMIZE in the environment causes duplicity to fail to\n"
4823@@ -324,54 +324,54 @@
4824 "See https://bugs.launchpad.net/duplicity/+bug/931175\n"
4825 msgstr ""
4826
4827-#: ../bin/duplicity:1401
4828+#: ../bin/duplicity:1399
4829 #, python-format
4830 msgid "Last %s backup left a partial set, restarting."
4831 msgstr ""
4832
4833-#: ../bin/duplicity:1405
4834+#: ../bin/duplicity:1403
4835 #, python-format
4836 msgid "Cleaning up previous partial %s backup set, restarting."
4837 msgstr ""
4838
4839+#: ../bin/duplicity:1414
4840+msgid "Last full backup date:"
4841+msgstr ""
4842+
4843 #: ../bin/duplicity:1416
4844-msgid "Last full backup date:"
4845+msgid "Last full backup date: none"
4846 msgstr ""
4847
4848 #: ../bin/duplicity:1418
4849-msgid "Last full backup date: none"
4850-msgstr ""
4851-
4852-#: ../bin/duplicity:1420
4853 msgid "Last full backup is too old, forcing full backup"
4854 msgstr ""
4855
4856-#: ../bin/duplicity:1463
4857+#: ../bin/duplicity:1461
4858 msgid ""
4859 "When using symmetric encryption, the signing passphrase must equal the "
4860 "encryption passphrase."
4861 msgstr ""
4862
4863-#: ../bin/duplicity:1516
4864+#: ../bin/duplicity:1514
4865 msgid "INT intercepted...exiting."
4866 msgstr ""
4867
4868-#: ../bin/duplicity:1524
4869+#: ../bin/duplicity:1522
4870 #, python-format
4871 msgid "GPG error detail: %s"
4872 msgstr ""
4873
4874-#: ../bin/duplicity:1534
4875+#: ../bin/duplicity:1532
4876 #, python-format
4877 msgid "User error detail: %s"
4878 msgstr ""
4879
4880-#: ../bin/duplicity:1544
4881+#: ../bin/duplicity:1542
4882 #, python-format
4883 msgid "Backend error detail: %s"
4884 msgstr ""
4885
4886-#: ../bin/rdiffdir:56 ../duplicity/commandline.py:238
4887+#: ../bin/rdiffdir:56 ../duplicity/commandline.py:233
4888 #, python-format
4889 msgid "Error opening file %s"
4890 msgstr ""
4891@@ -381,33 +381,33 @@
4892 msgid "File %s already exists, will not overwrite."
4893 msgstr ""
4894
4895-#: ../duplicity/selection.py:119
4896+#: ../duplicity/selection.py:121
4897 #, python-format
4898 msgid "Skipping socket %s"
4899 msgstr ""
4900
4901-#: ../duplicity/selection.py:123
4902+#: ../duplicity/selection.py:125
4903 #, python-format
4904 msgid "Error initializing file %s"
4905 msgstr ""
4906
4907-#: ../duplicity/selection.py:127 ../duplicity/selection.py:148
4908+#: ../duplicity/selection.py:129 ../duplicity/selection.py:150
4909 #, python-format
4910 msgid "Error accessing possibly locked file %s"
4911 msgstr ""
4912
4913-#: ../duplicity/selection.py:163
4914+#: ../duplicity/selection.py:165
4915 #, python-format
4916 msgid "Warning: base %s doesn't exist, continuing"
4917 msgstr ""
4918
4919-#: ../duplicity/selection.py:166 ../duplicity/selection.py:184
4920-#: ../duplicity/selection.py:187
4921+#: ../duplicity/selection.py:168 ../duplicity/selection.py:186
4922+#: ../duplicity/selection.py:189
4923 #, python-format
4924 msgid "Selecting %s"
4925 msgstr ""
4926
4927-#: ../duplicity/selection.py:268
4928+#: ../duplicity/selection.py:270
4929 #, python-format
4930 msgid ""
4931 "Fatal Error: The file specification\n"
4932@@ -418,14 +418,14 @@
4933 "pattern (such as '**') which matches the base directory."
4934 msgstr ""
4935
4936-#: ../duplicity/selection.py:276
4937+#: ../duplicity/selection.py:278
4938 #, python-format
4939 msgid ""
4940 "Fatal Error while processing expression\n"
4941 "%s"
4942 msgstr ""
4943
4944-#: ../duplicity/selection.py:286
4945+#: ../duplicity/selection.py:288
4946 #, python-format
4947 msgid ""
4948 "Last selection expression:\n"
4949@@ -435,49 +435,49 @@
4950 "probably isn't what you meant."
4951 msgstr ""
4952
4953-#: ../duplicity/selection.py:311
4954+#: ../duplicity/selection.py:313
4955 #, python-format
4956 msgid "Reading filelist %s"
4957 msgstr ""
4958
4959-#: ../duplicity/selection.py:314
4960+#: ../duplicity/selection.py:316
4961 #, python-format
4962 msgid "Sorting filelist %s"
4963 msgstr ""
4964
4965-#: ../duplicity/selection.py:341
4966+#: ../duplicity/selection.py:343
4967 #, python-format
4968 msgid ""
4969 "Warning: file specification '%s' in filelist %s\n"
4970 "doesn't start with correct prefix %s. Ignoring."
4971 msgstr ""
4972
4973-#: ../duplicity/selection.py:345
4974+#: ../duplicity/selection.py:347
4975 msgid "Future prefix errors will not be logged."
4976 msgstr ""
4977
4978-#: ../duplicity/selection.py:361
4979+#: ../duplicity/selection.py:363
4980 #, python-format
4981 msgid "Error closing filelist %s"
4982 msgstr ""
4983
4984-#: ../duplicity/selection.py:428
4985+#: ../duplicity/selection.py:430
4986 #, python-format
4987 msgid "Reading globbing filelist %s"
4988 msgstr ""
4989
4990-#: ../duplicity/selection.py:461
4991+#: ../duplicity/selection.py:463
4992 #, python-format
4993 msgid "Error compiling regular expression %s"
4994 msgstr ""
4995
4996-#: ../duplicity/selection.py:477
4997+#: ../duplicity/selection.py:479
4998 msgid ""
4999 "Warning: exclude-device-files is not the first selector.\n"
5000 "This may not be what you intended"
The diff has been truncated for viewing.
