Merge lp:~mterry/duplicity/backend-unification into lp:duplicity/0.6

Proposed by Michael Terry
Status: Merged
Merged at revision: 981
Proposed branch: lp:~mterry/duplicity/backend-unification
Merge into: lp:duplicity/0.6
Diff against target: 7230 lines (+2212/-2635)
43 files modified
bin/duplicity (+0/-2)
duplicity/backend.py (+293/-279)
duplicity/backends/README (+79/-0)
duplicity/backends/_boto_multi.py (+2/-2)
duplicity/backends/_boto_single.py (+77/-186)
duplicity/backends/_cf_cloudfiles.py (+33/-124)
duplicity/backends/_cf_pyrax.py (+34/-124)
duplicity/backends/_ssh_paramiko.py (+84/-156)
duplicity/backends/_ssh_pexpect.py (+131/-168)
duplicity/backends/botobackend.py (+6/-8)
duplicity/backends/cfbackend.py (+5/-2)
duplicity/backends/dpbxbackend.py (+17/-32)
duplicity/backends/ftpbackend.py (+11/-20)
duplicity/backends/ftpsbackend.py (+10/-24)
duplicity/backends/gdocsbackend.py (+84/-143)
duplicity/backends/giobackend.py (+79/-114)
duplicity/backends/hsibackend.py (+8/-22)
duplicity/backends/imapbackend.py (+46/-50)
duplicity/backends/localbackend.py (+30/-86)
duplicity/backends/megabackend.py (+48/-98)
duplicity/backends/par2backend.py (+63/-79)
duplicity/backends/rsyncbackend.py (+13/-27)
duplicity/backends/sshbackend.py (+7/-2)
duplicity/backends/swiftbackend.py (+25/-119)
duplicity/backends/tahoebackend.py (+11/-22)
duplicity/backends/webdavbackend.py (+46/-60)
duplicity/commandline.py (+5/-10)
duplicity/errors.py (+6/-2)
duplicity/globals.py (+3/-0)
po/duplicity.pot (+270/-293)
testing/__init__.py (+5/-1)
testing/functional/__init__.py (+5/-3)
testing/functional/test_badupload.py (+1/-1)
testing/manual/backendtest.py (+208/-290)
testing/manual/config.py.tmpl (+18/-86)
testing/overrides/bin/hsi (+16/-0)
testing/overrides/bin/lftp (+23/-0)
testing/overrides/bin/ncftpget (+6/-0)
testing/overrides/bin/ncftpls (+19/-0)
testing/overrides/bin/ncftpput (+6/-0)
testing/overrides/bin/tahoe (+10/-0)
testing/unit/test_backend.py (+128/-0)
testing/unit/test_backend_instance.py (+241/-0)
To merge this branch: bzr merge lp:~mterry/duplicity/backend-unification
Reviewer: duplicity-team (status: Pending)
Review via email: mp+216764@code.launchpad.net

Description of the change

Reorganize and simplify backend code. Specifically:

- Formalize the expected API between backends and duplicity. See the new file duplicity/backends/README for the instructions I've given authors.

- Add some tests for our backend wrapper class as well as some tests for individual backends. For several backends that have some commands do all the heavy lifting (hsi, tahoe, ftp), I've added fake little mock commands so that we can test them locally. This doesn't truly test our integration with those commands, but at least lets us test the backend glue code itself.
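
The override trick can be sketched like this (a standalone illustration, not duplicity's actual test harness; the fake 'tahoe' script and its output are invented):

```python
import os
import stat
import subprocess
import tempfile

# Create a fake 'tahoe' command, mimicking the testing/overrides/bin scripts:
# it just echoes its arguments so the glue code can be exercised locally.
override_dir = tempfile.mkdtemp()
fake = os.path.join(override_dir, "tahoe")
with open(fake, "w") as f:
    f.write('#!/bin/sh\necho fake-tahoe "$@"\n')
os.chmod(fake, os.stat(fake).st_mode | stat.S_IXUSR)

# Prepend the override dir to PATH so the backend's subprocess calls
# resolve to the fake command instead of the real tool.
env = dict(os.environ, PATH=override_dir + os.pathsep + os.environ.get("PATH", ""))
out = subprocess.check_output(["tahoe", "ls", "root:"], env=env).decode()
print(out.strip())  # fake-tahoe ls root:
```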

- Removed a lot of duplicate and unused code which backends were using (or not using). This branch drops 700 lines of code (~20%) in duplicity/backends!

- Simplified expectations of backends. Our wrapper code now does all the retrying, and all the exception handling. Backends can 'fire and forget' trusting our wrappers to give the user a reasonable error message. Obviously, backends can also add more details and make nicer error messages. But they don't *have* to.
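
A minimal sketch of that division of labor (simplified; the real wrapper in duplicity/backend.py also maps exceptions to error codes, logs backtraces, and waits between attempts):

```python
import time

NUM_RETRIES = 3  # stands in for globals.num_retries

class BackendException(Exception):
    """Raised by the wrapper once all retries are exhausted."""

def with_retries(fn, *args):
    # The wrapper owns retrying: backends just raise on failure
    # ("fire and forget") and trust this loop to handle it.
    for n in range(1, NUM_RETRIES + 1):
        try:
            return fn(*args)
        except Exception as e:
            if n == NUM_RETRIES:
                raise BackendException("giving up after %d attempts: %s" % (n, e))
            time.sleep(0)  # the real code sleeps 10s (30s on load errors)

attempts = []
def flaky_put(filename):
    # A backend operation that fails twice, then succeeds.
    attempts.append(filename)
    if len(attempts) < 3:
        raise IOError("transient network error")
    return "ok"

result = with_retries(flaky_put, "vol1.difftar.gpg")
print(result, len(attempts))  # ok 3
```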

- Separate out the backend classes from our wrapper class. Now there is no possibility of namespace collision. All our API methods use one underscore. Anything else (zero or two underscores) is for the backend class's own use.
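
In rough terms (a sketch of the naming convention only; everything beyond the _put/_list names is illustrative):

```python
class Backend(object):
    """Stand-in for duplicity.backend.Backend."""

class MyBackend(Backend):
    # Single-underscore methods form the API the wrapper class calls.
    def _put(self, source_path, remote_filename):
        self.__upload(source_path, remote_filename)

    def _list(self):
        return ["duplicity-full.vol1.difftar.gpg"]

    # Zero- or double-underscore names belong to the backend itself;
    # the wrapper never calls them, so collisions are impossible.
    def __upload(self, source_path, remote_filename):
        pass  # backend-specific transfer logic would go here

b = MyBackend()
print(b._list())
```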

- Added the concept of a 'backend prefix', which is used by the par2 and gio backends to provide generic support for "scheme+" in URLs -- like par2+ or gio+. I've since marked the '--gio' flag as deprecated, in favor of 'gio+'. Now you can even nest such backends, like par2+gio+file://blah/blah.
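
The prefix lookup boils down to peeling one 'scheme+' off the front of the URL at a time; a sketch (the factory values here are placeholder strings rather than real backend classes):

```python
_backend_prefixes = {}

def register_backend_prefix(prefix, factory):
    _backend_prefixes[prefix] = factory

def resolve(url):
    # Peel a single 'prefix+' off the URL if a prefix backend claims it;
    # the remainder is handed to that backend, which may resolve it again,
    # which is what makes nesting like par2+gio+file:// work.
    for prefix, factory in _backend_prefixes.items():
        if url.startswith(prefix + "+"):
            return factory, url[len(prefix) + 1:]
    return None, url

register_backend_prefix("par2", "Par2WrapperBackend")
register_backend_prefix("gio", "GIOBackend")

factory, rest = resolve("par2+gio+file:///backups")
print(factory, rest)    # Par2WrapperBackend gio+file:///backups
factory2, rest2 = resolve(rest)
print(factory2, rest2)  # GIOBackend file:///backups
```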

- The switch that controls which cloudfiles backend is used had a typo. I fixed it, but I'm not sure I should have? If we haven't had complaints, maybe we can just drop the old backend.

- I manually tested all the backends we have (except hsi and tahoe -- but those are simple wrappers around commands, and I did test those via mocks as described above). I also added a bunch more manual backend tests to ./testing/manual/backendtest.py, which can now be run as before to test all the backends you have configured in config.py, or you can pass it a URL which it will use for testing (useful for backend authors).

Revision history for this message
edso (ed.so) wrote :

A+ for effort ;) .. we should probably make that a 0.7 release, stating that there might be some hiccups (bugs) due to the major code revision.
two more comments below..

On 28.04.2014 04:55, Michael Terry wrote:
SNIP
>
>
> === modified file 'bin/duplicity'
> --- bin/duplicity 2014-04-19 19:54:54 +0000
> +++ bin/duplicity 2014-04-28 02:49:55 +0000
> @@ -289,8 +289,6 @@
>
> def validate_block(orig_size, dest_filename):
> info = backend.query_info([dest_filename])[dest_filename]
> - if 'size' not in info:
> - return # backend didn't know how to query size

this could raise a key-not-found error. can you really guarantee the key will be there?

> size = info['size']
> if size is None:
> return # error querying file
>
> === modified file 'duplicity/backend.py'
> --- duplicity/backend.py 2014-04-25 23:53:46 +0000
> +++ duplicity/backend.py 2014-04-28 02:49:55 +0000
> @@ -24,6 +24,7 @@
SNIP
>
> # These URL schemes have a backend with a notion of an RFC "network location".
> # The 'file' and 's3+http' schemes should not be in this list.
> @@ -69,7 +74,6 @@
> uses_netloc = ['ftp',
> 'ftps',
> 'hsi',
> - 'rsync',
> 's3',
> 'scp', 'ssh', 'sftp',
> 'webdav', 'webdavs',

any reason why you removed rsync here?

..ede

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

ede, I don't see a problem with a missing key.

As to 0.7 -- I'm thinking that some of the latest changes, not bug fixes,
should be removed from 0.6 and applied to 0.7. We could make the next
release, 0.6.24, the last of the 0.6 series. We probably need to start an
email thread to discuss this, but I think it's time.


Revision history for this message
edso (ed.so) wrote :

On 28.04.2014 13:21, Kenneth Loafman wrote:
> ede, I don't see a problem with a missing key.

        size = info['size']
will throw an error if the dictionary is missing the key. we had that earlier. but MT will probably know if he hardened the info routines enough to guarantee its existence.

> As to 0.7 -- I'm thinking that some of the latest changes, not bug fixes,
> should be removed from 0.6 and applied to 0.7. We could make the next
> release, 0.6.24, the last of the 0.6 series. We probably need to start an
> email thread to discuss this, but I think it's time.

that's an alternative route to go, although i am not aware of major bugs since 0.6.23 ..ede


Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

Sorry, was looking above the comment, not below. Not enough coffee!

There are a number of bugs/additions in the current CHANGELOG slated for 0.6.24. Looking at them again, this change is the trigger for 0.7 startup.

Mike, I really like tox. It's really easy to merge changes and retest.

Revision history for this message
Michael Terry (mterry) wrote :

> this could raise a key not found error. can you really guaratee the key will be there?

Yes, because now query_info calls the backend query function, then makes sure that 'size' exists for each filename passed. This was part of the "allow backends to be dumb" strategy -- if they skip a filename or don't provide 'size', the rest of duplicity doesn't have to care. Guarantees come from the wrapper class, not the backends.
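
That guarantee could look roughly like this in the wrapper (a sketch; the method name _query and the exact fallback shape are assumptions, not duplicity's verbatim code):

```python
def safe_query_info(backend, filename_list):
    # Normalize whatever the backend returns: every requested filename
    # ends up present with a 'size' key ({'size': None} means "unknown"),
    # so callers like validate_block never hit a KeyError.
    try:
        info = backend._query(filename_list) or {}
    except Exception:
        info = {}
    for filename in filename_list:
        if filename not in info or 'size' not in info[filename]:
            info[filename] = {'size': None}
    return info

class DumbBackend:
    def _query(self, names):
        # Forgets 'b.gpg' entirely -- and that still must not break callers.
        return {'a.gpg': {'size': 42}}

info = safe_query_info(DumbBackend(), ['a.gpg', 'b.gpg'])
print(info['a.gpg']['size'], info['b.gpg']['size'])  # 42 None
```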

> any reason why you removed rsync here?

Yeah, I changed the rsync backend to support relative/local urls. This was so we could test it as part of our normal automated tests (without needing a server). The list I removed 'rsync' from was a list of backends that require a hostname.

Revision history for this message
edso (ed.so) wrote :

well done.. thx ede

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

Either of you see a show-stopper to keep 0.6.24 from going out as-is? We
have an impressive list of changes already.

I've resurrected the old 0.7 series, merged in the current 0.6, and will
merge this set of changes in later. Does everyone agree that this is a
good place to make the switch to 0.7? Do we want to make 0.7 the current
focus of development after the release of 0.6.24?


Revision history for this message
Michael Terry (mterry) wrote :

Well, two separate decisions: release 0.6.24 or not and whether we want to go to 0.7.

But I don't have a reason to wait on releasing 0.6.24.

And as for 0.7, I'm fine with that too. Just a number. :)

But this branch itself doesn't feel like a very momentous change. By which I mean, end users shouldn't care about this at all. It doesn't fix any big bugs (except maybe the --cf-backend argument) and doesn't add any new features.

The bump to require Python 2.6 feels like a more natural jumping-off point, but as edso noted, 0.6.23 already has 2.6isms in it. So not really a change there either.

So... :shrug:

Revision history for this message
edso (ed.so) wrote :

main concern for me is to have a "stable" version, to point users to if they stumble over a showstopper after Mike's reworks. that can be 0.6.24
i wonder if it'd make sense to also exclude all the other modifications Mike made in his current run (since drop-pexpect), as they may also contain side effects that are better kept in 0.7dev.

considering the major backend differences of Mike's latest commit i agree to make a 0.7 the new devel focus.

..ede


Revision history for this message
edso (ed.so) wrote :

On 28.04.2014 16:53, Michael Terry wrote:
> Well, two separate decisions: release 0.6.24 or not and whether we want to go to 0.7.
>
> But I don't have a reason to wait on releasing 0.6.24.
>
> And as for 0.7, I'm fine with that too. Just a number. :)
>
> But this branch itself feels like not a very momentous change. By which I mean, end users shouldn't care about this at all. Doesn't fix any big bugs (except maybe the --cf-backend argument) and doesn't add any new features.
>

it's just the sheer amount of modified code that worries me a bit. as i wrote in the other email

also,
1. old backend code e.g. released by somebody privately will not work anymore when dropped into duplicity/backends
2. true, officially switching to 2.6 might be another point, somebody might want to remove the 2.6'isms from the 0.6 branch if they really need duplicity on some oldold distro. supposing we also keep the 2.6 changes only in 0.7 branch.

..ede

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

Hmmm, this is getting complicated, but I see your points.

We can test against 2.4 and 2.5 using the answers here:
http://askubuntu.com/questions/125342/how-can-i-install-python-2-6-on-12-04
(the deadsnakes repositories, love the name). So, if we wanted, we could
move the modernizations over to 0.7 and fix 0.6.24 to work on older Pythons.

That seems like a cleaner and more rational break to me. Thoughts?


Revision history for this message
edso (ed.so) wrote :

exactly my point, from a stability standpoint though :) ..ede


Revision history for this message
Michael Terry (mterry) wrote :

> it's just the sheer amount of modified code that worries me a bit.
> as i wrote in the other email

Fair. I tried to thoroughly test, but I'm human. :)

> 1. old backend code e.g. released by somebody privately will not work
> anymore when dropped into duplicity/backends

This is a good point. I'm curious how many people do that; I can easily imagine it happening.

As for your comments, Ken:

> So, if we wanted, we could move the modernizations over to 0.7 and
> fix 0.6.24 to work on older Pythons.

As much as I hate giving any thoughts to py2.4/2.5, this is probably a nice thing to do. We clearly have interested users still on RHEL5, from my last survey on the mailing list.

If I have time in the next week, I can do this. Else, if you tackle this, note that you shouldn't do the testing on a recently-released distro. Because recent versions of tox/setuptools/virtualenv all dropped support for <2.6. Maybe try Ubuntu 12.04? I haven't tried an older distro myself yet, but ideally you'd be able to add "py24,py25" to the tox.ini file and just run tox to confirm that we work with older pythons. If that doesn't work, you'd have to go to an earlier release of your distro of choice.
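
The tox.ini change described above would look roughly like this (a sketch; the env list and test command are illustrative, not the project's actual file):

```ini
# tox.ini (fragment) -- add old interpreters to the default run
[tox]
envlist = py24,py25,py26,py27

[testenv]
commands = python ./testing/run-tests {posargs}
```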

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

Luckily, I'm still on 12.04. I'll test and see if 2.4/2.5 work.


Preview Diff

=== modified file 'bin/duplicity'
--- bin/duplicity 2014-04-19 19:54:54 +0000
+++ bin/duplicity 2014-04-28 02:49:55 +0000
@@ -289,8 +289,6 @@
289289
290 def validate_block(orig_size, dest_filename):290 def validate_block(orig_size, dest_filename):
291 info = backend.query_info([dest_filename])[dest_filename]291 info = backend.query_info([dest_filename])[dest_filename]
292 if 'size' not in info:
293 return # backend didn't know how to query size
294 size = info['size']292 size = info['size']
295 if size is None:293 if size is None:
296 return # error querying file294 return # error querying file
297295
=== modified file 'duplicity/backend.py'
--- duplicity/backend.py 2014-04-25 23:53:46 +0000
+++ duplicity/backend.py 2014-04-28 02:49:55 +0000
@@ -24,6 +24,7 @@
24intended to be used by the backends themselves.24intended to be used by the backends themselves.
25"""25"""
2626
27import errno
27import os28import os
28import sys29import sys
29import socket30import socket
@@ -31,6 +32,7 @@
31import re32import re
32import getpass33import getpass
33import gettext34import gettext
35import types
34import urllib36import urllib
35import urlparse37import urlparse
3638
@@ -38,11 +40,14 @@
38from duplicity import file_naming40from duplicity import file_naming
39from duplicity import globals41from duplicity import globals
40from duplicity import log42from duplicity import log
43from duplicity import path
41from duplicity import progress44from duplicity import progress
45from duplicity import util
4246
43from duplicity.util import exception_traceback47from duplicity.util import exception_traceback
4448
45from duplicity.errors import BackendException, FatalBackendError49from duplicity.errors import BackendException
50from duplicity.errors import FatalBackendException
46from duplicity.errors import TemporaryLoadException51from duplicity.errors import TemporaryLoadException
47from duplicity.errors import ConflictingScheme52from duplicity.errors import ConflictingScheme
48from duplicity.errors import InvalidBackendURL53from duplicity.errors import InvalidBackendURL
@@ -54,8 +59,8 @@
54# todo: this should really NOT be done here59# todo: this should really NOT be done here
55socket.setdefaulttimeout(globals.timeout)60socket.setdefaulttimeout(globals.timeout)
5661
57_forced_backend = None
58_backends = {}62_backends = {}
63_backend_prefixes = {}
5964
60# These URL schemes have a backend with a notion of an RFC "network location".65# These URL schemes have a backend with a notion of an RFC "network location".
61# The 'file' and 's3+http' schemes should not be in this list.66# The 'file' and 's3+http' schemes should not be in this list.
@@ -69,7 +74,6 @@
69uses_netloc = ['ftp',74uses_netloc = ['ftp',
70 'ftps',75 'ftps',
71 'hsi',76 'hsi',
72 'rsync',
73 's3',77 's3',
74 'scp', 'ssh', 'sftp',78 'scp', 'ssh', 'sftp',
75 'webdav', 'webdavs',79 'webdav', 'webdavs',
@@ -96,8 +100,6 @@
96 if fn.endswith("backend.py"):100 if fn.endswith("backend.py"):
97 fn = fn[:-3]101 fn = fn[:-3]
98 imp = "duplicity.backends.%s" % (fn,)102 imp = "duplicity.backends.%s" % (fn,)
99 # ignore gio as it is explicitly loaded in commandline.parse_cmdline_options()
100 if fn == "giobackend": continue
101 try:103 try:
102 __import__(imp)104 __import__(imp)
103 res = "Succeeded"105 res = "Succeeded"
@@ -110,14 +112,6 @@
110 continue112 continue
111113
112114
113def force_backend(backend):
114 """
115 Forces the use of a particular backend, regardless of schema
116 """
117 global _forced_backend
118 _forced_backend = backend
119
120
121def register_backend(scheme, backend_factory):115def register_backend(scheme, backend_factory):
122 """116 """
123 Register a given backend factory responsible for URL:s with the117 Register a given backend factory responsible for URL:s with the
@@ -144,6 +138,32 @@
144 _backends[scheme] = backend_factory138 _backends[scheme] = backend_factory
145139
146140
141def register_backend_prefix(scheme, backend_factory):
142 """
143 Register a given backend factory responsible for URL:s with the
144 given scheme prefix.
145
146 The backend must be a callable which, when called with a URL as
147 the single parameter, returns an object implementing the backend
148 protocol (i.e., a subclass of Backend).
149
150 Typically the callable will be the Backend subclass itself.
151
152 This function is not thread-safe and is intended to be called
153 during module importation or start-up.
154 """
155 global _backend_prefixes
156
157 assert callable(backend_factory), "backend factory must be callable"
158
159 if scheme in _backend_prefixes:
160 raise ConflictingScheme("the prefix %s already has a backend "
161 "associated with it"
162 "" % (scheme,))
163
164 _backend_prefixes[scheme] = backend_factory
165
166
147def is_backend_url(url_string):167def is_backend_url(url_string):
148 """168 """
149 @return Whether the given string looks like a backend URL.169 @return Whether the given string looks like a backend URL.
@@ -157,9 +177,9 @@
157 return False177 return False
158178
159179
160def get_backend(url_string):180def get_backend_object(url_string):
161 """181 """
162 Instantiate a backend suitable for the given URL, or return None182 Find the right backend class instance for the given URL, or return None
163 if the given string looks like a local path rather than a URL.183 if the given string looks like a local path rather than a URL.
164184
165 Raise InvalidBackendURL if the URL is not a valid URL.185 Raise InvalidBackendURL if the URL is not a valid URL.
@@ -167,22 +187,44 @@
167 if not is_backend_url(url_string):187 if not is_backend_url(url_string):
168 return None188 return None
169189
190 global _backends, _backend_prefixes
191
170 pu = ParsedUrl(url_string)192 pu = ParsedUrl(url_string)
171
172 # Implicit local path
173 assert pu.scheme, "should be a backend url according to is_backend_url"193 assert pu.scheme, "should be a backend url according to is_backend_url"
174194
175 global _backends, _forced_backend195 factory = None
176196
177 if _forced_backend:197 for prefix in _backend_prefixes:
178 return _forced_backend(pu)198 if url_string.startswith(prefix + '+'):
179 elif not pu.scheme in _backends:199 factory = _backend_prefixes[prefix]
180 raise UnsupportedBackendScheme(url_string)200 pu = ParsedUrl(url_string.lstrip(prefix + '+'))
181 else:201 break
182 try:202
183 return _backends[pu.scheme](pu)203 if factory is None:
184 except ImportError:204 if not pu.scheme in _backends:
185 raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1]))205 raise UnsupportedBackendScheme(url_string)
206 else:
207 factory = _backends[pu.scheme]
208
209 try:
210 return factory(pu)
211 except ImportError:
212 raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1]))
213
214
215def get_backend(url_string):
216 """
217 Instantiate a backend suitable for the given URL, or return None
218 if the given string looks like a local path rather than a URL.
219
220 Raise InvalidBackendURL if the URL is not a valid URL.
221 """
222 if globals.use_gio:
223 url_string = 'gio+' + url_string
224 obj = get_backend_object(url_string)
225 if obj:
226 obj = BackendWrapper(obj)
227 return obj
186228
187229
188class ParsedUrl:230class ParsedUrl:
@@ -296,165 +338,74 @@
296 # Replace the full network location with the stripped copy.338 # Replace the full network location with the stripped copy.
297 return parsed_url.geturl().replace(parsed_url.netloc, straight_netloc, 1)339 return parsed_url.geturl().replace(parsed_url.netloc, straight_netloc, 1)
298340
299341def _get_code_from_exception(backend, operation, e):
300# Decorator for backend operation functions to simplify writing one that342 if isinstance(e, BackendException) and e.code != log.ErrorCode.backend_error:
301# retries. Make sure to add a keyword argument 'raise_errors' to your function343 return e.code
302# and if it is true, raise an exception on an error. If false, fatal-log it.344 elif hasattr(backend, '_error_code'):
303def retry(fn):345 return backend._error_code(operation, e) or log.ErrorCode.backend_error
304 def iterate(*args):346 elif hasattr(e, 'errno'):
305 for n in range(1, globals.num_retries):347 # A few backends return such errors (local, paramiko, etc)
306 try:348 if e.errno == errno.EACCES:
307 kwargs = {"raise_errors" : True}349 return log.ErrorCode.backend_permission_denied
308 return fn(*args, **kwargs)350 elif e.errno == errno.ENOENT:
309 except Exception as e:351 return log.ErrorCode.backend_not_found
310 log.Warn(_("Attempt %s failed: %s: %s")352 elif e.errno == errno.ENOSPC:
311 % (n, e.__class__.__name__, str(e)))353 return log.ErrorCode.backend_no_space
312 log.Debug(_("Backtrace of previous error: %s")354 return log.ErrorCode.backend_error
313 % exception_traceback())355
314 if isinstance(e, TemporaryLoadException):356def retry(operation, fatal=True):
315 time.sleep(30) # wait longer before trying again357 # Decorators with arguments introduce a new level of indirection. So we
316 else:358 # have to return a decorator function (which itself returns a function!)
317 time.sleep(10) # wait a bit before trying again359 def outer_retry(fn):
318 # Now try one last time, but fatal-log instead of raising errors360 def inner_retry(self, *args):
319 kwargs = {"raise_errors" : False}361 for n in range(1, globals.num_retries + 1):
320 return fn(*args, **kwargs)
321 return iterate
322
323# same as above, a bit dumber and always dies fatally if last trial fails
324# hence no need for the raise_errors var ;), we really catch everything here
325# as we don't know what the underlying code comes up with and we really *do*
326# want to retry globals.num_retries times under all circumstances
327def retry_fatal(fn):
328 def _retry_fatal(self, *args):
329 try:
330 n = 0
331 for n in range(1, globals.num_retries):
332 try:362 try:
333 self.retry_count = n
334 return fn(self, *args)363 return fn(self, *args)
335 except FatalBackendError as e:364 except FatalBackendException as e:
336 # die on fatal errors365 # die on fatal errors
337 raise e366 raise e
338 except Exception as e:367 except Exception as e:
339 # retry on anything else368 # retry on anything else
340 log.Warn(_("Attempt %s failed. %s: %s")
341 % (n, e.__class__.__name__, str(e)))
342 log.Debug(_("Backtrace of previous error: %s")369 log.Debug(_("Backtrace of previous error: %s")
343 % exception_traceback())370 % exception_traceback())
344 time.sleep(10) # wait a bit before trying again371 at_end = n == globals.num_retries
345 # final trial, die on exception372 code = _get_code_from_exception(self.backend, operation, e)
346 self.retry_count = n+1373 if code == log.ErrorCode.backend_not_found:
347 return fn(self, *args)374 # If we tried to do something, but the file just isn't there,
348 except Exception as e:375 # no need to retry.
349 log.Debug(_("Backtrace of previous error: %s")376 at_end = True
350 % exception_traceback())377 if at_end and fatal:
351 log.FatalError(_("Giving up after %s attempts. %s: %s")378 def make_filename(f):
352 % (self.retry_count, e.__class__.__name__, str(e)),379 if isinstance(f, path.ROPath):
353 log.ErrorCode.backend_error)380 return util.escape(f.name)
354 self.retry_count = 0381 else:
355382 return util.escape(f)
356 return _retry_fatal383 extra = ' '.join([operation] + [make_filename(x) for x in args if x])
384 log.FatalError(_("Giving up after %s attempts. %s: %s")
385 % (n, e.__class__.__name__,
386 str(e)), code=code, extra=extra)
387 else:
388 log.Warn(_("Attempt %s failed. %s: %s")
389 % (n, e.__class__.__name__, str(e)))
390 if not at_end:
391 if isinstance(e, TemporaryLoadException):
392 time.sleep(90) # wait longer before trying again
393 else:
394 time.sleep(30) # wait a bit before trying again
395 if hasattr(self.backend, '_retry_cleanup'):
396 self.backend._retry_cleanup()
397
398 return inner_retry
399 return outer_retry
400
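The two-level nesting that the comment describes ("decorators with arguments introduce a new level of indirection") can be shown with a stripped-down sketch. The `tries`/`delay` parameters and the final RuntimeError are illustrative only, not duplicity's actual retry behavior:

```python
import time

def retry(operation, tries=3, delay=0):
    # Decorator factory: because it takes arguments, there are two nested
    # levels -- outer_retry receives the function, inner_retry replaces it.
    def outer_retry(fn):
        def inner_retry(*args, **kwargs):
            last_error = None
            for n in range(1, tries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception as e:
                    last_error = e
                    if delay:
                        time.sleep(delay)  # back off before the next attempt
            raise RuntimeError("giving up on %s after %d attempts: %s"
                               % (operation, tries, last_error))
        return inner_retry
    return outer_retry

calls = []

@retry('get', tries=3)
def flaky_get():
    # Fails twice, then succeeds -- mimics a transient backend error.
    calls.append(1)
    if len(calls) < 3:
        raise IOError("transient failure")
    return "ok"
```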
357401
358class Backend(object):402class Backend(object):
359 """403 """
360 Represents a generic duplicity backend, capable of storing and404 See README in backends directory for information on how to write a backend.
361 retrieving files.
362
363 Concrete sub-classes are expected to implement:
364
365 - put
366 - get
367 - list
368 - delete
369 - close (if needed)
370
371 Optional:
372
373 - move
374 """405 """
375
376 def __init__(self, parsed_url):406 def __init__(self, parsed_url):
377 self.parsed_url = parsed_url407 self.parsed_url = parsed_url
378408
379 def put(self, source_path, remote_filename = None):
380 """
381 Transfer source_path (Path object) to remote_filename (string)
382
383 If remote_filename is None, get the filename from the last
384 path component of pathname.
385 """
386 raise NotImplementedError()
387
388 def move(self, source_path, remote_filename = None):
389 """
390 Move source_path (Path object) to remote_filename (string)
391
392 Same as put(), but unlinks source_path in the process. This allows the
393 local backend to do this more efficiently using rename.
394 """
395 self.put(source_path, remote_filename)
396 source_path.delete()
397
398 def get(self, remote_filename, local_path):
399 """Retrieve remote_filename and place in local_path"""
400 raise NotImplementedError()
401
402 def list(self):
403 """
404 Return list of filenames (byte strings) present in backend
405 """
406 def tobytes(filename):
407 "Convert a (maybe unicode) filename to bytes"
408 if isinstance(filename, unicode):
409 # There shouldn't be any encoding errors for files we care
410 # about, since duplicity filenames are ascii. But user files
411 # may be in the same directory. So just replace characters.
412 return filename.encode(sys.getfilesystemencoding(), 'replace')
413 else:
414 return filename
415
416 if hasattr(self, '_list'):
417 # Make sure that duplicity internals only ever see byte strings
418 # for filenames, no matter what the backend thinks it is talking.
419 return [tobytes(x) for x in self._list()]
420 else:
421 raise NotImplementedError()
422
423 def delete(self, filename_list):
424 """
425 Delete each filename in filename_list, in order if possible.
426 """
427 raise NotImplementedError()
428
429 # Should never cause FatalError.
430 # Returns a dictionary of dictionaries. The outer dictionary maps
431 # filenames to metadata dictionaries. Supported metadata are:
432 #
433 # 'size': if >= 0, size of file
434 # if -1, file is not found
435 # if None, error querying file
436 #
437 # Returned dictionary is guaranteed to contain a metadata dictionary for
438 # each filename, but not all metadata are guaranteed to be present.
439 def query_info(self, filename_list, raise_errors=True):
440 """
441 Return metadata about each filename in filename_list
442 """
443 info = {}
444 if hasattr(self, '_query_list_info'):
445 info = self._query_list_info(filename_list)
446 elif hasattr(self, '_query_file_info'):
447 for filename in filename_list:
448 info[filename] = self._query_file_info(filename)
449
450 # Fill out any missing entries (may happen if backend has no support
451 # or its query_list support is lazy)
452 for filename in filename_list:
453 if filename not in info:
454 info[filename] = {}
455
456 return info
457
458 """ use getpass by default, inherited backends may overwrite this behaviour """409 """ use getpass by default, inherited backends may overwrite this behaviour """
459 use_getpass = True410 use_getpass = True
460411
@@ -493,27 +444,7 @@
493 else:444 else:
494 return commandline445 return commandline
495446
496 """447 def __subprocess_popen(self, commandline):
497 DEPRECATED:
498 run_command(_persist) - legacy wrappers for subprocess_popen(_persist)
499 """
500 def run_command(self, commandline):
501 return self.subprocess_popen(commandline)
502 def run_command_persist(self, commandline):
503 return self.subprocess_popen_persist(commandline)
504
505 """
506 DEPRECATED:
507 popen(_persist) - legacy wrappers for subprocess_popen(_persist)
508 """
509 def popen(self, commandline):
510 result, stdout, stderr = self.subprocess_popen(commandline)
511 return stdout
512 def popen_persist(self, commandline):
513 result, stdout, stderr = self.subprocess_popen_persist(commandline)
514 return stdout
515
516 def _subprocess_popen(self, commandline):
517 """448 """
518 For internal use.449 For internal use.
519 Execute the given command line, interpreted as a shell command.450 Execute the given command line, interpreted as a shell command.
@@ -525,6 +456,10 @@
525456
526 return p.returncode, stdout, stderr457 return p.returncode, stdout, stderr
527458
459 """ a dictionary for breaking exceptions, syntax is
460 { 'command' : [ code1, code2 ], ... } see ftpbackend for an example """
461 popen_breaks = {}
462
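The `popen_breaks` lookup that `subprocess_popen` performs below (the first whitespace-delimited token of the command line keys into a table of ignorable exit codes) can be sketched standalone. `examplecli` and its exit codes are hypothetical:

```python
import re

# Hypothetical table: exit codes of `examplecli` that mean "already done"
# and should be treated as success rather than raised as errors.
popen_breaks = {'examplecli': [1, 2]}

def should_ignore(commandline, returncode):
    # Mirror the lookup: the first token of the command line is the key;
    # an exit code listed for that command is ignorable.
    m = re.search(r"^\s*(\S+)", commandline)
    cmd = m.group(1)
    try:
        return returncode in popen_breaks[cmd]
    except KeyError:
        return False
```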
528 def subprocess_popen(self, commandline):463 def subprocess_popen(self, commandline):
529 """464 """
530 Execute the given command line with error check.465 Execute the given command line with error check.
@@ -534,54 +469,179 @@
534 """469 """
535 private = self.munge_password(commandline)470 private = self.munge_password(commandline)
536 log.Info(_("Reading results of '%s'") % private)471 log.Info(_("Reading results of '%s'") % private)
537 result, stdout, stderr = self._subprocess_popen(commandline)472 result, stdout, stderr = self.__subprocess_popen(commandline)
538 if result != 0:473 if result != 0:
539 raise BackendException("Error running '%s'" % private)
540 return result, stdout, stderr
541
542 """ a dictionary for persist breaking exceptions, syntax is
543 { 'command' : [ code1, code2 ], ... } see ftpbackend for an example """
544 popen_persist_breaks = {}
545
546 def subprocess_popen_persist(self, commandline):
547 """
548 Execute the given command line with error check.
549 Retries globals.num_retries times with 30s delay.
550 Returns int Exitcode, string StdOut, string StdErr
551
552 Raise a BackendException on failure.
553 """
554 private = self.munge_password(commandline)
555
556 for n in range(1, globals.num_retries+1):
557 # sleep before retry
558 if n > 1:
559 time.sleep(30)
560 log.Info(_("Reading results of '%s'") % private)
561 result, stdout, stderr = self._subprocess_popen(commandline)
562 if result == 0:
563 return result, stdout, stderr
564
565 try:474 try:
566 m = re.search("^\s*([\S]+)", commandline)475 m = re.search("^\s*([\S]+)", commandline)
567 cmd = m.group(1)476 cmd = m.group(1)
568 ignores = self.popen_persist_breaks[ cmd ]477 ignores = self.popen_breaks[ cmd ]
569 ignores.index(result)478 ignores.index(result)
570 """ ignore a predefined set of error codes """479 """ ignore a predefined set of error codes """
571 return 0, '', ''480 return 0, '', ''
572 except (KeyError, ValueError):481 except (KeyError, ValueError):
573 pass482 raise BackendException("Error running '%s': returned %d, with output:\n%s" %
574483 (private, result, stdout + '\n' + stderr))
575 log.Warn(ngettext("Running '%s' failed with code %d (attempt #%d)",484 return result, stdout, stderr
576 "Running '%s' failed with code %d (attempt #%d)", n) %485
577 (private, result, n))486
578 if stdout or stderr:487class BackendWrapper(object):
579 log.Warn(_("Error is:\n%s") % stderr + (stderr and stdout and "\n") + stdout)488 """
580489 Represents a generic duplicity backend, capable of storing and
581 log.Warn(ngettext("Giving up trying to execute '%s' after %d attempt",490 retrieving files.
582 "Giving up trying to execute '%s' after %d attempts",491 """
583 globals.num_retries) % (private, globals.num_retries))492
584 raise BackendException("Error running '%s'" % private)493 def __init__(self, backend):
494 self.backend = backend
495
496 def __do_put(self, source_path, remote_filename):
497 if hasattr(self.backend, '_put'):
498 log.Info(_("Writing %s") % remote_filename)
499 self.backend._put(source_path, remote_filename)
500 else:
501 raise NotImplementedError()
502
503 @retry('put', fatal=True)
504 def put(self, source_path, remote_filename=None):
505 """
506 Transfer source_path (Path object) to remote_filename (string)
507
508 If remote_filename is None, get the filename from the last
509 path component of pathname.
510 """
511 if not remote_filename:
512 remote_filename = source_path.get_filename()
513 self.__do_put(source_path, remote_filename)
514
515 @retry('move', fatal=True)
516 def move(self, source_path, remote_filename=None):
517 """
518 Move source_path (Path object) to remote_filename (string)
519
520 Same as put(), but unlinks source_path in the process. This allows the
521 local backend to do this more efficiently using rename.
522 """
523 if not remote_filename:
524 remote_filename = source_path.get_filename()
525 if hasattr(self.backend, '_move'):
526 if self.backend._move(source_path, remote_filename) is not False:
527 source_path.setdata()
528 return
529 self.__do_put(source_path, remote_filename)
530 source_path.delete()
531
532 @retry('get', fatal=True)
533 def get(self, remote_filename, local_path):
534 """Retrieve remote_filename and place in local_path"""
535 if hasattr(self.backend, '_get'):
536 self.backend._get(remote_filename, local_path)
537 if not local_path.exists():
538 raise BackendException(_("File %s not found locally after get "
539 "from backend") % util.ufn(local_path.name))
540 local_path.setdata()
541 else:
542 raise NotImplementedError()
543
544 @retry('list', fatal=True)
545 def list(self):
546 """
547 Return list of filenames (byte strings) present in backend
548 """
549 def tobytes(filename):
550 "Convert a (maybe unicode) filename to bytes"
551 if isinstance(filename, unicode):
552 # There shouldn't be any encoding errors for files we care
553 # about, since duplicity filenames are ascii. But user files
554 # may be in the same directory. So just replace characters.
555 return filename.encode(sys.getfilesystemencoding(), 'replace')
556 else:
557 return filename
558
559 if hasattr(self.backend, '_list'):
560 # Make sure that duplicity internals only ever see byte strings
561 # for filenames, no matter what the backend thinks it is talking.
562 return [tobytes(x) for x in self.backend._list()]
563 else:
564 raise NotImplementedError()
565
566 def delete(self, filename_list):
567 """
568 Delete each filename in filename_list, in order if possible.
569 """
570 assert type(filename_list) is not types.StringType
571 if hasattr(self.backend, '_delete_list'):
572 self._do_delete_list(filename_list)
573 elif hasattr(self.backend, '_delete'):
574 for filename in filename_list:
575 self._do_delete(filename)
576 else:
577 raise NotImplementedError()
578
579 @retry('delete', fatal=False)
580 def _do_delete_list(self, filename_list):
581 self.backend._delete_list(filename_list)
582
583 @retry('delete', fatal=False)
584 def _do_delete(self, filename):
585 self.backend._delete(filename)
586
587 # Should never cause FatalError.
588 # Returns a dictionary of dictionaries. The outer dictionary maps
589 # filenames to metadata dictionaries. Supported metadata are:
590 #
591 # 'size': if >= 0, size of file
592 # if -1, file is not found
593 # if None, error querying file
594 #
595 # Returned dictionary is guaranteed to contain a metadata dictionary for
596 # each filename, and all metadata are guaranteed to be present.
597 def query_info(self, filename_list):
598 """
599 Return metadata about each filename in filename_list
600 """
601 info = {}
602 if hasattr(self.backend, '_query_list'):
603 info = self._do_query_list(filename_list)
604 if info is None:
605 info = {}
606 elif hasattr(self.backend, '_query'):
607 for filename in filename_list:
608 info[filename] = self._do_query(filename)
609
610 # Fill out any missing entries (may happen if backend has no support
611 # or its query_list support is lazy)
612 for filename in filename_list:
613 if filename not in info or info[filename] is None:
614 info[filename] = {}
615 for metadata in ['size']:
616 info[filename].setdefault(metadata, None)
617
618 return info
619
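The fill-out step above (guaranteeing a metadata dict with at least a 'size' key for every requested filename) can be isolated as a small helper; `normalize_info` is a hypothetical name for illustration:

```python
def normalize_info(filename_list, info):
    # Guarantee that every requested filename maps to a metadata dict with
    # at least a 'size' key, defaulting to None when the backend reported
    # nothing (e.g. lazy or missing _query_list support).
    info = dict(info or {})
    for filename in filename_list:
        if filename not in info or info[filename] is None:
            info[filename] = {}
        for metadata in ['size']:
            info[filename].setdefault(metadata, None)
    return info
```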
620 @retry('query', fatal=False)
621 def _do_query_list(self, filename_list):
622 info = self.backend._query_list(filename_list)
623 if info is None:
624 info = {}
625 return info
626
627 @retry('query', fatal=False)
628 def _do_query(self, filename):
629 try:
630 return self.backend._query(filename)
631 except Exception as e:
632 code = _get_code_from_exception(self.backend, 'query', e)
633 if code == log.ErrorCode.backend_not_found:
634 return {'size': -1}
635 else:
636 raise e
637
638 def close(self):
639 """
640 Close the backend, releasing any resources held and
641 invalidating any file objects obtained from the backend.
642 """
643 if hasattr(self.backend, '_close'):
644 self.backend._close()
585645
586 def get_fileobj_read(self, filename, parseresults = None):646 def get_fileobj_read(self, filename, parseresults = None):
587 """647 """
@@ -598,37 +658,6 @@
598 tdp.setdata()658 tdp.setdata()
599 return tdp.filtered_open_with_delete("rb")659 return tdp.filtered_open_with_delete("rb")
600660
601 def get_fileobj_write(self, filename,
602 parseresults = None,
603 sizelist = None):
604 """
605 Return fileobj opened for writing, which will cause the file
606 to be written to the backend on close().
607
608 The file will be encoded as specified in parseresults (or as
609 read from the filename), and stored in a temp file until it
610 can be copied over and deleted.
611
612 If sizelist is not None, it should be set to an empty list.
613 The number of bytes will be inserted into the list.
614 """
615 if not parseresults:
616 parseresults = file_naming.parse(filename)
617 assert parseresults, u"Filename %s not correctly parsed" % util.ufn(filename)
618 tdp = dup_temp.new_tempduppath(parseresults)
619
620 def close_file_hook():
621 """This is called when returned fileobj is closed"""
622 self.put(tdp, filename)
623 if sizelist is not None:
624 tdp.setdata()
625 sizelist.append(tdp.getsize())
626 tdp.delete()
627
628 fh = dup_temp.FileobjHooked(tdp.filtered_open("wb"))
629 fh.addhook(close_file_hook)
630 return fh
631
632 def get_data(self, filename, parseresults = None):661 def get_data(self, filename, parseresults = None):
633 """662 """
634 Retrieve a file from backend, process it, return contents.663 Retrieve a file from backend, process it, return contents.
@@ -637,18 +666,3 @@
637 buf = fin.read()666 buf = fin.read()
638 assert not fin.close()667 assert not fin.close()
639 return buf668 return buf
640
641 def put_data(self, buffer, filename, parseresults = None):
642 """
643 Put buffer into filename on backend after processing.
644 """
645 fout = self.get_fileobj_write(filename, parseresults)
646 fout.write(buffer)
647 assert not fout.close()
648
649 def close(self):
650 """
651 Close the backend, releasing any resources held and
652 invalidating any file objects obtained from the backend.
653 """
654 pass
655669
=== added file 'duplicity/backends/README'
--- duplicity/backends/README 1970-01-01 00:00:00 +0000
+++ duplicity/backends/README 2014-04-28 02:49:55 +0000
@@ -0,0 +1,79 @@
1= How to write a backend, in five easy steps! =
2
3There are five main methods you want to implement:
4
5__init__ - Initial setup
6_get
7 - Get one file
8 - Retried if an exception is thrown
9_put
10 - Upload one file
11 - Retried if an exception is thrown
12_list
13 - List all files in the backend
14 - Return a list of filenames
15 - Retried if an exception is thrown
16_delete
17 - Delete one file
18 - Retried if an exception is thrown
19
20There are other methods you may optionally implement:
21
22_delete_list
23 - Delete list of files
24 - This is used in preference to _delete if defined
25 - Must gracefully handle individual file errors itself
26 - Retried if an exception is thrown
27_query
28 - Query metadata of one file
29 - Return a dict with a 'size' key, and a file size value (-1 for not found)
30 - Retried if an exception is thrown
31_query_list
32 - Query metadata of a list of files
33 - Return a dict of filenames mapping to a dict with a 'size' key,
34 and a file size value (-1 for not found)
35 - This is used in preference to _query if defined
36 - Must gracefully handle individual file errors itself
37 - Retried if an exception is thrown
38_retry_cleanup
39 - If the backend wants to do any bookkeeping or connection resetting between
40 retries, do it here.
41_error_code
42 - Passed an exception thrown by your backend, return a log.ErrorCode that
43 corresponds to that exception
44_move
45 - If your backend can move a local file into place more efficiently than
46 put-then-delete, implement this. If it's not implemented or returns False,
47 _put will be called instead (and duplicity will delete the source file afterwards).
48 - Retried if an exception is thrown
49_close
50 - If your backend needs to clean up after itself, do that here.
51
52== Subclassing ==
53
54Always subclass from duplicity.backend.Backend
55
56== Registering ==
57
58You can register your class as a single backend like so:
59
60duplicity.backend.register_backend("foo", FooBackend)
61
62This will allow a URL like so: foo://hostname/path
63
64Or you can register your class as a meta backend like so:
65duplicity.backend.register_backend_prefix("bar", BarBackend)
66
67Which will allow a URL like so: bar+foo://hostname/path. Your class will
68be passed the inner URL, which it can interpret as it likes or use to create
69an inner backend instance with duplicity.backend.get_backend_object(url).
70
71== Naming ==
72
73Any method that duplicity calls will start with one underscore. Please use
74zero or two underscores in your method names to avoid conflicts.
75
76== Testing ==
77
78Use "./testing/manual/backendtest.py foo://hostname/path" to test your new
79backend. It will load your backend from your current branch.
080
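Putting the five required methods together, a toy sketch of the interface the README describes. `FooBackend` is a made-up name and uses a plain local directory in place of a real remote; a real backend subclasses duplicity.backend.Backend, takes a ParsedUrl, and registers itself with duplicity.backend.register_backend("foo", FooBackend):

```python
import os
import shutil

class FooBackend(object):
    # Toy stand-in: "remote" storage is just a local directory.
    def __init__(self, directory):
        self.dir = directory

    def _put(self, source_path, remote_filename):
        # Upload one file (duplicity retries this if an exception is thrown).
        shutil.copy(source_path, os.path.join(self.dir, remote_filename))

    def _get(self, remote_filename, local_path):
        # Fetch one file back out of the store.
        shutil.copy(os.path.join(self.dir, remote_filename), local_path)

    def _list(self):
        # Return a list of filenames present in the backend.
        return os.listdir(self.dir)

    def _delete(self, filename):
        # Delete one file.
        os.unlink(os.path.join(self.dir, filename))
```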
=== modified file 'duplicity/backends/_boto_multi.py'
--- duplicity/backends/_boto_multi.py 2014-04-17 21:54:04 +0000
+++ duplicity/backends/_boto_multi.py 2014-04-28 02:49:55 +0000
@@ -98,8 +98,8 @@
9898
99 self._pool = multiprocessing.Pool(processes=number_of_procs)99 self._pool = multiprocessing.Pool(processes=number_of_procs)
100100
101 def close(self):101 def _close(self):
102 BotoSingleBackend.close(self)102 BotoSingleBackend._close(self)
103 log.Debug("Closing pool")103 log.Debug("Closing pool")
104 self._pool.terminate()104 self._pool.terminate()
105 self._pool.join()105 self._pool.join()
106106
=== modified file 'duplicity/backends/_boto_single.py'
--- duplicity/backends/_boto_single.py 2014-04-25 23:20:12 +0000
+++ duplicity/backends/_boto_single.py 2014-04-28 02:49:55 +0000
@@ -25,9 +25,7 @@
25import duplicity.backend25import duplicity.backend
26from duplicity import globals26from duplicity import globals
27from duplicity import log27from duplicity import log
28from duplicity.errors import * #@UnusedWildImport28from duplicity.errors import FatalBackendException, BackendException
29from duplicity.util import exception_traceback
30from duplicity.backend import retry
31from duplicity import progress29from duplicity import progress
3230
33BOTO_MIN_VERSION = "2.1.1"31BOTO_MIN_VERSION = "2.1.1"
@@ -163,7 +161,7 @@
163 self.resetConnection()161 self.resetConnection()
164 self._listed_keys = {}162 self._listed_keys = {}
165163
166 def close(self):164 def _close(self):
167 del self._listed_keys165 del self._listed_keys
168 self._listed_keys = {}166 self._listed_keys = {}
169 self.bucket = None167 self.bucket = None
@@ -185,137 +183,69 @@
185 self.conn = get_connection(self.scheme, self.parsed_url, self.storage_uri)183 self.conn = get_connection(self.scheme, self.parsed_url, self.storage_uri)
186 self.bucket = self.conn.lookup(self.bucket_name)184 self.bucket = self.conn.lookup(self.bucket_name)
187185
188 def put(self, source_path, remote_filename=None):186 def _retry_cleanup(self):
187 self.resetConnection()
188
189 def _put(self, source_path, remote_filename):
189 from boto.s3.connection import Location190 from boto.s3.connection import Location
190 if globals.s3_european_buckets:191 if globals.s3_european_buckets:
191 if not globals.s3_use_new_style:192 if not globals.s3_use_new_style:
192 log.FatalError("European bucket creation was requested, but not new-style "193 raise FatalBackendException("European bucket creation was requested, but not new-style "
193 "bucket addressing (--s3-use-new-style)",194 "bucket addressing (--s3-use-new-style)",
194 log.ErrorCode.s3_bucket_not_style)195 code=log.ErrorCode.s3_bucket_not_style)
195 #Network glitch may prevent first few attempts of creating/looking up a bucket196
196 for n in range(1, globals.num_retries+1):197 if self.bucket is None:
197 if self.bucket:
198 break
199 if n > 1:
200 time.sleep(30)
201 self.resetConnection()
202 try:198 try:
203 try:199 self.bucket = self.conn.get_bucket(self.bucket_name, validate=True)
204 self.bucket = self.conn.get_bucket(self.bucket_name, validate=True)200 except Exception as e:
205 except Exception as e:201 if "NoSuchBucket" in str(e):
206 if "NoSuchBucket" in str(e):202 if globals.s3_european_buckets:
207 if globals.s3_european_buckets:203 self.bucket = self.conn.create_bucket(self.bucket_name,
208 self.bucket = self.conn.create_bucket(self.bucket_name,204 location=Location.EU)
209 location=Location.EU)
210 else:
211 self.bucket = self.conn.create_bucket(self.bucket_name)
212 else:205 else:
213 raise e206 self.bucket = self.conn.create_bucket(self.bucket_name)
214 except Exception as e:207 else:
215 log.Warn("Failed to create bucket (attempt #%d) '%s' failed (reason: %s: %s)"208 raise
216 "" % (n, self.bucket_name,
217 e.__class__.__name__,
218 str(e)))
219209
220 if not remote_filename:
221 remote_filename = source_path.get_filename()
222 key = self.bucket.new_key(self.key_prefix + remote_filename)210 key = self.bucket.new_key(self.key_prefix + remote_filename)
223211
224 for n in range(1, globals.num_retries+1):212 if globals.s3_use_rrs:
225 if n > 1:213 storage_class = 'REDUCED_REDUNDANCY'
226 # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)214 else:
227 time.sleep(10)215 storage_class = 'STANDARD'
228216 log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
229 if globals.s3_use_rrs:217 if globals.s3_use_sse:
230 storage_class = 'REDUCED_REDUNDANCY'218 headers = {
231 else:219 'Content-Type': 'application/octet-stream',
232 storage_class = 'STANDARD'220 'x-amz-storage-class': storage_class,
233 log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))221 'x-amz-server-side-encryption': 'AES256'
234 try:222 }
235 if globals.s3_use_sse:223 else:
236 headers = {224 headers = {
237 'Content-Type': 'application/octet-stream',225 'Content-Type': 'application/octet-stream',
238 'x-amz-storage-class': storage_class,226 'x-amz-storage-class': storage_class
239 'x-amz-server-side-encryption': 'AES256'227 }
240 }228
241 else:229 upload_start = time.time()
242 headers = {230 self.upload(source_path.name, key, headers)
243 'Content-Type': 'application/octet-stream',231 upload_end = time.time()
244 'x-amz-storage-class': storage_class232 total_s = abs(upload_end-upload_start) or 1 # prevent a zero value!
245 }233 rough_upload_speed = os.path.getsize(source_path.name)/total_s
246 234 log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" % (self.straight_url, remote_filename, storage_class, rough_upload_speed))
247 upload_start = time.time()235
248 self.upload(source_path.name, key, headers)236 def _get(self, remote_filename, local_path):
249 upload_end = time.time()
250 total_s = abs(upload_end-upload_start) or 1 # prevent a zero value!
251 rough_upload_speed = os.path.getsize(source_path.name)/total_s
252 self.resetConnection()
253 log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" % (self.straight_url, remote_filename, storage_class, rough_upload_speed))
254 return
255 except Exception as e:
256 log.Warn("Upload '%s/%s' failed (attempt #%d, reason: %s: %s)"
257 "" % (self.straight_url,
258 remote_filename,
259 n,
260 e.__class__.__name__,
261 str(e)))
262 log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
263 self.resetConnection()
264 log.Warn("Giving up trying to upload %s/%s after %d attempts" %
265 (self.straight_url, remote_filename, globals.num_retries))
266 raise BackendException("Error uploading %s/%s" % (self.straight_url, remote_filename))
267
268 def get(self, remote_filename, local_path):
269 key_name = self.key_prefix + remote_filename237 key_name = self.key_prefix + remote_filename
270 self.pre_process_download(remote_filename, wait=True)238 self.pre_process_download(remote_filename, wait=True)
271 key = self._listed_keys[key_name]239 key = self._listed_keys[key_name]
272 for n in range(1, globals.num_retries+1):240 self.resetConnection()
273 if n > 1:241 key.get_contents_to_filename(local_path.name)
274 # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
275 time.sleep(10)
276 log.Info("Downloading %s/%s" % (self.straight_url, remote_filename))
277 try:
278 self.resetConnection()
279 key.get_contents_to_filename(local_path.name)
280 local_path.setdata()
281 return
282 except Exception as e:
283 log.Warn("Download %s/%s failed (attempt #%d, reason: %s: %s)"
284 "" % (self.straight_url,
285 remote_filename,
286 n,
287 e.__class__.__name__,
288 str(e)), 1)
289 log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
290
291 log.Warn("Giving up trying to download %s/%s after %d attempts" %
292 (self.straight_url, remote_filename, globals.num_retries))
293 raise BackendException("Error downloading %s/%s" % (self.straight_url, remote_filename))
294242
295 def _list(self):243 def _list(self):
296 if not self.bucket:244 if not self.bucket:
297 raise BackendException("No connection to backend")245 raise BackendException("No connection to backend")
298246 return self.list_filenames_in_bucket()
299 for n in range(1, globals.num_retries+1):247
300 if n > 1:248 def list_filenames_in_bucket(self):
301 # sleep before retry
302 time.sleep(30)
303 self.resetConnection()
304 log.Info("Listing %s" % self.straight_url)
305 try:
306 return self._list_filenames_in_bucket()
307 except Exception as e:
308 log.Warn("List %s failed (attempt #%d, reason: %s: %s)"
309 "" % (self.straight_url,
310 n,
311 e.__class__.__name__,
312 str(e)), 1)
313 log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
314 log.Warn("Giving up trying to list %s after %d attempts" %
315 (self.straight_url, globals.num_retries))
316 raise BackendException("Error listng %s" % self.straight_url)
317
318 def _list_filenames_in_bucket(self):
319 # We add a 'd' to the prefix to make sure it is not null (for boto) and249 # We add a 'd' to the prefix to make sure it is not null (for boto) and
320 # to optimize the listing of our filenames, which always begin with 'd'.250 # to optimize the listing of our filenames, which always begin with 'd'.
321 # This will cause a failure in the regression tests as below:251 # This will cause a failure in the regression tests as below:
@@ -336,76 +266,37 @@
             pass
         return filename_list
 
-    def delete(self, filename_list):
-        for filename in filename_list:
-            self.bucket.delete_key(self.key_prefix + filename)
-            log.Debug("Deleted %s/%s" % (self.straight_url, filename))
+    def _delete(self, filename):
+        self.bucket.delete_key(self.key_prefix + filename)
 
-    @retry
-    def _query_file_info(self, filename, raise_errors=False):
-        try:
-            key = self.bucket.lookup(self.key_prefix + filename)
-            if key is None:
-                return {'size': -1}
-            return {'size': key.size}
-        except Exception as e:
-            log.Warn("Query %s/%s failed: %s"
-                     "" % (self.straight_url,
-                           filename,
-                           str(e)))
-            self.resetConnection()
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
+    def _query(self, filename):
+        key = self.bucket.lookup(self.key_prefix + filename)
+        if key is None:
+            return {'size': -1}
+        return {'size': key.size}
 
     def upload(self, filename, key, headers):
         key.set_contents_from_filename(filename, headers,
                                        cb=progress.report_transfer,
                                        num_cb=(max(2, 8 * globals.volsize / (1024 * 1024)))
                                        ) # Max num of callbacks = 8 times x megabyte
         key.close()
 
-    def pre_process_download(self, files_to_download, wait=False):
+    def pre_process_download(self, remote_filename, wait=False):
         # Used primarily to move files in Glacier to S3
-        if isinstance(files_to_download, (bytes, str, unicode)):
-            files_to_download = [files_to_download]
+        key_name = self.key_prefix + remote_filename
+        if not self._listed_keys.get(key_name, False):
+            self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
+        key = self._listed_keys[key_name]
 
-        for remote_filename in files_to_download:
-            success = False
-            for n in range(1, globals.num_retries+1):
-                if n > 1:
-                    # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
-                    time.sleep(10)
-                self.resetConnection()
-                try:
-                    key_name = self.key_prefix + remote_filename
-                    if not self._listed_keys.get(key_name, False):
-                        self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
-                    key = self._listed_keys[key_name]
-
-                    if key.storage_class == "GLACIER":
-                        # We need to move the file out of glacier
-                        if not self.bucket.get_key(key.key).ongoing_restore:
-                            log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
-                            key.restore(days=1)  # Shouldn't need this again after 1 day
-                        if wait:
-                            log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
-                            while self.bucket.get_key(key.key).ongoing_restore:
-                                time.sleep(60)
-                                self.resetConnection()
-                            log.Info("File %s was successfully restored from Glacier" % remote_filename)
-                    success = True
-                    break
-                except Exception as e:
-                    log.Warn("Restoration from Glacier for file %s/%s failed (attempt #%d, reason: %s: %s)"
-                             "" % (self.straight_url,
-                                   remote_filename,
-                                   n,
-                                   e.__class__.__name__,
-                                   str(e)), 1)
-                    log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
-            if not success:
-                log.Warn("Giving up trying to restore %s/%s after %d attempts" %
-                         (self.straight_url, remote_filename, globals.num_retries))
-                raise BackendException("Error restoring %s/%s from Glacier to S3" % (self.straight_url, remote_filename))
+        if key.storage_class == "GLACIER":
+            # We need to move the file out of glacier
+            if not self.bucket.get_key(key.key).ongoing_restore:
+                log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
+                key.restore(days=1)  # Shouldn't need this again after 1 day
+            if wait:
+                log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
+                while self.bucket.get_key(key.key).ongoing_restore:
+                    time.sleep(60)
+                    self.resetConnection()
+                log.Info("File %s was successfully restored from Glacier" % remote_filename)
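Editorial note: the hunks above shrink each backend to small `_put`/`_get`/`_list`/`_delete`/`_query` hooks because per-operation retry loops move into the shared `duplicity/backend.py` (changed elsewhere in this diff). As an illustration only, a minimal sketch of that pattern; the class and parameter names below are hypothetical, not duplicity's actual code:

```python
import time

class RetryingWrapper(object):
    """Illustrative stand-in for the centralized retry logic in
    duplicity/backend.py (names here are hypothetical)."""

    def __init__(self, backend, num_retries=5, sleep=1):
        self.backend = backend
        self.num_retries = num_retries
        self.sleep = sleep

    def _retry(self, method, *args):
        # Call the backend hook, retrying transient failures.
        for n in range(1, self.num_retries + 1):
            try:
                return method(*args)
            except Exception:
                if n == self.num_retries:
                    raise
                time.sleep(self.sleep)  # back off before the next attempt

    def delete(self, filename_list):
        # The old per-backend delete(list) loop now lives in one place;
        # backends only implement single-file _delete().
        for filename in filename_list:
            self._retry(self.backend._delete, filename)
```

This is why the boto hunk above can drop its `for n in range(1, globals.num_retries+1)` loops entirely.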
=== modified file 'duplicity/backends/_cf_cloudfiles.py'
--- duplicity/backends/_cf_cloudfiles.py 2014-04-17 22:03:10 +0000
+++ duplicity/backends/_cf_cloudfiles.py 2014-04-28 02:49:55 +0000
@@ -19,14 +19,10 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-import time
 
 import duplicity.backend
-from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
-from duplicity.util import exception_traceback
-from duplicity.backend import retry
+from duplicity.errors import BackendException
 
 class CloudFilesBackend(duplicity.backend.Backend):
     """
@@ -69,124 +65,37 @@
                            log.ErrorCode.connection_failed)
         self.container = conn.create_container(container)
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        for n in range(1, globals.num_retries+1):
-            log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
-            try:
-                sobject = self.container.create_object(remote_filename)
-                sobject.load_from_filename(source_path.name)
-                return
-            except self.resp_exc as error:
-                log.Warn("Upload of '%s' failed (attempt %d): CloudFiles returned: %s %s"
-                         % (remote_filename, n, error.status, error.reason))
-            except Exception as e:
-                log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up uploading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error uploading '%s'" % remote_filename)
-
-    def get(self, remote_filename, local_path):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
-            try:
-                sobject = self.container.create_object(remote_filename)
-                f = open(local_path.name, 'w')
-                for chunk in sobject.stream():
-                    f.write(chunk)
-                local_path.setdata()
-                return
-            except self.resp_exc as resperr:
-                log.Warn("Download of '%s' failed (attempt %s): CloudFiles returned: %s %s"
-                         % (remote_filename, n, resperr.status, resperr.reason))
-            except Exception as e:
-                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up downloading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error downloading '%s/%s'"
-                              % (self.container, remote_filename))
+    def _error_code(self, operation, e):
+        from cloudfiles.errors import NoSuchObject
+        if isinstance(e, NoSuchObject):
+            return log.ErrorCode.backend_not_found
+        elif isinstance(e, self.resp_exc):
+            if e.status == 404:
+                return log.ErrorCode.backend_not_found
+
+    def _put(self, source_path, remote_filename):
+        sobject = self.container.create_object(remote_filename)
+        sobject.load_from_filename(source_path.name)
+
+    def _get(self, remote_filename, local_path):
+        sobject = self.container.create_object(remote_filename)
+        with open(local_path.name, 'wb') as f:
+            for chunk in sobject.stream():
+                f.write(chunk)
 
     def _list(self):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Listing '%s'" % (self.container))
-            try:
-                # Cloud Files will return a max of 10,000 objects. We have
-                # to make multiple requests to get them all.
-                objs = self.container.list_objects()
-                keys = objs
-                while len(objs) == 10000:
-                    objs = self.container.list_objects(marker=keys[-1])
-                    keys += objs
-                return keys
-            except self.resp_exc as resperr:
-                log.Warn("Listing of '%s' failed (attempt %s): CloudFiles returned: %s %s"
-                         % (self.container, n, resperr.status, resperr.reason))
-            except Exception as e:
-                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
-                         % (self.container, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up listing of '%s' after %s attempts"
-                 % (self.container, globals.num_retries))
-        raise BackendException("Error listing '%s'"
-                              % (self.container))
-
-    def delete_one(self, remote_filename):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
-            try:
-                self.container.delete_object(remote_filename)
-                return
-            except self.resp_exc as resperr:
-                if n > 1 and resperr.status == 404:
-                    # We failed on a timeout, but delete succeeded on the server
-                    log.Warn("Delete of '%s' missing after retry - must have succeded earler" % remote_filename)
-                    return
-                log.Warn("Delete of '%s' failed (attempt %s): CloudFiles returned: %s %s"
-                         % (remote_filename, n, resperr.status, resperr.reason))
-            except Exception as e:
-                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up deleting '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error deleting '%s/%s'"
-                              % (self.container, remote_filename))
-
-    def delete(self, filename_list):
-        for file in filename_list:
-            self.delete_one(file)
-            log.Debug("Deleted '%s/%s'" % (self.container, file))
-
-    @retry
-    def _query_file_info(self, filename, raise_errors=False):
-        from cloudfiles.errors import NoSuchObject
-        try:
-            sobject = self.container.get_object(filename)
-            return {'size': sobject.size}
-        except NoSuchObject:
-            return {'size': -1}
-        except Exception as e:
-            log.Warn("Error querying '%s/%s': %s"
-                     "" % (self.container,
-                           filename,
-                           str(e)))
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
-
-duplicity.backend.register_backend("cf+http", CloudFilesBackend)
+        # Cloud Files will return a max of 10,000 objects. We have
+        # to make multiple requests to get them all.
+        objs = self.container.list_objects()
+        keys = objs
+        while len(objs) == 10000:
+            objs = self.container.list_objects(marker=keys[-1])
+            keys += objs
+        return keys
+
+    def _delete(self, filename):
+        self.container.delete_object(filename)
+
+    def _query(self, filename):
+        sobject = self.container.get_object(filename)
+        return {'size': sobject.size}
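Editorial note: the new `_error_code(operation, e)` hook above lets the shared backend code translate a backend-specific exception into a duplicity error code, falling back to a generic error when the hook returns nothing. A hedged, self-contained sketch of how such a hook might be consumed (the driver and stand-in classes below are illustrative, not duplicity's actual code):

```python
# Hypothetical error-code constants standing in for log.ErrorCode values.
BACKEND_NOT_FOUND = 'backend_not_found'
BACKEND_ERROR = 'backend_error'

class NoSuchObject(Exception):
    """Stand-in for cloudfiles.errors.NoSuchObject."""

def classify(backend, operation, exc):
    # Ask the backend for a specific code; fall back to a generic one
    # when the hook has no opinion (returns None).
    code = backend._error_code(operation, exc)
    return code if code is not None else BACKEND_ERROR

class FakeCloudFilesBackend(object):
    # Mirrors the shape of the _error_code hook in the hunk above.
    def _error_code(self, operation, e):
        if isinstance(e, NoSuchObject):
            return BACKEND_NOT_FOUND
```

The point of the design is that backends only name the exceptions they understand; everything else degrades to a generic backend error in one shared place.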
=== modified file 'duplicity/backends/_cf_pyrax.py'
--- duplicity/backends/_cf_pyrax.py 2014-04-17 22:03:10 +0000
+++ duplicity/backends/_cf_pyrax.py 2014-04-28 02:49:55 +0000
@@ -19,14 +19,11 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-import time
 
 import duplicity.backend
-from duplicity import globals
 from duplicity import log
-from duplicity.errors import * # @UnusedWildImport
-from duplicity.util import exception_traceback
-from duplicity.backend import retry
+from duplicity.errors import BackendException
+
 
 class PyraxBackend(duplicity.backend.Backend):
     """
@@ -69,126 +66,39 @@
 
         self.client_exc = pyrax.exceptions.ClientException
         self.nso_exc = pyrax.exceptions.NoSuchObject
-        self.cloudfiles = pyrax.cloudfiles
         self.container = pyrax.cloudfiles.create_container(container)
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
-            try:
-                self.container.upload_file(source_path.name, remote_filename)
-                return
-            except self.client_exc as error:
-                log.Warn("Upload of '%s' failed (attempt %d): pyrax returned: %s %s"
-                         % (remote_filename, n, error.__class__.__name__, error.message))
-            except Exception as e:
-                log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up uploading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error uploading '%s'" % remote_filename)
-
-    def get(self, remote_filename, local_path):
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
-            try:
-                sobject = self.container.get_object(remote_filename)
-                f = open(local_path.name, 'w')
-                f.write(sobject.get())
-                local_path.setdata()
-                return
-            except self.nso_exc:
-                return
-            except self.client_exc as resperr:
-                log.Warn("Download of '%s' failed (attempt %s): pyrax returned: %s %s"
-                         % (remote_filename, n, resperr.__class__.__name__, resperr.message))
-            except Exception as e:
-                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up downloading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error downloading '%s/%s'"
-                              % (self.container, remote_filename))
+    def _error_code(self, operation, e):
+        if isinstance(e, self.nso_exc):
+            return log.ErrorCode.backend_not_found
+        elif isinstance(e, self.client_exc):
+            if e.status == 404:
+                return log.ErrorCode.backend_not_found
+        elif hasattr(e, 'http_status'):
+            if e.http_status == 404:
+                return log.ErrorCode.backend_not_found
+
+    def _put(self, source_path, remote_filename):
+        self.container.upload_file(source_path.name, remote_filename)
+
+    def _get(self, remote_filename, local_path):
+        sobject = self.container.get_object(remote_filename)
+        with open(local_path.name, 'wb') as f:
+            f.write(sobject.get())
 
     def _list(self):
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Listing '%s'" % (self.container))
-            try:
-                # Cloud Files will return a max of 10,000 objects. We have
-                # to make multiple requests to get them all.
-                objs = self.container.get_object_names()
-                keys = objs
-                while len(objs) == 10000:
-                    objs = self.container.get_object_names(marker = keys[-1])
-                    keys += objs
-                return keys
-            except self.client_exc as resperr:
-                log.Warn("Listing of '%s' failed (attempt %s): pyrax returned: %s %s"
-                         % (self.container, n, resperr.__class__.__name__, resperr.message))
-            except Exception as e:
-                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
-                         % (self.container, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up listing of '%s' after %s attempts"
-                 % (self.container, globals.num_retries))
-        raise BackendException("Error listing '%s'"
-                              % (self.container))
-
-    def delete_one(self, remote_filename):
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
-            try:
-                self.container.delete_object(remote_filename)
-                return
-            except self.client_exc as resperr:
-                if n > 1 and resperr.status == 404:
-                    # We failed on a timeout, but delete succeeded on the server
-                    log.Warn("Delete of '%s' missing after retry - must have succeded earler" % remote_filename)
-                    return
-                log.Warn("Delete of '%s' failed (attempt %s): pyrax returned: %s %s"
-                         % (remote_filename, n, resperr.__class__.__name__, resperr.message))
-            except Exception as e:
-                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up deleting '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error deleting '%s/%s'"
-                              % (self.container, remote_filename))
-
-    def delete(self, filename_list):
-        for file_ in filename_list:
-            self.delete_one(file_)
-            log.Debug("Deleted '%s/%s'" % (self.container, file_))
-
-    @retry
-    def _query_file_info(self, filename, raise_errors = False):
-        try:
-            sobject = self.container.get_object(filename)
-            return {'size': sobject.total_bytes}
-        except self.nso_exc:
-            return {'size': -1}
-        except Exception as e:
-            log.Warn("Error querying '%s/%s': %s"
-                     "" % (self.container,
-                           filename,
-                           str(e)))
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
-
-duplicity.backend.register_backend("cf+http", PyraxBackend)
+        # Cloud Files will return a max of 10,000 objects. We have
+        # to make multiple requests to get them all.
+        objs = self.container.get_object_names()
+        keys = objs
+        while len(objs) == 10000:
+            objs = self.container.get_object_names(marker = keys[-1])
+            keys += objs
+        return keys
+
+    def _delete(self, filename):
+        self.container.delete_object(filename)
+
+    def _query(self, filename):
+        sobject = self.container.get_object(filename)
+        return {'size': sobject.total_bytes}
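Editorial note: both Cloud Files backends above page through listings with a marker because the server caps each response at 10,000 names; a full page means more may remain, so the loop asks again starting after the last name seen. A self-contained sketch of the same loop (page size shrunk to 3, fetcher names hypothetical):

```python
PAGE_SIZE = 3  # stands in for the 10,000-object Cloud Files cap

def list_all(fetch_page):
    """fetch_page(marker) returns up to PAGE_SIZE names after `marker`."""
    objs = fetch_page(None)
    keys = objs
    while len(objs) == PAGE_SIZE:
        # A full page may mean more remain; ask again starting after
        # the last name we have seen.
        objs = fetch_page(keys[-1])
        keys = keys + objs
    return keys

def make_fetcher(names):
    # Simulates a server that serves sorted names a page at a time.
    def fetch_page(marker):
        start = 0 if marker is None else names.index(marker) + 1
        return names[start:start + PAGE_SIZE]
    return fetch_page
```

Note the edge case the real code shares: a listing whose size is an exact multiple of the page size costs one extra (empty) request, which is harmless.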
=== modified file 'duplicity/backends/_ssh_paramiko.py'
--- duplicity/backends/_ssh_paramiko.py 2014-04-17 20:50:57 +0000
+++ duplicity/backends/_ssh_paramiko.py 2014-04-28 02:49:55 +0000
@@ -28,7 +28,6 @@
 import os
 import errno
 import sys
-import time
 import getpass
 import logging
 from binascii import hexlify
@@ -36,7 +35,7 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import *
+from duplicity.errors import BackendException
 
 read_blocksize=65635 # for doing scp retrievals, where we need to read ourselves
 
@@ -232,7 +231,6 @@
         except Exception as e:
             raise BackendException("sftp negotiation failed: %s" % e)
 
-
         # move to the appropriate directory, possibly after creating it and its parents
         dirs = self.remote_dir.split(os.sep)
         if len(dirs) > 0:
@@ -257,157 +255,91 @@
             except Exception as e:
                 raise BackendException("sftp chdir to %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
 
-    def put(self, source_path, remote_filename = None):
-        """transfers a single file to the remote side.
-        In scp mode unavoidable quoting issues will make this fail if the remote directory or file name
-        contain single quotes."""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(self.retry_delay)
-            try:
-                if (globals.use_scp):
-                    f=file(source_path.name,'rb')
-                    try:
-                        chan=self.client.get_transport().open_session()
-                        chan.settimeout(globals.timeout)
-                        chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory
-                    except Exception as e:
-                        raise BackendException("scp execution failed: %s" % e)
-                    # scp protocol: one 0x0 after startup, one after the Create meta, one after saving
-                    # if there's a problem: 0x1 or 0x02 and some error text
-                    response=chan.recv(1)
-                    if (response!="\0"):
-                        raise BackendException("scp remote error: %s" % chan.recv(-1))
-                    fstat=os.stat(source_path.name)
-                    chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename))
-                    response=chan.recv(1)
-                    if (response!="\0"):
-                        raise BackendException("scp remote error: %s" % chan.recv(-1))
-                    chan.sendall(f.read()+'\0')
-                    f.close()
-                    response=chan.recv(1)
-                    if (response!="\0"):
-                        raise BackendException("scp remote error: %s" % chan.recv(-1))
-                    chan.close()
-                    return
-                else:
-                    try:
-                        self.sftp.put(source_path.name,remote_filename)
-                        return
-                    except Exception as e:
-                        raise BackendException("sftp put of %s (as %s) failed: %s" % (source_path.name,remote_filename,e))
-            except Exception as e:
-                log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
-        raise BackendException("Giving up trying to upload '%s' after %d attempts" % (remote_filename,n))
-
-
-    def get(self, remote_filename, local_path):
-        """retrieves a single file from the remote side.
-        In scp mode unavoidable quoting issues will make this fail if the remote directory or file names
-        contain single quotes."""
-
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(self.retry_delay)
-            try:
-                if (globals.use_scp):
-                    try:
-                        chan=self.client.get_transport().open_session()
-                        chan.settimeout(globals.timeout)
-                        chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename))
-                    except Exception as e:
-                        raise BackendException("scp execution failed: %s" % e)
-
-                    chan.send('\0') # overall ready indicator
-                    msg=chan.recv(-1)
-                    m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg)
-                    if (m==None or m.group(3)!=remote_filename):
-                        raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg))
-                    chan.recv(1) # dispose of the newline trailing the C message
-
-                    size=int(m.group(2))
-                    togo=size
-                    f=file(local_path.name,'wb')
-                    chan.send('\0') # ready for data
-                    try:
-                        while togo>0:
-                            if togo>read_blocksize:
-                                blocksize = read_blocksize
-                            else:
-                                blocksize = togo
-                            buff=chan.recv(blocksize)
-                            f.write(buff)
-                            togo-=len(buff)
-                    except Exception as e:
-                        raise BackendException("scp get %s failed: %s" % (remote_filename,e))
-
-                    msg=chan.recv(1) # check the final status
-                    if msg!='\0':
-                        raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1)))
-                    f.close()
-                    chan.send('\0') # send final done indicator
-                    chan.close()
-                    return
-                else:
-                    try:
-                        self.sftp.get(remote_filename,local_path.name)
-                        return
-                    except Exception as e:
-                        raise BackendException("sftp get of %s (to %s) failed: %s" % (remote_filename,local_path.name,e))
-                local_path.setdata()
-            except Exception as e:
-                log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
-        raise BackendException("Giving up trying to download '%s' after %d attempts" % (remote_filename,n))
+    def _put(self, source_path, remote_filename):
+        if globals.use_scp:
+            f=file(source_path.name,'rb')
+            try:
+                chan=self.client.get_transport().open_session()
+                chan.settimeout(globals.timeout)
+                chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory
+            except Exception as e:
+                raise BackendException("scp execution failed: %s" % e)
+            # scp protocol: one 0x0 after startup, one after the Create meta, one after saving
+            # if there's a problem: 0x1 or 0x02 and some error text
+            response=chan.recv(1)
+            if (response!="\0"):
+                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            fstat=os.stat(source_path.name)
+            chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename))
+            response=chan.recv(1)
+            if (response!="\0"):
+                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            chan.sendall(f.read()+'\0')
+            f.close()
+            response=chan.recv(1)
+            if (response!="\0"):
+                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            chan.close()
+        else:
+            self.sftp.put(source_path.name,remote_filename)
+
+    def _get(self, remote_filename, local_path):
+        if globals.use_scp:
+            try:
+                chan=self.client.get_transport().open_session()
+                chan.settimeout(globals.timeout)
+                chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename))
+            except Exception as e:
+                raise BackendException("scp execution failed: %s" % e)
+
+            chan.send('\0') # overall ready indicator
+            msg=chan.recv(-1)
+            m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg)
+            if (m==None or m.group(3)!=remote_filename):
+                raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg))
+            chan.recv(1) # dispose of the newline trailing the C message
+
+            size=int(m.group(2))
+            togo=size
+            f=file(local_path.name,'wb')
+            chan.send('\0') # ready for data
+            try:
+                while togo>0:
+                    if togo>read_blocksize:
+                        blocksize = read_blocksize
+                    else:
+                        blocksize = togo
+                    buff=chan.recv(blocksize)
+                    f.write(buff)
+                    togo-=len(buff)
+            except Exception as e:
+                raise BackendException("scp get %s failed: %s" % (remote_filename,e))
+
+            msg=chan.recv(1) # check the final status
+            if msg!='\0':
+                raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1)))
+            f.close()
+            chan.send('\0') # send final done indicator
+            chan.close()
+        else:
+            self.sftp.get(remote_filename,local_path.name)
 
     def _list(self):
-        """lists the contents of the one-and-only duplicity dir on the remote side.
-        In scp mode unavoidable quoting issues will make this fail if the directory name
-        contains single quotes."""
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(self.retry_delay)
-            try:
-                if (globals.use_scp):
-                    output=self.runremote("ls -1 '%s'" % self.remote_dir,False,"scp dir listing ")
-                    return output.splitlines()
-                else:
-                    try:
-                        return self.sftp.listdir()
-                    except Exception as e:
-                        raise BackendException("sftp listing of %s failed: %s" % (self.sftp.getcwd(),e))
-            except Exception as e:
-                log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
-        raise BackendException("Giving up trying to list '%s' after %d attempts" % (self.remote_dir,n))
-
-    def delete(self, filename_list):
-        """deletes all files in the list on the remote side. In scp mode unavoidable quoting issues
-        will cause failures if filenames containing single quotes are encountered."""
-        for fn in filename_list:
-            # Try to delete each file several times before giving up completely.
-            for n in range(1, globals.num_retries+1):
-                try:
-                    if (globals.use_scp):
-                        self.runremote("rm '%s/%s'" % (self.remote_dir,fn),False,"scp rm ")
-                    else:
-                        try:
-                            self.sftp.remove(fn)
-                        except Exception as e:
-                            raise BackendException("sftp rm %s failed: %s" % (fn,e))
-
-                    # If we get here, we deleted this file successfully. Move on to the next one.
-                    break
-                except Exception as e:
-                    if n == globals.num_retries:
-                        log.FatalError(str(e), log.ErrorCode.backend_error)
-                    else:
-                        log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
-                        time.sleep(self.retry_delay)
+        # In scp mode unavoidable quoting issues will make this fail if the
+        # directory name contains single quotes.
+        if globals.use_scp:
+            output = self.runremote("ls -1 '%s'" % self.remote_dir, False, "scp dir listing ")
+            return output.splitlines()
+        else:
+            return self.sftp.listdir()
+
+    def _delete(self, filename):
+        # In scp mode unavoidable quoting issues will cause failures if
+        # filenames containing single quotes are encountered.
+        if globals.use_scp:
+            self.runremote("rm '%s/%s'" % (self.remote_dir, filename), False, "scp rm ")
+        else:
+            self.sftp.remove(filename)
 
     def runremote(self,cmd,ignoreexitcode=False,errorprefix=""):
         """small convenience function that opens a shell channel, runs remote command and returns
@@ -438,7 +370,3 @@
             raise BackendException("could not load '%s', maybe corrupt?" % (file))
 
         return sshconfig.lookup(host)
-
-duplicity.backend.register_backend("sftp", SSHParamikoBackend)
-duplicity.backend.register_backend("scp", SSHParamikoBackend)
-duplicity.backend.register_backend("ssh", SSHParamikoBackend)
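Editorial note: the scp source-mode handshake in `_get` above parses a header of the form `C<mode> <size> <name>` before the file body arrives, using the regex kept verbatim in the diff. A small self-contained check of that parse (the helper function name is hypothetical):

```python
import re

def parse_scp_header(msg, expected_name):
    # Same regex as the paramiko backend's _get: C<4-digit octal mode>,
    # decimal size, then the filename.
    m = re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$", msg)
    if m is None or m.group(3) != expected_name:
        raise ValueError("incorrect response '%s'" % msg)
    mode, size = m.group(1), int(m.group(2))
    return mode, size
```

After this header is accepted, the backend reads exactly `size` bytes in `read_blocksize` chunks, then checks a final NUL status byte, as shown in the hunk.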
=== modified file 'duplicity/backends/_ssh_pexpect.py'
--- duplicity/backends/_ssh_pexpect.py 2014-04-25 23:20:12 +0000
+++ duplicity/backends/_ssh_pexpect.py 2014-04-28 02:49:55 +0000
@@ -24,18 +24,20 @@
 # have the same syntax. Also these strings will be executed by the
 # shell, so shouldn't have strange characters in them.
 
+from future_builtins import map
+
 import re
 import string
-import time
 import os
 
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import BackendException
 
 class SSHPExpectBackend(duplicity.backend.Backend):
-    """This backend copies files using scp. List not supported"""
+    """This backend copies files using scp. List not supported. Filenames
+    should not need any quoting or this will break."""
     def __init__(self, parsed_url):
         """scpBackend initializer"""
         duplicity.backend.Backend.__init__(self, parsed_url)
@@ -76,74 +78,67 @@
     def run_scp_command(self, commandline):
         """ Run an scp command, responding to password prompts """
         import pexpect
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(self.retry_delay)
-            log.Info("Running '%s' (attempt #%d)" % (commandline, n))
-            child = pexpect.spawn(commandline, timeout = None)
-            if globals.ssh_askpass:
-                state = "authorizing"
-            else:
-                state = "copying"
-            while 1:
-                if state == "authorizing":
-                    match = child.expect([pexpect.EOF,
-                                          "(?i)timeout, server not responding",
-                                          "(?i)pass(word|phrase .*):",
-                                          "(?i)permission denied",
-                                          "authenticity"])
-                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
-                    if match == 0:
-                        log.Warn("Failed to authenticate")
-                        break
-                    elif match == 1:
-                        log.Warn("Timeout waiting to authenticate")
-                        break
-                    elif match == 2:
-                        child.sendline(self.password)
-                        state = "copying"
-                    elif match == 3:
-                        log.Warn("Invalid SSH password")
-                        break
-                    elif match == 4:
-                        log.Warn("Remote host authentication failed (missing known_hosts entry?)")
-                        break
-                elif state == "copying":
-                    match = child.expect([pexpect.EOF,
-                                          "(?i)timeout, server not responding",
-                                          "stalled",
-                                          "authenticity",
-                                          "ETA"])
-                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
-                    if match == 0:
-                        break
-                    elif match == 1:
-                        log.Warn("Timeout waiting for response")
-                        break
-                    elif match == 2:
-                        state = "stalled"
-                    elif match == 3:
-                        log.Warn("Remote host authentication failed (missing known_hosts entry?)")
-                        break
-                elif state == "stalled":
-                    match = child.expect([pexpect.EOF,
-                                          "(?i)timeout, server not responding",
-                                          "ETA"])
-                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
-                    if match == 0:
-                        break
-                    elif match == 1:
-                        log.Warn("Stalled for too long, aborted copy")
-                        break
-                    elif match == 2:
-                        state = "copying"
-            child.close(force = True)
-            if child.exitstatus == 0:
-                return
-            log.Warn("Running '%s' failed (attempt #%d)" % (commandline, n))
-        log.Warn("Giving up trying to execute '%s' after %d attempts" % (commandline, globals.num_retries))
-        raise BackendException("Error running '%s'" % commandline)
+        log.Info("Running '%s'" % commandline)
+        child = pexpect.spawn(commandline, timeout = None)
+        if globals.ssh_askpass:
+            state = "authorizing"
+        else:
+            state = "copying"
+        while 1:
+            if state == "authorizing":
+                match = child.expect([pexpect.EOF,
+                                      "(?i)timeout, server not responding",
+                                      "(?i)pass(word|phrase .*):",
+                                      "(?i)permission denied",
+                                      "authenticity"])
+                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                if match == 0:
+                    log.Warn("Failed to authenticate")
+                    break
+                elif match == 1:
+                    log.Warn("Timeout waiting to authenticate")
+                    break
+                elif match == 2:
+                    child.sendline(self.password)
+                    state = "copying"
+                elif match == 3:
+                    log.Warn("Invalid SSH password")
+                    break
+                elif match == 4:
+                    log.Warn("Remote host authentication failed (missing known_hosts entry?)")
+                    break
+            elif state == "copying":
+                match = child.expect([pexpect.EOF,
+                                      "(?i)timeout, server not responding",
+                                      "stalled",
+                                      "authenticity",
+                                      "ETA"])
+                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                if match == 0:
+                    break
+                elif match == 1:
+                    log.Warn("Timeout waiting for response")
+                    break
+                elif match == 2:
+                    state = "stalled"
+                elif match == 3:
+                    log.Warn("Remote host authentication failed (missing known_hosts entry?)")
+                    break
+            elif state == "stalled":
+                match = child.expect([pexpect.EOF,
+                                      "(?i)timeout, server not responding",
+                                      "ETA"])
+                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                if match == 0:
+                    break
+                elif match == 1:
+                    log.Warn("Stalled for too long, aborted copy")
+                    break
+                elif match == 2:
+                    state = "copying"
+        child.close(force = True)
+        if child.exitstatus != 0:
+            raise BackendException("Error running '%s'" % commandline)
 
     def run_sftp_command(self, commandline, commands):
149 """ Run an sftp command, responding to password prompts, passing commands from list """144 """ Run an sftp command, responding to password prompts, passing commands from list """
@@ -160,76 +155,69 @@
                      "Couldn't delete file",
                      "open(.*): Failure"]
         max_response_len = max([len(p) for p in responses[1:]])
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(self.retry_delay)
-            log.Info("Running '%s' (attempt #%d)" % (commandline, n))
-            child = pexpect.spawn(commandline, timeout = None, maxread=maxread)
-            cmdloc = 0
-            passprompt = 0
-            while 1:
-                msg = ""
-                match = child.expect(responses,
-                                     searchwindowsize=maxread+max_response_len)
-                log.Debug("State = sftp, Before = '%s'" % (child.before.strip()))
-                if match == 0:
-                    break
-                elif match == 1:
-                    msg = "Timeout waiting for response"
-                    break
-                if match == 2:
-                    if cmdloc < len(commands):
-                        command = commands[cmdloc]
-                        log.Info("sftp command: '%s'" % (command,))
-                        child.sendline(command)
-                        cmdloc += 1
-                    else:
-                        command = 'quit'
-                        child.sendline(command)
-                        res = child.before
-                elif match == 3:
-                    passprompt += 1
-                    child.sendline(self.password)
-                    if (passprompt>1):
-                        raise BackendException("Invalid SSH password.")
-                elif match == 4:
-                    if not child.before.strip().startswith("mkdir"):
-                        msg = "Permission denied"
-                        break
-                elif match == 5:
-                    msg = "Host key authenticity could not be verified (missing known_hosts entry?)"
-                    break
-                elif match == 6:
-                    if not child.before.strip().startswith("rm"):
-                        msg = "Remote file or directory does not exist in command='%s'" % (commandline,)
-                        break
-                elif match == 7:
-                    if not child.before.strip().startswith("Removing"):
-                        msg = "Could not delete file in command='%s'" % (commandline,)
-                        break;
-                elif match == 8:
-                    msg = "Could not delete file in command='%s'" % (commandline,)
-                    break
-                elif match == 9:
-                    msg = "Could not open file in command='%s'" % (commandline,)
-                    break
-            child.close(force = True)
-            if child.exitstatus == 0:
-                return res
-            log.Warn("Running '%s' with commands:\n %s\n failed (attempt #%d): %s" % (commandline, "\n ".join(commands), n, msg))
-        raise BackendException("Giving up trying to execute '%s' with commands:\n %s\n after %d attempts" % (commandline, "\n ".join(commands), globals.num_retries))
+        log.Info("Running '%s'" % (commandline))
+        child = pexpect.spawn(commandline, timeout = None, maxread=maxread)
+        cmdloc = 0
+        passprompt = 0
+        while 1:
+            msg = ""
+            match = child.expect(responses,
+                                 searchwindowsize=maxread+max_response_len)
+            log.Debug("State = sftp, Before = '%s'" % (child.before.strip()))
+            if match == 0:
+                break
+            elif match == 1:
+                msg = "Timeout waiting for response"
+                break
+            if match == 2:
+                if cmdloc < len(commands):
+                    command = commands[cmdloc]
+                    log.Info("sftp command: '%s'" % (command,))
+                    child.sendline(command)
+                    cmdloc += 1
+                else:
+                    command = 'quit'
+                    child.sendline(command)
+                    res = child.before
+            elif match == 3:
+                passprompt += 1
+                child.sendline(self.password)
+                if (passprompt>1):
+                    raise BackendException("Invalid SSH password.")
+            elif match == 4:
+                if not child.before.strip().startswith("mkdir"):
+                    msg = "Permission denied"
+                    break
+            elif match == 5:
+                msg = "Host key authenticity could not be verified (missing known_hosts entry?)"
+                break
+            elif match == 6:
+                if not child.before.strip().startswith("rm"):
+                    msg = "Remote file or directory does not exist in command='%s'" % (commandline,)
+                    break
+            elif match == 7:
+                if not child.before.strip().startswith("Removing"):
+                    msg = "Could not delete file in command='%s'" % (commandline,)
+                    break;
+            elif match == 8:
+                msg = "Could not delete file in command='%s'" % (commandline,)
+                break
+            elif match == 9:
+                msg = "Could not open file in command='%s'" % (commandline,)
+                break
+        child.close(force = True)
+        if child.exitstatus == 0:
+            return res
+        else:
+            raise BackendException("Error running '%s': %s" % (commandline, msg))
 
-    def put(self, source_path, remote_filename = None):
+    def _put(self, source_path, remote_filename):
         if globals.use_scp:
-            self.put_scp(source_path, remote_filename = remote_filename)
+            self.put_scp(source_path, remote_filename)
         else:
-            self.put_sftp(source_path, remote_filename = remote_filename)
+            self.put_sftp(source_path, remote_filename)
 
-    def put_sftp(self, source_path, remote_filename = None):
-        """Use sftp to copy source_dir/filename to remote computer"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def put_sftp(self, source_path, remote_filename):
         commands = ["put \"%s\" \"%s.%s.part\"" %
                     (source_path.name, self.remote_prefix, remote_filename),
                     "rename \"%s.%s.part\" \"%s%s\"" %
@@ -239,53 +227,36 @@
                                      self.host_string))
         self.run_sftp_command(commandline, commands)
 
-    def put_scp(self, source_path, remote_filename = None):
-        """Use scp to copy source_dir/filename to remote computer"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def put_scp(self, source_path, remote_filename):
         commandline = "%s %s %s %s:%s%s" % \
             (self.scp_command, globals.ssh_options, source_path.name, self.host_string,
              self.remote_prefix, remote_filename)
         self.run_scp_command(commandline)
 
-    def get(self, remote_filename, local_path):
+    def _get(self, remote_filename, local_path):
         if globals.use_scp:
             self.get_scp(remote_filename, local_path)
         else:
             self.get_sftp(remote_filename, local_path)
 
     def get_sftp(self, remote_filename, local_path):
-        """Use sftp to get a remote file"""
         commands = ["get \"%s%s\" \"%s\"" %
                     (self.remote_prefix, remote_filename, local_path.name)]
         commandline = ("%s %s %s" % (self.sftp_command,
                                      globals.ssh_options,
                                      self.host_string))
         self.run_sftp_command(commandline, commands)
-        local_path.setdata()
-        if not local_path.exists():
-            raise BackendException("File %s not found locally after get "
-                                   "from backend" % local_path.name)
 
     def get_scp(self, remote_filename, local_path):
-        """Use scp to get a remote file"""
         commandline = "%s %s %s:%s%s %s" % \
             (self.scp_command, globals.ssh_options, self.host_string, self.remote_prefix,
              remote_filename, local_path.name)
         self.run_scp_command(commandline)
-        local_path.setdata()
-        if not local_path.exists():
-            raise BackendException("File %s not found locally after get "
-                                   "from backend" % local_path.name)
 
     def _list(self):
-        """
-        List files available for scp
-
-        Note that this command can get confused when dealing with
-        files with newlines in them, as the embedded newlines cannot
-        be distinguished from the file boundaries.
-        """
+        # Note that this command can get confused when dealing with
+        # files with newlines in them, as the embedded newlines cannot
+        # be distinguished from the file boundaries.
         dirs = self.remote_dir.split(os.sep)
         if len(dirs) > 0:
             if not dirs[0] :
@@ -304,16 +275,8 @@
 
         return [x for x in map(string.strip, l) if x]
 
-    def delete(self, filename_list):
-        """
-        Runs sftp rm to delete files. Files must not require quoting.
-        """
+    def _delete(self, filename):
         commands = ["cd \"%s\"" % (self.remote_dir,)]
-        for fn in filename_list:
-            commands.append("rm \"%s\"" % fn)
+        commands.append("rm \"%s\"" % filename)
         commandline = ("%s %s %s" % (self.sftp_command, globals.ssh_options, self.host_string))
         self.run_sftp_command(commandline, commands)
-
-duplicity.backend.register_backend("ssh", SSHPExpectBackend)
-duplicity.backend.register_backend("scp", SSHPExpectBackend)
-duplicity.backend.register_backend("sftp", SSHPExpectBackend)
 
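Reviewer note: the per-backend retry loops removed above (the `for n in range(1, globals.num_retries+1)` pattern) are replaced branch-wide by a single retry wrapper in duplicity/backend.py. A minimal sketch of that pattern, with illustrative names (not the literal backend.py API):

```python
import time

def with_retries(fn, num_retries=5, retry_delay=1):
    """Call fn(); on a transient failure, sleep and retry.

    Re-raises the last exception once num_retries attempts are exhausted,
    mirroring the "giving up after N attempts" behavior of the old loops.
    """
    for n in range(1, num_retries + 1):
        try:
            return fn()
        except Exception:
            if n == num_retries:
                raise
            time.sleep(retry_delay)  # sleep before retry, as the old loops did
```

With this in one place, each backend's `_put`/`_get`/`_delete` can stay a straight-line single attempt, which is exactly the shape the rewritten methods above take.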
=== modified file 'duplicity/backends/botobackend.py'
--- duplicity/backends/botobackend.py 2014-04-17 21:54:04 +0000
+++ duplicity/backends/botobackend.py 2014-04-28 02:49:55 +0000
@@ -22,14 +22,12 @@
 
 import duplicity.backend
 from duplicity import globals
-from ._boto_multi import BotoBackend as BotoMultiUploadBackend
-from ._boto_single import BotoBackend as BotoSingleUploadBackend
 
 if globals.s3_use_multiprocessing:
-    duplicity.backend.register_backend("gs", BotoMultiUploadBackend)
-    duplicity.backend.register_backend("s3", BotoMultiUploadBackend)
-    duplicity.backend.register_backend("s3+http", BotoMultiUploadBackend)
+    from ._boto_multi import BotoBackend
 else:
-    duplicity.backend.register_backend("gs", BotoSingleUploadBackend)
-    duplicity.backend.register_backend("s3", BotoSingleUploadBackend)
-    duplicity.backend.register_backend("s3+http", BotoSingleUploadBackend)
+    from ._boto_single import BotoBackend
+
+duplicity.backend.register_backend("gs", BotoBackend)
+duplicity.backend.register_backend("s3", BotoBackend)
+duplicity.backend.register_backend("s3+http", BotoBackend)
 
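Reviewer note: the boto change above selects the implementation module once and then registers the chosen class a single time per scheme, instead of duplicating the three `register_backend` calls in both branches. The shape of that pattern, reduced to a standalone sketch (the stand-in classes here are illustrative; the real ones live in `_boto_multi.py` / `_boto_single.py`):

```python
# Pick the implementation once, then register it under every scheme it serves.
use_multiprocessing = False

class MultiBackend(object):
    """Stand-in for the multiprocessing uploader."""

class SingleBackend(object):
    """Stand-in for the single-process uploader."""

BotoBackend = MultiBackend if use_multiprocessing else SingleBackend

registry = {}
for scheme in ("gs", "s3", "s3+http"):
    registry[scheme] = BotoBackend
```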
=== modified file 'duplicity/backends/cfbackend.py'
--- duplicity/backends/cfbackend.py 2014-04-17 21:54:04 +0000
+++ duplicity/backends/cfbackend.py 2014-04-28 02:49:55 +0000
@@ -18,10 +18,13 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+import duplicity.backend
 from duplicity import globals
 
 if (globals.cf_backend and
     globals.cf_backend.lower().strip() == 'pyrax'):
-    from . import _cf_pyrax
+    from ._cf_pyrax import PyraxBackend as CFBackend
 else:
-    from . import _cf_cloudfiles
+    from ._cf_cloudfiles import CloudFilesBackend as CFBackend
+
+duplicity.backend.register_backend("cf+http", CFBackend)
 
=== modified file 'duplicity/backends/dpbxbackend.py'
--- duplicity/backends/dpbxbackend.py 2014-04-17 21:49:37 +0000
+++ duplicity/backends/dpbxbackend.py 2014-04-28 02:49:55 +0000
@@ -32,14 +32,10 @@
 from functools import reduce
 
 import traceback, StringIO
-from exceptions import Exception
 
 import duplicity.backend
-from duplicity import globals
 from duplicity import log
-from duplicity.errors import *
-from duplicity import tempdir
-from duplicity.backend import retry_fatal
+from duplicity.errors import BackendException
 
 
 # This application key is registered in my name (jno at pisem dot net).
@@ -76,14 +72,14 @@
         def wrapper(self, *args):
             from dropbox import rest
             if login_required and not self.sess.is_linked():
-                log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)
+                raise BackendException("dpbx Cannot login: check your credentials", log.ErrorCode.dpbx_nologin)
                 return
 
             try:
                 return f(self, *args)
             except TypeError as e:
                 log_exception(e)
-                log.FatalError('dpbx type error "%s"' % (e,), log.ErrorCode.backend_code_error)
+                raise BackendException('dpbx type error "%s"' % (e,))
             except rest.ErrorResponse as e:
                 msg = e.user_error_msg or str(e)
                 log.Error('dpbx error: %s' % (msg,), log.ErrorCode.backend_command_error)
@@ -165,25 +161,22 @@
         if not self.sess.is_linked(): # stil not logged in
             log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)
 
-    @retry_fatal
+    def _error_code(self, operation, e):
+        from dropbox import rest
+        if isinstance(e, rest.ErrorResponse):
+            if e.status == 404:
+                return log.ErrorCode.backend_not_found
+
     @command()
-    def put(self, source_path, remote_filename = None):
-        """Transfer source_path to remote_filename"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
+    def _put(self, source_path, remote_filename):
         remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/'))
         remote_path = os.path.join(remote_dir, remote_filename).rstrip()
-
         from_file = open(source_path.name, "rb")
-
         resp = self.api_client.put_file(remote_path, from_file)
         log.Debug( 'dpbx,put(%s,%s): %s'%(source_path.name, remote_path, resp))
 
-    @retry_fatal
     @command()
-    def get(self, remote_filename, local_path):
-        """Get remote filename, saving it to local_path"""
+    def _get(self, remote_filename, local_path):
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
 
         to_file = open( local_path.name, 'wb' )
@@ -196,10 +189,8 @@
 
         local_path.setdata()
 
-    @retry_fatal
     @command()
-    def _list(self,none=None):
-        """List files in directory"""
+    def _list(self):
         # Do a long listing to avoid connection reset
         remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
         resp = self.api_client.metadata(remote_dir)
@@ -214,21 +205,15 @@
                 l.append(name.encode(encoding))
         return l
 
-    @retry_fatal
     @command()
-    def delete(self, filename_list):
-        """Delete files in filename_list"""
-        if not filename_list :
-            log.Debug('dpbx.delete(): no op')
-            return
+    def _delete(self, filename):
         remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
-        for filename in filename_list:
-            remote_name = os.path.join( remote_dir, filename )
-            resp = self.api_client.file_delete( remote_name )
-            log.Debug('dpbx.delete(%s): %s'%(remote_name,resp))
+        remote_name = os.path.join( remote_dir, filename )
+        resp = self.api_client.file_delete( remote_name )
+        log.Debug('dpbx.delete(%s): %s'%(remote_name,resp))
 
     @command()
-    def close(self):
+    def _close(self):
         """close backend session? no! just "flush" the data"""
         info = self.api_client.account_info()
         log.Debug('dpbx.close():')
 
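Reviewer note: the signature change visible above (old `delete(self, filename_list)` vs new `_delete(self, filename)`) repeats in every backend in this branch: the public list-based API moves to the base class, which fans out to a per-file hook. A minimal sketch of that dispatch, with illustrative names (not the literal duplicity/backend.py code):

```python
class BackendBase(object):
    """Illustrative base class: public delete() loops, _delete() does one file."""

    def delete(self, filename_list):
        for filename in filename_list:
            self._delete(filename)  # subclasses implement single-file delete


class RecordingBackend(BackendBase):
    """Toy subclass that records which files it was asked to remove."""

    def __init__(self):
        self.deleted = []

    def _delete(self, filename):
        self.deleted.append(filename)
```

This is what lets each concrete backend above shrink to a few lines: the iteration, the no-op check for an empty list, and (together with the retry wrapper) the error handling all live in one place.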
=== modified file 'duplicity/backends/ftpbackend.py'
--- duplicity/backends/ftpbackend.py 2014-04-25 23:20:12 +0000
+++ duplicity/backends/ftpbackend.py 2014-04-28 02:49:55 +0000
@@ -25,7 +25,6 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
 from duplicity import tempdir
 
 class FTPBackend(duplicity.backend.Backend):
@@ -65,7 +64,7 @@
         # This squelches the "file not found" result from ncftpls when
         # the ftp backend looks for a collection that does not exist.
         # version 3.2.2 has error code 5, 1280 is some legacy value
-        self.popen_persist_breaks[ 'ncftpls' ] = [ 5, 1280 ]
+        self.popen_breaks[ 'ncftpls' ] = [ 5, 1280 ]
 
         # Use an explicit directory name.
         if self.url_string[-1] != '/':
@@ -88,36 +87,28 @@
         if parsed_url.port != None and parsed_url.port != 21:
             self.flags += " -P '%s'" % (parsed_url.port)
 
-    def put(self, source_path, remote_filename = None):
-        """Transfer source_path to remote_filename"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip()
         commandline = "ncftpput %s -m -V -C '%s' '%s'" % \
             (self.flags, source_path.name, remote_path)
-        self.run_command_persist(commandline)
+        self.subprocess_popen(commandline)
 
-    def get(self, remote_filename, local_path):
-        """Get remote filename, saving it to local_path"""
+    def _get(self, remote_filename, local_path):
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
         commandline = "ncftpget %s -V -C '%s' '%s' '%s'" % \
             (self.flags, self.parsed_url.hostname, remote_path.lstrip('/'), local_path.name)
-        self.run_command_persist(commandline)
-        local_path.setdata()
+        self.subprocess_popen(commandline)
 
     def _list(self):
-        """List files in directory"""
         # Do a long listing to avoid connection reset
         commandline = "ncftpls %s -l '%s'" % (self.flags, self.url_string)
-        l = self.popen_persist(commandline).split('\n')
+        _, l, _ = self.subprocess_popen(commandline)
         # Look for our files as the last element of a long list line
-        return [x.split()[-1] for x in l if x and not x.startswith("total ")]
+        return [x.split()[-1] for x in l.split('\n') if x and not x.startswith("total ")]
 
-    def delete(self, filename_list):
-        """Delete files in filename_list"""
-        for filename in filename_list:
-            commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \
-                (self.flags, filename, self.url_string)
-            self.popen_persist(commandline)
+    def _delete(self, filename):
+        commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \
+            (self.flags, filename, self.url_string)
+        self.subprocess_popen(commandline)
 
 duplicity.backend.register_backend("ftp", FTPBackend)
 
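Reviewer note: `_list` above keeps only the last whitespace-separated field of each `ls -l`-style line and skips the `total` header. The same parse can be exercised standalone (the sample listing text below is made up for illustration):

```python
def names_from_listing(listing):
    """Extract file names from long-listing output, as FTPBackend._list does.

    Skips empty lines and the "total N" summary line; for every other line
    the file name is the final whitespace-separated column.
    """
    return [x.split()[-1]
            for x in listing.split('\n')
            if x and not x.startswith("total ")]
```

Note the known limitation this inherits: file names containing whitespace would be truncated to their last token, which is acceptable here because duplicity's volume names never contain spaces.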
=== modified file 'duplicity/backends/ftpsbackend.py'
--- duplicity/backends/ftpsbackend.py 2014-04-25 23:20:12 +0000
+++ duplicity/backends/ftpsbackend.py 2014-04-28 02:49:55 +0000
@@ -28,7 +28,6 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import *
 from duplicity import tempdir
 
 class FTPSBackend(duplicity.backend.Backend):
@@ -85,42 +84,29 @@
         os.write(self.tempfile, "user %s %s\n" % (self.parsed_url.username, self.password))
         os.close(self.tempfile)
 
-        self.flags = "-f %s" % self.tempname
-
-    def put(self, source_path, remote_filename = None):
-        """Transfer source_path to remote_filename"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip()
         commandline = "lftp -c 'source %s;put \'%s\' -o \'%s\''" % \
             (self.tempname, source_path.name, remote_path)
-        l = self.run_command_persist(commandline)
+        self.subprocess_popen(commandline)
 
-    def get(self, remote_filename, local_path):
-        """Get remote filename, saving it to local_path"""
+    def _get(self, remote_filename, local_path):
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
         commandline = "lftp -c 'source %s;get %s -o %s'" % \
             (self.tempname, remote_path.lstrip('/'), local_path.name)
-        self.run_command_persist(commandline)
-        local_path.setdata()
+        self.subprocess_popen(commandline)
 
     def _list(self):
-        """List files in directory"""
         # Do a long listing to avoid connection reset
         remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
         commandline = "lftp -c 'source %s;ls \'%s\''" % (self.tempname, remote_dir)
-        l = self.popen_persist(commandline).split('\n')
+        _, l, _ = self.subprocess_popen(commandline)
         # Look for our files as the last element of a long list line
-        return [x.split()[-1] for x in l if x]
+        return [x.split()[-1] for x in l.split('\n') if x]
 
-    def delete(self, filename_list):
-        """Delete files in filename_list"""
-        filelist = ""
-        for filename in filename_list:
-            filelist += "\'%s\' " % filename
-        if filelist.rstrip():
-            remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
-            commandline = "lftp -c 'source %s;cd \'%s\';rm %s'" % (self.tempname, remote_dir, filelist.rstrip())
-            self.popen_persist(commandline)
+    def _delete(self, filename):
+        remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
+        commandline = "lftp -c 'source %s;cd \'%s\';rm \'%s\''" % (self.tempname, remote_dir, filename)
+        self.subprocess_popen(commandline)
 
 duplicity.backend.register_backend("ftps", FTPSBackend)
 
=== modified file 'duplicity/backends/gdocsbackend.py'
--- duplicity/backends/gdocsbackend.py 2014-04-17 20:50:57 +0000
+++ duplicity/backends/gdocsbackend.py 2014-04-28 02:49:55 +0000
@@ -23,9 +23,7 @@
 import urllib
 
 import duplicity.backend
-from duplicity.backend import retry
-from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import BackendException
 
 
 class GDocsBackend(duplicity.backend.Backend):
@@ -53,14 +51,14 @@
         self.client = gdata.docs.client.DocsClient(source='duplicity $version')
         self.client.ssl = True
         self.client.http_client.debug = False
-        self.__authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password())
+        self._authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password())
 
         # Fetch destination folder entry (and crete hierarchy if required).
         folder_names = string.split(parsed_url.path[1:], '/')
         parent_folder = None
         parent_folder_id = GDocsBackend.ROOT_FOLDER_ID
         for folder_name in folder_names:
-            entries = self.__fetch_entries(parent_folder_id, 'folder', folder_name)
+            entries = self._fetch_entries(parent_folder_id, 'folder', folder_name)
             if entries is not None:
                 if len(entries) == 1:
                     parent_folder = entries[0]
@@ -77,106 +75,54 @@
                     raise BackendException("Error while fetching destination folder '%s'." % folder_name)
         self.folder = parent_folder
 
-    @retry
-    def put(self, source_path, remote_filename=None, raise_errors=False):
-        """Transfer source_path to remote_filename"""
-        # Default remote file name.
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        # Upload!
-        try:
-            # If remote file already exists in destination folder, remove it.
-            entries = self.__fetch_entries(self.folder.resource_id.text,
-                                           GDocsBackend.BACKUP_DOCUMENT_TYPE,
-                                           remote_filename)
-            for entry in entries:
-                self.client.delete(entry.get_edit_link().href + '?delete=true', force=True)
-
-            # Set uploader instance. Note that resumable uploads are required in order to
-            # enable uploads for all file types.
-            # (see http://googleappsdeveloper.blogspot.com/2011/05/upload-all-file-types-to-any-google.html)
-            file = source_path.open()
-            uploader = gdata.client.ResumableUploader(
-                self.client, file, GDocsBackend.BACKUP_DOCUMENT_TYPE, os.path.getsize(file.name),
-                chunk_size=gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE,
-                desired_class=gdata.docs.data.Resource)
-            if uploader:
-                # Chunked upload.
-                entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename))
-                uri = self.folder.get_resumable_create_media_link().href + '?convert=false'
-                entry = uploader.UploadFile(uri, entry=entry)
-                if not entry:
-                    self.__handle_error("Failed to upload file '%s' to remote folder '%s'"
-                                        % (source_path.get_filename(), self.folder.title.text), raise_errors)
-            else:
-                self.__handle_error("Failed to initialize upload of file '%s' to remote folder '%s'"
-                                    % (source_path.get_filename(), self.folder.title.text), raise_errors)
-            assert not file.close()
-        except Exception as e:
-            self.__handle_error("Failed to upload file '%s' to remote folder '%s': %s"
-                                % (source_path.get_filename(), self.folder.title.text, str(e)), raise_errors)
-
-    @retry
-    def get(self, remote_filename, local_path, raise_errors=False):
-        """Get remote filename, saving it to local_path"""
-        try:
-            entries = self.__fetch_entries(self.folder.resource_id.text,
-                                           GDocsBackend.BACKUP_DOCUMENT_TYPE,
-                                           remote_filename)
-            if len(entries) == 1:
-                entry = entries[0]
-                self.client.DownloadResource(entry, local_path.name)
-                local_path.setdata()
-                return
-            else:
-                self.__handle_error("Failed to find file '%s' in remote folder '%s'"
-                                    % (remote_filename, self.folder.title.text), raise_errors)
-        except Exception as e:
-            self.__handle_error("Failed to download file '%s' in remote folder '%s': %s"
-                                % (remote_filename, self.folder.title.text, str(e)), raise_errors)
+    def _put(self, source_path, remote_filename):
+        self._delete(remote_filename)
+
+        # Set uploader instance. Note that resumable uploads are required in order to
+        # enable uploads for all file types.
+        # (see http://googleappsdeveloper.blogspot.com/2011/05/upload-all-file-types-to-any-google.html)
+        file = source_path.open()
+        uploader = gdata.client.ResumableUploader(
+            self.client, file, GDocsBackend.BACKUP_DOCUMENT_TYPE, os.path.getsize(file.name),
+            chunk_size=gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE,
+            desired_class=gdata.docs.data.Resource)
+        if uploader:
+            # Chunked upload.
+            entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename))
+            uri = self.folder.get_resumable_create_media_link().href + '?convert=false'
+            entry = uploader.UploadFile(uri, entry=entry)
+            if not entry:
+                raise BackendException("Failed to upload file '%s' to remote folder '%s'"
+                                       % (source_path.get_filename(), self.folder.title.text))
+        else:
+            raise BackendException("Failed to initialize upload of file '%s' to remote folder '%s'"
+                                   % (source_path.get_filename(), self.folder.title.text))
+        assert not file.close()
+
+    def _get(self, remote_filename, local_path):
+        entries = self._fetch_entries(self.folder.resource_id.text,
+                                      GDocsBackend.BACKUP_DOCUMENT_TYPE,
+                                      remote_filename)
+        if len(entries) == 1:
+            entry = entries[0]
+            self.client.DownloadResource(entry, local_path.name)
+        else:
+            raise BackendException("Failed to find file '%s' in remote folder '%s'"
+                                   % (remote_filename, self.folder.title.text))
+
+    def _list(self):
+        entries = self._fetch_entries(self.folder.resource_id.text,
+                                      GDocsBackend.BACKUP_DOCUMENT_TYPE)
+        return [entry.title.text for entry in entries]
+
+    def _delete(self, filename):
+        entries = self._fetch_entries(self.folder.resource_id.text,
+                                      GDocsBackend.BACKUP_DOCUMENT_TYPE,
+                                      filename)
+        for entry in entries:
+            self.client.delete(entry.get_edit_link().href + '?delete=true', force=True)
+
+    def _authorize(self, email, password, captcha_token=None, captcha_response=None):
138
139 @retry
140 def _list(self, raise_errors=False):
141 """List files in folder"""
142 try:
143 entries = self.__fetch_entries(self.folder.resource_id.text,
144 GDocsBackend.BACKUP_DOCUMENT_TYPE)
145 return [entry.title.text for entry in entries]
146 except Exception as e:
147 self.__handle_error("Failed to fetch list of files in remote folder '%s': %s"
148 % (self.folder.title.text, str(e)), raise_errors)
149
150 @retry
151 def delete(self, filename_list, raise_errors=False):
152 """Delete files in filename_list"""
153 for filename in filename_list:
154 try:
155 entries = self.__fetch_entries(self.folder.resource_id.text,
156 GDocsBackend.BACKUP_DOCUMENT_TYPE,
157 filename)
158 if len(entries) > 0:
159 success = True
160 for entry in entries:
161 if not self.client.delete(entry.get_edit_link().href + '?delete=true', force=True):
162 success = False
163 if not success:
164 self.__handle_error("Failed to remove file '%s' in remote folder '%s'"
165 % (filename, self.folder.title.text), raise_errors)
166 else:
167 log.Warn("Failed to fetch file '%s' in remote folder '%s'"
168 % (filename, self.folder.title.text))
169 except Exception as e:
170 self.__handle_error("Failed to remove file '%s' in remote folder '%s': %s"
171 % (filename, self.folder.title.text, str(e)), raise_errors)
172
173 def __handle_error(self, message, raise_errors=True):
174 if raise_errors:
175 raise BackendException(message)
176 else:
177 log.FatalError(message, log.ErrorCode.backend_error)
178
179 def __authorize(self, email, password, captcha_token=None, captcha_response=None):
180 try:126 try:
181 self.client.client_login(email,127 self.client.client_login(email,
182 password,128 password,
@@ -189,17 +135,15 @@
189 answer = None135 answer = None
190 while not answer:136 while not answer:
191 answer = raw_input('Answer to the challenge? ')137 answer = raw_input('Answer to the challenge? ')
192 self.__authorize(email, password, challenge.captcha_token, answer)138 self._authorize(email, password, challenge.captcha_token, answer)
193 except gdata.client.BadAuthentication:139 except gdata.client.BadAuthentication:
194 self.__handle_error('Invalid user credentials given. Be aware that accounts '140 raise BackendException('Invalid user credentials given. Be aware that accounts '
195 'that use 2-step verification require creating an application specific '141 'that use 2-step verification require creating an application specific '
196 'access code for using this Duplicity backend. Follow the instrucction in '142 'access code for using this Duplicity backend. Follow the instruction in '
197 'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 '143 'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 '
198 'and create your application-specific password to run duplicity backups.')144 'and create your application-specific password to run duplicity backups.')
199 except Exception as e:
200 self.__handle_error('Error while authenticating client: %s.' % str(e))
201145
202 def __fetch_entries(self, folder_id, type, title=None):146 def _fetch_entries(self, folder_id, type, title=None):
203 # Build URI.147 # Build URI.
204 uri = '/feeds/default/private/full/%s/contents' % folder_id148 uri = '/feeds/default/private/full/%s/contents' % folder_id
205 if type == 'folder':149 if type == 'folder':
@@ -211,34 +155,31 @@
211 if title:155 if title:
212 uri += '&title=' + urllib.quote(title) + '&title-exact=true'156 uri += '&title=' + urllib.quote(title) + '&title-exact=true'
213157
214 try:158 # Fetch entries.
215 # Fetch entries.159 entries = self.client.get_all_resources(uri=uri)
216 entries = self.client.get_all_resources(uri=uri)160
217161 # When filtering by entry title, API is returning (don't know why) documents in other
218 # When filtering by entry title, API is returning (don't know why) documents in other162 # folders (apart from folder_id) matching the title, so some extra filtering is required.
219 # folders (apart from folder_id) matching the title, so some extra filtering is required.163 if title:
220 if title:164 result = []
221 result = []165 for entry in entries:
222 for entry in entries:166 resource_type = entry.get_resource_type()
223 resource_type = entry.get_resource_type()167 if (not type) \
224 if (not type) \168 or (type == 'folder' and resource_type == 'folder') \
225 or (type == 'folder' and resource_type == 'folder') \169 or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'):
226 or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'):170
227171 if folder_id != GDocsBackend.ROOT_FOLDER_ID:
228 if folder_id != GDocsBackend.ROOT_FOLDER_ID:172 for link in entry.in_collections():
229 for link in entry.in_collections():173 folder_entry = self.client.get_entry(link.href, None, None,
230 folder_entry = self.client.get_entry(link.href, None, None,174 desired_class=gdata.docs.data.Resource)
231 desired_class=gdata.docs.data.Resource)175 if folder_entry and (folder_entry.resource_id.text == folder_id):
232 if folder_entry and (folder_entry.resource_id.text == folder_id):176 result.append(entry)
233 result.append(entry)177 elif len(entry.in_collections()) == 0:
234 elif len(entry.in_collections()) == 0:178 result.append(entry)
235 result.append(entry)179 else:
236 else:180 result = entries
237 result = entries181
238182 # Done!
239 # Done!183 return result
240 return result
241 except Exception as e:
242 self.__handle_error('Error while fetching remote entries: %s.' % str(e))
243184
244duplicity.backend.register_backend('gdocs', GDocsBackend)185duplicity.backend.register_backend('gdocs', GDocsBackend)
245186
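The gdocs changes above illustrate the pattern this branch applies everywhere: backends implement underscore-prefixed primitives (`_put`, `_get`, `_list`, `_delete`) and signal failure by raising `BackendException`, while retry and error reporting move into shared wrapper code in `duplicity/backend.py`. A minimal sketch of that division of labor (the class bodies here are illustrative toys, not duplicity's actual wrapper):

```python
# Illustrative sketch only: shows the contract implied by the diff above,
# not duplicity's real backend.py.

class BackendException(Exception):
    """Raised by backend primitives; the shared wrapper decides what to do."""

class Backend(object):
    def delete(self, filename_list):
        # The public API keeps its list signature; subclasses now only
        # implement the single-file primitive.
        for filename in filename_list:
            self._delete(filename)

class MemoryBackend(Backend):
    """Toy in-memory backend used to exercise the contract."""
    def __init__(self):
        self.files = {}

    def _put(self, name, data):
        self.files[name] = data

    def _list(self):
        return sorted(self.files)

    def _delete(self, filename):
        if filename not in self.files:
            raise BackendException("no such file '%s'" % filename)
        del self.files[filename]
```

A subclass never calls `log.FatalError` itself anymore; it raises, and the caller maps the exception to an error code and handles retries.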
=== modified file 'duplicity/backends/giobackend.py'
--- duplicity/backends/giobackend.py 2014-04-17 20:50:57 +0000
+++ duplicity/backends/giobackend.py 2014-04-28 02:49:55 +0000
@@ -19,18 +19,12 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-import types
 import subprocess
 import atexit
 import signal
-from gi.repository import Gio #@UnresolvedImport
-from gi.repository import GLib #@UnresolvedImport
 
 import duplicity.backend
-from duplicity.backend import retry
 from duplicity import log
-from duplicity import util
-from duplicity.errors import * #@UnusedWildImport
 
 def ensure_dbus():
     # GIO requires a dbus session bus which can start the gvfs daemons
@@ -46,36 +40,39 @@
         atexit.register(os.kill, int(parts[1]), signal.SIGTERM)
         os.environ[parts[0]] = parts[1]
 
-class DupMountOperation(Gio.MountOperation):
-    """A simple MountOperation that grabs the password from the environment
-    or the user.
-    """
-    def __init__(self, backend):
-        Gio.MountOperation.__init__(self)
-        self.backend = backend
-        self.connect('ask-password', self.ask_password_cb)
-        self.connect('ask-question', self.ask_question_cb)
-
-    def ask_password_cb(self, *args, **kwargs):
-        self.set_password(self.backend.get_password())
-        self.reply(Gio.MountOperationResult.HANDLED)
-
-    def ask_question_cb(self, *args, **kwargs):
-        # Obviously just always answering with the first choice is a naive
-        # approach. But there's no easy way to allow for answering questions
-        # in duplicity's typical run-from-cron mode with environment variables.
-        # And only a couple gvfs backends ask questions: 'sftp' does about
-        # new hosts and 'afc' does if the device is locked. 0 should be a
-        # safe choice.
-        self.set_choice(0)
-        self.reply(Gio.MountOperationResult.HANDLED)
-
 class GIOBackend(duplicity.backend.Backend):
     """Use this backend when saving to a GIO URL.
     This is a bit of a meta-backend, in that it can handle multiple schemas.
     URLs look like schema://user@server/path.
     """
     def __init__(self, parsed_url):
+        from gi.repository import Gio #@UnresolvedImport
+        from gi.repository import GLib #@UnresolvedImport
+
+        class DupMountOperation(Gio.MountOperation):
+            """A simple MountOperation that grabs the password from the environment
+            or the user.
+            """
+            def __init__(self, backend):
+                Gio.MountOperation.__init__(self)
+                self.backend = backend
+                self.connect('ask-password', self.ask_password_cb)
+                self.connect('ask-question', self.ask_question_cb)
+
+            def ask_password_cb(self, *args, **kwargs):
+                self.set_password(self.backend.get_password())
+                self.reply(Gio.MountOperationResult.HANDLED)
+
+            def ask_question_cb(self, *args, **kwargs):
+                # Obviously just always answering with the first choice is a naive
+                # approach. But there's no easy way to allow for answering questions
+                # in duplicity's typical run-from-cron mode with environment variables.
+                # And only a couple gvfs backends ask questions: 'sftp' does about
+                # new hosts and 'afc' does if the device is locked. 0 should be a
+                # safe choice.
+                self.set_choice(0)
+                self.reply(Gio.MountOperationResult.HANDLED)
+
         duplicity.backend.Backend.__init__(self, parsed_url)
 
         ensure_dbus()
@@ -86,8 +83,8 @@
         op = DupMountOperation(self)
         loop = GLib.MainLoop()
         self.remote_file.mount_enclosing_volume(Gio.MountMountFlags.NONE,
-                                                op, None, self.done_with_mount,
-                                                loop)
+                                                op, None,
+                                                self.__done_with_mount, loop)
         loop.run() # halt program until we're done mounting
 
         # Now make the directory if it doesn't exist
@@ -97,7 +94,9 @@
             if e.code != Gio.IOErrorEnum.EXISTS:
                 raise
 
-    def done_with_mount(self, fileobj, result, loop):
+    def __done_with_mount(self, fileobj, result, loop):
+        from gi.repository import Gio #@UnresolvedImport
+        from gi.repository import GLib #@UnresolvedImport
         try:
             fileobj.mount_enclosing_volume_finish(result)
         except GLib.GError as e:
@@ -107,97 +106,63 @@
                            % str(e), log.ErrorCode.connection_failed)
         loop.quit()
 
-    def handle_error(self, raise_error, e, op, file1=None, file2=None):
-        if raise_error:
-            raise e
-        code = log.ErrorCode.backend_error
+    def __copy_progress(self, *args, **kwargs):
+        pass
+
+    def __copy_file(self, source, target):
+        from gi.repository import Gio #@UnresolvedImport
+        source.copy(target,
+                    Gio.FileCopyFlags.OVERWRITE | Gio.FileCopyFlags.NOFOLLOW_SYMLINKS,
+                    None, self.__copy_progress, None)
+
+    def _error_code(self, operation, e):
+        from gi.repository import Gio #@UnresolvedImport
+        from gi.repository import GLib #@UnresolvedImport
         if isinstance(e, GLib.GError):
-            if e.code == Gio.IOErrorEnum.PERMISSION_DENIED:
-                code = log.ErrorCode.backend_permission_denied
+            if e.code == Gio.IOErrorEnum.FAILED and operation == 'delete':
+                # Sometimes delete will return a generic failure on a file not
+                # found (notably the FTP does that)
+                return log.ErrorCode.backend_not_found
+            elif e.code == Gio.IOErrorEnum.PERMISSION_DENIED:
+                return log.ErrorCode.backend_permission_denied
             elif e.code == Gio.IOErrorEnum.NOT_FOUND:
-                code = log.ErrorCode.backend_not_found
+                return log.ErrorCode.backend_not_found
             elif e.code == Gio.IOErrorEnum.NO_SPACE:
-                code = log.ErrorCode.backend_no_space
-        extra = ' '.join([util.escape(x) for x in [file1, file2] if x])
-        extra = ' '.join([op, extra])
-        log.FatalError(str(e), code, extra)
-
-    def copy_progress(self, *args, **kwargs):
-        pass
-
-    @retry
-    def copy_file(self, op, source, target, raise_errors=False):
-        log.Info(_("Writing %s") % target.get_parse_name())
-        try:
-            source.copy(target,
-                        Gio.FileCopyFlags.OVERWRITE | Gio.FileCopyFlags.NOFOLLOW_SYMLINKS,
-                        None, self.copy_progress, None)
-        except Exception as e:
-            self.handle_error(raise_errors, e, op, source.get_parse_name(),
-                              target.get_parse_name())
-
-    def put(self, source_path, remote_filename = None):
-        """Copy file to remote"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+                return log.ErrorCode.backend_no_space
+
+    def _put(self, source_path, remote_filename):
+        from gi.repository import Gio #@UnresolvedImport
         source_file = Gio.File.new_for_path(source_path.name)
         target_file = self.remote_file.get_child(remote_filename)
-        self.copy_file('put', source_file, target_file)
+        self.__copy_file(source_file, target_file)
 
-    def get(self, filename, local_path):
-        """Get file and put in local_path (Path object)"""
+    def _get(self, filename, local_path):
+        from gi.repository import Gio #@UnresolvedImport
         source_file = self.remote_file.get_child(filename)
         target_file = Gio.File.new_for_path(local_path.name)
-        self.copy_file('get', source_file, target_file)
-        local_path.setdata()
+        self.__copy_file(source_file, target_file)
 
-    @retry
-    def _list(self, raise_errors=False):
-        """List files in that directory"""
+    def _list(self):
+        from gi.repository import Gio #@UnresolvedImport
         files = []
-        try:
-            enum = self.remote_file.enumerate_children(Gio.FILE_ATTRIBUTE_STANDARD_NAME,
-                                                       Gio.FileQueryInfoFlags.NOFOLLOW_SYMLINKS,
-                                                       None)
+        enum = self.remote_file.enumerate_children(Gio.FILE_ATTRIBUTE_STANDARD_NAME,
+                                                   Gio.FileQueryInfoFlags.NOFOLLOW_SYMLINKS,
+                                                   None)
+        info = enum.next_file(None)
+        while info:
+            files.append(info.get_name())
             info = enum.next_file(None)
-            while info:
-                files.append(info.get_name())
-                info = enum.next_file(None)
-        except Exception as e:
-            self.handle_error(raise_errors, e, 'list',
-                              self.remote_file.get_parse_name())
         return files
 
-    @retry
-    def delete(self, filename_list, raise_errors=False):
-        """Delete all files in filename list"""
-        assert type(filename_list) is not types.StringType
-        for filename in filename_list:
-            target_file = self.remote_file.get_child(filename)
-            try:
-                target_file.delete(None)
-            except Exception as e:
-                if isinstance(e, GLib.GError):
-                    if e.code == Gio.IOErrorEnum.NOT_FOUND:
-                        continue
-                self.handle_error(raise_errors, e, 'delete',
-                                  target_file.get_parse_name())
-        return
-
-    @retry
-    def _query_file_info(self, filename, raise_errors=False):
-        """Query attributes on filename"""
-        target_file = self.remote_file.get_child(filename)
-        attrs = Gio.FILE_ATTRIBUTE_STANDARD_SIZE
-        try:
-            info = target_file.query_info(attrs, Gio.FileQueryInfoFlags.NONE,
-                                          None)
-            return {'size': info.get_size()}
-        except Exception as e:
-            if isinstance(e, GLib.GError):
-                if e.code == Gio.IOErrorEnum.NOT_FOUND:
-                    return {'size': -1} # early exit, no need to retry
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
+    def _delete(self, filename):
+        target_file = self.remote_file.get_child(filename)
+        target_file.delete(None)
+
+    def _query(self, filename):
+        from gi.repository import Gio #@UnresolvedImport
+        target_file = self.remote_file.get_child(filename)
+        info = target_file.query_info(Gio.FILE_ATTRIBUTE_STANDARD_SIZE,
+                                      Gio.FileQueryInfoFlags.NONE, None)
+        return {'size': info.get_size()}
+
+duplicity.backend.register_backend_prefix('gio', GIOBackend)
 
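The new `_error_code` hook above lets the GIO backend translate library-specific exceptions into generic error codes instead of calling `log.FatalError` itself; note the special case that maps a generic `FAILED` during delete to not-found. The classification logic in isolation (the `GError` class and code constants below are stand-ins for the real `GLib.GError`/`Gio.IOErrorEnum` types):

```python
# Stand-in types for illustration: the real backend inspects GLib.GError
# and Gio.IOErrorEnum, and returns log.ErrorCode values.
FAILED, PERMISSION_DENIED, NOT_FOUND, NO_SPACE = range(4)

class GError(Exception):
    def __init__(self, code):
        self.code = code

def error_code(operation, e):
    """Map an exception to a generic backend error-code string."""
    if isinstance(e, GError):
        if e.code == FAILED and operation == 'delete':
            # Some gvfs backends (notably FTP) report a generic failure
            # when deleting a missing file; treat that as not-found.
            return 'backend_not_found'
        elif e.code == PERMISSION_DENIED:
            return 'backend_permission_denied'
        elif e.code == NOT_FOUND:
            return 'backend_not_found'
        elif e.code == NO_SPACE:
            return 'backend_no_space'
    return 'backend_error'
```

Centralizing this mapping means each primitive (`_put`, `_get`, `_delete`) can simply let exceptions propagate.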
=== modified file 'duplicity/backends/hsibackend.py'
--- duplicity/backends/hsibackend.py 2014-04-25 23:20:12 +0000
+++ duplicity/backends/hsibackend.py 2014-04-28 02:49:55 +0000
@@ -20,9 +20,7 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-
 import duplicity.backend
-from duplicity.errors import * #@UnusedWildImport
 
 hsi_command = "hsi"
 class HSIBackend(duplicity.backend.Backend):
@@ -35,35 +33,23 @@
         else:
             self.remote_prefix = ""
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         commandline = '%s "put %s : %s%s"' % (hsi_command,source_path.name,self.remote_prefix,remote_filename)
-        try:
-            self.run_command(commandline)
-        except Exception:
-            print commandline
+        self.subprocess_popen(commandline)
 
-    def get(self, remote_filename, local_path):
+    def _get(self, remote_filename, local_path):
         commandline = '%s "get %s : %s%s"' % (hsi_command, local_path.name, self.remote_prefix, remote_filename)
-        self.run_command(commandline)
-        local_path.setdata()
-        if not local_path.exists():
-            raise BackendException("File %s not found" % local_path.name)
+        self.subprocess_popen(commandline)
 
-    def list(self):
+    def _list(self):
         commandline = '%s "ls -l %s"' % (hsi_command, self.remote_dir)
         l = os.popen3(commandline)[2].readlines()[3:]
         for i in range(0,len(l)):
             l[i] = l[i].split()[-1]
         return [x for x in l if x]
 
-    def delete(self, filename_list):
-        assert len(filename_list) > 0
-        for fn in filename_list:
-            commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, fn)
-            self.run_command(commandline)
+    def _delete(self, filename):
+        commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, filename)
+        self.subprocess_popen(commandline)
 
 duplicity.backend.register_backend("hsi", HSIBackend)
-
-
 
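The rewritten `_list` above still scrapes the stderr output of `hsi "ls -l"`: it skips the first three banner lines and keeps the trailing whitespace-separated field of each remaining row. That parsing step in isolation (the sample listing below is invented for illustration):

```python
def parse_hsi_listing(lines):
    # hsi prints a few banner lines before the listing; skip the first
    # three, then keep the trailing filename field of each row.
    names = []
    for line in lines[3:]:
        fields = line.split()
        if fields:
            names.append(fields[-1])
    return names
```

Keeping only the last field makes the parse tolerant of the permission/owner/size columns varying in width.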
=== modified file 'duplicity/backends/imapbackend.py'
--- duplicity/backends/imapbackend.py 2014-04-17 22:03:10 +0000
+++ duplicity/backends/imapbackend.py 2014-04-28 02:49:55 +0000
@@ -44,7 +44,7 @@
                   (self.__class__.__name__, parsed_url.scheme, parsed_url.hostname, parsed_url.username))
 
         # Store url for reconnection on error
-        self._url = parsed_url
+        self.url = parsed_url
 
         # Set the username
         if ( parsed_url.username is None ):
@@ -61,12 +61,12 @@
         else:
             password = parsed_url.password
 
-        self._username = username
-        self._password = password
-        self._resetConnection()
+        self.username = username
+        self.password = password
+        self.resetConnection()
 
-    def _resetConnection(self):
-        parsed_url = self._url
+    def resetConnection(self):
+        parsed_url = self.url
         try:
             imap_server = os.environ['IMAP_SERVER']
         except KeyError:
@@ -74,32 +74,32 @@
 
         # Try to close the connection cleanly
         try:
-            self._conn.close()
+            self.conn.close()
         except Exception:
             pass
 
         if (parsed_url.scheme == "imap"):
             cl = imaplib.IMAP4
-            self._conn = cl(imap_server, 143)
+            self.conn = cl(imap_server, 143)
         elif (parsed_url.scheme == "imaps"):
             cl = imaplib.IMAP4_SSL
-            self._conn = cl(imap_server, 993)
+            self.conn = cl(imap_server, 993)
 
         log.Debug("Type of imap class: %s" % (cl.__name__))
         self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1)
 
         # Login
         if (not(globals.imap_full_address)):
-            self._conn.login(self._username, self._password)
-            self._conn.select(globals.imap_mailbox)
+            self.conn.login(self.username, self.password)
+            self.conn.select(globals.imap_mailbox)
             log.Info("IMAP connected")
         else:
-            self._conn.login(self._username + "@" + parsed_url.hostname, self._password)
-            self._conn.select(globals.imap_mailbox)
+            self.conn.login(self.username + "@" + parsed_url.hostname, self.password)
+            self.conn.select(globals.imap_mailbox)
             log.Info("IMAP connected")
 
 
-    def _prepareBody(self,f,rname):
+    def prepareBody(self,f,rname):
         mp = email.MIMEMultipart.MIMEMultipart()
 
         # I am going to use the remote_dir as the From address so that
@@ -117,9 +117,7 @@
 
         return mp.as_string()
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         f=source_path.open("rb")
         allowedTimeout = globals.timeout
         if (allowedTimeout == 0):
@@ -127,12 +125,12 @@
             allowedTimeout = 2880
         while allowedTimeout > 0:
             try:
-                self._conn.select(remote_filename)
-                body=self._prepareBody(f,remote_filename)
+                self.conn.select(remote_filename)
+                body=self.prepareBody(f,remote_filename)
                 # If we don't select the IMAP folder before
                 # append, the message goes into the INBOX.
-                self._conn.select(globals.imap_mailbox)
-                self._conn.append(globals.imap_mailbox, None, None, body)
+                self.conn.select(globals.imap_mailbox)
+                self.conn.append(globals.imap_mailbox, None, None, body)
                 break
             except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                 allowedTimeout -= 1
@@ -140,7 +138,7 @@
                 time.sleep(30)
                 while allowedTimeout > 0:
                     try:
-                        self._resetConnection()
+                        self.resetConnection()
                         break
                     except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                         allowedTimeout -= 1
@@ -149,15 +147,15 @@
 
         log.Info("IMAP mail with '%s' subject stored" % remote_filename)
 
-    def get(self, remote_filename, local_path):
+    def _get(self, remote_filename, local_path):
         allowedTimeout = globals.timeout
         if (allowedTimeout == 0):
             # Allow a total timeout of 1 day
             allowedTimeout = 2880
         while allowedTimeout > 0:
             try:
-                self._conn.select(globals.imap_mailbox)
-                (result,list) = self._conn.search(None, 'Subject', remote_filename)
+                self.conn.select(globals.imap_mailbox)
+                (result,list) = self.conn.search(None, 'Subject', remote_filename)
                 if result != "OK":
                     raise Exception(list[0])
 
@@ -165,7 +163,7 @@
                 if list[0] == '':
                     raise Exception("no mail with subject %s")
 
-                (result,list) = self._conn.fetch(list[0],"(RFC822)")
+                (result,list) = self.conn.fetch(list[0],"(RFC822)")
 
                 if result != "OK":
                     raise Exception(list[0])
@@ -185,7 +183,7 @@
                 time.sleep(30)
                 while allowedTimeout > 0:
                     try:
-                        self._resetConnection()
+                        self.resetConnection()
                         break
                     except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                         allowedTimeout -= 1
@@ -199,7 +197,7 @@
 
     def _list(self):
         ret = []
-        (result,list) = self._conn.select(globals.imap_mailbox)
+        (result,list) = self.conn.select(globals.imap_mailbox)
         if result != "OK":
             raise BackendException(list[0])
 
@@ -207,14 +205,14 @@
         # address
 
         # Search returns an error if you haven't selected an IMAP folder.
-        (result,list) = self._conn.search(None, 'FROM', self.remote_dir)
+        (result,list) = self.conn.search(None, 'FROM', self.remote_dir)
         if result!="OK":
             raise Exception(list[0])
         if list[0]=='':
             return ret
         nums=list[0].split(" ")
         set="%s:%s"%(nums[0],nums[-1])
-        (result,list) = self._conn.fetch(set,"(BODY[HEADER])")
+        (result,list) = self.conn.fetch(set,"(BODY[HEADER])")
         if result!="OK":
             raise Exception(list[0])
 
@@ -232,34 +230,32 @@
             log.Info("IMAP LIST: %s %s" % (subj,header_from))
         return ret
 
-    def _imapf(self,fun,*args):
+    def imapf(self,fun,*args):
         (ret,list)=fun(*args)
         if ret != "OK":
             raise Exception(list[0])
         return list
 
-    def _delete_single_mail(self,i):
-        self._imapf(self._conn.store,i,"+FLAGS",'\\DELETED')
+    def delete_single_mail(self,i):
+        self.imapf(self.conn.store,i,"+FLAGS",'\\DELETED')
 
-    def _expunge(self):
-        list=self._imapf(self._conn.expunge)
+    def expunge(self):
+        list=self.imapf(self.conn.expunge)
 
-    def delete(self, filename_list):
-        assert len(filename_list) > 0
+    def _delete_list(self, filename_list):
         for filename in filename_list:
-            list = self._imapf(self._conn.search,None,"(SUBJECT %s)"%filename)
+            list = self.imapf(self.conn.search,None,"(SUBJECT %s)"%filename)
             list = list[0].split()
-            if len(list)==0 or list[0]=="":raise Exception("no such mail with subject '%s'"%filename)
-            self._delete_single_mail(list[0])
+            if len(list) > 0 and list[0] != "":
+                self.delete_single_mail(list[0])
             log.Notice("marked %s to be deleted" % filename)
-        self._expunge()
-        log.Notice("IMAP expunged %s files" % len(list))
+        self.expunge()
+        log.Notice("IMAP expunged %s files" % len(filename_list))
 
-    def close(self):
-        self._conn.select(globals.imap_mailbox)
-        self._conn.close()
-        self._conn.logout()
+    def _close(self):
+        self.conn.select(globals.imap_mailbox)
+        self.conn.close()
+        self.conn.logout()
 
 duplicity.backend.register_backend("imap", ImapBackend);
 duplicity.backend.register_backend("imaps", ImapBackend);
-
 
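The `_put` and `_get` methods above share the same budgeted retry shape: attempt the IMAP operation, and on a connection-type error sleep, reconnect, and try again until the timeout budget runs out. That control flow, extracted into a generic helper (a simplification; the backend's real code nests a second budgeted loop around `resetConnection` itself):

```python
def with_reconnect(action, reset, attempts, sleep=lambda seconds: None):
    # Try `action`; on a connection error, decrement the budget, sleep,
    # reconnect via `reset`, and retry. Re-raise once the budget is spent.
    while True:
        try:
            return action()
        except ConnectionError:
            attempts -= 1
            if attempts <= 0:
                raise
            sleep(30)
            reset()
```

With `attempts` derived from `globals.timeout` (or the one-day default of 2880 half-minute slots), each backend operation becomes a one-line call through the helper.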
=== modified file 'duplicity/backends/localbackend.py'
--- duplicity/backends/localbackend.py 2014-04-17 20:50:57 +0000
+++ duplicity/backends/localbackend.py 2014-04-28 02:49:55 +0000
@@ -20,14 +20,11 @@
20# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA20# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
2121
22import os22import os
23import types
24import errno
2523
26import duplicity.backend24import duplicity.backend
27from duplicity import log25from duplicity import log
28from duplicity import path26from duplicity import path
29from duplicity import util27from duplicity.errors import BackendException
30from duplicity.errors import * #@UnusedWildImport
3128
3229
33class LocalBackend(duplicity.backend.Backend):30class LocalBackend(duplicity.backend.Backend):
@@ -43,90 +40,37 @@
         if not parsed_url.path.startswith('//'):
             raise BackendException("Bad file:// path syntax.")
         self.remote_pathdir = path.Path(parsed_url.path[2:])
-
-    def handle_error(self, e, op, file1 = None, file2 = None):
-        code = log.ErrorCode.backend_error
-        if hasattr(e, 'errno'):
-            if e.errno == errno.EACCES:
-                code = log.ErrorCode.backend_permission_denied
-            elif e.errno == errno.ENOENT:
-                code = log.ErrorCode.backend_not_found
-            elif e.errno == errno.ENOSPC:
-                code = log.ErrorCode.backend_no_space
-        extra = ' '.join([util.escape(x) for x in [file1, file2] if x])
-        extra = ' '.join([op, extra])
-        if op != 'delete' and op != 'query':
-            log.FatalError(str(e), code, extra)
-        else:
-            log.Warn(str(e), code, extra)
-
-    def move(self, source_path, remote_filename = None):
-        self.put(source_path, remote_filename, rename_instead = True)
-
-    def put(self, source_path, remote_filename = None, rename_instead = False):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-        target_path = self.remote_pathdir.append(remote_filename)
-        log.Info("Writing %s" % target_path.name)
-        """Try renaming first (if allowed to), copying if doesn't work"""
-        if rename_instead:
-            try:
-                source_path.rename(target_path)
-            except OSError:
-                pass
-            except Exception as e:
-                self.handle_error(e, 'put', source_path.name, target_path.name)
-            else:
-                return
-        try:
-            target_path.writefileobj(source_path.open("rb"))
-        except Exception as e:
-            self.handle_error(e, 'put', source_path.name, target_path.name)
-
-        """If we get here, renaming failed previously"""
-        if rename_instead:
-            """We need to simulate its behaviour"""
-            source_path.delete()
-
-    def get(self, filename, local_path):
-        """Get file and put in local_path (Path object)"""
-        source_path = self.remote_pathdir.append(filename)
-        try:
-            local_path.writefileobj(source_path.open("rb"))
-        except Exception as e:
-            self.handle_error(e, 'get', source_path.name, local_path.name)
+        try:
+            os.makedirs(self.remote_pathdir.base)
+        except Exception:
+            pass
+
+    def _move(self, source_path, remote_filename):
+        target_path = self.remote_pathdir.append(remote_filename)
+        try:
+            source_path.rename(target_path)
+            return True
+        except OSError:
+            return False
+
+    def _put(self, source_path, remote_filename):
+        target_path = self.remote_pathdir.append(remote_filename)
+        target_path.writefileobj(source_path.open("rb"))
+
+    def _get(self, filename, local_path):
+        source_path = self.remote_pathdir.append(filename)
+        local_path.writefileobj(source_path.open("rb"))

     def _list(self):
-        """List files in that directory"""
-        try:
-            os.makedirs(self.remote_pathdir.base)
-        except Exception:
-            pass
-        try:
-            return self.remote_pathdir.listdir()
-        except Exception as e:
-            self.handle_error(e, 'list', self.remote_pathdir.name)
-
-    def delete(self, filename_list):
-        """Delete all files in filename list"""
-        assert type(filename_list) is not types.StringType
-        for filename in filename_list:
-            try:
-                self.remote_pathdir.append(filename).delete()
-            except Exception as e:
-                self.handle_error(e, 'delete', self.remote_pathdir.append(filename).name)
-
-    def _query_file_info(self, filename):
-        """Query attributes on filename"""
-        try:
-            target_file = self.remote_pathdir.append(filename)
-            if not os.path.exists(target_file.name):
-                return {'size': -1}
-            target_file.setdata()
-            size = target_file.getsize()
-            return {'size': size}
-        except Exception as e:
-            self.handle_error(e, 'query', target_file.name)
-            return {'size': None}
+        return self.remote_pathdir.listdir()
+
+    def _delete(self, filename):
+        self.remote_pathdir.append(filename).delete()
+
+    def _query(self, filename):
+        target_file = self.remote_pathdir.append(filename)
+        target_file.setdata()
+        size = target_file.getsize() if target_file.exists() else -1
+        return {'size': size}

 duplicity.backend.register_backend("file", LocalBackend)

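With retries and error reporting centralized, a concrete backend shrinks to its storage primitives, as the new LocalBackend above shows. A hedged sketch of that minimal method set, using a plain dict in place of duplicity's path objects (`DictBackend` and `self.store` are hypothetical, not part of duplicity):

```python
# Sketch of the post-unification backend shape: five small primitives and
# no retry/logging boilerplate. Backed by an in-memory dict for clarity.

class DictBackend:
    def __init__(self):
        self.store = {}

    def _put(self, data, remote_filename):
        self.store[remote_filename] = data

    def _get(self, remote_filename):
        return self.store[remote_filename]

    def _list(self):
        return sorted(self.store)

    def _delete(self, filename):
        del self.store[filename]

    def _query(self, filename):
        # -1 mirrors the "file not present" convention used above
        data = self.store.get(filename)
        return {'size': len(data) if data is not None else -1}
```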
=== modified file 'duplicity/backends/megabackend.py'
--- duplicity/backends/megabackend.py 2014-04-17 20:50:57 +0000
+++ duplicity/backends/megabackend.py 2014-04-28 02:49:55 +0000
@@ -22,9 +22,8 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

 import duplicity.backend
-from duplicity.backend import retry
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import BackendException


 class MegaBackend(duplicity.backend.Backend):
@@ -63,113 +62,64 @@

         self.folder = parent_folder

-    @retry
-    def put(self, source_path, remote_filename=None, raise_errors=False):
-        """Transfer source_path to remote_filename"""
-        # Default remote file name.
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        try:
-            # If remote file already exists in destination folder, remove it.
-            files = self.client.get_files()
-            entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
-
-            for entry in entries:
-                self.client.delete(entry)
-
-            self.client.upload(source_path.get_canonical(), self.folder, dest_filename=remote_filename)
-
-        except Exception as e:
-            self.__handle_error("Failed to upload file '%s' to remote folder '%s': %s"
-                                % (source_path.get_canonical(), self.__get_node_name(self.folder), str(e)), raise_errors)
-
-    @retry
-    def get(self, remote_filename, local_path, raise_errors=False):
-        """Get remote filename, saving it to local_path"""
-        try:
-            files = self.client.get_files()
-            entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
-
-            if len(entries):
-                # get first matching remote file
-                entry = entries.keys()[0]
-                self.client.download((entry, entries[entry]), dest_filename=local_path.name)
-                local_path.setdata()
-                return
-            else:
-                self.__handle_error("Failed to find file '%s' in remote folder '%s'"
-                                    % (remote_filename, self.__get_node_name(self.folder)), raise_errors)
-        except Exception as e:
-            self.__handle_error("Failed to download file '%s' in remote folder '%s': %s"
-                                % (remote_filename, self.__get_node_name(self.folder), str(e)), raise_errors)
-
-    @retry
-    def _list(self, raise_errors=False):
-        """List files in folder"""
-        try:
-            entries = self.client.get_files_in_node(self.folder)
-            return [ self.client.get_name_from_file({entry:entries[entry]}) for entry in entries]
-        except Exception as e:
-            self.__handle_error("Failed to fetch list of files in remote folder '%s': %s"
-                                % (self.__get_node_name(self.folder), str(e)), raise_errors)
-
-    @retry
-    def delete(self, filename_list, raise_errors=False):
-        """Delete files in filename_list"""
-        files = self.client.get_files()
-        for filename in filename_list:
-            entries = self.__filter_entries(files, self.folder, filename)
-            try:
-                if len(entries) > 0:
-                    for entry in entries:
-                        if self.client.destroy(entry):
-                            self.__handle_error("Failed to remove file '%s' in remote folder '%s'"
-                                                % (filename, self.__get_node_name(self.folder)), raise_errors)
-                else:
-                    log.Warn("Failed to fetch file '%s' in remote folder '%s'"
-                             % (filename, self.__get_node_name(self.folder)))
-            except Exception as e:
-                self.__handle_error("Failed to remove file '%s' in remote folder '%s': %s"
-                                    % (filename, self.__get_node_name(self.folder), str(e)), raise_errors)
+    def _put(self, source_path, remote_filename):
+        try:
+            self._delete(remote_filename)
+        except Exception:
+            pass
+        self.client.upload(source_path.get_canonical(), self.folder, dest_filename=remote_filename)
+
+    def _get(self, remote_filename, local_path):
+        files = self.client.get_files()
+        entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
+        if len(entries):
+            # get first matching remote file
+            entry = entries.keys()[0]
+            self.client.download((entry, entries[entry]), dest_filename=local_path.name)
+        else:
+            raise BackendException("Failed to find file '%s' in remote folder '%s'"
+                                   % (remote_filename, self.__get_node_name(self.folder)),
+                                   code=log.ErrorCode.backend_not_found)
+
+    def _list(self):
+        entries = self.client.get_files_in_node(self.folder)
+        return [self.client.get_name_from_file({entry:entries[entry]}) for entry in entries]
+
+    def _delete(self, filename):
+        files = self.client.get_files()
+        entries = self.__filter_entries(files, self.folder, filename, 'file')
+        if len(entries):
+            self.client.destroy(entries.keys()[0])
+        else:
+            raise BackendException("Failed to find file '%s' in remote folder '%s'"
+                                   % (filename, self.__get_node_name(self.folder)),
+                                   code=log.ErrorCode.backend_not_found)

     def __get_node_name(self, handle):
         """get node name from public handle"""
         files = self.client.get_files()
         return self.client.get_name_from_file({handle:files[handle]})
-
-    def __handle_error(self, message, raise_errors=True):
-        if raise_errors:
-            raise BackendException(message)
-        else:
-            log.FatalError(message, log.ErrorCode.backend_error)

     def __authorize(self, email, password):
-        try:
-            self.client.login(email, password)
-        except Exception as e:
-            self.__handle_error('Error while authenticating client: %s.' % str(e))
+        self.client.login(email, password)

     def __filter_entries(self, entries, parent_id=None, title=None, type=None):
         result = {}
         type_map = { 'folder': 1, 'file': 0 }

-        try:
-            for k, v in entries.items():
-                try:
-                    if parent_id != None:
-                        assert(v['p'] == parent_id)
-                    if title != None:
-                        assert(v['a']['n'] == title)
-                    if type != None:
-                        assert(v['t'] == type_map[type])
-                except AssertionError:
-                    continue
-
-                result.update({k:v})
-
-            return result
-        except Exception as e:
-            self.__handle_error('Error while fetching remote entries: %s.' % str(e))
+        for k, v in entries.items():
+            try:
+                if parent_id != None:
+                    assert(v['p'] == parent_id)
+                if title != None:
+                    assert(v['a']['n'] == title)
+                if type != None:
+                    assert(v['t'] == type_map[type])
+            except AssertionError:
+                continue
+
+            result.update({k:v})
+
+        return result

 duplicity.backend.register_backend('mega', MegaBackend)

=== renamed file 'duplicity/backends/~par2wrapperbackend.py' => 'duplicity/backends/par2backend.py'
--- duplicity/backends/~par2wrapperbackend.py 2014-04-17 19:53:30 +0000
+++ duplicity/backends/par2backend.py 2014-04-28 02:49:55 +0000
@@ -16,14 +16,16 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

+from future_builtins import filter
+
 import os
 import re
 from duplicity import backend
-from duplicity.errors import UnsupportedBackendScheme, BackendException
+from duplicity.errors import BackendException
 from duplicity import log
 from duplicity import globals

-class Par2WrapperBackend(backend.Backend):
+class Par2Backend(backend.Backend):
     """This backend wrap around other backends and create Par2 recovery files
     before the file and the Par2 files are transfered with the wrapped backend.

@@ -37,13 +39,15 @@
         except AttributeError:
             self.redundancy = 10

-        try:
-            url_string = self.parsed_url.url_string.lstrip('par2+')
-            self.wrapped_backend = backend.get_backend(url_string)
-        except:
-            raise UnsupportedBackendScheme(self.parsed_url.url_string)
-
-    def put(self, source_path, remote_filename = None):
+        self.wrapped_backend = backend.get_backend_object(parsed_url.url_string)
+
+        for attr in ['_get', '_put', '_list', '_delete', '_delete_list',
+                     '_query', '_query_list', '_retry_cleanup', '_error_code',
+                     '_move', '_close']:
+            if hasattr(self.wrapped_backend, attr):
+                setattr(self, attr, getattr(self, attr[1:]))
+
+    def transfer(self, method, source_path, remote_filename):
         """create Par2 files and transfer the given file and the Par2 files
         with the wrapped backend.
@@ -52,13 +56,14 @@
         the soure_path with remote_filename into this.
         """
         import pexpect
-        if remote_filename is None:
-            remote_filename = source_path.get_filename()

         par2temp = source_path.get_temp_in_same_dir()
         par2temp.mkdir()
         source_symlink = par2temp.append(remote_filename)
-        os.symlink(source_path.get_canonical(), source_symlink.get_canonical())
+        source_target = source_path.get_canonical()
+        if not os.path.isabs(source_target):
+            source_target = os.path.join(os.getcwd(), source_target)
+        os.symlink(source_target, source_symlink.get_canonical())
         source_symlink.setdata()

         log.Info("Create Par2 recovery files")
@@ -70,16 +75,17 @@
         for file in par2temp.listdir():
             files_to_transfer.append(par2temp.append(file))

-        ret = self.wrapped_backend.put(source_path, remote_filename)
+        method(source_path, remote_filename)
         for file in files_to_transfer:
-            self.wrapped_backend.put(file, file.get_filename())
+            method(file, file.get_filename())

         par2temp.deltree()
-        return ret
-
-    def move(self, source_path, remote_filename = None):
-        self.put(source_path, remote_filename)
-        source_path.delete()
+
+    def put(self, local, remote):
+        self.transfer(self.wrapped_backend._put, local, remote)
+
+    def move(self, local, remote):
+        self.transfer(self.wrapped_backend._move, local, remote)

     def get(self, remote_filename, local_path):
         """transfer remote_filename and the related .par2 file into
@@ -94,22 +100,23 @@
         par2temp.mkdir()
         local_path_temp = par2temp.append(remote_filename)

-        ret = self.wrapped_backend.get(remote_filename, local_path_temp)
+        self.wrapped_backend._get(remote_filename, local_path_temp)

         try:
             par2file = par2temp.append(remote_filename + '.par2')
-            self.wrapped_backend.get(par2file.get_filename(), par2file)
+            self.wrapped_backend._get(par2file.get_filename(), par2file)

             par2verify = 'par2 v -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical())
             out, returncode = pexpect.run(par2verify, -1, True)

             if returncode:
                 log.Warn("File is corrupt. Try to repair %s" % remote_filename)
-                par2volumes = self.list(re.compile(r'%s\.vol[\d+]*\.par2' % remote_filename))
+                par2volumes = filter(re.compile(r'%s\.vol[\d+]*\.par2' % remote_filename).match,
+                                     self.wrapped_backend._list())

                 for filename in par2volumes:
                     file = par2temp.append(filename)
-                    self.wrapped_backend.get(filename, file)
+                    self.wrapped_backend._get(filename, file)

             par2repair = 'par2 r -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical())
             out, returncode = pexpect.run(par2repair, -1, True)
@@ -124,25 +131,23 @@
         finally:
             local_path_temp.rename(local_path)
             par2temp.deltree()
-            return ret

-    def list(self, filter = re.compile(r'(?!.*\.par2$)')):
-        """default filter all files that ends with ".par"
-        filter can be a re.compile instance or False for all remote files
-        """
-        list = self.wrapped_backend.list()
-        if not filter:
-            return list
-        filtered_list = []
-        for item in list:
-            if filter.match(item):
-                filtered_list.append(item)
-        return filtered_list
-
-    def delete(self, filename_list):
-        """delete given filename_list and all .par2 files that belong to them
-        """
-        remote_list = self.list(False)
+    def delete(self, filename):
+        """delete given filename and its .par2 files
+        """
+        self.wrapped_backend._delete(filename)
+
+        remote_list = self.list()
+        filename_list = [filename]
+        c = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename)
+        for remote_filename in remote_list:
+            if c.match(remote_filename):
+                self.wrapped_backend._delete(remote_filename)
+
+    def delete_list(self, filename_list):
+        """delete given filename_list and all .par2 files that belong to them
+        """
+        remote_list = self.list()

         for filename in filename_list[:]:
             c = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename)
@@ -150,46 +155,25 @@
             if c.match(remote_filename):
                 filename_list.append(remote_filename)

-        return self.wrapped_backend.delete(filename_list)
+        return self.wrapped_backend._delete_list(filename_list)

-    """just return the output of coresponding wrapped backend
-    for all other functions
-    """
-    def query_info(self, filename_list, raise_errors=True):
-        return self.wrapped_backend.query_info(filename_list, raise_errors)
-
-    def get_password(self):
-        return self.wrapped_backend.get_password()
-
-    def munge_password(self, commandline):
-        return self.wrapped_backend.munge_password(commandline)
-
-    def run_command(self, commandline):
-        return self.wrapped_backend.run_command(commandline)
-    def run_command_persist(self, commandline):
-        return self.wrapped_backend.run_command_persist(commandline)
-
-    def popen(self, commandline):
-        return self.wrapped_backend.popen(commandline)
-    def popen_persist(self, commandline):
-        return self.wrapped_backend.popen_persist(commandline)
-
-    def _subprocess_popen(self, commandline):
-        return self.wrapped_backend._subprocess_popen(commandline)
-
-    def subprocess_popen(self, commandline):
-        return self.wrapped_backend.subprocess_popen(commandline)
-
-    def subprocess_popen_persist(self, commandline):
-        return self.wrapped_backend.subprocess_popen_persist(commandline)
+    def list(self):
+        return self.wrapped_backend._list()
+
+    def retry_cleanup(self):
+        self.wrapped_backend._retry_cleanup()
+
+    def error_code(self, operation, e):
+        return self.wrapped_backend._error_code(operation, e)
+
+    def query(self, filename):
+        return self.wrapped_backend._query(filename)
+
+    def query_list(self, filename_list):
+        return self.wrapped_backend._query_list(filename_list)

     def close(self):
-        return self.wrapped_backend.close()
+        self.wrapped_backend._close()

-"""register this backend with leading "par2+" for all already known backends
-
-files must be sorted in duplicity.backend.import_backends to catch
-all supported backends
-"""
-for item in backend._backends.keys():
-    backend.register_backend('par2+' + item, Par2WrapperBackend)
+backend.register_backend_prefix('par2', Par2Backend)

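The `setattr(self, attr, getattr(self, attr[1:]))` loop near the top of Par2Backend makes the wrapper advertise only the optional `_`-methods its wrapped backend actually implements, while routing each one through the wrapper's own public method. The trick in isolation, with simplified, assumed class names:

```python
# Sketch of the attribute-forwarding pattern: for each optional '_method'
# the wrapped object provides, expose a matching '_method' on the wrapper
# that is an alias for the wrapper's public 'method'.

class Wrapped:
    def _get(self, name):
        return "data:" + name
    # note: no _delete here, so the wrapper will not advertise one either

class Wrapper:
    def __init__(self, wrapped):
        self.wrapped = wrapped
        for attr in ['_get', '_delete']:
            if hasattr(self.wrapped, attr):
                # alias e.g. self._get -> self.get (strip the underscore)
                setattr(self, attr, getattr(self, attr[1:]))

    def get(self, name):
        # public implementation; a real wrapper adds par2 verification here
        return self.wrapped._get(name)

    def delete(self, name):
        self.wrapped._delete(name)
```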
=== modified file 'duplicity/backends/rsyncbackend.py'
--- duplicity/backends/rsyncbackend.py 2014-04-25 23:20:12 +0000
+++ duplicity/backends/rsyncbackend.py 2014-04-28 02:49:55 +0000
@@ -23,7 +23,7 @@
 import tempfile

 import duplicity.backend
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import InvalidBackendURL
 from duplicity import globals, tempdir, util

 class RsyncBackend(duplicity.backend.Backend):
@@ -58,12 +58,13 @@
         if port:
             port = " --port=%s" % port
         else:
+            host_string = host + ":" if host else ""
             if parsed_url.path.startswith("//"):
                 # its an absolute path
-                self.url_string = "%s:/%s" % (host, parsed_url.path.lstrip('/'))
+                self.url_string = "%s/%s" % (host_string, parsed_url.path.lstrip('/'))
             else:
                 # its a relative path
-                self.url_string = "%s:%s" % (host, parsed_url.path.lstrip('/'))
+                self.url_string = "%s%s" % (host_string, parsed_url.path.lstrip('/'))
             if parsed_url.port:
                 port = " -p %s" % parsed_url.port
         # add trailing slash if missing
@@ -105,29 +106,17 @@
             raise InvalidBackendURL("Could not determine rsync path: %s"
                                     "" % self.munge_password( url ) )

-    def run_command(self, commandline):
-        result, stdout, stderr = self.subprocess_popen_persist(commandline)
-        return result, stdout
-
-    def put(self, source_path, remote_filename = None):
-        """Use rsync to copy source_dir/filename to remote computer"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         remote_path = os.path.join(self.url_string, remote_filename)
         commandline = "%s %s %s" % (self.cmd, source_path.name, remote_path)
-        self.run_command(commandline)
+        self.subprocess_popen(commandline)

-    def get(self, remote_filename, local_path):
-        """Use rsync to get a remote file"""
+    def _get(self, remote_filename, local_path):
         remote_path = os.path.join (self.url_string, remote_filename)
         commandline = "%s %s %s" % (self.cmd, remote_path, local_path.name)
-        self.run_command(commandline)
-        local_path.setdata()
-        if not local_path.exists():
-            raise BackendException("File %s not found" % local_path.name)
+        self.subprocess_popen(commandline)

-    def list(self):
-        """List files"""
+    def _list(self):
         def split (str):
             line = str.split ()
             if len (line) > 4 and line[4] != '.':
@@ -135,20 +124,17 @@
             else:
                 return None
         commandline = "%s %s" % (self.cmd, self.url_string)
-        result, stdout = self.run_command(commandline)
+        result, stdout, stderr = self.subprocess_popen(commandline)
         return [x for x in map (split, stdout.split('\n')) if x]

-    def delete(self, filename_list):
-        """Delete files."""
+    def _delete_list(self, filename_list):
         delete_list = filename_list
         dont_delete_list = []
-        for file in self.list ():
+        for file in self._list ():
             if file in delete_list:
                 delete_list.remove (file)
             else:
                 dont_delete_list.append (file)
-        if len (delete_list) > 0:
-            raise BackendException("Files %s not found" % str (delete_list))

         dir = tempfile.mkdtemp()
         exclude, exclude_name = tempdir.default().mkstemp_file()
@@ -162,7 +148,7 @@
         exclude.close()
         commandline = ("%s --recursive --delete --exclude-from=%s %s/ %s" %
                        (self.cmd, exclude_name, dir, self.url_string))
-        self.run_command(commandline)
+        self.subprocess_popen(commandline)
         for file in to_delete:
             util.ignore_missing(os.unlink, file)
         os.rmdir (dir)

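The `host_string` change above fixes the URL construction so that a stray leading `:` is no longer emitted when the rsync URL has no host. The same logic as a standalone sketch (`build_url_string` is an illustrative helper, not a duplicity function):

```python
# Sketch of the url_string fix: only prepend "host:" when a host is
# actually present, for both absolute ("//...") and relative rsync paths.

def build_url_string(host, path):
    host_string = host + ":" if host else ""
    if path.startswith("//"):
        # absolute path on the remote side
        return "%s/%s" % (host_string, path.lstrip('/'))
    # relative path
    return "%s%s" % (host_string, path.lstrip('/'))
```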
=== modified file 'duplicity/backends/sshbackend.py'
--- duplicity/backends/sshbackend.py 2014-04-17 21:54:04 +0000
+++ duplicity/backends/sshbackend.py 2014-04-28 02:49:55 +0000
@@ -18,6 +18,7 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

+import duplicity.backend
 from duplicity import globals, log

 def warn_option(option, optionvar):
@@ -26,11 +27,15 @@

 if (globals.ssh_backend and
     globals.ssh_backend.lower().strip() == 'pexpect'):
-    from . import _ssh_pexpect
+    from ._ssh_pexpect import SSHPExpectBackend as SSHBackend
 else:
     # take user by the hand to prevent typo driven bug reports
     if globals.ssh_backend.lower().strip() != 'paramiko':
         log.Warn(_("Warning: Selected ssh backend '%s' is neither 'paramiko nor 'pexpect'. Will use default paramiko instead.") % globals.ssh_backend)
     warn_option("--scp-command", globals.scp_command)
     warn_option("--sftp-command", globals.sftp_command)
-    from . import _ssh_paramiko
+    from ._ssh_paramiko import SSHParamikoBackend as SSHBackend
+
+duplicity.backend.register_backend("sftp", SSHBackend)
+duplicity.backend.register_backend("scp", SSHBackend)
+duplicity.backend.register_backend("ssh", SSHBackend)

=== modified file 'duplicity/backends/swiftbackend.py'
--- duplicity/backends/swiftbackend.py 2014-04-17 22:03:10 +0000
+++ duplicity/backends/swiftbackend.py 2014-04-28 02:49:55 +0000
@@ -19,14 +19,11 @@
19# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA19# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
2020
21import os21import os
22import time
2322
24import duplicity.backend23import duplicity.backend
25from duplicity import globals
26from duplicity import log24from duplicity import log
27from duplicity.errors import * #@UnusedWildImport25from duplicity.errors import BackendException
28from duplicity.util import exception_traceback26
29from duplicity.backend import retry
3027
31class SwiftBackend(duplicity.backend.Backend):28class SwiftBackend(duplicity.backend.Backend):
32 """29 """
@@ -82,121 +79,30 @@
82 % (e.__class__.__name__, str(e)),79 % (e.__class__.__name__, str(e)),
83 log.ErrorCode.connection_failed)80 log.ErrorCode.connection_failed)
8481
85 def put(self, source_path, remote_filename = None):82 def _error_code(self, operation, e):
86 if not remote_filename:83 if isinstance(e, self.resp_exc):
87 remote_filename = source_path.get_filename()84 if e.http_status == 404:
8885 return log.ErrorCode.backend_not_found
89 for n in range(1, globals.num_retries+1):86
90 log.Info("Uploading '%s/%s' " % (self.container, remote_filename))87 def _put(self, source_path, remote_filename):
91 try:88 self.conn.put_object(self.container, remote_filename,
92 self.conn.put_object(self.container,89 file(source_path.name))
93 remote_filename, 90
94 file(source_path.name))91 def _get(self, remote_filename, local_path):
95 return92 headers, body = self.conn.get_object(self.container, remote_filename)
96 except self.resp_exc as error:93 with open(local_path.name, 'wb') as f:
97 log.Warn("Upload of '%s' failed (attempt %d): Swift server returned: %s %s"94 for chunk in body:
98 % (remote_filename, n, error.http_status, error.message))95 f.write(chunk)
99 except Exception as e:
100 log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
101 % (remote_filename, n, e.__class__.__name__, str(e)))
102 log.Debug("Backtrace of previous error: %s"
103 % exception_traceback())
104 time.sleep(30)
105 log.Warn("Giving up uploading '%s' after %s attempts"
106 % (remote_filename, globals.num_retries))
107 raise BackendException("Error uploading '%s'" % remote_filename)
108
109 def get(self, remote_filename, local_path):
110 for n in range(1, globals.num_retries+1):
-            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
-            try:
-                headers, body = self.conn.get_object(self.container,
-                                                     remote_filename)
-                f = open(local_path.name, 'w')
-                for chunk in body:
-                    f.write(chunk)
-                local_path.setdata()
-                return
-            except self.resp_exc as resperr:
-                log.Warn("Download of '%s' failed (attempt %s): Swift server returned: %s %s"
-                         % (remote_filename, n, resperr.http_status, resperr.message))
-            except Exception as e:
-                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up downloading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error downloading '%s/%s'"
-                               % (self.container, remote_filename))
-
-    def _list(self):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Listing '%s'" % (self.container))
-            try:
-                # Cloud Files will return a max of 10,000 objects. We have
-                # to make multiple requests to get them all.
-                headers, objs = self.conn.get_container(self.container)
-                return [ o['name'] for o in objs ]
-            except self.resp_exc as resperr:
-                log.Warn("Listing of '%s' failed (attempt %s): Swift server returned: %s %s"
-                         % (self.container, n, resperr.http_status, resperr.message))
-            except Exception as e:
-                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
-                         % (self.container, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up listing of '%s' after %s attempts"
-                 % (self.container, globals.num_retries))
-        raise BackendException("Error listing '%s'"
-                               % (self.container))
-
-    def delete_one(self, remote_filename):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
-            try:
-                self.conn.delete_object(self.container, remote_filename)
-                return
-            except self.resp_exc as resperr:
-                if n > 1 and resperr.http_status == 404:
-                    # We failed on a timeout, but delete succeeded on the server
-                    log.Warn("Delete of '%s' missing after retry - must have succeded earlier" % remote_filename )
-                    return
-                log.Warn("Delete of '%s' failed (attempt %s): Swift server returned: %s %s"
-                         % (remote_filename, n, resperr.http_status, resperr.message))
-            except Exception as e:
-                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up deleting '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error deleting '%s/%s'"
-                               % (self.container, remote_filename))
-
-    def delete(self, filename_list):
-        for file in filename_list:
-            self.delete_one(file)
-            log.Debug("Deleted '%s/%s'" % (self.container, file))
-
-    @retry
-    def _query_file_info(self, filename, raise_errors=False):
-        try:
-            sobject = self.conn.head_object(self.container, filename)
-            return {'size': int(sobject['content-length'])}
-        except self.resp_exc:
-            return {'size': -1}
-        except Exception as e:
-            log.Warn("Error querying '%s/%s': %s"
-                     "" % (self.container,
-                           filename,
-                           str(e)))
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
+    def _list(self):
+        headers, objs = self.conn.get_container(self.container)
+        return [ o['name'] for o in objs ]
+
+    def _delete(self, filename):
+        self.conn.delete_object(self.container, filename)
+
+    def _query(self, filename):
+        sobject = self.conn.head_object(self.container, filename)
+        return {'size': int(sobject['content-length'])}
 
 duplicity.backend.register_backend("swift", SwiftBackend)
 
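The Swift hunk above shows the pattern this branch applies across all backends: per-backend retry loops are deleted, and each backend keeps only the underscore-prefixed primitives (`_get`, `_put`, `_list`, `_delete`, `_query`), with retries handled once in the shared base class. The shape of that refactor can be sketched as follows. This is a minimal illustration only, not duplicity's actual code; the `Backend`, `_do_retry`, and `MemoryBackend` names here are invented for the sketch.

```python
# Illustrative sketch: retry logic lives once in the base class, while
# concrete backends implement only the underscore-prefixed primitives.
class Backend(object):
    num_retries = 3

    def get(self, remote_filename, local_path):
        return self._do_retry(self._get, remote_filename, local_path)

    def delete(self, filename_list):
        for filename in filename_list:
            self._do_retry(self._delete, filename)

    def _do_retry(self, fn, *args):
        # Retry any primitive uniformly; re-raise after the last attempt.
        for n in range(1, self.num_retries + 1):
            try:
                return fn(*args)
            except Exception:
                if n == self.num_retries:
                    raise
                self._retry_cleanup()  # hook: e.g. WebDAV reconnects here
                # a real implementation would also sleep between attempts

    def _retry_cleanup(self):
        pass

class MemoryBackend(Backend):
    """Toy backend: only the primitives, no retry code of its own."""
    def __init__(self):
        self.files = {'a': b'1', 'b': b'2'}

    def _get(self, remote_filename, local_path):
        local_path.append(self.files[remote_filename])

    def _delete(self, filename):
        del self.files[filename]

    def _list(self):
        return sorted(self.files)
```

The `_retry_cleanup` hook corresponds to the WebDAV backend's forced reconnect further down in this diff.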
=== modified file 'duplicity/backends/tahoebackend.py'
--- duplicity/backends/tahoebackend.py 2013-12-27 06:39:00 +0000
+++ duplicity/backends/tahoebackend.py 2014-04-28 02:49:55 +0000
@@ -20,9 +20,8 @@
 
 import duplicity.backend
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import BackendException
 
-from commands import getstatusoutput
 
 class TAHOEBackend(duplicity.backend.Backend):
     """
@@ -36,10 +35,8 @@
 
         self.alias = url[0]
 
-        if len(url) > 2:
+        if len(url) > 1:
             self.directory = "/".join(url[1:])
-        elif len(url) == 2:
-            self.directory = url[1]
         else:
             self.directory = ""
 
@@ -59,28 +56,20 @@
 
     def run(self, *args):
         cmd = " ".join(args)
-        log.Debug("tahoe execute: %s" % cmd)
-        (status, output) = getstatusoutput(cmd)
+        _, output, _ = self.subprocess_popen(cmd)
+        return output
 
-        if status != 0:
-            raise BackendException("Error running %s" % cmd)
-        else:
-            return output
-
-    def put(self, source_path, remote_filename=None):
+    def _put(self, source_path, remote_filename):
         self.run("tahoe", "cp", source_path.name, self.get_remote_path(remote_filename))
 
-    def get(self, remote_filename, local_path):
+    def _get(self, remote_filename, local_path):
         self.run("tahoe", "cp", self.get_remote_path(remote_filename), local_path.name)
-        local_path.setdata()
 
     def _list(self):
-        log.Debug("tahoe: List")
-        return self.run("tahoe", "ls", self.get_remote_path()).split('\n')
+        output = self.run("tahoe", "ls", self.get_remote_path())
+        return output.split('\n') if output else []
 
-    def delete(self, filename_list):
-        log.Debug("tahoe: delete(%s)" % filename_list)
-        for filename in filename_list:
-            self.run("tahoe", "rm", self.get_remote_path(filename))
+    def _delete(self, filename):
+        self.run("tahoe", "rm", self.get_remote_path(filename))
 
 duplicity.backend.register_backend("tahoe", TAHOEBackend)
 
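The new `_list` above also fixes a subtle bug: splitting an empty string on `'\n'` yields one empty element rather than an empty list, so an empty `tahoe ls` would previously have been reported as a single nameless file. A quick demonstration of why the guard is needed:

```python
# str.split on an empty string does not return an empty list.
output = ""
assert output.split('\n') == ['']                       # one bogus empty entry
assert (output.split('\n') if output else []) == []     # guarded form is correct

output = "vol1\nvol2"
assert (output.split('\n') if output else []) == ['vol1', 'vol2']
```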
=== modified file 'duplicity/backends/webdavbackend.py'
--- duplicity/backends/webdavbackend.py 2014-04-25 15:03:00 +0000
+++ duplicity/backends/webdavbackend.py 2014-04-28 02:49:55 +0000
@@ -32,8 +32,7 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
-from duplicity.backend import retry_fatal
+from duplicity.errors import BackendException, FatalBackendException
 
 class CustomMethodRequest(urllib2.Request):
     """
@@ -54,7 +53,7 @@
             global socket, ssl
             import socket, ssl
         except ImportError:
-            raise FatalBackendError("Missing socket or ssl libraries.")
+            raise FatalBackendException("Missing socket or ssl libraries.")
 
         httplib.HTTPSConnection.__init__(self, *args, **kwargs)
 
@@ -71,21 +70,21 @@
                     break
         # still no cacert file, inform user
         if not self.cacert_file:
-            raise FatalBackendError("""For certificate verification a cacert database file is needed in one of these locations: %s
+            raise FatalBackendException("""For certificate verification a cacert database file is needed in one of these locations: %s
 Hints:
   Consult the man page, chapter 'SSL Certificate Verification'.
   Consider using the options --ssl-cacert-file, --ssl-no-check-certificate .""" % ", ".join(cacert_candidates) )
         # check if file is accessible (libssl errors are not very detailed)
         if not os.access(self.cacert_file, os.R_OK):
-            raise FatalBackendError("Cacert database file '%s' is not readable." % cacert_file)
+            raise FatalBackendException("Cacert database file '%s' is not readable." % cacert_file)
 
     def connect(self):
         # create new socket
         sock = socket.create_connection((self.host, self.port),
                                         self.timeout)
-        if self._tunnel_host:
+        if self.tunnel_host:
             self.sock = sock
-            self._tunnel()
+            self.tunnel()
 
         # wrap the socket in ssl using verification
         self.sock = ssl.wrap_socket(sock,
@@ -126,7 +125,7 @@
 
         self.username = parsed_url.username
         self.password = self.get_password()
-        self.directory = self._sanitize_path(parsed_url.path)
+        self.directory = self.sanitize_path(parsed_url.path)
 
         log.Info("Using WebDAV protocol %s" % (globals.webdav_proto,))
         log.Info("Using WebDAV host %s port %s" % (parsed_url.hostname, parsed_url.port))
@@ -134,30 +133,33 @@
 
         self.conn = None
 
-    def _sanitize_path(self,path):
+    def sanitize_path(self,path):
         if path:
             foldpath = re.compile('/+')
             return foldpath.sub('/', path + '/' )
         else:
             return '/'
 
-    def _getText(self,nodelist):
+    def getText(self,nodelist):
         rc = ""
         for node in nodelist:
             if node.nodeType == node.TEXT_NODE:
                 rc = rc + node.data
         return rc
 
-    def _connect(self, forced=False):
+    def _retry_cleanup(self):
+        self.connect(forced=True)
+
+    def connect(self, forced=False):
         """
         Connect or re-connect to the server, updates self.conn
         # reconnect on errors as a precaution, there are errors e.g.
         # "[Errno 32] Broken pipe" or SSl errors that render the connection unusable
         """
-        if self.retry_count<=1 and self.conn \
+        if not forced and self.conn \
            and self.conn.host == self.parsed_url.hostname: return
 
-        log.Info("WebDAV create connection on '%s' (retry %s) " % (self.parsed_url.hostname,self.retry_count) )
+        log.Info("WebDAV create connection on '%s'" % (self.parsed_url.hostname))
         if self.conn: self.conn.close()
         # http schemes needed for redirect urls from servers
         if self.parsed_url.scheme in ['webdav','http']:
@@ -168,9 +170,9 @@
             else:
                 self.conn = VerifiedHTTPSConnection(self.parsed_url.hostname, self.parsed_url.port)
         else:
-            raise FatalBackendError("WebDAV Unknown URI scheme: %s" % (self.parsed_url.scheme))
+            raise FatalBackendException("WebDAV Unknown URI scheme: %s" % (self.parsed_url.scheme))
 
-    def close(self):
+    def _close(self):
         self.conn.close()
 
     def request(self, method, path, data=None, redirected=0):
@@ -178,7 +180,7 @@
         Wraps the connection.request method to retry once if authentication is
         required
         """
-        self._connect()
+        self.connect()
 
         quoted_path = urllib.quote(path,"/:~")
 
@@ -197,12 +199,12 @@
             if redirect_url:
                 log.Notice("WebDAV redirect to: %s " % urllib.unquote(redirect_url) )
                 if redirected > 10:
-                    raise FatalBackendError("WebDAV redirected 10 times. Giving up.")
+                    raise FatalBackendException("WebDAV redirected 10 times. Giving up.")
                 self.parsed_url = duplicity.backend.ParsedUrl(redirect_url)
-                self.directory = self._sanitize_path(self.parsed_url.path)
+                self.directory = self.sanitize_path(self.parsed_url.path)
                 return self.request(method,self.directory,data,redirected+1)
             else:
-                raise FatalBackendError("WebDAV missing location header in redirect response.")
+                raise FatalBackendException("WebDAV missing location header in redirect response.")
         elif response.status == 401:
             response.close()
             self.headers['Authorization'] = self.get_authorization(response, quoted_path)
@@ -261,10 +263,7 @@
         auth_string = self.digest_auth_handler.get_authorization(dummy_req, self.digest_challenge)
         return 'Digest %s' % auth_string
 
-    @retry_fatal
     def _list(self):
-        """List files in directory"""
-        log.Info("Listing directory %s on WebDAV server" % (self.directory,))
         response = None
         try:
             self.headers['Depth'] = "1"
@@ -289,7 +288,7 @@
             dom = xml.dom.minidom.parseString(document)
             result = []
             for href in dom.getElementsByTagName('d:href') + dom.getElementsByTagName('D:href'):
-                filename = self.__taste_href(href)
+                filename = self.taste_href(href)
                 if filename:
                     result.append(filename)
             return result
@@ -308,7 +307,7 @@
         for i in range(1,len(dirs)):
             d="/".join(dirs[0:i+1])+"/"
 
-            self.close() # or we get previous request's data or exception
+            self._close() # or we get previous request's data or exception
             self.headers['Depth'] = "1"
             response = self.request("PROPFIND", d)
             del self.headers['Depth']
@@ -317,21 +316,21 @@
 
         if response.status == 404:
             log.Info("Creating missing directory %s" % d)
-            self.close() # or we get previous request's data or exception
+            self._close() # or we get previous request's data or exception
 
             res = self.request("MKCOL", d)
             if res.status != 201:
                 raise BackendException("WebDAV MKCOL %s failed: %s %s" % (d,res.status,res.reason))
-            self.close()
+            self._close()
 
-    def __taste_href(self, href):
+    def taste_href(self, href):
         """
         Internal helper to taste the given href node and, if
         it is a duplicity file, collect it as a result file.
 
         @return: A matching filename, or None if the href did not match.
         """
-        raw_filename = self._getText(href.childNodes).strip()
+        raw_filename = self.getText(href.childNodes).strip()
         parsed_url = urlparse.urlparse(urllib.unquote(raw_filename))
         filename = parsed_url.path
         log.Debug("webdav path decoding and translation: "
@@ -361,11 +360,8 @@
         else:
             return None
 
-    @retry_fatal
-    def get(self, remote_filename, local_path):
-        """Get remote filename, saving it to local_path"""
+    def _get(self, remote_filename, local_path):
         url = self.directory + remote_filename
-        log.Info("Retrieving %s from WebDAV server" % (url ,))
         response = None
         try:
             target_file = local_path.open("wb")
@@ -376,7 +372,6 @@
             #import hashlib
             #log.Info("WebDAV GOT %s bytes with md5=%s" % (len(data),hashlib.md5(data).hexdigest()) )
             assert not target_file.close()
-            local_path.setdata()
             response.close()
         else:
             status = response.status
@@ -388,13 +383,8 @@
         finally:
             if response: response.close()
 
-    @retry_fatal
-    def put(self, source_path, remote_filename = None):
-        """Transfer source_path to remote_filename"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         url = self.directory + remote_filename
-        log.Info("Saving %s on WebDAV server" % (url ,))
         response = None
         try:
             source_file = source_path.open("rb")
@@ -412,27 +402,23 @@
         finally:
             if response: response.close()
 
-    @retry_fatal
-    def delete(self, filename_list):
-        """Delete files in filename_list"""
-        for filename in filename_list:
-            url = self.directory + filename
-            log.Info("Deleting %s from WebDAV server" % (url ,))
-            response = None
-            try:
-                response = self.request("DELETE", url)
-                if response.status in [200, 204]:
-                    response.read()
-                    response.close()
-                else:
-                    status = response.status
-                    reason = response.reason
-                    response.close()
-                    raise BackendException("Bad status code %s reason %s." % (status,reason))
-            except Exception as e:
-                raise e
-            finally:
-                if response: response.close()
+    def _delete(self, filename):
+        url = self.directory + filename
+        response = None
+        try:
+            response = self.request("DELETE", url)
+            if response.status in [200, 204]:
+                response.read()
+                response.close()
+            else:
+                status = response.status
+                reason = response.reason
+                response.close()
+                raise BackendException("Bad status code %s reason %s." % (status,reason))
+        except Exception as e:
+            raise e
+        finally:
+            if response: response.close()
 
 duplicity.backend.register_backend("webdav", WebDAVBackend)
 duplicity.backend.register_backend("webdavs", WebDAVBackend)
 
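The WebDAV `request` method above follows server redirects recursively and gives up after 10 hops. The shape of that guard can be shown in isolation (a simplified illustration only, with no real HTTP; the `request` function and `responses` table here are invented for the sketch):

```python
# Simplified shape of the WebDAV redirect guard: follow redirects
# recursively, but give up after 10 hops (illustrative, no real HTTP).
class TooManyRedirects(Exception):
    pass

def request(url, responses, redirected=0):
    """responses maps a url to either a final body or ('redirect', next_url)."""
    result = responses[url]
    if isinstance(result, tuple) and result[0] == 'redirect':
        if redirected > 10:
            raise TooManyRedirects("redirected 10 times, giving up")
        # recurse with an incremented hop count, like request() in the diff
        return request(result[1], responses, redirected + 1)
    return result
```

A redirect loop (a server that keeps pointing back at itself) is thus converted into a clean exception instead of unbounded recursion.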
=== modified file 'duplicity/commandline.py'
--- duplicity/commandline.py 2014-04-25 23:20:12 +0000
+++ duplicity/commandline.py 2014-04-28 02:49:55 +0000
@@ -210,13 +210,6 @@
     global select_opts, select_files, full_backup
     global list_current, collection_status, cleanup, remove_time, verify
 
-    def use_gio(*args):
-        try:
-            import duplicity.backends.giobackend
-            backend.force_backend(duplicity.backends.giobackend.GIOBackend)
-        except ImportError:
-            log.FatalError(_("Unable to load gio backend: %s") % str(sys.exc_info()[1]), log.ErrorCode.gio_not_available)
-
     def set_log_fd(fd):
         if fd < 1:
             raise optparse.OptionValueError("log-fd must be greater than zero.")
@@ -365,7 +358,9 @@
     # the time specified
     parser.add_option("--full-if-older-than", type = "time", dest = "full_force_time", metavar = _("time"))
 
-    parser.add_option("--gio", action = "callback", callback = use_gio)
+    parser.add_option("--gio", action = "callback", dest = "use_gio",
+                      callback = lambda o, s, v, p: (setattr(p.values, o.dest, True),
+                                                     old_fn_deprecation(s)))
 
     parser.add_option("--gpg-options", action = "extend", metavar = _("options"))
 
@@ -521,8 +516,8 @@
     # sftp command to use (ssh pexpect backend)
     parser.add_option("--sftp-command", metavar = _("command"))
 
-    # sftp command to use (ssh pexpect backend)
-    parser.add_option("--cf-command", metavar = _("command"))
+    # allow the user to switch cloudfiles backend
+    parser.add_option("--cf-backend", metavar = _("pyrax|cloudfiles"))
 
     # If set, use short (< 30 char) filenames for all the remote files.
     parser.add_option("--short-filenames", action = "callback",
 
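The new `--gio` handling above replaces a callback that forced the GIO backend with one that merely records a flag and emits a deprecation notice. The mechanics of such an optparse callback can be demonstrated standalone; `old_fn_deprecation` is duplicity's helper, so a stub stands in for it here (this is an illustration, not duplicity's code):

```python
# Standalone sketch of the --gio option handling: an optparse callback that
# both sets parser.values.use_gio and warns the option is deprecated.
import optparse

warnings = []

def old_fn_deprecation(opt_string):  # stub for duplicity's helper
    warnings.append("%s is deprecated" % opt_string)

parser = optparse.OptionParser()
parser.add_option("--gio", action="callback", dest="use_gio",
                  callback=lambda o, s, v, p: (setattr(p.values, o.dest, True),
                                               old_fn_deprecation(s)))

options, args = parser.parse_args(["--gio"])
```

The lambda receives optparse's standard callback arguments (option, opt_str, value, parser), so `p.values` is the in-progress options object and `s` is the literal option string the user typed.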
=== modified file 'duplicity/errors.py'
--- duplicity/errors.py 2013-01-10 19:04:39 +0000
+++ duplicity/errors.py 2014-04-28 02:49:55 +0000
@@ -23,6 +23,8 @@
 Error/exception classes that do not fit naturally anywhere else.
 """
 
+from duplicity import log
+
 class DuplicityError(Exception):
     pass
 
@@ -68,9 +70,11 @@
     """
     Raised to indicate a backend specific problem.
     """
-    pass
+    def __init__(self, msg, code=log.ErrorCode.backend_error):
+        super(BackendException, self).__init__(msg)
+        self.code = code
 
-class FatalBackendError(DuplicityError):
+class FatalBackendException(BackendException):
     """
     Raised to indicate a backend failed fatally.
     """
 
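The errors.py change above gives every `BackendException` an error code and makes the fatal variant a subclass, so callers can catch one type and still distinguish retryable from fatal failures. A self-contained sketch of that hierarchy (duplicity's `log.ErrorCode` is stubbed here for illustration):

```python
# Sketch of the new exception hierarchy, with duplicity's log.ErrorCode
# replaced by a stand-in class so the example runs standalone.
class ErrorCode:  # stand-in for duplicity.log.ErrorCode
    backend_error = 50

class DuplicityError(Exception):
    pass

class BackendException(DuplicityError):
    def __init__(self, msg, code=ErrorCode.backend_error):
        super(BackendException, self).__init__(msg)
        self.code = code  # carried along so the top level can exit with it

class FatalBackendException(BackendException):
    """Raised to indicate a backend failed fatally (should not be retried)."""
```

Because `FatalBackendException` now derives from `BackendException` (it previously derived directly from `DuplicityError`), an `except BackendException` handler catches both, and `isinstance` checks can decide whether retrying makes sense.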
=== modified file 'duplicity/globals.py'
--- duplicity/globals.py 2014-04-17 21:49:37 +0000
+++ duplicity/globals.py 2014-04-28 02:49:55 +0000
@@ -284,3 +284,6 @@
284284
285# Level of Redundancy in % for Par2 files285# Level of Redundancy in % for Par2 files
286par2_redundancy = 10286par2_redundancy = 10
287
288# Whether to enable gio backend
289use_gio = False
287290
=== modified file 'po/duplicity.pot'
--- po/duplicity.pot 2014-04-25 15:03:00 +0000
+++ po/duplicity.pot 2014-04-28 02:49:55 +0000
@@ -8,7 +8,7 @@
 msgstr ""
 "Project-Id-Version: PACKAGE VERSION\n"
 "Report-Msgid-Bugs-To: Kenneth Loafman <kenneth@loafman.com>\n"
-"POT-Creation-Date: 2014-04-21 11:04-0500\n"
+"POT-Creation-Date: 2014-04-27 22:39-0400\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
@@ -69,243 +69,243 @@
 "Continuing restart on file %s."
 msgstr ""
 
-#: ../bin/duplicity:299
+#: ../bin/duplicity:297
 #, python-format
 msgid "File %s was corrupted during upload."
 msgstr ""
 
-#: ../bin/duplicity:333
+#: ../bin/duplicity:331
 msgid ""
 "Restarting backup, but current encryption settings do not match original "
 "settings"
 msgstr ""
 
-#: ../bin/duplicity:356
+#: ../bin/duplicity:354
 #, python-format
 msgid "Restarting after volume %s, file %s, block %s"
 msgstr ""
 
-#: ../bin/duplicity:423
+#: ../bin/duplicity:421
 #, python-format
 msgid "Processed volume %d"
 msgstr ""
 
-#: ../bin/duplicity:572
+#: ../bin/duplicity:570
 msgid ""
 "Fatal Error: Unable to start incremental backup. Old signatures not found "
 "and incremental specified"
 msgstr ""
 
-#: ../bin/duplicity:576
+#: ../bin/duplicity:574
 msgid "No signatures found, switching to full backup."
 msgstr ""
 
-#: ../bin/duplicity:590
+#: ../bin/duplicity:588
 msgid "Backup Statistics"
 msgstr ""
 
-#: ../bin/duplicity:695
+#: ../bin/duplicity:693
 #, python-format
 msgid "%s not found in archive, no files restored."
 msgstr ""
 
-#: ../bin/duplicity:699
+#: ../bin/duplicity:697
 msgid "No files found in archive - nothing restored."
 msgstr ""
 
-#: ../bin/duplicity:732
+#: ../bin/duplicity:730
 #, python-format
 msgid "Processed volume %d of %d"
 msgstr ""
 
+#: ../bin/duplicity:764
+#, python-format
+msgid "Invalid data - %s hash mismatch for file:"
+msgstr ""
+
 #: ../bin/duplicity:766
 #, python-format
-msgid "Invalid data - %s hash mismatch for file:"
-msgstr ""
-
-#: ../bin/duplicity:768
-#, python-format
 msgid "Calculated hash: %s"
 msgstr ""
 
-#: ../bin/duplicity:769
+#: ../bin/duplicity:767
 #, python-format
 msgid "Manifest hash: %s"
 msgstr ""
 
-#: ../bin/duplicity:807
+#: ../bin/duplicity:805
 #, python-format
 msgid "Volume was signed by key %s, not %s"
 msgstr ""
 
-#: ../bin/duplicity:837
+#: ../bin/duplicity:835
 #, python-format
 msgid "Verify complete: %s, %s."
 msgstr ""
 
-#: ../bin/duplicity:838
+#: ../bin/duplicity:836
 #, python-format
 msgid "%d file compared"
 msgid_plural "%d files compared"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:840
+#: ../bin/duplicity:838
 #, python-format
 msgid "%d difference found"
 msgid_plural "%d differences found"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:859
+#: ../bin/duplicity:857
 msgid "No extraneous files found, nothing deleted in cleanup."
 msgstr ""
 
-#: ../bin/duplicity:864
+#: ../bin/duplicity:862
 msgid "Deleting this file from backend:"
 msgid_plural "Deleting these files from backend:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:876
+#: ../bin/duplicity:874
 msgid "Found the following file to delete:"
 msgid_plural "Found the following files to delete:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:880
+#: ../bin/duplicity:878
 msgid "Run duplicity again with the --force option to actually delete."
 msgstr ""
 
+#: ../bin/duplicity:921
+msgid "There are backup set(s) at time(s):"
+msgstr ""
+
 #: ../bin/duplicity:923
-msgid "There are backup set(s) at time(s):"
-msgstr ""
-
-#: ../bin/duplicity:925
 msgid "Which can't be deleted because newer sets depend on them."
 msgstr ""
 
-#: ../bin/duplicity:929
+#: ../bin/duplicity:927
 msgid ""
 "Current active backup chain is older than specified time. However, it will "
 "not be deleted. To remove all your backups, manually purge the repository."
 msgstr ""
 
-#: ../bin/duplicity:935
+#: ../bin/duplicity:933
 msgid "No old backup sets found, nothing deleted."
 msgstr ""
 
-#: ../bin/duplicity:938
+#: ../bin/duplicity:936
 msgid "Deleting backup chain at time:"
 msgid_plural "Deleting backup chains at times:"
 msgstr[0] ""
 msgstr[1] ""
 
+#: ../bin/duplicity:947
+#, python-format
+msgid "Deleting incremental signature chain %s"
+msgstr ""
+
 #: ../bin/duplicity:949
 #, python-format
-msgid "Deleting incremental signature chain %s"
-msgstr ""
-
-#: ../bin/duplicity:951
-#, python-format
 msgid "Deleting incremental backup chain %s"
 msgstr ""
 
+#: ../bin/duplicity:952
+#, python-format
+msgid "Deleting complete signature chain %s"
+msgstr ""
+
 #: ../bin/duplicity:954
 #, python-format
-msgid "Deleting complete signature chain %s"
-msgstr ""
-
-#: ../bin/duplicity:956
-#, python-format
 msgid "Deleting complete backup chain %s"
 msgstr ""
 
-#: ../bin/duplicity:962
+#: ../bin/duplicity:960
 msgid "Found old backup chain at the following time:"
 msgid_plural "Found old backup chains at the following times:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:966
+#: ../bin/duplicity:964
 msgid "Rerun command with --force option to actually delete."
 msgstr ""
 
-#: ../bin/duplicity:1043
+#: ../bin/duplicity:1041
 #, python-format
 msgid "Deleting local %s (not authoritative at backend)."
 msgstr ""
 
-#: ../bin/duplicity:1047
+#: ../bin/duplicity:1045
 #, python-format
 msgid "Unable to delete %s: %s"
 msgstr ""
 
-#: ../bin/duplicity:1075 ../duplicity/dup_temp.py:263
+#: ../bin/duplicity:1073 ../duplicity/dup_temp.py:263
 #, python-format
 msgid "Failed to read %s: %s"
 msgstr ""
 
-#: ../bin/duplicity:1089
+#: ../bin/duplicity:1087
 #, python-format
 msgid "Copying %s to local cache."
 msgstr ""
 
-#: ../bin/duplicity:1137
+#: ../bin/duplicity:1135
 msgid "Local and Remote metadata are synchronized, no sync needed."
 msgstr ""
 
-#: ../bin/duplicity:1142
+#: ../bin/duplicity:1140
 msgid "Synchronizing remote metadata to local cache..."
 msgstr ""
 
-#: ../bin/duplicity:1157
+#: ../bin/duplicity:1155
 msgid "Sync would copy the following from remote to local:"
 msgstr ""
 
-#: ../bin/duplicity:1160
+#: ../bin/duplicity:1158
 msgid "Sync would remove the following spurious local files:"
 msgstr ""
 
-#: ../bin/duplicity:1203
+#: ../bin/duplicity:1201
 msgid "Unable to get free space on temp."
 msgstr ""
 
-#: ../bin/duplicity:1211
+#: ../bin/duplicity:1209
 #, python-format
 msgid "Temp space has %d available, backup needs approx %d."
 msgstr ""
 
-#: ../bin/duplicity:1214
+#: ../bin/duplicity:1212
 #, python-format
 msgid "Temp has %d available, backup will use approx %d."
 msgstr ""
 
-#: ../bin/duplicity:1222
+#: ../bin/duplicity:1220
 msgid "Unable to get max open files."
 msgstr ""
 
-#: ../bin/duplicity:1226
+#: ../bin/duplicity:1224
 #, python-format
 msgid ""
 "Max open files of %s is too low, should be >= 1024.\n"
 "Use 'ulimit -n 1024' or higher to correct.\n"
 msgstr ""
 
-#: ../bin/duplicity:1275
+#: ../bin/duplicity:1273
 msgid ""
 "RESTART: The first volume failed to upload before termination.\n"
 " Restart is impossible...starting backup from beginning."
 msgstr ""
 
-#: ../bin/duplicity:1281
+#: ../bin/duplicity:1279
 #, python-format
 msgid ""
 "RESTART: Volumes %d to %d failed to upload before termination.\n"
 " Restarting backup at volume %d."
 msgstr ""
 
-#: ../bin/duplicity:1288
+#: ../bin/duplicity:1286
309#, python-format309#, python-format
310msgid ""310msgid ""
311"RESTART: Impossible backup state: manifest has %d vols, remote has %d vols.\n"311"RESTART: Impossible backup state: manifest has %d vols, remote has %d vols.\n"
@@ -314,7 +314,7 @@
314" backup then restart the backup from the beginning."314" backup then restart the backup from the beginning."
315msgstr ""315msgstr ""
316316
317#: ../bin/duplicity:1310317#: ../bin/duplicity:1308
318msgid ""318msgid ""
319"\n"319"\n"
320"PYTHONOPTIMIZE in the environment causes duplicity to fail to\n"320"PYTHONOPTIMIZE in the environment causes duplicity to fail to\n"
@@ -324,54 +324,54 @@
324"See https://bugs.launchpad.net/duplicity/+bug/931175\n"324"See https://bugs.launchpad.net/duplicity/+bug/931175\n"
325msgstr ""325msgstr ""
326326
327#: ../bin/duplicity:1401327#: ../bin/duplicity:1399
328#, python-format328#, python-format
329msgid "Last %s backup left a partial set, restarting."329msgid "Last %s backup left a partial set, restarting."
330msgstr ""330msgstr ""
331331
332#: ../bin/duplicity:1405332#: ../bin/duplicity:1403
333#, python-format333#, python-format
334msgid "Cleaning up previous partial %s backup set, restarting."334msgid "Cleaning up previous partial %s backup set, restarting."
335msgstr ""335msgstr ""
336336
337#: ../bin/duplicity:1414
338msgid "Last full backup date:"
339msgstr ""
340
337#: ../bin/duplicity:1416341#: ../bin/duplicity:1416
338msgid "Last full backup date:"342msgid "Last full backup date: none"
339msgstr ""343msgstr ""
340344
341#: ../bin/duplicity:1418345#: ../bin/duplicity:1418
342msgid "Last full backup date: none"
343msgstr ""
344
345#: ../bin/duplicity:1420
346msgid "Last full backup is too old, forcing full backup"346msgid "Last full backup is too old, forcing full backup"
347msgstr ""347msgstr ""
348348
349#: ../bin/duplicity:1463349#: ../bin/duplicity:1461
350msgid ""350msgid ""
351"When using symmetric encryption, the signing passphrase must equal the "351"When using symmetric encryption, the signing passphrase must equal the "
352"encryption passphrase."352"encryption passphrase."
353msgstr ""353msgstr ""
354354
355#: ../bin/duplicity:1516355#: ../bin/duplicity:1514
356msgid "INT intercepted...exiting."356msgid "INT intercepted...exiting."
357msgstr ""357msgstr ""
358358
359#: ../bin/duplicity:1524359#: ../bin/duplicity:1522
360#, python-format360#, python-format
361msgid "GPG error detail: %s"361msgid "GPG error detail: %s"
362msgstr ""362msgstr ""
363363
364#: ../bin/duplicity:1534364#: ../bin/duplicity:1532
365#, python-format365#, python-format
366msgid "User error detail: %s"366msgid "User error detail: %s"
367msgstr ""367msgstr ""
368368
369#: ../bin/duplicity:1544369#: ../bin/duplicity:1542
370#, python-format370#, python-format
371msgid "Backend error detail: %s"371msgid "Backend error detail: %s"
372msgstr ""372msgstr ""
373373
374#: ../bin/rdiffdir:56 ../duplicity/commandline.py:238374#: ../bin/rdiffdir:56 ../duplicity/commandline.py:233
375#, python-format375#, python-format
376msgid "Error opening file %s"376msgid "Error opening file %s"
377msgstr ""377msgstr ""
@@ -381,33 +381,33 @@
381msgid "File %s already exists, will not overwrite."381msgid "File %s already exists, will not overwrite."
382msgstr ""382msgstr ""
383383
384#: ../duplicity/selection.py:119384#: ../duplicity/selection.py:121
385#, python-format385#, python-format
386msgid "Skipping socket %s"386msgid "Skipping socket %s"
387msgstr ""387msgstr ""
388388
389#: ../duplicity/selection.py:123389#: ../duplicity/selection.py:125
390#, python-format390#, python-format
391msgid "Error initializing file %s"391msgid "Error initializing file %s"
392msgstr ""392msgstr ""
393393
394#: ../duplicity/selection.py:127 ../duplicity/selection.py:148394#: ../duplicity/selection.py:129 ../duplicity/selection.py:150
395#, python-format395#, python-format
396msgid "Error accessing possibly locked file %s"396msgid "Error accessing possibly locked file %s"
397msgstr ""397msgstr ""
398398
399#: ../duplicity/selection.py:163399#: ../duplicity/selection.py:165
400#, python-format400#, python-format
401msgid "Warning: base %s doesn't exist, continuing"401msgid "Warning: base %s doesn't exist, continuing"
402msgstr ""402msgstr ""
403403
404#: ../duplicity/selection.py:166 ../duplicity/selection.py:184404#: ../duplicity/selection.py:168 ../duplicity/selection.py:186
405#: ../duplicity/selection.py:187405#: ../duplicity/selection.py:189
406#, python-format406#, python-format
407msgid "Selecting %s"407msgid "Selecting %s"
408msgstr ""408msgstr ""
409409
410#: ../duplicity/selection.py:268410#: ../duplicity/selection.py:270
411#, python-format411#, python-format
412msgid ""412msgid ""
413"Fatal Error: The file specification\n"413"Fatal Error: The file specification\n"
@@ -418,14 +418,14 @@
418"pattern (such as '**') which matches the base directory."418"pattern (such as '**') which matches the base directory."
419msgstr ""419msgstr ""
420420
421#: ../duplicity/selection.py:276421#: ../duplicity/selection.py:278
422#, python-format422#, python-format
423msgid ""423msgid ""
424"Fatal Error while processing expression\n"424"Fatal Error while processing expression\n"
425"%s"425"%s"
426msgstr ""426msgstr ""
427427
428#: ../duplicity/selection.py:286428#: ../duplicity/selection.py:288
429#, python-format429#, python-format
430msgid ""430msgid ""
431"Last selection expression:\n"431"Last selection expression:\n"
@@ -435,49 +435,49 @@
435"probably isn't what you meant."435"probably isn't what you meant."
436msgstr ""436msgstr ""
437437
438#: ../duplicity/selection.py:311438#: ../duplicity/selection.py:313
439#, python-format439#, python-format
440msgid "Reading filelist %s"440msgid "Reading filelist %s"
441msgstr ""441msgstr ""
442442
443#: ../duplicity/selection.py:314443#: ../duplicity/selection.py:316
444#, python-format444#, python-format
445msgid "Sorting filelist %s"445msgid "Sorting filelist %s"
446msgstr ""446msgstr ""
447447
448#: ../duplicity/selection.py:341448#: ../duplicity/selection.py:343
449#, python-format449#, python-format
450msgid ""450msgid ""
451"Warning: file specification '%s' in filelist %s\n"451"Warning: file specification '%s' in filelist %s\n"
452"doesn't start with correct prefix %s. Ignoring."452"doesn't start with correct prefix %s. Ignoring."
453msgstr ""453msgstr ""
454454
455#: ../duplicity/selection.py:345455#: ../duplicity/selection.py:347
456msgid "Future prefix errors will not be logged."456msgid "Future prefix errors will not be logged."
457msgstr ""457msgstr ""
458458
459#: ../duplicity/selection.py:361459#: ../duplicity/selection.py:363
460#, python-format460#, python-format
461msgid "Error closing filelist %s"461msgid "Error closing filelist %s"
462msgstr ""462msgstr ""
463463
464#: ../duplicity/selection.py:428464#: ../duplicity/selection.py:430
465#, python-format465#, python-format
466msgid "Reading globbing filelist %s"466msgid "Reading globbing filelist %s"
467msgstr ""467msgstr ""
468468
469#: ../duplicity/selection.py:461469#: ../duplicity/selection.py:463
470#, python-format470#, python-format
471msgid "Error compiling regular expression %s"471msgid "Error compiling regular expression %s"
472msgstr ""472msgstr ""
473473
474#: ../duplicity/selection.py:477474#: ../duplicity/selection.py:479
475msgid ""475msgid ""
476"Warning: exclude-device-files is not the first selector.\n"476"Warning: exclude-device-files is not the first selector.\n"
477"This may not be what you intended"477"This may not be what you intended"
The diff has been truncated for viewing.
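
The po/duplicity.pot hunks above touch almost nothing but the "#:" source-reference comments: this is the usual effect of regenerating a gettext template (typically with xgettext) after lines move in bin/duplicity, since each call site is recorded as a "#: file:line" comment. A minimal sketch of how such entries behave at runtime, using Python's standard-library gettext; the "Deleted %d file" plural msgid here is illustrative only, not taken from this template:

```python
import gettext

# With no compiled .mo catalog installed, NullTranslations returns each
# msgid unchanged -- the same fallback implied by the empty msgstr ""
# entries in the template above.
trans = gettext.NullTranslations()
_ = trans.gettext

print(_("Unable to get free space on temp."))
# -> Unable to get free space on temp.

# Plural entries (the msgstr[0] / msgstr[1] pairs in the template) are
# looked up with ngettext; untranslated, it picks singular for n == 1
# and plural otherwise.
for n in (1, 3):
    print(trans.ngettext("Deleted %d file", "Deleted %d files", n) % n)
# -> Deleted 1 file
# -> Deleted 3 files
```

Because translators match entries by msgid, a regeneration like this one leaves existing translations intact; only the bookkeeping comments shift.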
