Merge lp:~dawgfoto/duplicity/replicate into lp:~duplicity-team/duplicity/0.8-series
Status: Merged
Merged at revision: 1209
Proposed branch: lp:~dawgfoto/duplicity/replicate
Merge into: lp:~duplicity-team/duplicity/0.8-series
Diff against target: 380 lines (+190/-24), 5 files modified:
  bin/duplicity (+116/-2), bin/duplicity.1 (+17/-0), duplicity/collections.py (+12/-3),
  duplicity/commandline.py (+36/-19), duplicity/file_naming.py (+9/-0)
To merge this branch: bzr merge lp:~dawgfoto/duplicity/replicate
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| duplicity-team | | | Pending |

Review via email: mp+322836@code.launchpad.net
Commit message
Description of the change
Initial request for feedback.
Add a replicate command to replicate a backup (or backup sets older than a given time) to another backend, leveraging duplicity's backend and compression/encryption infrastructure.
- 1191. By Kenneth Loafman

  2017-04-20 Kenneth Loafman <email address hidden>

  * Fixed bug #1680682 with patch supplied from Dave Allan
    - Only specify --pinentry-mode=loopback when --use-agent is not specified
  * Fixed man page that had 'cancel' instead of 'loopback' for pinentry mode
  * Fixed bug #1684312 with suggestion from Wade Rossman
    - Use shutil.copyfile instead of os.system('cp ...')
    - Should reduce overhead of os.system() memory usage.

- 1192. By Kenneth Loafman

  * Merged in lp:~dernils/duplicity/testing
    - Fixed minor stuff in requirements.txt.
    - Added a Dockerfile for testing.
    - Minor changes to README files.
    - Added README-TESTING with some information on testing.

- 1193. By Kenneth Loafman

  * Merged in lp:~dernils/duplicity/documentation
    - Minor changes to README-REPO, README-TESTING
    - Also redo the changes to requirements.txt and Dockerfile

- 1194. By ken

  * Add rdiff install and newline at end of file.

- 1195. By Kenneth Loafman

  Move pep8 and pylint to requirements.

- 1196. By Kenneth Loafman

  Whoops, deleted too much. Add rdiff again.

- 1197. By Kenneth Loafman

  Merged in lp:~dernils/duplicity/Dockerfile
  Fixed variable name change in commandline.py

- 1198. By Kenneth Loafman

  More changes for testing:
  - keep gpg1 version for future testing
  - some changes for debugging functional tests
  - add gpg-agent.conf with allow-loopback-pinentry

- 1199. By ken

  A little reorg, just keeping pip things together.

- 1200. By ken

  Quick fix for bug #1680682 and gnupg v1, add missing comma.

- 1201. By ken

  - Simplify Dockerfile per https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
  - Add a .dockerignore file
  - Uncomment some debug prints
  - quick fix for bug #1680682 and gnupg v1, add missing comma

- 1202. By ken

  Move branch duplicity up the food chain.

- 1203. By ken

  Add test user and swap to non-priviledged.

- 1204. By ken

  - Remove dependencies we did not need

- 1205. By ken

  Merged in lp:~dernils/duplicity/Dockerfile
  - separated requirements into requirements for duplicity (in requirements.txt) and for testing (in tox.ini)

- 1206. By ken

  Add libffi-dev back. My bad.

- 1207. By ken

  You need tox to run tox. Doh!

- 1208. By ken

  We need tzdata (timezone data).

- 1209. By Kenneth Loafman

  * Merged in lp:~dawgfoto/duplicity/replicate
    - Add replicate command to replicate a backup (or backup
      sets older than a given time) to another backend, leveraging
      duplicity's backend and compression/encryption infrastructure.
  * Fixed some incoming PyLint and PEP-8 errors.
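The selection logic at the heart of replicate() can be sketched outside duplicity: copy only the source sets older than the cutoff time that the target does not already hold, with "already held" decided by value equality (which is why the branch adds `__eq__` to the collection classes). The `BackupSet` dataclass below is purely illustrative, not duplicity's actual class:

```python
# Minimal sketch (illustrative classes, not duplicity's) of the replicate
# skip logic: replicate only source sets older than `cutoff` that are
# missing from the target backend.
from dataclasses import dataclass

@dataclass(frozen=True)  # auto-generates __eq__ over (type, time)
class BackupSet:
    type: str   # 'full' or 'inc'
    time: int   # end time of the set

def sets_to_replicate(src, tgt, cutoff):
    """Return source sets older than cutoff and not present on the target."""
    remaining = list(tgt)            # mirrors tgt_sets.remove(src_set) in replicate()
    todo = []
    for s in sorted(src, key=lambda x: x.time):
        if not s.time < cutoff:      # same guard as `if not src_set.get_time() < time`
            continue
        try:
            remaining.remove(s)      # equal set found on target: already replicated
        except ValueError:
            todo.append(s)           # missing on target: needs replication
    return todo

src = [BackupSet('full', 100), BackupSet('inc', 200), BackupSet('inc', 300)]
tgt = [BackupSet('full', 100)]
print(sets_to_replicate(src, tgt, 250))  # -> [BackupSet(type='inc', time=200)]
```

The set at time 300 is skipped by the `--time` guard, and the full set at 100 is skipped because an equal set already exists on the target.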
Aaron Whitehouse (aaron-whitehouse) wrote:
Preview Diff
1 | === modified file 'bin/duplicity' | |||
2 | --- bin/duplicity 2017-03-02 22:38:47 +0000 | |||
3 | +++ bin/duplicity 2017-04-24 16:35:20 +0000 | |||
4 | @@ -28,6 +28,7 @@ | |||
5 | 28 | # any suggestions. | 28 | # any suggestions. |
6 | 29 | 29 | ||
7 | 30 | import duplicity.errors | 30 | import duplicity.errors |
8 | 31 | import copy | ||
9 | 31 | import gzip | 32 | import gzip |
10 | 32 | import os | 33 | import os |
11 | 33 | import platform | 34 | import platform |
12 | @@ -1006,6 +1007,116 @@ | |||
13 | 1006 | "\n" + chain_times_str(chainlist) + "\n" + | 1007 | "\n" + chain_times_str(chainlist) + "\n" + |
14 | 1007 | _("Rerun command with --force option to actually delete.")) | 1008 | _("Rerun command with --force option to actually delete.")) |
15 | 1008 | 1009 | ||
16 | 1010 | def replicate(): | ||
17 | 1011 | """ | ||
18 | 1012 | Replicate backup files from one remote to another, possibly encrypting or adding parity. | ||
19 | 1013 | |||
20 | 1014 | @rtype: void | ||
21 | 1015 | @return: void | ||
22 | 1016 | """ | ||
23 | 1017 | time = globals.restore_time or dup_time.curtime | ||
24 | 1018 | src_stats = collections.CollectionsStatus(globals.src_backend, None).set_values(sig_chain_warning=None) | ||
25 | 1019 | tgt_stats = collections.CollectionsStatus(globals.backend, None).set_values(sig_chain_warning=None) | ||
26 | 1020 | |||
27 | 1021 | src_list = globals.src_backend.list() | ||
28 | 1022 | tgt_list = globals.backend.list() | ||
29 | 1023 | |||
30 | 1024 | src_chainlist = src_stats.get_signature_chains(local=False, filelist=src_list)[0] | ||
31 | 1025 | tgt_chainlist = tgt_stats.get_signature_chains(local=False, filelist=tgt_list)[0] | ||
32 | 1026 | sorted(src_chainlist, key=lambda chain: chain.start_time) | ||
33 | 1027 | sorted(tgt_chainlist, key=lambda chain: chain.start_time) | ||
34 | 1028 | if not src_chainlist: | ||
35 | 1029 | log.Notice(_("No old backup sets found.")) | ||
36 | 1030 | return | ||
37 | 1031 | for src_chain in src_chainlist: | ||
38 | 1032 | try: | ||
39 | 1033 | tgt_chain = filter(lambda chain: chain.start_time == src_chain.start_time, tgt_chainlist)[0] | ||
40 | 1034 | except IndexError: | ||
41 | 1035 | tgt_chain = None | ||
42 | 1036 | |||
43 | 1037 | tgt_sigs = map(file_naming.parse, tgt_chain.get_filenames()) if tgt_chain else [] | ||
44 | 1038 | for src_sig_filename in src_chain.get_filenames(): | ||
45 | 1039 | src_sig = file_naming.parse(src_sig_filename) | ||
46 | 1040 | if not (src_sig.time or src_sig.end_time) < time: | ||
47 | 1041 | continue | ||
48 | 1042 | try: | ||
49 | 1043 | tgt_sigs.remove(src_sig) | ||
50 | 1044 | log.Info(_("Signature %s already replicated") % (src_sig_filename,)) | ||
51 | 1045 | continue | ||
52 | 1046 | except ValueError: | ||
53 | 1047 | pass | ||
54 | 1048 | if src_sig.type == 'new-sig': | ||
55 | 1049 | dup_time.setprevtime(src_sig.start_time) | ||
56 | 1050 | dup_time.setcurtime(src_sig.time or src_sig.end_time) | ||
57 | 1051 | log.Notice(_("Replicating %s.") % (src_sig_filename,)) | ||
58 | 1052 | fileobj = globals.src_backend.get_fileobj_read(src_sig_filename) | ||
59 | 1053 | filename = file_naming.get(src_sig.type, encrypted=globals.encryption, gzipped=globals.compression) | ||
60 | 1054 | tdp = dup_temp.new_tempduppath(file_naming.parse(filename)) | ||
61 | 1055 | tmpobj = tdp.filtered_open(mode='wb') | ||
62 | 1056 | util.copyfileobj(fileobj, tmpobj) # decrypt, compress, (re)-encrypt | ||
63 | 1057 | fileobj.close() | ||
64 | 1058 | tmpobj.close() | ||
65 | 1059 | globals.backend.put(tdp, filename) | ||
66 | 1060 | tdp.delete() | ||
67 | 1061 | |||
68 | 1062 | src_chainlist = src_stats.get_backup_chains(filename_list = src_list)[0] | ||
69 | 1063 | tgt_chainlist = tgt_stats.get_backup_chains(filename_list = tgt_list)[0] | ||
70 | 1064 | sorted(src_chainlist, key=lambda chain: chain.start_time) | ||
71 | 1065 | sorted(tgt_chainlist, key=lambda chain: chain.start_time) | ||
72 | 1066 | for src_chain in src_chainlist: | ||
73 | 1067 | try: | ||
74 | 1068 | tgt_chain = filter(lambda chain: chain.start_time == src_chain.start_time, tgt_chainlist)[0] | ||
75 | 1069 | except IndexError: | ||
76 | 1070 | tgt_chain = None | ||
77 | 1071 | |||
78 | 1072 | tgt_sets = tgt_chain.get_all_sets() if tgt_chain else [] | ||
79 | 1073 | for src_set in src_chain.get_all_sets(): | ||
80 | 1074 | if not src_set.get_time() < time: | ||
81 | 1075 | continue | ||
82 | 1076 | try: | ||
83 | 1077 | tgt_sets.remove(src_set) | ||
84 | 1078 | log.Info(_("Backupset %s already replicated") % (src_set.remote_manifest_name,)) | ||
85 | 1079 | continue | ||
86 | 1080 | except ValueError: | ||
87 | 1081 | pass | ||
88 | 1082 | if src_set.type == 'inc': | ||
89 | 1083 | dup_time.setprevtime(src_set.start_time) | ||
90 | 1084 | dup_time.setcurtime(src_set.get_time()) | ||
91 | 1085 | rmf = src_set.get_remote_manifest() | ||
92 | 1086 | mf_filename = file_naming.get(src_set.type, manifest=True) | ||
93 | 1087 | mf_tdp = dup_temp.new_tempduppath(file_naming.parse(mf_filename)) | ||
94 | 1088 | mf = manifest.Manifest(fh=mf_tdp.filtered_open(mode='wb')) | ||
95 | 1089 | for i, filename in src_set.volume_name_dict.iteritems(): | ||
96 | 1090 | log.Notice(_("Replicating %s.") % (filename,)) | ||
97 | 1091 | fileobj = restore_get_enc_fileobj(globals.src_backend, filename, rmf.volume_info_dict[i]) | ||
98 | 1092 | filename = file_naming.get(src_set.type, i, encrypted=globals.encryption, gzipped=globals.compression) | ||
99 | 1093 | tdp = dup_temp.new_tempduppath(file_naming.parse(filename)) | ||
100 | 1094 | tmpobj = tdp.filtered_open(mode='wb') | ||
101 | 1095 | util.copyfileobj(fileobj, tmpobj) # decrypt, compress, (re)-encrypt | ||
102 | 1096 | fileobj.close() | ||
103 | 1097 | tmpobj.close() | ||
104 | 1098 | globals.backend.put(tdp, filename) | ||
105 | 1099 | |||
106 | 1100 | vi = copy.copy(rmf.volume_info_dict[i]) | ||
107 | 1101 | vi.set_hash("SHA1", gpg.get_hash("SHA1", tdp)) | ||
108 | 1102 | mf.add_volume_info(vi) | ||
109 | 1103 | |||
110 | 1104 | tdp.delete() | ||
111 | 1105 | |||
112 | 1106 | mf.fh.close() | ||
113 | 1107 | # incremental GPG writes hang on close, so do any encryption here at once | ||
114 | 1108 | mf_fileobj = mf_tdp.filtered_open_with_delete(mode='rb') | ||
115 | 1109 | mf_final_filename = file_naming.get(src_set.type, manifest=True, encrypted=globals.encryption, gzipped=globals.compression) | ||
116 | 1110 | mf_final_tdp = dup_temp.new_tempduppath(file_naming.parse(mf_final_filename)) | ||
117 | 1111 | mf_final_fileobj = mf_final_tdp.filtered_open(mode='wb') | ||
118 | 1112 | util.copyfileobj(mf_fileobj, mf_final_fileobj) # compress, encrypt | ||
119 | 1113 | mf_fileobj.close() | ||
120 | 1114 | mf_final_fileobj.close() | ||
121 | 1115 | globals.backend.put(mf_final_tdp, mf_final_filename) | ||
122 | 1116 | mf_final_tdp.delete() | ||
123 | 1117 | |||
124 | 1118 | globals.src_backend.close() | ||
125 | 1119 | globals.backend.close() | ||
126 | 1009 | 1120 | ||
127 | 1010 | def sync_archive(decrypt): | 1121 | def sync_archive(decrypt): |
128 | 1011 | """ | 1122 | """ |
129 | @@ -1408,8 +1519,9 @@ | |||
130 | 1408 | check_resources(action) | 1519 | check_resources(action) |
131 | 1409 | 1520 | ||
132 | 1410 | # check archive synch with remote, fix if needed | 1521 | # check archive synch with remote, fix if needed |
135 | 1411 | decrypt = action not in ["collection-status"] | 1522 | if not action == "replicate": |
136 | 1412 | sync_archive(decrypt) | 1523 | decrypt = action not in ["collection-status"] |
137 | 1524 | sync_archive(decrypt) | ||
138 | 1413 | 1525 | ||
139 | 1414 | # get current collection status | 1526 | # get current collection status |
140 | 1415 | col_stats = collections.CollectionsStatus(globals.backend, | 1527 | col_stats = collections.CollectionsStatus(globals.backend, |
141 | @@ -1483,6 +1595,8 @@ | |||
142 | 1483 | remove_all_but_n_full(col_stats) | 1595 | remove_all_but_n_full(col_stats) |
143 | 1484 | elif action == "sync": | 1596 | elif action == "sync": |
144 | 1485 | sync_archive(True) | 1597 | sync_archive(True) |
145 | 1598 | elif action == "replicate": | ||
146 | 1599 | replicate() | ||
147 | 1486 | else: | 1600 | else: |
148 | 1487 | assert action == "inc" or action == "full", action | 1601 | assert action == "inc" or action == "full", action |
149 | 1488 | # the passphrase for full and inc is used by --sign-key | 1602 | # the passphrase for full and inc is used by --sign-key |
150 | 1489 | 1603 | ||
151 | === modified file 'bin/duplicity.1' | |||
152 | --- bin/duplicity.1 2017-04-22 19:30:28 +0000 | |||
153 | +++ bin/duplicity.1 2017-04-24 16:35:20 +0000 | |||
154 | @@ -48,6 +48,10 @@ | |||
155 | 48 | .I [options] [--force] [--extra-clean] | 48 | .I [options] [--force] [--extra-clean] |
156 | 49 | target_url | 49 | target_url |
157 | 50 | 50 | ||
158 | 51 | .B duplicity replicate | ||
159 | 52 | .I [options] [--time time] | ||
160 | 53 | source_url target_url | ||
161 | 54 | |||
162 | 51 | .SH DESCRIPTION | 55 | .SH DESCRIPTION |
163 | 52 | Duplicity incrementally backs up files and folders into | 56 | Duplicity incrementally backs up files and folders into |
164 | 53 | tar-format volumes encrypted with GnuPG and places them to a | 57 | tar-format volumes encrypted with GnuPG and places them to a |
165 | @@ -243,6 +247,19 @@ | |||
166 | 243 | .I --force | 247 | .I --force |
167 | 244 | will be needed to delete the files instead of just listing them. | 248 | will be needed to delete the files instead of just listing them. |
168 | 245 | 249 | ||
169 | 250 | .TP | ||
170 | 251 | .BI "replicate " "[--time time] <source_url> <target_url>" | ||
171 | 252 | Replicate backup sets from source to target backend. Files will be | ||
172 | 253 | (re)-encrypted and (re)-compressed depending on normal backend | ||
173 | 254 | options. Signatures and volumes will not get recomputed, thus options like | ||
174 | 255 | .BI --volsize | ||
175 | 256 | or | ||
176 | 257 | .BI --max-blocksize | ||
177 | 258 | have no effect. | ||
178 | 259 | When | ||
179 | 260 | .I --time time | ||
180 | 261 | is given, only backup sets older than time will be replicated. | ||
181 | 262 | |||
182 | 246 | .SH OPTIONS | 263 | .SH OPTIONS |
183 | 247 | 264 | ||
184 | 248 | .TP | 265 | .TP |
185 | 249 | 266 | ||
186 | === modified file 'duplicity/collections.py' | |||
187 | --- duplicity/collections.py 2017-02-27 13:18:57 +0000 | |||
188 | +++ duplicity/collections.py 2017-04-24 16:35:20 +0000 | |||
189 | @@ -294,6 +294,15 @@ | |||
190 | 294 | """ | 294 | """ |
191 | 295 | return len(self.volume_name_dict.keys()) | 295 | return len(self.volume_name_dict.keys()) |
192 | 296 | 296 | ||
193 | 297 | def __eq__(self, other): | ||
194 | 298 | """ | ||
195 | 299 | Return whether this backup set is equal to other | ||
196 | 300 | """ | ||
197 | 301 | return self.type == other.type and \ | ||
198 | 302 | self.time == other.time and \ | ||
199 | 303 | self.start_time == other.start_time and \ | ||
200 | 304 | self.end_time == other.end_time and \ | ||
201 | 305 | len(self) == len(other) | ||
202 | 297 | 306 | ||
203 | 298 | class BackupChain: | 307 | class BackupChain: |
204 | 299 | """ | 308 | """ |
205 | @@ -642,7 +651,7 @@ | |||
206 | 642 | u"-----------------", | 651 | u"-----------------", |
207 | 643 | _("Connecting with backend: %s") % | 652 | _("Connecting with backend: %s") % |
208 | 644 | (self.backend.__class__.__name__,), | 653 | (self.backend.__class__.__name__,), |
210 | 645 | _("Archive dir: %s") % (util.ufn(self.archive_dir_path.name),)] | 654 | _("Archive dir: %s") % (util.ufn(self.archive_dir_path.name if self.archive_dir_path else 'None'),)] |
211 | 646 | 655 | ||
212 | 647 | l.append("\n" + | 656 | l.append("\n" + |
213 | 648 | ngettext("Found %d secondary backup chain.", | 657 | ngettext("Found %d secondary backup chain.", |
214 | @@ -697,7 +706,7 @@ | |||
215 | 697 | len(backend_filename_list)) | 706 | len(backend_filename_list)) |
216 | 698 | 707 | ||
217 | 699 | # get local filename list | 708 | # get local filename list |
219 | 700 | local_filename_list = self.archive_dir_path.listdir() | 709 | local_filename_list = self.archive_dir_path.listdir() if self.archive_dir_path else [] |
220 | 701 | log.Debug(ngettext("%d file exists in cache", | 710 | log.Debug(ngettext("%d file exists in cache", |
221 | 702 | "%d files exist in cache", | 711 | "%d files exist in cache", |
222 | 703 | len(local_filename_list)) % | 712 | len(local_filename_list)) % |
223 | @@ -894,7 +903,7 @@ | |||
224 | 894 | if filelist is not None: | 903 | if filelist is not None: |
225 | 895 | return filelist | 904 | return filelist |
226 | 896 | elif local: | 905 | elif local: |
228 | 897 | return self.archive_dir_path.listdir() | 906 | return self.archive_dir_path.listdir() if self.archive_dir_path else [] |
229 | 898 | else: | 907 | else: |
230 | 899 | return self.backend.list() | 908 | return self.backend.list() |
231 | 900 | 909 | ||
232 | 901 | 910 | ||
233 | === modified file 'duplicity/commandline.py' | |||
234 | --- duplicity/commandline.py 2017-02-27 13:18:57 +0000 | |||
235 | +++ duplicity/commandline.py 2017-04-24 16:35:20 +0000 | |||
236 | @@ -54,6 +54,7 @@ | |||
237 | 54 | collection_status = None # Will be set to true if collection-status command given | 54 | collection_status = None # Will be set to true if collection-status command given |
238 | 55 | cleanup = None # Set to true if cleanup command given | 55 | cleanup = None # Set to true if cleanup command given |
239 | 56 | verify = None # Set to true if verify command given | 56 | verify = None # Set to true if verify command given |
240 | 57 | replicate = None # Set to true if replicate command given | ||
241 | 57 | 58 | ||
242 | 58 | commands = ["cleanup", | 59 | commands = ["cleanup", |
243 | 59 | "collection-status", | 60 | "collection-status", |
244 | @@ -65,6 +66,7 @@ | |||
245 | 65 | "remove-all-inc-of-but-n-full", | 66 | "remove-all-inc-of-but-n-full", |
246 | 66 | "restore", | 67 | "restore", |
247 | 67 | "verify", | 68 | "verify", |
248 | 69 | "replicate" | ||
249 | 68 | ] | 70 | ] |
250 | 69 | 71 | ||
251 | 70 | 72 | ||
252 | @@ -236,7 +238,7 @@ | |||
253 | 236 | def parse_cmdline_options(arglist): | 238 | def parse_cmdline_options(arglist): |
254 | 237 | """Parse argument list""" | 239 | """Parse argument list""" |
255 | 238 | global select_opts, select_files, full_backup | 240 | global select_opts, select_files, full_backup |
257 | 239 | global list_current, collection_status, cleanup, remove_time, verify | 241 | global list_current, collection_status, cleanup, remove_time, verify, replicate |
258 | 240 | 242 | ||
259 | 241 | def set_log_fd(fd): | 243 | def set_log_fd(fd): |
260 | 242 | if fd < 1: | 244 | if fd < 1: |
261 | @@ -706,6 +708,9 @@ | |||
262 | 706 | num_expect = 1 | 708 | num_expect = 1 |
263 | 707 | elif cmd == "verify": | 709 | elif cmd == "verify": |
264 | 708 | verify = True | 710 | verify = True |
265 | 711 | elif cmd == "replicate": | ||
266 | 712 | replicate = True | ||
267 | 713 | num_expect = 2 | ||
268 | 709 | 714 | ||
269 | 710 | if len(args) != num_expect: | 715 | if len(args) != num_expect: |
270 | 711 | command_line_error("Expected %d args, got %d" % (num_expect, len(args))) | 716 | command_line_error("Expected %d args, got %d" % (num_expect, len(args))) |
271 | @@ -724,7 +729,12 @@ | |||
272 | 724 | elif len(args) == 1: | 729 | elif len(args) == 1: |
273 | 725 | backend_url = args[0] | 730 | backend_url = args[0] |
274 | 726 | elif len(args) == 2: | 731 | elif len(args) == 2: |
276 | 727 | lpath, backend_url = args_to_path_backend(args[0], args[1]) # @UnusedVariable | 732 | if replicate: |
277 | 733 | if not backend.is_backend_url(args[0]) or not backend.is_backend_url(args[1]): | ||
278 | 734 | command_line_error("Two URLs expected for replicate.") | ||
279 | 735 | src_backend_url, backend_url= args[0], args[1] | ||
280 | 736 | else: | ||
281 | 737 | lpath, backend_url = args_to_path_backend(args[0], args[1]) # @UnusedVariable | ||
282 | 728 | else: | 738 | else: |
283 | 729 | command_line_error("Too many arguments") | 739 | command_line_error("Too many arguments") |
284 | 730 | 740 | ||
285 | @@ -899,6 +909,7 @@ | |||
286 | 899 | duplicity remove-older-than %(time)s [%(options)s] %(target_url)s | 909 | duplicity remove-older-than %(time)s [%(options)s] %(target_url)s |
287 | 900 | duplicity remove-all-but-n-full %(count)s [%(options)s] %(target_url)s | 910 | duplicity remove-all-but-n-full %(count)s [%(options)s] %(target_url)s |
288 | 901 | duplicity remove-all-inc-of-but-n-full %(count)s [%(options)s] %(target_url)s | 911 | duplicity remove-all-inc-of-but-n-full %(count)s [%(options)s] %(target_url)s |
289 | 912 | duplicity replicate %(source_url)s %(target_url)s | ||
290 | 902 | 913 | ||
291 | 903 | """ % dict | 914 | """ % dict |
292 | 904 | 915 | ||
293 | @@ -944,7 +955,8 @@ | |||
294 | 944 | remove-older-than <%(time)s> <%(target_url)s> | 955 | remove-older-than <%(time)s> <%(target_url)s> |
295 | 945 | remove-all-but-n-full <%(count)s> <%(target_url)s> | 956 | remove-all-but-n-full <%(count)s> <%(target_url)s> |
296 | 946 | remove-all-inc-of-but-n-full <%(count)s> <%(target_url)s> | 957 | remove-all-inc-of-but-n-full <%(count)s> <%(target_url)s> |
298 | 947 | verify <%(target_url)s> <%(source_dir)s>""" % dict | 958 | verify <%(target_url)s> <%(source_dir)s> |
299 | 959 | replicate <%(source_url)s> <%(target_url)s>""" % dict | ||
300 | 948 | 960 | ||
301 | 949 | return msg | 961 | return msg |
302 | 950 | 962 | ||
303 | @@ -1047,7 +1059,7 @@ | |||
304 | 1047 | 1059 | ||
305 | 1048 | def check_consistency(action): | 1060 | def check_consistency(action): |
306 | 1049 | """Final consistency check, see if something wrong with command line""" | 1061 | """Final consistency check, see if something wrong with command line""" |
308 | 1050 | global full_backup, select_opts, list_current | 1062 | global full_backup, select_opts, list_current, collection_status, cleanup, replicate |
309 | 1051 | 1063 | ||
310 | 1052 | def assert_only_one(arglist): | 1064 | def assert_only_one(arglist): |
311 | 1053 | """Raises error if two or more of the elements of arglist are true""" | 1065 | """Raises error if two or more of the elements of arglist are true""" |
312 | @@ -1058,8 +1070,8 @@ | |||
313 | 1058 | assert n <= 1, "Invalid syntax, two conflicting modes specified" | 1070 | assert n <= 1, "Invalid syntax, two conflicting modes specified" |
314 | 1059 | 1071 | ||
315 | 1060 | if action in ["list-current", "collection-status", | 1072 | if action in ["list-current", "collection-status", |
318 | 1061 | "cleanup", "remove-old", "remove-all-but-n-full", "remove-all-inc-of-but-n-full"]: | 1073 | "cleanup", "remove-old", "remove-all-but-n-full", "remove-all-inc-of-but-n-full", "replicate"]: |
319 | 1062 | assert_only_one([list_current, collection_status, cleanup, | 1074 | assert_only_one([list_current, collection_status, cleanup, replicate, |
320 | 1063 | globals.remove_time is not None]) | 1075 | globals.remove_time is not None]) |
321 | 1064 | elif action == "restore" or action == "verify": | 1076 | elif action == "restore" or action == "verify": |
322 | 1065 | if full_backup: | 1077 | if full_backup: |
323 | @@ -1137,22 +1149,27 @@ | |||
324 | 1137 | "file:///usr/local". See the man page for more information.""") % (args[0],), | 1149 | "file:///usr/local". See the man page for more information.""") % (args[0],), |
325 | 1138 | log.ErrorCode.bad_url) | 1150 | log.ErrorCode.bad_url) |
326 | 1139 | elif len(args) == 2: | 1151 | elif len(args) == 2: |
334 | 1140 | # Figure out whether backup or restore | 1152 | if replicate: |
335 | 1141 | backup, local_pathname = set_backend(args[0], args[1]) | 1153 | globals.src_backend = backend.get_backend(args[0]) |
336 | 1142 | if backup: | 1154 | globals.backend = backend.get_backend(args[1]) |
337 | 1143 | if full_backup: | 1155 | action = "replicate" |
331 | 1144 | action = "full" | ||
332 | 1145 | else: | ||
333 | 1146 | action = "inc" | ||
338 | 1147 | else: | 1156 | else: |
341 | 1148 | if verify: | 1157 | # Figure out whether backup or restore |
342 | 1149 | action = "verify" | 1158 | backup, local_pathname = set_backend(args[0], args[1]) |
343 | 1159 | if backup: | ||
344 | 1160 | if full_backup: | ||
345 | 1161 | action = "full" | ||
346 | 1162 | else: | ||
347 | 1163 | action = "inc" | ||
348 | 1150 | else: | 1164 | else: |
350 | 1151 | action = "restore" | 1165 | if verify: |
351 | 1166 | action = "verify" | ||
352 | 1167 | else: | ||
353 | 1168 | action = "restore" | ||
354 | 1152 | 1169 | ||
358 | 1153 | process_local_dir(action, local_pathname) | 1170 | process_local_dir(action, local_pathname) |
359 | 1154 | if action in ['full', 'inc', 'verify']: | 1171 | if action in ['full', 'inc', 'verify']: |
360 | 1155 | set_selection() | 1172 | set_selection() |
361 | 1156 | elif len(args) > 2: | 1173 | elif len(args) > 2: |
362 | 1157 | raise AssertionError("this code should not be reachable") | 1174 | raise AssertionError("this code should not be reachable") |
363 | 1158 | 1175 | ||
364 | 1159 | 1176 | ||
365 | === modified file 'duplicity/file_naming.py' | |||
366 | --- duplicity/file_naming.py 2016-06-28 21:03:46 +0000 | |||
367 | +++ duplicity/file_naming.py 2017-04-24 16:35:20 +0000 | |||
368 | @@ -436,3 +436,12 @@ | |||
369 | 436 | self.encrypted = encrypted # true if gpg encrypted | 436 | self.encrypted = encrypted # true if gpg encrypted |
370 | 437 | 437 | ||
371 | 438 | self.partial = partial | 438 | self.partial = partial |
372 | 439 | |||
373 | 440 | def __eq__(self, other): | ||
374 | 441 | return self.type == other.type and \ | ||
375 | 442 | self.manifest == other.manifest and \ | ||
376 | 443 | self.time == other.time and \ | ||
377 | 444 | self.start_time == other.start_time and \ | ||
378 | 445 | self.end_time == other.end_time and \ | ||
379 | 446 | self.partial == other.partial | ||
380 | 447 |
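The streaming re-encode in the diff above (`util.copyfileobj(fileobj, tmpobj)  # decrypt, compress, (re)-encrypt`) works because both endpoints are *filtered* file objects: the read side decodes, the write side re-encodes, so volumes move between backends with different compression/encryption settings without signatures or volumes being recomputed. A standard-library-only illustration of that idea, using gzip in place of duplicity's filter stack:

```python
# Conceptual illustration of decode-on-read / re-encode-on-write streaming.
# gzip stands in for duplicity's compression/encryption filters.
import gzip
import io

src_raw = gzip.compress(b"volume payload")       # as stored on the source backend
src = gzip.open(io.BytesIO(src_raw), "rb")       # filtered read: decompresses

dst_buf = io.BytesIO()
dst = gzip.GzipFile(fileobj=dst_buf, mode="wb",  # filtered write: recompresses,
                    compresslevel=1)             # possibly with different settings

while True:                                      # what util.copyfileobj amounts to
    chunk = src.read(64 * 1024)
    if not chunk:
        break
    dst.write(chunk)
src.close()
dst.close()

# The payload survives the re-encode even though the bytes on disk differ.
assert gzip.decompress(dst_buf.getvalue()) == b"volume payload"
print("payload preserved across re-encode")
```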
Many thanks Martin!
Could you please submit some tests that ensure your code keeps working as expected? There is an existing example of most things you would want to test, and I would be happy to help you navigate them to the extent that I can.
Just an end-to-end functional test would be a great start.
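Such a functional test would presumably drive duplicity against two local `file://` backends. As a hedged sketch, the helper below only assembles the command lines such a test would run (the helper name and paths are hypothetical; a real test would pass each list to `subprocess.check_call` and then compare the backends):

```python
# Hypothetical sketch of the commands an end-to-end replicate test could run:
# back up to a source backend, replicate it to a target backend, then verify
# the target against the original data.  Nothing is executed here.
def replicate_test_commands(src_dir, src_url, tgt_url, cutoff=None):
    backup = ["duplicity", "full", "--no-encryption", src_dir, src_url]
    replicate = ["duplicity", "replicate"]
    if cutoff is not None:
        replicate += ["--time", cutoff]      # only sets older than cutoff move
    replicate += ["--no-encryption", src_url, tgt_url]
    verify = ["duplicity", "verify", "--no-encryption", tgt_url, src_dir]
    return [backup, replicate, verify]

for cmd in replicate_test_commands("/tmp/data", "file:///tmp/src", "file:///tmp/tgt"):
    print(" ".join(cmd))
```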