Merge lp:~mandel/ubuntuone-windows-installer/add_python_u1sync into lp:ubuntuone-windows-installer/beta
Proposed by: Manuel de la Peña
Status: Merged
Approved by: John Lenton
Approved revision: 56
Merged at revision: 72
Proposed branch: lp:~mandel/ubuntuone-windows-installer/add_python_u1sync
Merge into: lp:ubuntuone-windows-installer/beta
Diff against target: 2487 lines (+2397/-4), 14 files modified
README.txt (+15/-2), main.build (+11/-2), src/setup.py (+56/-0), src/u1sync/__init__.py (+14/-0), src/u1sync/client.py (+754/-0), src/u1sync/constants.py (+30/-0), src/u1sync/genericmerge.py (+88/-0), src/u1sync/main.py (+360/-0), src/u1sync/merge.py (+186/-0), src/u1sync/metadata.py (+145/-0), src/u1sync/scan.py (+102/-0), src/u1sync/sync.py (+384/-0), src/u1sync/ubuntuone_optparse.py (+202/-0), src/u1sync/utils.py (+50/-0)
To merge this branch: bzr merge lp:~mandel/ubuntuone-windows-installer/add_python_u1sync
Related bugs:
Reviewers:
Rick McBride (community): Approve
Vincenzo Di Somma (community): Approve
Review via email:
Commit message
Description of the change
Modified u1sync to work on Windows and added its packaging with py2exe. This provides a temporary solution until we refactor certain parts of the syncdaemon.
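For quick reference, the py2exe packaging this branch adds in src/setup.py boils down to the condensed sketch below (the full file in the diff additionally patches modulefinder so that win32com sub-packages are picked up):

    # Condensed sketch of the new src/setup.py; see the full diff below.
    from distutils.core import setup
    import py2exe  # importing py2exe registers the "py2exe" distutils command

    setup(
        options={"py2exe": {"compressed": 1, "optimize": 2}},
        name='u1sync',
        version='0.0.1',
        console=['u1sync\\main.py'],  # console exe that bundles the python dependencies
    )

The build drives this through the new package_python nant target, which runs "python.exe setup.py py2exe" from the src directory and is now a dependency of the installer target.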
Revision history for this message
Rick McBride (rmcbride):
review: Approve
Preview Diff
1 | === modified file 'README.txt' | |||
2 | --- README.txt 2010-08-19 11:13:02 +0000 | |||
3 | +++ README.txt 2010-08-27 14:56:43 +0000 | |||
4 | @@ -7,8 +7,21 @@ | |||
5 | 7 | * UbuntuOne Windows Service: Windows service that allows starting, stopping, pausing and resuming the sync daemon. | 7 | * UbuntuOne Windows Service: Windows service that allows starting, stopping, pausing and resuming the sync daemon. |
6 | 8 | * UbuntuOne Windows Service Broadcaster: Provides a WCF service hosted in a windows service that allows .Net languages | 8 | * UbuntuOne Windows Service Broadcaster: Provides a WCF service hosted in a windows service that allows .Net languages |
7 | 9 | to communicate with the sync daemon and interact with it. | 9 | to communicate with the sync daemon and interact with it. |
10 | 10 | 10 | ||
11 | 11 | 2. Build | 11 | 2. Environment setup |
12 | 12 | |||
13 | 13 | The ubuntuone windows port provides a port of the python code that is used on Linux to perform the u1 sync operations. Because we did | ||
14 | 14 | not want users to have to hunt down the python runtime plus the different modules that are used, and to simplify the life of | ||
15 | 15 | windows users, we have opted to use py2exe to create an executable that carries all the python dependencies of the code. As you | ||
16 | 16 | may already know, py2exe is not perfect and does not support egg files. To make sure that easy_install does extract the egg files after | ||
17 | 17 | the packages are installed, please use the following command: | ||
18 | 18 | |||
19 | 19 | easy_install -Z %package_name% | ||
20 | 20 | |||
21 | 21 | In order to be able to build the solution you will need to have python win32, the win32 python extensions and the ubuntu one storage protocol installed on your system: | ||
22 | 22 | |||
23 | 23 | |||
24 | 24 | 3. Build | ||
25 | 12 | 25 | ||
26 | 13 | In order to simplify the build as much as possible, all the required tools for the compilation of the project are provided in the source tree. The | 26 | In order to simplify the build as much as possible, all the required tools for the compilation of the project are provided in the source tree. The |
27 | 14 | compilation uses a nant project that allows running the following targets: | 27 | compilation uses a nant project that allows running the following targets: |
28 | 15 | 28 | ||
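To make the unzipped-egg requirement from the README concrete, a hypothetical helper along these lines could install the build dependencies with easy_install -Z; the package names are assumptions taken from the requires list in setup.py, not something the README spells out:

    # Hypothetical convenience script, not part of this branch: installs the
    # python dependencies unzipped (-Z) so py2exe can pick them up, as the README asks.
    import subprocess

    # Assumed package names; substitute whatever %package_name% you actually need.
    for package_name in ("oauth", "ubuntuone-storageprotocol"):
        subprocess.check_call(["easy_install", "-Z", package_name])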
29 | === modified file 'main.build' | |||
30 | --- main.build 2010-08-24 18:32:48 +0000 | |||
31 | +++ main.build 2010-08-27 14:56:43 +0000 | |||
32 | @@ -176,10 +176,19 @@ | |||
33 | 176 | program="nunit-console.exe" | 176 | program="nunit-console.exe" |
34 | 177 | commandline="UbuntuOneClient.Tests.dll /xml=../../../../test-results/UbuntuOneClient.Tests-Result.xml" /> | 177 | commandline="UbuntuOneClient.Tests.dll /xml=../../../../test-results/UbuntuOneClient.Tests-Result.xml" /> |
35 | 178 | </target> | 178 | </target> |
37 | 179 | 179 | <target name="package_python" | |
38 | 180 | description="Creates the exe binary that embeds the python libs that will be used to perform the sync operation on the windows platform"> | ||
39 | 181 | |||
40 | 182 | <exec basedir="${python_path}" | ||
41 | 183 | managed="true" | ||
42 | 184 | workingdir="src" | ||
43 | 185 | program="python.exe" | ||
44 | 186 | commandline="setup.py py2exe" /> | ||
45 | 187 | </target> | ||
46 | 188 | |||
47 | 180 | <target name="installer" | 189 | <target name="installer" |
48 | 181 | description="Compiles the solution and create a merge installer that allows to install the solution and other related apps." | 190 | description="Compiles the solution and create a merge installer that allows to install the solution and other related apps." |
50 | 182 | depends="tests"> | 191 | depends="tests, package_python"> |
51 | 183 | 192 | ||
52 | 184 | <mkdir dir="${build_results}" /> | 193 | <mkdir dir="${build_results}" /> |
53 | 185 | 194 | ||
54 | 186 | 195 | ||
55 | === added file 'src/setup.py' | |||
56 | --- src/setup.py 1970-01-01 00:00:00 +0000 | |||
57 | +++ src/setup.py 2010-08-27 14:56:43 +0000 | |||
58 | @@ -0,0 +1,56 @@ | |||
59 | 1 | #!/usr/bin/env python | ||
60 | 2 | # Copyright (C) 2010 Canonical - All Rights Reserved | ||
61 | 3 | |||
62 | 4 | """Setup script that packages u1sync with py2exe.""" | ||
63 | 5 | import sys | ||
64 | 6 | |||
65 | 7 | # ModuleFinder can't handle runtime changes to __path__, but win32com uses them | ||
66 | 8 | try: | ||
67 | 9 | # py2exe 0.6.4 introduced a replacement modulefinder. | ||
68 | 10 | # This means we have to add package paths there, not to the built-in | ||
69 | 11 | # one. If this new modulefinder gets integrated into Python, then | ||
70 | 12 | # we might be able to revert this some day. | ||
71 | 13 | # if this doesn't work, try import modulefinder | ||
72 | 14 | try: | ||
73 | 15 | import py2exe.mf as modulefinder | ||
74 | 16 | except ImportError: | ||
75 | 17 | import modulefinder | ||
76 | 18 | import win32com | ||
77 | 19 | for p in win32com.__path__[1:]: | ||
78 | 20 | modulefinder.AddPackagePath("win32com", p) | ||
79 | 21 | for extra in ["win32com.shell"]: #,"win32com.mapi" | ||
80 | 22 | __import__(extra) | ||
81 | 23 | m = sys.modules[extra] | ||
82 | 24 | for p in m.__path__[1:]: | ||
83 | 25 | modulefinder.AddPackagePath(extra, p) | ||
84 | 26 | except ImportError: | ||
85 | 27 | # no build path setup, no worries. | ||
86 | 28 | pass | ||
87 | 29 | |||
88 | 30 | from distutils.core import setup | ||
89 | 31 | import py2exe | ||
90 | 32 | |||
91 | 33 | if __name__ == '__main__': | ||
92 | 34 | |||
93 | 35 | setup( | ||
94 | 36 | options = { | ||
95 | 37 | "py2exe": { | ||
96 | 38 | "compressed": 1, | ||
97 | 39 | "optimize": 2}}, | ||
98 | 40 | name='u1sync', | ||
99 | 41 | version='0.0.1', | ||
100 | 42 | author = "Canonical Online Services Hackers", | ||
101 | 43 | description="""u1sync is a utility of Ubuntu One. | ||
102 | 44 | Ubuntu One is a suite of on-line | ||
103 | 45 | services. This package contains a synchronization client for the | ||
104 | 46 | Ubuntu One file sharing service.""", | ||
105 | 47 | license='GPLv3', | ||
106 | 48 | console=['u1sync\\main.py'], | ||
107 | 49 | requires=[ | ||
108 | 50 | 'python (>= 2.5)', | ||
109 | 51 | 'oauth', | ||
110 | 52 | 'twisted.names', | ||
111 | 53 | 'twisted.web', | ||
112 | 54 | 'ubuntuone.storageprotocol (>= 1.3.0)', | ||
113 | 55 | ], | ||
114 | 56 | ) | ||
115 | 0 | 57 | ||
116 | === added directory 'src/u1sync' | |||
117 | === added file 'src/u1sync/__init__.py' | |||
118 | --- src/u1sync/__init__.py 1970-01-01 00:00:00 +0000 | |||
119 | +++ src/u1sync/__init__.py 2010-08-27 14:56:43 +0000 | |||
120 | @@ -0,0 +1,14 @@ | |||
121 | 1 | # Copyright 2009 Canonical Ltd. | ||
122 | 2 | # | ||
123 | 3 | # This program is free software: you can redistribute it and/or modify it | ||
124 | 4 | # under the terms of the GNU General Public License version 3, as published | ||
125 | 5 | # by the Free Software Foundation. | ||
126 | 6 | # | ||
127 | 7 | # This program is distributed in the hope that it will be useful, but | ||
128 | 8 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
129 | 9 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
130 | 10 | # PURPOSE. See the GNU General Public License for more details. | ||
131 | 11 | # | ||
132 | 12 | # You should have received a copy of the GNU General Public License along | ||
133 | 13 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
134 | 14 | """The guts of the u1sync tool.""" | ||
135 | 0 | 15 | ||
136 | === added file 'src/u1sync/client.py' | |||
137 | --- src/u1sync/client.py 1970-01-01 00:00:00 +0000 | |||
138 | +++ src/u1sync/client.py 2010-08-27 14:56:43 +0000 | |||
139 | @@ -0,0 +1,754 @@ | |||
140 | 1 | # ubuntuone.u1sync.client | ||
141 | 2 | # | ||
142 | 3 | # Client/protocol end of u1sync | ||
143 | 4 | # | ||
144 | 5 | # Author: Lucio Torre <lucio.torre@canonical.com> | ||
145 | 6 | # Author: Tim Cole <tim.cole@canonical.com> | ||
146 | 7 | # | ||
147 | 8 | # Copyright 2009 Canonical Ltd. | ||
148 | 9 | # | ||
149 | 10 | # This program is free software: you can redistribute it and/or modify it | ||
150 | 11 | # under the terms of the GNU General Public License version 3, as published | ||
151 | 12 | # by the Free Software Foundation. | ||
152 | 13 | # | ||
153 | 14 | # This program is distributed in the hope that it will be useful, but | ||
154 | 15 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
155 | 16 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
156 | 17 | # PURPOSE. See the GNU General Public License for more details. | ||
157 | 18 | # | ||
158 | 19 | # You should have received a copy of the GNU General Public License along | ||
159 | 20 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
160 | 21 | """Pretty API for protocol client.""" | ||
161 | 22 | |||
162 | 23 | from __future__ import with_statement | ||
163 | 24 | |||
164 | 25 | import os | ||
165 | 26 | import sys | ||
166 | 27 | import shutil | ||
167 | 28 | from Queue import Queue | ||
168 | 29 | from threading import Lock | ||
169 | 30 | import zlib | ||
170 | 31 | import urlparse | ||
171 | 32 | import ConfigParser | ||
172 | 33 | from cStringIO import StringIO | ||
173 | 34 | from twisted.internet import reactor, defer | ||
174 | 35 | from twisted.internet.defer import inlineCallbacks, returnValue | ||
175 | 36 | from ubuntuone.logger import LOGFOLDER | ||
176 | 37 | from ubuntuone.storageprotocol.content_hash import crc32 | ||
177 | 38 | from ubuntuone.storageprotocol.context import get_ssl_context | ||
178 | 39 | from u1sync.genericmerge import MergeNode | ||
179 | 40 | from u1sync.utils import should_sync | ||
180 | 41 | |||
181 | 42 | CONSUMER_KEY = "ubuntuone" | ||
182 | 43 | |||
183 | 44 | from oauth.oauth import OAuthConsumer | ||
184 | 45 | from ubuntuone.storageprotocol.client import ( | ||
185 | 46 | StorageClientFactory, StorageClient) | ||
186 | 47 | from ubuntuone.storageprotocol import request, volumes | ||
187 | 48 | from ubuntuone.storageprotocol.dircontent_pb2 import \ | ||
188 | 49 | DirectoryContent, DIRECTORY | ||
189 | 50 | import uuid | ||
190 | 51 | import logging | ||
191 | 52 | from logging.handlers import RotatingFileHandler | ||
192 | 53 | import time | ||
193 | 54 | |||
194 | 55 | def share_str(share_uuid): | ||
195 | 56 | """Converts a share UUID to a form the protocol likes.""" | ||
196 | 57 | return str(share_uuid) if share_uuid is not None else request.ROOT | ||
197 | 58 | |||
198 | 59 | LOGFILENAME = os.path.join(LOGFOLDER, 'u1sync.log') | ||
199 | 60 | u1_logger = logging.getLogger("u1sync.timing.log") | ||
200 | 61 | handler = RotatingFileHandler(LOGFILENAME) | ||
201 | 62 | u1_logger.addHandler(handler) | ||
202 | 63 | |||
203 | 64 | def log_timing(func): | ||
204 | 65 | def wrapper(*arg, **kwargs): | ||
205 | 66 | start = time.time() | ||
206 | 67 | ent = func(*arg, **kwargs) | ||
207 | 68 | stop = time.time() | ||
208 | 69 | u1_logger.debug('for %s %0.5f ms elapsed' % (func.func_name, \ | ||
209 | 70 | (stop-start)*1000.0)) | ||
210 | 71 | return ent | ||
211 | 72 | return wrapper | ||
212 | 73 | |||
213 | 74 | |||
214 | 75 | class ForcedShutdown(Exception): | ||
215 | 76 | """Client shutdown forced.""" | ||
216 | 77 | |||
217 | 78 | |||
218 | 79 | class Waiter(object): | ||
219 | 80 | """Wait object for blocking waits.""" | ||
220 | 81 | |||
221 | 82 | def __init__(self): | ||
222 | 83 | """Initializes the wait object.""" | ||
223 | 84 | self.queue = Queue() | ||
224 | 85 | |||
225 | 86 | def wake(self, result): | ||
226 | 87 | """Wakes the waiter with a result.""" | ||
227 | 88 | self.queue.put((result, None)) | ||
228 | 89 | |||
229 | 90 | def wakeAndRaise(self, exc_info): | ||
230 | 91 | """Wakes the waiter, raising the given exception in it.""" | ||
231 | 92 | self.queue.put((None, exc_info)) | ||
232 | 93 | |||
233 | 94 | def wakeWithResult(self, func, *args, **kw): | ||
234 | 95 | """Wakes the waiter with the result of the given function.""" | ||
235 | 96 | try: | ||
236 | 97 | result = func(*args, **kw) | ||
237 | 98 | except Exception: | ||
238 | 99 | self.wakeAndRaise(sys.exc_info()) | ||
239 | 100 | else: | ||
240 | 101 | self.wake(result) | ||
241 | 102 | |||
242 | 103 | def wait(self): | ||
243 | 104 | """Waits for wakeup.""" | ||
244 | 105 | (result, exc_info) = self.queue.get() | ||
245 | 106 | if exc_info: | ||
246 | 107 | try: | ||
247 | 108 | raise exc_info[0], exc_info[1], exc_info[2] | ||
248 | 109 | finally: | ||
249 | 110 | exc_info = None | ||
250 | 111 | else: | ||
251 | 112 | return result | ||
252 | 113 | |||
253 | 114 | |||
254 | 115 | class SyncStorageClient(StorageClient): | ||
255 | 116 | """Simple client that calls a callback on connection.""" | ||
256 | 117 | |||
257 | 118 | @log_timing | ||
258 | 119 | def connectionMade(self): | ||
259 | 120 | """Setup and call callback.""" | ||
260 | 121 | StorageClient.connectionMade(self) | ||
261 | 122 | if self.factory.current_protocol not in (None, self): | ||
262 | 123 | self.factory.current_protocol.transport.loseConnection() | ||
263 | 124 | self.factory.current_protocol = self | ||
264 | 125 | self.factory.observer.connected() | ||
265 | 126 | |||
266 | 127 | @log_timing | ||
267 | 128 | def connectionLost(self, reason=None): | ||
268 | 129 | """Callback for established connection lost""" | ||
269 | 130 | if self.factory.current_protocol is self: | ||
270 | 131 | self.factory.current_protocol = None | ||
271 | 132 | self.factory.observer.disconnected(reason) | ||
272 | 133 | |||
273 | 134 | |||
274 | 135 | class SyncClientFactory(StorageClientFactory): | ||
275 | 136 | """A cmd protocol factory.""" | ||
276 | 137 | # no init: pylint: disable-msg=W0232 | ||
277 | 138 | |||
278 | 139 | protocol = SyncStorageClient | ||
279 | 140 | |||
280 | 141 | @log_timing | ||
281 | 142 | def __init__(self, observer): | ||
282 | 143 | """Create the factory""" | ||
283 | 144 | self.observer = observer | ||
284 | 145 | self.current_protocol = None | ||
285 | 146 | |||
286 | 147 | @log_timing | ||
287 | 148 | def clientConnectionFailed(self, connector, reason): | ||
288 | 149 | """We failed at connecting.""" | ||
289 | 150 | self.current_protocol = None | ||
290 | 151 | self.observer.connection_failed(reason) | ||
291 | 152 | |||
292 | 153 | |||
293 | 154 | class UnsupportedOperationError(Exception): | ||
294 | 155 | """The operation is unsupported by the protocol version.""" | ||
295 | 156 | |||
296 | 157 | |||
297 | 158 | class ConnectionError(Exception): | ||
298 | 159 | """A connection error.""" | ||
299 | 160 | |||
300 | 161 | |||
301 | 162 | class AuthenticationError(Exception): | ||
302 | 163 | """An authentication error.""" | ||
303 | 164 | |||
304 | 165 | |||
305 | 166 | class NoSuchShareError(Exception): | ||
306 | 167 | """Error when there is no such share available.""" | ||
307 | 168 | |||
308 | 169 | |||
309 | 170 | class CapabilitiesError(Exception): | ||
310 | 171 | """A capabilities set/query related error.""" | ||
311 | 172 | |||
312 | 173 | class Client(object): | ||
313 | 174 | """U1 storage client facade.""" | ||
314 | 175 | required_caps = frozenset(["no-content", "fix462230"]) | ||
315 | 176 | |||
316 | 177 | def __init__(self, realm, reactor=reactor): | ||
317 | 178 | """Create the instance.""" | ||
318 | 179 | |||
319 | 180 | self.reactor = reactor | ||
320 | 181 | self.factory = SyncClientFactory(self) | ||
321 | 182 | |||
322 | 183 | self._status_lock = Lock() | ||
323 | 184 | self._status = "disconnected" | ||
324 | 185 | self._status_reason = None | ||
325 | 186 | self._status_waiting = [] | ||
326 | 187 | self._active_waiters = set() | ||
327 | 188 | self.consumer_key = CONSUMER_KEY | ||
328 | 189 | self.consumer_secret = "hammertime" | ||
329 | 190 | |||
330 | 191 | def force_shutdown(self): | ||
331 | 192 | """Forces the client to shut itself down.""" | ||
332 | 193 | with self._status_lock: | ||
333 | 194 | self._status = "forced_shutdown" | ||
334 | 195 | self._reason = None | ||
335 | 196 | for waiter in self._active_waiters: | ||
336 | 197 | waiter.wakeAndRaise((ForcedShutdown("Forced shutdown"), | ||
337 | 198 | None, None)) | ||
338 | 199 | self._active_waiters.clear() | ||
339 | 200 | |||
340 | 201 | def _get_waiter_locked(self): | ||
341 | 202 | """Gets a wait object for blocking waits. Should be called with the | ||
342 | 203 | status lock held. | ||
343 | 204 | """ | ||
344 | 205 | waiter = Waiter() | ||
345 | 206 | if self._status == "forced_shutdown": | ||
346 | 207 | raise ForcedShutdown("Forced shutdown") | ||
347 | 208 | self._active_waiters.add(waiter) | ||
348 | 209 | return waiter | ||
349 | 210 | |||
350 | 211 | def _get_waiter(self): | ||
351 | 212 | """Get a wait object for blocking waits. Acquires the status lock.""" | ||
352 | 213 | with self._status_lock: | ||
353 | 214 | return self._get_waiter_locked() | ||
354 | 215 | |||
355 | 216 | def _wait(self, waiter): | ||
356 | 217 | """Waits for the waiter.""" | ||
357 | 218 | try: | ||
358 | 219 | return waiter.wait() | ||
359 | 220 | finally: | ||
360 | 221 | with self._status_lock: | ||
361 | 222 | if waiter in self._active_waiters: | ||
362 | 223 | self._active_waiters.remove(waiter) | ||
363 | 224 | |||
364 | 225 | @log_timing | ||
365 | 226 | def _change_status(self, status, reason=None): | ||
366 | 227 | """Changes the client status. Usually called from the reactor | ||
367 | 228 | thread. | ||
368 | 229 | |||
369 | 230 | """ | ||
370 | 231 | with self._status_lock: | ||
371 | 232 | if self._status == "forced_shutdown": | ||
372 | 233 | return | ||
373 | 234 | self._status = status | ||
374 | 235 | self._status_reason = reason | ||
375 | 236 | waiting = self._status_waiting | ||
376 | 237 | self._status_waiting = [] | ||
377 | 238 | for waiter in waiting: | ||
378 | 239 | waiter.wake((status, reason)) | ||
379 | 240 | |||
380 | 241 | @log_timing | ||
381 | 242 | def _await_status_not(self, *ignore_statuses): | ||
382 | 243 | """Blocks until the client status changes, returning the new status. | ||
383 | 244 | Should never be called from the reactor thread. | ||
384 | 245 | |||
385 | 246 | """ | ||
386 | 247 | with self._status_lock: | ||
387 | 248 | status = self._status | ||
388 | 249 | reason = self._status_reason | ||
389 | 250 | while status in ignore_statuses: | ||
390 | 251 | waiter = self._get_waiter_locked() | ||
391 | 252 | self._status_waiting.append(waiter) | ||
392 | 253 | self._status_lock.release() | ||
393 | 254 | try: | ||
394 | 255 | status, reason = self._wait(waiter) | ||
395 | 256 | finally: | ||
396 | 257 | self._status_lock.acquire() | ||
397 | 258 | if status == "forced_shutdown": | ||
398 | 259 | raise ForcedShutdown("Forced shutdown.") | ||
399 | 260 | return (status, reason) | ||
400 | 261 | |||
401 | 262 | def connection_failed(self, reason): | ||
402 | 263 | """Notification that connection failed.""" | ||
403 | 264 | self._change_status("disconnected", reason) | ||
404 | 265 | |||
405 | 266 | def connected(self): | ||
406 | 267 | """Notification that connection succeeded.""" | ||
407 | 268 | self._change_status("connected") | ||
408 | 269 | |||
409 | 270 | def disconnected(self, reason): | ||
410 | 271 | """Notification that we were disconnected.""" | ||
411 | 272 | self._change_status("disconnected", reason) | ||
412 | 273 | |||
413 | 274 | def defer_from_thread(self, function, *args, **kwargs): | ||
414 | 275 | """Do twisted defer magic to get results and show exceptions.""" | ||
415 | 276 | waiter = self._get_waiter() | ||
416 | 277 | @log_timing | ||
417 | 278 | def runner(): | ||
418 | 279 | """inner.""" | ||
419 | 280 | # we do want to catch all | ||
420 | 281 | # no init: pylint: disable-msg=W0703 | ||
421 | 282 | try: | ||
422 | 283 | d = function(*args, **kwargs) | ||
423 | 284 | if isinstance(d, defer.Deferred): | ||
424 | 285 | d.addCallbacks(lambda r: waiter.wake((r, None, None)), | ||
425 | 286 | lambda f: waiter.wake((None, None, f))) | ||
426 | 287 | else: | ||
427 | 288 | waiter.wake((d, None, None)) | ||
428 | 289 | except Exception: | ||
429 | 290 | waiter.wake((None, sys.exc_info(), None)) | ||
430 | 291 | |||
431 | 292 | self.reactor.callFromThread(runner) | ||
432 | 293 | result, exc_info, failure = self._wait(waiter) | ||
433 | 294 | if exc_info: | ||
434 | 295 | try: | ||
435 | 296 | raise exc_info[0], exc_info[1], exc_info[2] | ||
436 | 297 | finally: | ||
437 | 298 | exc_info = None | ||
438 | 299 | elif failure: | ||
439 | 300 | failure.raiseException() | ||
440 | 301 | else: | ||
441 | 302 | return result | ||
442 | 303 | |||
443 | 304 | @log_timing | ||
444 | 305 | def connect(self, host, port): | ||
445 | 306 | """Connect to host/port.""" | ||
446 | 307 | def _connect(): | ||
447 | 308 | """Deferred part.""" | ||
448 | 309 | self.reactor.connectTCP(host, port, self.factory) | ||
449 | 310 | self._connect_inner(_connect) | ||
450 | 311 | |||
451 | 312 | @log_timing | ||
452 | 313 | def connect_ssl(self, host, port, no_verify): | ||
453 | 314 | """Connect to host/port using ssl.""" | ||
454 | 315 | def _connect(): | ||
455 | 316 | """deferred part.""" | ||
456 | 317 | ctx = get_ssl_context(no_verify) | ||
457 | 318 | self.reactor.connectSSL(host, port, self.factory, ctx) | ||
458 | 319 | self._connect_inner(_connect) | ||
459 | 320 | |||
460 | 321 | @log_timing | ||
461 | 322 | def _connect_inner(self, _connect): | ||
462 | 323 | """Helper function for connecting.""" | ||
463 | 324 | self._change_status("connecting") | ||
464 | 325 | self.reactor.callFromThread(_connect) | ||
465 | 326 | status, reason = self._await_status_not("connecting") | ||
466 | 327 | if status != "connected": | ||
467 | 328 | raise ConnectionError(reason.value) | ||
468 | 329 | |||
469 | 330 | @log_timing | ||
470 | 331 | def disconnect(self): | ||
471 | 332 | """Disconnect.""" | ||
472 | 333 | if self.factory.current_protocol is not None: | ||
473 | 334 | self.reactor.callFromThread( | ||
474 | 335 | self.factory.current_protocol.transport.loseConnection) | ||
475 | 336 | self._await_status_not("connecting", "connected", "authenticated") | ||
476 | 337 | |||
477 | 338 | @log_timing | ||
478 | 339 | def set_capabilities(self): | ||
479 | 340 | """Set the capabilities with the server""" | ||
480 | 341 | |||
481 | 342 | client = self.factory.current_protocol | ||
482 | 343 | @log_timing | ||
483 | 344 | def set_caps_callback(req): | ||
484 | 345 | "Caps set succeeded" | ||
485 | 346 | if not req.accepted: | ||
486 | 347 | de = defer.fail("The server denied setting %s capabilities" % \ | ||
487 | 348 | req.caps) | ||
488 | 349 | return de | ||
489 | 350 | |||
490 | 351 | @log_timing | ||
491 | 352 | def query_caps_callback(req): | ||
492 | 353 | "Caps query succeeded" | ||
493 | 354 | if req.accepted: | ||
494 | 355 | set_d = client.set_caps(self.required_caps) | ||
495 | 356 | set_d.addCallback(set_caps_callback) | ||
496 | 357 | return set_d | ||
497 | 358 | else: | ||
498 | 359 | # the server don't have the requested capabilities. | ||
499 | 360 | # return a failure for now, in the future we might want | ||
500 | 361 | # to reconnect to another server | ||
501 | 362 | de = defer.fail("The server don't have the requested" | ||
502 | 363 | " capabilities: %s" % str(req.caps)) | ||
503 | 364 | return de | ||
504 | 365 | |||
505 | 366 | @log_timing | ||
506 | 367 | def _wrapped_set_capabilities(): | ||
507 | 368 | """Wrapped set_capabilities """ | ||
508 | 369 | d = client.query_caps(self.required_caps) | ||
509 | 370 | d.addCallback(query_caps_callback) | ||
510 | 371 | return d | ||
511 | 372 | |||
512 | 373 | try: | ||
513 | 374 | self.defer_from_thread(_wrapped_set_capabilities) | ||
514 | 375 | except request.StorageProtocolError, e: | ||
515 | 376 | raise CapabilitiesError(e) | ||
516 | 377 | |||
517 | 378 | @log_timing | ||
518 | 379 | def get_root_info(self, volume_uuid): | ||
519 | 380 | """Returns the UUID of the applicable share root.""" | ||
520 | 381 | if volume_uuid is None: | ||
521 | 382 | _get_root = self.factory.current_protocol.get_root | ||
522 | 383 | root = self.defer_from_thread(_get_root) | ||
523 | 384 | return (uuid.UUID(root), True) | ||
524 | 385 | else: | ||
525 | 386 | str_volume_uuid = str(volume_uuid) | ||
526 | 387 | volume = self._match_volume(lambda v: \ | ||
527 | 388 | str(v.volume_id) == str_volume_uuid) | ||
528 | 389 | if isinstance(volume, volumes.ShareVolume): | ||
529 | 390 | modify = volume.access_level == "Modify" | ||
530 | 391 | if isinstance(volume, volumes.UDFVolume): | ||
531 | 392 | modify = True | ||
532 | 393 | return (uuid.UUID(str(volume.node_id)), modify) | ||
533 | 394 | |||
534 | 395 | @log_timing | ||
535 | 396 | def resolve_path(self, share_uuid, root_uuid, path): | ||
536 | 397 | """Resolve path relative to the given root node.""" | ||
537 | 398 | |||
538 | 399 | @inlineCallbacks | ||
539 | 400 | def _resolve_worker(): | ||
540 | 401 | """Path resolution worker.""" | ||
541 | 402 | node_uuid = root_uuid | ||
542 | 403 | local_path = path.strip('/') | ||
543 | 404 | |||
544 | 405 | while local_path != '': | ||
545 | 406 | local_path, name = os.path.split(local_path) | ||
546 | 407 | hashes = yield self._get_node_hashes(share_uuid, [root_uuid]) | ||
547 | 408 | content_hash = hashes.get(root_uuid, None) | ||
548 | 409 | if content_hash is None: | ||
549 | 410 | raise KeyError, "Content hash not available" | ||
550 | 411 | entries = yield self._get_raw_dir_entries(share_uuid, | ||
551 | 412 | root_uuid, | ||
552 | 413 | content_hash) | ||
553 | 414 | match_name = name.decode('utf-8') | ||
554 | 415 | match = None | ||
555 | 416 | for entry in entries: | ||
556 | 417 | if match_name == entry.name: | ||
557 | 418 | match = entry | ||
558 | 419 | break | ||
559 | 420 | |||
560 | 421 | if match is None: | ||
561 | 422 | raise KeyError, "Path not found" | ||
562 | 423 | |||
563 | 424 | node_uuid = uuid.UUID(match.node) | ||
564 | 425 | |||
565 | 426 | returnValue(node_uuid) | ||
566 | 427 | |||
567 | 428 | return self.defer_from_thread(_resolve_worker) | ||
568 | 429 | |||
569 | 430 | @log_timing | ||
570 | 431 | def oauth_from_token(self, token): | ||
571 | 432 | """Perform OAuth authorisation using an existing token.""" | ||
572 | 433 | |||
573 | 434 | consumer = OAuthConsumer(self.consumer_key, self.consumer_secret) | ||
574 | 435 | |||
575 | 436 | def _auth_successful(value): | ||
576 | 437 | """Callback for successful auth. Changes status to | ||
577 | 438 | authenticated.""" | ||
578 | 439 | self._change_status("authenticated") | ||
579 | 440 | return value | ||
580 | 441 | |||
581 | 442 | def _auth_failed(value): | ||
582 | 443 | """Callback for failed auth. Disconnects.""" | ||
583 | 444 | self.factory.current_protocol.transport.loseConnection() | ||
584 | 445 | return value | ||
585 | 446 | |||
586 | 447 | def _wrapped_authenticate(): | ||
587 | 448 | """Wrapped authenticate.""" | ||
588 | 449 | d = self.factory.current_protocol.oauth_authenticate(consumer, | ||
589 | 450 | token) | ||
590 | 451 | d.addCallbacks(_auth_successful, _auth_failed) | ||
591 | 452 | return d | ||
592 | 453 | |||
593 | 454 | try: | ||
594 | 455 | self.defer_from_thread(_wrapped_authenticate) | ||
595 | 456 | except request.StorageProtocolError, e: | ||
596 | 457 | raise AuthenticationError(e) | ||
597 | 458 | status, reason = self._await_status_not("connected") | ||
598 | 459 | if status != "authenticated": | ||
599 | 460 | raise AuthenticationError(reason.value) | ||
600 | 461 | |||
601 | 462 | @log_timing | ||
602 | 463 | def find_volume(self, volume_spec): | ||
603 | 464 | """Finds a share matching the given UUID. Looks at both share UUIDs | ||
604 | 465 | and root node UUIDs.""" | ||
605 | 466 | volume = self._match_volume(lambda s: \ | ||
606 | 467 | str(s.volume_id) == volume_spec or \ | ||
607 | 468 | str(s.node_id) == volume_spec) | ||
608 | 469 | return uuid.UUID(str(volume.volume_id)) | ||
609 | 470 | |||
610 | 471 | @log_timing | ||
611 | 472 | def _match_volume(self, predicate): | ||
612 | 473 | """Finds a volume matching the given predicate.""" | ||
613 | 474 | _list_shares = self.factory.current_protocol.list_volumes | ||
614 | 475 | r = self.defer_from_thread(_list_shares) | ||
615 | 476 | for volume in r.volumes: | ||
616 | 477 | if predicate(volume): | ||
617 | 478 | return volume | ||
618 | 479 | raise NoSuchShareError() | ||
619 | 480 | |||
620 | 481 | @log_timing | ||
621 | 482 | def build_tree(self, share_uuid, root_uuid): | ||
622 | 483 | """Builds and returns a tree representing the metadata for the given | ||
623 | 484 | subtree in the given share. | ||
624 | 485 | |||
625 | 486 | @param share_uuid: the share UUID or None for the user's volume | ||
626 | 487 | @param root_uuid: the root UUID of the subtree (must be a directory) | ||
627 | 488 | @return: a MergeNode tree | ||
628 | 489 | |||
629 | 490 | """ | ||
630 | 491 | root = MergeNode(node_type=DIRECTORY, uuid=root_uuid) | ||
631 | 492 | |||
632 | 493 | @log_timing | ||
633 | 494 | @inlineCallbacks | ||
634 | 495 | def _get_root_content_hash(): | ||
635 | 496 | """Obtain the content hash for the root node.""" | ||
636 | 497 | result = yield self._get_node_hashes(share_uuid, [root_uuid]) | ||
637 | 498 | returnValue(result.get(root_uuid, None)) | ||
638 | 499 | |||
639 | 500 | root.content_hash = self.defer_from_thread(_get_root_content_hash) | ||
640 | 501 | if root.content_hash is None: | ||
641 | 502 | raise ValueError("No content available for node %s" % root_uuid) | ||
642 | 503 | |||
643 | 504 | @log_timing | ||
644 | 505 | @inlineCallbacks | ||
645 | 506 | def _get_children(parent_uuid, parent_content_hash): | ||
646 | 507 | """Obtain a sequence of MergeNodes corresponding to a node's | ||
647 | 508 | immediate children. | ||
648 | 509 | |||
649 | 510 | """ | ||
650 | 511 | entries = yield self._get_raw_dir_entries(share_uuid, | ||
651 | 512 | parent_uuid, | ||
652 | 513 | parent_content_hash) | ||
653 | 514 | children = {} | ||
654 | 515 | for entry in entries: | ||
655 | 516 | if should_sync(entry.name): | ||
656 | 517 | child = MergeNode(node_type=entry.node_type, | ||
657 | 518 | uuid=uuid.UUID(entry.node)) | ||
658 | 519 | children[entry.name] = child | ||
659 | 520 | |||
660 | 521 | child_uuids = [child.uuid for child in children.itervalues()] | ||
661 | 522 | content_hashes = yield self._get_node_hashes(share_uuid, | ||
662 | 523 | child_uuids) | ||
663 | 524 | for child in children.itervalues(): | ||
664 | 525 | child.content_hash = content_hashes.get(child.uuid, None) | ||
665 | 526 | |||
666 | 527 | returnValue(children) | ||
667 | 528 | |||
668 | 529 | need_children = [root] | ||
669 | 530 | while need_children: | ||
670 | 531 | node = need_children.pop() | ||
671 | 532 | if node.content_hash is not None: | ||
672 | 533 | children = self.defer_from_thread(_get_children, node.uuid, | ||
673 | 534 | node.content_hash) | ||
674 | 535 | node.children = children | ||
675 | 536 | for child in children.itervalues(): | ||
676 | 537 | if child.node_type == DIRECTORY: | ||
677 | 538 | need_children.append(child) | ||
678 | 539 | |||
679 | 540 | return root | ||
680 | 541 | |||
681 | 542 | @log_timing | ||
682 | 543 | def _get_raw_dir_entries(self, share_uuid, node_uuid, content_hash): | ||
683 | 544 | """Gets raw dir entries for the given directory.""" | ||
684 | 545 | d = self.factory.current_protocol.get_content(share_str(share_uuid), | ||
685 | 546 | str(node_uuid), | ||
686 | 547 | content_hash) | ||
687 | 548 | d.addCallback(lambda c: zlib.decompress(c.data)) | ||
688 | 549 | |||
689 | 550 | def _parse_content(raw_content): | ||
690 | 551 | """Parses directory content into a list of entry objects.""" | ||
691 | 552 | unserialized_content = DirectoryContent() | ||
692 | 553 | unserialized_content.ParseFromString(raw_content) | ||
693 | 554 | return list(unserialized_content.entries) | ||
694 | 555 | |||
695 | 556 | d.addCallback(_parse_content) | ||
696 | 557 | return d | ||
697 | 558 | |||
698 | 559 | @log_timing | ||
699 | 560 | def download_string(self, share_uuid, node_uuid, content_hash): | ||
700 | 561 | """Reads a file from the server into a string.""" | ||
701 | 562 | output = StringIO() | ||
702 | 563 | self._download_inner(share_uuid=share_uuid, node_uuid=node_uuid, | ||
703 | 564 | content_hash=content_hash, output=output) | ||
704 | 565 | return output.getvalue() | ||
705 | 566 | |||
706 | 567 | @log_timing | ||
707 | 568 | def download_file(self, share_uuid, node_uuid, content_hash, filename): | ||
708 | 569 | """Downloads a file from the server.""" | ||
709 | 570 | # file names have to be quoted to make sure that no issues occur when | ||
710 | 571 | # spaces exist | ||
711 | 572 | partial_filename = "%s.u1partial" % filename | ||
712 | 573 | output = open(partial_filename, "wb")  # binary mode so Windows does not mangle the downloaded bytes | ||
713 | 574 | |||
714 | 575 | @log_timing | ||
715 | 576 | def rename_file(): | ||
716 | 577 | """Renames the temporary file to the final name.""" | ||
717 | 578 | output.close() | ||
718 | 579 | print "Finished downloading %s" % filename | ||
719 | 580 | os.rename(partial_filename, filename) | ||
720 | 581 | |||
721 | 582 | @log_timing | ||
722 | 583 | def delete_file(): | ||
723 | 584 | """Deletes the temporary file.""" | ||
724 | 585 | output.close() | ||
725 | 586 | os.unlink(partial_filename) | ||
726 | 587 | |||
727 | 588 | self._download_inner(share_uuid=share_uuid, node_uuid=node_uuid, | ||
728 | 589 | content_hash=content_hash, output=output, | ||
729 | 590 | on_success=rename_file, on_failure=delete_file) | ||
730 | 591 | |||
731 | 592 | @log_timing | ||
732 | 593 | def _download_inner(self, share_uuid, node_uuid, content_hash, output, | ||
733 | 594 | on_success=lambda: None, on_failure=lambda: None): | ||
734 | 595 | """Helper function for content downloads.""" | ||
735 | 596 | dec = zlib.decompressobj() | ||
736 | 597 | |||
737 | 598 | @log_timing | ||
738 | 599 | def write_data(data): | ||
739 | 600 | """Helper which writes data to the output file.""" | ||
740 | 601 | uncompressed_data = dec.decompress(data) | ||
741 | 602 | output.write(uncompressed_data) | ||
742 | 603 | |||
743 | 604 | @log_timing | ||
744 | 605 | def finish_download(value): | ||
745 | 606 | """Helper which finishes the download.""" | ||
746 | 607 | uncompressed_data = dec.flush() | ||
747 | 608 | output.write(uncompressed_data) | ||
748 | 609 | on_success() | ||
749 | 610 | return value | ||
750 | 611 | |||
751 | 612 | @log_timing | ||
752 | 613 | def abort_download(value): | ||
753 | 614 | """Helper which aborts the download.""" | ||
754 | 615 | on_failure() | ||
755 | 616 | return value | ||
756 | 617 | |||
757 | 618 | @log_timing | ||
758 | 619 | def _download(): | ||
759 | 620 | """Async helper.""" | ||
760 | 621 | _get_content = self.factory.current_protocol.get_content | ||
761 | 622 | d = _get_content(share_str(share_uuid), str(node_uuid), | ||
762 | 623 | content_hash, callback=write_data) | ||
763 | 624 | d.addCallbacks(finish_download, abort_download) | ||
764 | 625 | return d | ||
765 | 626 | |||
766 | 627 | self.defer_from_thread(_download) | ||
767 | 628 | |||
768 | 629 | @log_timing | ||
769 | 630 | def create_directory(self, share_uuid, parent_uuid, name): | ||
770 | 631 | """Creates a directory on the server.""" | ||
771 | 632 | r = self.defer_from_thread(self.factory.current_protocol.make_dir, | ||
772 | 633 | share_str(share_uuid), str(parent_uuid), | ||
773 | 634 | name) | ||
774 | 635 | return uuid.UUID(r.new_id) | ||
775 | 636 | |||
776 | 637 | @log_timing | ||
777 | 638 | def create_file(self, share_uuid, parent_uuid, name): | ||
778 | 639 | """Creates a file on the server.""" | ||
779 | 640 | r = self.defer_from_thread(self.factory.current_protocol.make_file, | ||
780 | 641 | share_str(share_uuid), str(parent_uuid), | ||
781 | 642 | name) | ||
782 | 643 | return uuid.UUID(r.new_id) | ||
783 | 644 | |||
784 | 645 | @log_timing | ||
785 | 646 | def create_symlink(self, share_uuid, parent_uuid, name, target): | ||
786 | 647 | """Creates a symlink on the server.""" | ||
787 | 648 | raise UnsupportedOperationError("Protocol does not support symlinks") | ||
788 | 649 | |||
789 | 650 | @log_timing | ||
790 | 651 | def upload_string(self, share_uuid, node_uuid, old_content_hash, | ||
791 | 652 | content_hash, content): | ||
792 | 653 | """Uploads a string to the server as file content.""" | ||
793 | 654 | crc = crc32(content, 0) | ||
794 | 655 | compressed_content = zlib.compress(content, 9) | ||
795 | 656 | compressed = StringIO(compressed_content) | ||
796 | 657 | self.defer_from_thread(self.factory.current_protocol.put_content, | ||
797 | 658 | share_str(share_uuid), str(node_uuid), | ||
798 | 659 | old_content_hash, content_hash, | ||
799 | 660 | crc, len(content), len(compressed_content), | ||
800 | 661 | compressed) | ||
801 | 662 | |||
802 | 663 | @log_timing | ||
803 | 664 | def upload_file(self, share_uuid, node_uuid, old_content_hash, | ||
804 | 665 | content_hash, filename): | ||
805 | 666 | """Uploads a file to the server.""" | ||
806 | 667 | parent_dir = os.path.split(filename)[0] | ||
807 | 668 | unique_filename = os.path.join(parent_dir, "." + str(uuid.uuid4())) | ||
808 | 669 | |||
809 | 670 | |||
810 | 671 | class StagingFile(object): | ||
811 | 672 | """An object which tracks data being compressed for staging.""" | ||
812 | 673 | def __init__(self, stream): | ||
813 | 674 | """Initialize a compression object.""" | ||
814 | 675 | self.crc32 = 0 | ||
815 | 676 | self.enc = zlib.compressobj(9) | ||
816 | 677 | self.size = 0 | ||
817 | 678 | self.compressed_size = 0 | ||
818 | 679 | self.stream = stream | ||
819 | 680 | |||
820 | 681 | def write(self, bytes): | ||
821 | 682 | """Compress bytes, keeping track of length and crc32.""" | ||
822 | 683 | self.size += len(bytes) | ||
823 | 684 | self.crc32 = crc32(bytes, self.crc32) | ||
824 | 685 | compressed_bytes = self.enc.compress(bytes) | ||
825 | 686 | self.compressed_size += len(compressed_bytes) | ||
826 | 687 | self.stream.write(compressed_bytes) | ||
827 | 688 | |||
828 | 689 | def finish(self): | ||
829 | 690 | """Finish staging compressed data.""" | ||
830 | 691 | compressed_bytes = self.enc.flush() | ||
831 | 692 | self.compressed_size += len(compressed_bytes) | ||
832 | 693 | self.stream.write(compressed_bytes) | ||
833 | 694 | |||
834 | 695 | with open(unique_filename, "w+b") as compressed: | ||
835 | 696 | os.unlink(unique_filename) | ||
836 | 697 | with open(filename, "rb") as original: | ||
837 | 698 | staging = StagingFile(compressed) | ||
838 | 699 | shutil.copyfileobj(original, staging) | ||
839 | 700 | staging.finish() | ||
840 | 701 | compressed.seek(0) | ||
841 | 702 | self.defer_from_thread(self.factory.current_protocol.put_content, | ||
842 | 703 | share_str(share_uuid), str(node_uuid), | ||
843 | 704 | old_content_hash, content_hash, | ||
844 | 705 | staging.crc32, | ||
845 | 706 | staging.size, staging.compressed_size, | ||
846 | 707 | compressed) | ||
847 | 708 | |||
848 | 709 | @log_timing | ||
849 | 710 | def move(self, share_uuid, parent_uuid, name, node_uuid): | ||
850 | 711 | """Moves a file on the server.""" | ||
851 | 712 | self.defer_from_thread(self.factory.current_protocol.move, | ||
852 | 713 | share_str(share_uuid), str(node_uuid), | ||
853 | 714 | str(parent_uuid), name) | ||
854 | 715 | |||
855 | 716 | @log_timing | ||
856 | 717 | def unlink(self, share_uuid, node_uuid): | ||
857 | 718 | """Unlinks a file on the server.""" | ||
858 | 719 | self.defer_from_thread(self.factory.current_protocol.unlink, | ||
859 | 720 | share_str(share_uuid), str(node_uuid)) | ||
860 | 721 | |||
861 | 722 | @log_timing | ||
862 | 723 | def _get_node_hashes(self, share_uuid, node_uuids): | ||
863 | 724 | """Fetches hashes for the given nodes.""" | ||
864 | 725 | share = share_str(share_uuid) | ||
865 | 726 | queries = [(share, str(node_uuid), request.UNKNOWN_HASH) \ | ||
866 | 727 | for node_uuid in node_uuids] | ||
867 | 728 | d = self.factory.current_protocol.query(queries) | ||
868 | 729 | |||
869 | 730 | @log_timing | ||
870 | 731 | def _collect_hashes(multi_result): | ||
871 | 732 | """Accumulate hashes from query replies.""" | ||
872 | 733 | hashes = {} | ||
873 | 734 | for (success, value) in multi_result: | ||
874 | 735 | if success: | ||
875 | 736 | for node_state in value.response: | ||
876 | 737 | node_uuid = uuid.UUID(node_state.node) | ||
877 | 738 | hashes[node_uuid] = node_state.hash | ||
878 | 739 | return hashes | ||
879 | 740 | |||
880 | 741 | d.addCallback(_collect_hashes) | ||
881 | 742 | return d | ||
882 | 743 | |||
883 | 744 | @log_timing | ||
884 | 745 | def get_incoming_shares(self): | ||
885 | 746 | """Returns a list of incoming shares as (name, uuid, accepted) | ||
886 | 747 | tuples. | ||
887 | 748 | |||
888 | 749 | """ | ||
889 | 750 | _list_shares = self.factory.current_protocol.list_shares | ||
890 | 751 | r = self.defer_from_thread(_list_shares) | ||
891 | 752 | return [(s.name, s.id, s.other_visible_name, | ||
892 | 753 | s.accepted, s.access_level) \ | ||
893 | 754 | for s in r.shares if s.direction == "to_me"] | ||
894 | 0 | 755 | ||
895 | === added file 'src/u1sync/constants.py' | |||
896 | --- src/u1sync/constants.py 1970-01-01 00:00:00 +0000 | |||
897 | +++ src/u1sync/constants.py 2010-08-27 14:56:43 +0000 | |||
898 | @@ -0,0 +1,30 @@ | |||
899 | 1 | # ubuntuone.u1sync.constants | ||
900 | 2 | # | ||
901 | 3 | # u1sync constants | ||
902 | 4 | # | ||
903 | 5 | # Author: Tim Cole <tim.cole@canonical.com> | ||
904 | 6 | # | ||
905 | 7 | # Copyright 2009 Canonical Ltd. | ||
906 | 8 | # | ||
907 | 9 | # This program is free software: you can redistribute it and/or modify it | ||
908 | 10 | # under the terms of the GNU General Public License version 3, as published | ||
909 | 11 | # by the Free Software Foundation. | ||
910 | 12 | # | ||
911 | 13 | # This program is distributed in the hope that it will be useful, but | ||
912 | 14 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
913 | 15 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
914 | 16 | # PURPOSE. See the GNU General Public License for more details. | ||
915 | 17 | # | ||
916 | 18 | # You should have received a copy of the GNU General Public License along | ||
917 | 19 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
918 | 20 | """Assorted constant definitions which don't fit anywhere else.""" | ||
919 | 21 | |||
920 | 22 | import re | ||
921 | 23 | |||
922 | 24 | # the name of the directory u1sync uses to keep metadata about a mirror | ||
923 | 25 | METADATA_DIR_NAME = u".ubuntuone-sync" | ||
924 | 26 | |||
925 | 27 | # filenames to ignore | ||
926 | 28 | SPECIAL_FILE_RE = re.compile(".*\\.(" | ||
927 | 29 | "(u1)?partial|part|" | ||
928 | 30 | "(u1)?conflict(\\.[0-9]+)?)$") | ||
929 | 0 | 31 | ||
930 | === added file 'src/u1sync/genericmerge.py' | |||
931 | --- src/u1sync/genericmerge.py 1970-01-01 00:00:00 +0000 | |||
932 | +++ src/u1sync/genericmerge.py 2010-08-27 14:56:43 +0000 | |||
933 | @@ -0,0 +1,88 @@ | |||
934 | 1 | # ubuntuone.u1sync.genericmerge | ||
935 | 2 | # | ||
936 | 3 | # Generic merge function | ||
937 | 4 | # | ||
938 | 5 | # Author: Tim Cole <tim.cole@canonical.com> | ||
939 | 6 | # | ||
940 | 7 | # Copyright 2009 Canonical Ltd. | ||
941 | 8 | # | ||
942 | 9 | # This program is free software: you can redistribute it and/or modify it | ||
943 | 10 | # under the terms of the GNU General Public License version 3, as published | ||
944 | 11 | # by the Free Software Foundation. | ||
945 | 12 | # | ||
946 | 13 | # This program is distributed in the hope that it will be useful, but | ||
947 | 14 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
948 | 15 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
949 | 16 | # PURPOSE. See the GNU General Public License for more details. | ||
950 | 17 | # | ||
951 | 18 | # You should have received a copy of the GNU General Public License along | ||
952 | 19 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
953 | 20 | """A generic abstraction for merge operations on directory trees.""" | ||
954 | 21 | |||
955 | 22 | from itertools import chain | ||
956 | 23 | from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY | ||
957 | 24 | |||
958 | 25 | class MergeNode(object): | ||
959 | 26 | """A filesystem node. Should generally be treated as immutable.""" | ||
960 | 27 | def __init__(self, node_type, content_hash=None, uuid=None, children=None, | ||
961 | 28 | conflict_info=None): | ||
962 | 29 | """Initializes a node instance.""" | ||
963 | 30 | self.node_type = node_type | ||
964 | 31 | self.children = children | ||
965 | 32 | self.uuid = uuid | ||
966 | 33 | self.content_hash = content_hash | ||
967 | 34 | self.conflict_info = conflict_info | ||
968 | 35 | |||
969 | 36 | def __eq__(self, other): | ||
970 | 37 | """Equality test.""" | ||
971 | 38 | if type(other) is not type(self): | ||
972 | 39 | return False | ||
973 | 40 | return self.node_type == other.node_type and \ | ||
974 | 41 | self.children == other.children and \ | ||
975 | 42 | self.uuid == other.uuid and \ | ||
976 | 43 | self.content_hash == other.content_hash and \ | ||
977 | 44 | self.conflict_info == other.conflict_info | ||
978 | 45 | |||
979 | 46 | def __ne__(self, other): | ||
980 | 47 | """Non-equality test.""" | ||
981 | 48 | return not self.__eq__(other) | ||
982 | 49 | |||
983 | 50 | |||
984 | 51 | def show_tree(tree, indent="", name="/"): | ||
985 | 52 | """Prints a tree.""" | ||
986 | 53 | # TODO: return string do not print | ||
987 | 54 | if tree.node_type == DIRECTORY: | ||
988 | 55 | type_str = "DIR " | ||
989 | 56 | else: | ||
990 | 57 | type_str = "FILE" | ||
991 | 58 | print "%s%-36s %s %s %s" % (indent, tree.uuid, type_str, name, | ||
992 | 59 | tree.content_hash) | ||
993 | 60 | if tree.node_type == DIRECTORY and tree.children is not None: | ||
994 | 61 | for name in sorted(tree.children.keys()): | ||
995 | 62 | subtree = tree.children[name] | ||
996 | 63 | show_tree(subtree, indent=" " + indent, name=name) | ||
997 | 64 | |||
998 | 65 | def generic_merge(trees, pre_merge, post_merge, partial_parent, name): | ||
999 | 66 | """Generic tree merging function.""" | ||
1000 | 67 | |||
1001 | 68 | partial_result = pre_merge(nodes=trees, name=name, | ||
1002 | 69 | partial_parent=partial_parent) | ||
1003 | 70 | |||
1004 | 71 | def tree_children(tree): | ||
1005 | 72 | """Returns children if tree is not None""" | ||
1006 | 73 | return tree.children if tree is not None else None | ||
1007 | 74 | |||
1008 | 75 | child_dicts = [tree_children(t) or {} for t in trees] | ||
1009 | 76 | child_names = set(chain(*[cs.iterkeys() for cs in child_dicts])) | ||
1010 | 77 | child_results = {} | ||
1011 | 78 | for child_name in child_names: | ||
1012 | 79 | subtrees = [cs.get(child_name, None) for cs in child_dicts] | ||
1013 | 80 | child_result = generic_merge(trees=subtrees, | ||
1014 | 81 | pre_merge=pre_merge, | ||
1015 | 82 | post_merge=post_merge, | ||
1016 | 83 | partial_parent=partial_result, | ||
1017 | 84 | name=child_name) | ||
1018 | 85 | child_results[child_name] = child_result | ||
1019 | 86 | |||
1020 | 87 | return post_merge(nodes=trees, partial_result=partial_result, | ||
1021 | 88 | child_results=child_results) | ||
1022 | 0 | 89 | ||
1023 | === added file 'src/u1sync/main.py' | |||
1024 | --- src/u1sync/main.py 1970-01-01 00:00:00 +0000 | |||
1025 | +++ src/u1sync/main.py 2010-08-27 14:56:43 +0000 | |||
1026 | @@ -0,0 +1,360 @@ | |||
1027 | 1 | # ubuntuone.u1sync.main | ||
1028 | 2 | # | ||
1029 | 3 | # Prototype directory sync client | ||
1030 | 4 | # | ||
1031 | 5 | # Author: Lucio Torre <lucio.torre@canonical.com> | ||
1032 | 6 | # Author: Tim Cole <tim.cole@canonical.com> | ||
1033 | 7 | # | ||
1034 | 8 | # Copyright 2009 Canonical Ltd. | ||
1035 | 9 | # | ||
1036 | 10 | # This program is free software: you can redistribute it and/or modify it | ||
1037 | 11 | # under the terms of the GNU General Public License version 3, as published | ||
1038 | 12 | # by the Free Software Foundation. | ||
1039 | 13 | # | ||
1040 | 14 | # This program is distributed in the hope that it will be useful, but | ||
1041 | 15 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
1042 | 16 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
1043 | 17 | # PURPOSE. See the GNU General Public License for more details. | ||
1044 | 18 | # | ||
1045 | 19 | # You should have received a copy of the GNU General Public License along | ||
1046 | 20 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
1047 | 21 | """A prototype directory sync client.""" | ||
1048 | 22 | |||
1049 | 23 | from __future__ import with_statement | ||
1050 | 24 | |||
1051 | 25 | import os | ||
1052 | 26 | import sys | ||
1053 | 27 | import uuid | ||
1054 | 28 | import signal | ||
1055 | 29 | import logging | ||
1056 | 30 | from Queue import Queue | ||
1057 | 31 | from errno import EEXIST | ||
1058 | 32 | from twisted.internet import reactor | ||
1059 | 33 | # import the storage protocol to allow communication with the server | ||
1060 | 34 | import ubuntuone.storageprotocol.dircontent_pb2 as dircontent_pb2 | ||
1061 | 35 | from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY, SYMLINK | ||
1062 | 36 | # import helper modules | ||
1063 | 37 | from u1sync.genericmerge import ( | ||
1064 | 38 | show_tree, generic_merge) | ||
1065 | 39 | from u1sync.client import ( | ||
1066 | 40 | ConnectionError, AuthenticationError, NoSuchShareError, | ||
1067 | 41 | ForcedShutdown, Client) | ||
1068 | 42 | from u1sync.scan import scan_directory | ||
1069 | 43 | from u1sync.merge import ( | ||
1070 | 44 | SyncMerge, ClobberServerMerge, ClobberLocalMerge, merge_trees) | ||
1071 | 45 | from u1sync.sync import download_tree, upload_tree | ||
1072 | 46 | from u1sync.utils import safe_mkdir | ||
1073 | 47 | from u1sync import metadata | ||
1074 | 48 | from u1sync.constants import METADATA_DIR_NAME | ||
1075 | 49 | from u1sync.ubuntuone_optparse import UbuntuOneOptionsParser | ||
1076 | 50 | |||
1077 | 51 | # pylint: disable-msg=W0212 | ||
1078 | 52 | NODE_TYPE_ENUM = dircontent_pb2._NODETYPE | ||
1079 | 53 | # pylint: enable-msg=W0212 | ||
1080 | 54 | def node_type_str(node_type): | ||
1081 | 55 | """Converts a numeric node type to a human-readable string.""" | ||
1082 | 56 | return NODE_TYPE_ENUM.values_by_number[node_type].name | ||
1083 | 57 | |||
1084 | 58 | |||
1085 | 59 | class ReadOnlyShareError(Exception): | ||
1086 | 60 | """Share is read-only.""" | ||
1087 | 61 | |||
1088 | 62 | |||
1089 | 63 | class DirectoryAlreadyInitializedError(Exception): | ||
1090 | 64 | """The directory has already been initialized.""" | ||
1091 | 65 | |||
1092 | 66 | |||
1093 | 67 | class DirectoryNotInitializedError(Exception): | ||
1094 | 68 | """The directory has not been initialized.""" | ||
1095 | 69 | |||
1096 | 70 | |||
1097 | 71 | class NoParentError(Exception): | ||
1098 | 72 | """A node has no parent.""" | ||
1099 | 73 | |||
1100 | 74 | |||
1101 | 75 | class TreesDiffer(Exception): | ||
1102 | 76 | """Raised when diff tree differs.""" | ||
1103 | 77 | def __init__(self, quiet): | ||
1104 | 78 | self.quiet = quiet | ||
1105 | 79 | |||
1106 | 80 | |||
1107 | 81 | MERGE_ACTIONS = { | ||
1108 | 82 | # action: (merge_class, should_upload, should_download) | ||
1109 | 83 | 'sync': (SyncMerge, True, True), | ||
1110 | 84 | 'clobber-server': (ClobberServerMerge, True, False), | ||
1111 | 85 | 'clobber-local': (ClobberLocalMerge, False, True), | ||
1112 | 86 | 'upload': (SyncMerge, True, False), | ||
1113 | 87 | 'download': (SyncMerge, False, True), | ||
1114 | 88 | 'auto': None # special case | ||
1115 | 89 | } | ||
1116 | 90 | |||
1117 | 91 | DEFAULT_MERGE_ACTION = 'auto' | ||
1118 | 92 | |||
1119 | 93 | def do_init(client, share_spec, directory, quiet, subtree_path, | ||
1120 | 94 | metadata=metadata): | ||
1121 | 95 | """Initializes a directory for syncing, and syncs it.""" | ||
1122 | 96 | info = metadata.Metadata() | ||
1123 | 97 | |||
1124 | 98 | if share_spec is not None: | ||
1125 | 99 | info.share_uuid = client.find_volume(share_spec) | ||
1126 | 100 | else: | ||
1127 | 101 | info.share_uuid = None | ||
1128 | 102 | |||
1129 | 103 | if subtree_path is not None: | ||
1130 | 104 | info.path = subtree_path | ||
1131 | 105 | else: | ||
1132 | 106 | info.path = "/" | ||
1133 | 107 | |||
1134 | 108 | logging.info("Initializing directory...") | ||
1135 | 109 | safe_mkdir(directory) | ||
1136 | 110 | |||
1137 | 111 | metadata_dir = os.path.join(directory, METADATA_DIR_NAME) | ||
1138 | 112 | try: | ||
1139 | 113 | os.mkdir(metadata_dir) | ||
1140 | 114 | except OSError, e: | ||
1141 | 115 | if e.errno == EEXIST: | ||
1142 | 116 | raise DirectoryAlreadyInitializedError(directory) | ||
1143 | 117 | else: | ||
1144 | 118 | raise | ||
1145 | 119 | |||
1146 | 120 | logging.info("Writing mirror metadata...") | ||
1147 | 121 | metadata.write(metadata_dir, info) | ||
1148 | 122 | |||
1149 | 123 | logging.info("Done.") | ||
1150 | 124 | |||
1151 | 125 | def do_sync(client, directory, action, dry_run, quiet): | ||
1152 | 126 | """Synchronizes a directory with the given share.""" | ||
1153 | 127 | absolute_path = os.path.abspath(directory) | ||
1154 | 128 | while True: | ||
1155 | 129 | metadata_dir = os.path.join(absolute_path, METADATA_DIR_NAME) | ||
1156 | 130 | if os.path.exists(metadata_dir): | ||
1157 | 131 | break | ||
1158 | 132 | if absolute_path == "/": | ||
1159 | 133 | raise DirectoryNotInitializedError(directory) | ||
1160 | 134 | absolute_path = os.path.split(absolute_path)[0] | ||
1161 | 135 | |||
1162 | 136 | logging.info("Reading mirror metadata...") | ||
1163 | 137 | info = metadata.read(metadata_dir) | ||
1164 | 138 | |||
1165 | 139 | top_uuid, writable = client.get_root_info(info.share_uuid) | ||
1166 | 140 | |||
1167 | 141 | if info.root_uuid is None: | ||
1168 | 142 | info.root_uuid = client.resolve_path(info.share_uuid, top_uuid, | ||
1169 | 143 | info.path) | ||
1170 | 144 | |||
1171 | 145 | if action == 'auto': | ||
1172 | 146 | if writable: | ||
1173 | 147 | action = 'sync' | ||
1174 | 148 | else: | ||
1175 | 149 | action = 'download' | ||
1176 | 150 | merge_type, should_upload, should_download = MERGE_ACTIONS[action] | ||
1177 | 151 | if should_upload and not writable: | ||
1178 | 152 | raise ReadOnlyShareError(info.share_uuid) | ||
1179 | 153 | |||
1180 | 154 | logging.info("Scanning directory...") | ||
1181 | 155 | |||
1182 | 156 | local_tree = scan_directory(absolute_path, quiet=quiet) | ||
1183 | 157 | |||
1184 | 158 | logging.info("Fetching metadata...") | ||
1185 | 159 | |||
1186 | 160 | remote_tree = client.build_tree(info.share_uuid, info.root_uuid) | ||
1187 | 161 | if not quiet: | ||
1188 | 162 | show_tree(remote_tree) | ||
1189 | 163 | |||
1190 | 164 | logging.info("Merging trees...") | ||
1191 | 165 | merged_tree = merge_trees(old_local_tree=info.local_tree, | ||
1192 | 166 | local_tree=local_tree, | ||
1193 | 167 | old_remote_tree=info.remote_tree, | ||
1194 | 168 | remote_tree=remote_tree, | ||
1195 | 169 | merge_action=merge_type()) | ||
1196 | 170 | if not quiet: | ||
1197 | 171 | show_tree(merged_tree) | ||
1198 | 172 | |||
1199 | 173 | logging.info("Syncing content...") | ||
1200 | 174 | if should_download: | ||
1201 | 175 | info.local_tree = download_tree(merged_tree=merged_tree, | ||
1202 | 176 | local_tree=local_tree, | ||
1203 | 177 | client=client, | ||
1204 | 178 | share_uuid=info.share_uuid, | ||
1205 | 179 | path=absolute_path, dry_run=dry_run, | ||
1206 | 180 | quiet=quiet) | ||
1207 | 181 | else: | ||
1208 | 182 | info.local_tree = local_tree | ||
1209 | 183 | if should_upload: | ||
1210 | 184 | info.remote_tree = upload_tree(merged_tree=merged_tree, | ||
1211 | 185 | remote_tree=remote_tree, | ||
1212 | 186 | client=client, | ||
1213 | 187 | share_uuid=info.share_uuid, | ||
1214 | 188 | path=absolute_path, dry_run=dry_run, | ||
1215 | 189 | quiet=quiet) | ||
1216 | 190 | else: | ||
1217 | 191 | info.remote_tree = remote_tree | ||
1218 | 192 | |||
1219 | 193 | if not dry_run: | ||
1220 | 194 | logging.info("Updating mirror metadata...") | ||
1221 | 195 | metadata.write(metadata_dir, info) | ||
1222 | 196 | |||
1223 | 197 | logging.info("Done.") | ||
1224 | 198 | |||
1225 | 199 | def do_list_shares(client): | ||
1226 | 200 | """Lists available (incoming) shares.""" | ||
1227 | 201 | shares = client.get_incoming_shares() | ||
1228 | 202 | for (name, id, user, accepted, access) in shares: | ||
1229 | 203 | if not accepted: | ||
1230 | 204 | status = " [not accepted]" | ||
1231 | 205 | else: | ||
1232 | 206 | status = "" | ||
1233 | 207 | name = name.encode("utf-8") | ||
1234 | 208 | user = user.encode("utf-8") | ||
1235 | 209 | print "%s %s (from %s) [%s]%s" % (id, name, user, access, status) | ||
1236 | 210 | |||
1237 | 211 | def do_diff(client, share_spec, directory, quiet, subtree_path, | ||
1238 | 212 | ignore_symlinks=True): | ||
1239 | 213 | """Diffs a local directory with the server.""" | ||
1240 | 214 | if share_spec is not None: | ||
1241 | 215 | share_uuid = client.find_volume(share_spec) | ||
1242 | 216 | else: | ||
1243 | 217 | share_uuid = None | ||
1244 | 218 | if subtree_path is None: | ||
1245 | 219 | subtree_path = '/' | ||
1246 | 220 | # pylint: disable-msg=W0612 | ||
1247 | 221 | root_uuid, writable = client.get_root_info(share_uuid) | ||
1248 | 222 | subtree_uuid = client.resolve_path(share_uuid, root_uuid, subtree_path) | ||
1249 | 223 | local_tree = scan_directory(directory, quiet=True) | ||
1250 | 224 | remote_tree = client.build_tree(share_uuid, subtree_uuid) | ||
1251 | 225 | |||
1252 | 226 | def pre_merge(nodes, name, partial_parent): | ||
1253 | 227 | """Compares nodes and prints differences.""" | ||
1254 | 228 | (local_node, remote_node) = nodes | ||
1255 | 229 | # pylint: disable-msg=W0612 | ||
1256 | 230 | (parent_display_path, parent_differs) = partial_parent | ||
1257 | 231 | display_path = os.path.join(parent_display_path, name.encode("UTF-8")) | ||
1258 | 232 | differs = True | ||
1259 | 233 | if local_node is None: | ||
1260 | 234 | logging.info("%s missing from client", display_path) | ||
1261 | 235 | elif remote_node is None: | ||
1262 | 236 | if ignore_symlinks and local_node.node_type == SYMLINK: | ||
1263 | 237 | differs = False | ||
1264 | 238 | logging.info("%s missing from server", display_path) | ||
1265 | 239 | elif local_node.node_type != remote_node.node_type: | ||
1266 | 240 | local_type = node_type_str(local_node.node_type) | ||
1267 | 241 | remote_type = node_type_str(remote_node.node_type) | ||
1268 | 242 | logging.info("%s node types differ (client: %s, server: %s)", | ||
1269 | 243 | display_path, local_type, remote_type) | ||
1270 | 244 | elif local_node.node_type != DIRECTORY and \ | ||
1271 | 245 | local_node.content_hash != remote_node.content_hash: | ||
1272 | 246 | local_content = local_node.content_hash | ||
1273 | 247 | remote_content = remote_node.content_hash | ||
1274 | 248 | logging.info("%s has different content (client: %s, server: %s)", | ||
1275 | 249 | display_path, local_content, remote_content) | ||
1276 | 250 | else: | ||
1277 | 251 | differs = False | ||
1278 | 252 | return (display_path, differs) | ||
1279 | 253 | |||
1280 | 254 | def post_merge(nodes, partial_result, child_results): | ||
1281 | 255 | """Aggregates 'differs' flags.""" | ||
1282 | 256 | # pylint: disable-msg=W0612 | ||
1283 | 257 | (display_path, differs) = partial_result | ||
1284 | 258 | return differs or any(child_results.itervalues()) | ||
1285 | 259 | |||
1286 | 260 | differs = generic_merge(trees=[local_tree, remote_tree], | ||
1287 | 261 | pre_merge=pre_merge, post_merge=post_merge, | ||
1288 | 262 | partial_parent=("", False), name=u"") | ||
1289 | 263 | if differs: | ||
1290 | 264 | raise TreesDiffer(quiet=quiet) | ||
1291 | 265 | |||
1292 | 266 | def do_main(argv, options_parser): | ||
1293 | 267 | """The main user-facing portion of the script.""" | ||
1294 | 268 | # parse the arguments so that the options are populated before use | ||
1295 | 269 | options_parser.get_options(argv) | ||
1296 | 270 | client = Client(realm=options_parser.options.realm, reactor=reactor) | ||
1297 | 271 | # set the logging level to INFO when the user did not pass --quiet | ||
1298 | 272 | if not options_parser.options.quiet: | ||
1299 | 273 | logging.basicConfig(level=logging.INFO) | ||
1300 | 274 | |||
1301 | 275 | signal.signal(signal.SIGINT, lambda s, f: client.force_shutdown()) | ||
1302 | 276 | signal.signal(signal.SIGTERM, lambda s, f: client.force_shutdown()) | ||
1303 | 277 | |||
1304 | 278 | def run_client(): | ||
1305 | 279 | """Run the blocking client.""" | ||
1306 | 280 | token = options_parser.options.token | ||
1307 | 281 | |||
1308 | 282 | client.connect_ssl(options_parser.options.host, | ||
1309 | 283 | int(options_parser.options.port), | ||
1310 | 284 | options_parser.options.no_ssl_verify) | ||
1311 | 285 | |||
1312 | 286 | try: | ||
1313 | 287 | client.set_capabilities() | ||
1314 | 288 | client.oauth_from_token(options_parser.options.token) | ||
1315 | 289 | |||
1316 | 290 | if options_parser.options.mode == "sync": | ||
1317 | 291 | do_sync(client=client, directory=options_parser.options.directory, | ||
1318 | 292 | action=options_parser.options.action, | ||
1319 | 293 | dry_run=options_parser.options.dry_run, | ||
1320 | 294 | quiet=options_parser.options.quiet) | ||
1321 | 295 | elif options_parser.options.mode == "init": | ||
1322 | 296 | do_init(client=client, share_spec=options_parser.options.share, | ||
1323 | 297 | directory=options_parser.options.directory, | ||
1324 | 298 | quiet=options_parser.options.quiet, subtree_path=options_parser.options.subtree) | ||
1325 | 299 | elif options_parser.options.mode == "list-shares": | ||
1326 | 300 | do_list_shares(client=client) | ||
1327 | 301 | elif options_parser.options.mode == "diff": | ||
1328 | 302 | do_diff(client=client, share_spec=options_parser.options.share, | ||
1329 | 303 | directory=options_parser.options.directory, | ||
1330 | 304 | quiet=options_parser.options.quiet, | ||
1331 | 305 | subtree_path=options_parser.options.subtree, | ||
1332 | 306 | ignore_symlinks=False) | ||
1333 | 307 | elif options_parser.options.mode == "authorize": | ||
1334 | 308 | if not options_parser.options.quiet: | ||
1335 | 309 | print "Authorized." | ||
1336 | 310 | finally: | ||
1337 | 311 | client.disconnect() | ||
1338 | 312 | |||
1339 | 313 | def capture_exception(queue, func): | ||
1340 | 314 | """Capture the exception from calling func.""" | ||
1341 | 315 | try: | ||
1342 | 316 | func() | ||
1343 | 317 | except Exception: | ||
1344 | 318 | queue.put(sys.exc_info()) | ||
1345 | 319 | else: | ||
1346 | 320 | queue.put(None) | ||
1347 | 321 | finally: | ||
1348 | 322 | reactor.callWhenRunning(reactor.stop) | ||
1349 | 323 | |||
1350 | 324 | queue = Queue() | ||
1351 | 325 | reactor.callInThread(capture_exception, queue, run_client) | ||
1352 | 326 | reactor.run(installSignalHandlers=False) | ||
1353 | 327 | exc_info = queue.get(True, 0.1) | ||
1354 | 328 | if exc_info: | ||
1355 | 329 | raise exc_info[0], exc_info[1], exc_info[2] | ||
1356 | 330 | |||
1357 | 331 | def main(argv): | ||
1358 | 332 | """Top-level main function.""" | ||
1359 | 333 | try: | ||
1360 | 334 | do_main(argv, UbuntuOneOptionsParser()) | ||
1361 | 335 | except AuthenticationError, e: | ||
1362 | 336 | print "Authentication failed: %s" % e | ||
1363 | 337 | except ConnectionError, e: | ||
1364 | 338 | print "Connection failed: %s" % e | ||
1365 | 339 | except DirectoryNotInitializedError: | ||
1366 | 340 | print "Directory not initialized; " \ | ||
1367 | 341 | "use --init [DIRECTORY] to initialize it." | ||
1368 | 342 | except DirectoryAlreadyInitializedError: | ||
1369 | 343 | print "Directory already initialized." | ||
1370 | 344 | except NoSuchShareError: | ||
1371 | 345 | print "No matching share found." | ||
1372 | 346 | except ReadOnlyShareError: | ||
1373 | 347 | print "The selected action isn't possible on a read-only share." | ||
1374 | 348 | except (ForcedShutdown, KeyboardInterrupt): | ||
1375 | 349 | print "Interrupted!" | ||
1376 | 350 | except TreesDiffer, e: | ||
1377 | 351 | if not e.quiet: | ||
1378 | 352 | print "Trees differ." | ||
1379 | 353 | else: | ||
1380 | 354 | return 0 | ||
1381 | 355 | return 1 | ||
1382 | 356 | |||
1383 | 357 | if __name__ == "__main__": | ||
1384 | 358 | # set the name of the process; this only works on Windows | ||
1385 | 359 | sys.argv[0] = "Ubuntuone" | ||
1386 | 360 | main(sys.argv[1:]) | ||
1387 | 0 | \ No newline at end of file | 361 | \ No newline at end of file |
1388 | 1 | 362 | ||
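The `do_main` function above runs the blocking protocol client on the Twisted reactor's thread pool, funnels any exception back through a Queue, stops the reactor, and re-raises the exception in the main thread so the top-level handlers in `main` can print a friendly message. A minimal, self-contained sketch of that pattern (Python 2, assuming Twisted is installed; `do_work` is a hypothetical stand-in for `run_client`):

    import sys
    from Queue import Queue
    from twisted.internet import reactor

    def run_blocking(do_work):
        """Run do_work in the reactor thread pool and re-raise its exception here."""
        queue = Queue()

        def capture_exception():
            """Record the worker's outcome and ask the reactor to stop."""
            try:
                do_work()
            except Exception:
                queue.put(sys.exc_info())
            else:
                queue.put(None)
            finally:
                reactor.callWhenRunning(reactor.stop)

        reactor.callInThread(capture_exception)
        reactor.run(installSignalHandlers=False)
        exc_info = queue.get(True, 0.1)
        if exc_info:
            raise exc_info[0], exc_info[1], exc_info[2]

By the time `reactor.run` returns, the worker thread has already queued its result, which is why the short timeout on `queue.get` is safe.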
1389 | === added file 'src/u1sync/merge.py' | |||
1390 | --- src/u1sync/merge.py 1970-01-01 00:00:00 +0000 | |||
1391 | +++ src/u1sync/merge.py 2010-08-27 14:56:43 +0000 | |||
1392 | @@ -0,0 +1,186 @@ | |||
1393 | 1 | # ubuntuone.u1sync.merge | ||
1394 | 2 | # | ||
1395 | 3 | # Tree state merging | ||
1396 | 4 | # | ||
1397 | 5 | # Author: Tim Cole <tim.cole@canonical.com> | ||
1398 | 6 | # | ||
1399 | 7 | # Copyright 2009 Canonical Ltd. | ||
1400 | 8 | # | ||
1401 | 9 | # This program is free software: you can redistribute it and/or modify it | ||
1402 | 10 | # under the terms of the GNU General Public License version 3, as published | ||
1403 | 11 | # by the Free Software Foundation. | ||
1404 | 12 | # | ||
1405 | 13 | # This program is distributed in the hope that it will be useful, but | ||
1406 | 14 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
1407 | 15 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
1408 | 16 | # PURPOSE. See the GNU General Public License for more details. | ||
1409 | 17 | # | ||
1410 | 18 | # You should have received a copy of the GNU General Public License along | ||
1411 | 19 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
1412 | 20 | """Code for merging changes between modified trees.""" | ||
1413 | 21 | |||
1414 | 22 | from __future__ import with_statement | ||
1415 | 23 | |||
1416 | 24 | import os | ||
1417 | 25 | |||
1418 | 26 | from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY | ||
1419 | 27 | from u1sync.genericmerge import ( | ||
1420 | 28 | MergeNode, generic_merge) | ||
1421 | 29 | import uuid | ||
1422 | 30 | |||
1423 | 31 | |||
1424 | 32 | class NodeTypeMismatchError(Exception): | ||
1425 | 33 | """Node types don't match.""" | ||
1426 | 34 | |||
1427 | 35 | |||
1428 | 36 | def merge_trees(old_local_tree, local_tree, old_remote_tree, remote_tree, | ||
1429 | 37 | merge_action): | ||
1430 | 38 | """Performs a tree merge using the given merge action.""" | ||
1431 | 39 | |||
1432 | 40 | def pre_merge(nodes, name, partial_parent): | ||
1433 | 41 | """Accumulates path and determines merged node type.""" | ||
1434 | 42 | old_local_node, local_node, old_remote_node, remote_node = nodes | ||
1435 | 43 | # pylint: disable-msg=W0612 | ||
1436 | 44 | (parent_path, parent_type) = partial_parent | ||
1437 | 45 | path = os.path.join(parent_path, name.encode("utf-8")) | ||
1438 | 46 | node_type = merge_action.get_node_type(old_local_node=old_local_node, | ||
1439 | 47 | local_node=local_node, | ||
1440 | 48 | old_remote_node=old_remote_node, | ||
1441 | 49 | remote_node=remote_node, | ||
1442 | 50 | path=path) | ||
1443 | 51 | return (path, node_type) | ||
1444 | 52 | |||
1445 | 53 | def post_merge(nodes, partial_result, child_results): | ||
1446 | 54 | """Drops deleted children and merges node.""" | ||
1447 | 55 | old_local_node, local_node, old_remote_node, remote_node = nodes | ||
1448 | 56 | # pylint: disable-msg=W0612 | ||
1449 | 57 | (path, node_type) = partial_result | ||
1450 | 58 | if node_type == DIRECTORY: | ||
1451 | 59 | merged_children = dict([(name, child) for (name, child) | ||
1452 | 60 | in child_results.iteritems() | ||
1453 | 61 | if child is not None]) | ||
1454 | 62 | else: | ||
1455 | 63 | merged_children = None | ||
1456 | 64 | return merge_action.merge_node(old_local_node=old_local_node, | ||
1457 | 65 | local_node=local_node, | ||
1458 | 66 | old_remote_node=old_remote_node, | ||
1459 | 67 | remote_node=remote_node, | ||
1460 | 68 | node_type=node_type, | ||
1461 | 69 | merged_children=merged_children) | ||
1462 | 70 | |||
1463 | 71 | return generic_merge(trees=[old_local_tree, local_tree, | ||
1464 | 72 | old_remote_tree, remote_tree], | ||
1465 | 73 | pre_merge=pre_merge, post_merge=post_merge, | ||
1466 | 74 | name=u"", partial_parent=("", None)) | ||
1467 | 75 | |||
1468 | 76 | |||
1469 | 77 | class SyncMerge(object): | ||
1470 | 78 | """Performs a bidirectional sync merge.""" | ||
1471 | 79 | |||
1472 | 80 | def get_node_type(self, old_local_node, local_node, | ||
1473 | 81 | old_remote_node, remote_node, path): | ||
1474 | 82 | """Requires that all node types match.""" | ||
1475 | 83 | node_type = None | ||
1476 | 84 | for node in (old_local_node, local_node, remote_node): | ||
1477 | 85 | if node is not None: | ||
1478 | 86 | if node_type is not None: | ||
1479 | 87 | if node.node_type != node_type: | ||
1480 | 88 | message = "Node types don't match for %s" % path | ||
1481 | 89 | raise NodeTypeMismatchError(message) | ||
1482 | 90 | else: | ||
1483 | 91 | node_type = node.node_type | ||
1484 | 92 | return node_type | ||
1485 | 93 | |||
1486 | 94 | def merge_node(self, old_local_node, local_node, | ||
1487 | 95 | old_remote_node, remote_node, node_type, merged_children): | ||
1488 | 96 | """Performs bidirectional merge of node state.""" | ||
1489 | 97 | |||
1490 | 98 | def node_content_hash(node): | ||
1491 | 99 | """Returns node content hash if node is not None""" | ||
1492 | 100 | return node.content_hash if node is not None else None | ||
1493 | 101 | |||
1494 | 102 | old_local_content_hash = node_content_hash(old_local_node) | ||
1495 | 103 | local_content_hash = node_content_hash(local_node) | ||
1496 | 104 | old_remote_content_hash = node_content_hash(old_remote_node) | ||
1497 | 105 | remote_content_hash = node_content_hash(remote_node) | ||
1498 | 106 | |||
1499 | 107 | locally_deleted = old_local_node is not None and local_node is None | ||
1500 | 108 | deleted_on_server = old_remote_node is not None and remote_node is None | ||
1501 | 109 | # updated means modified or created | ||
1502 | 110 | locally_updated = not locally_deleted and \ | ||
1503 | 111 | old_local_content_hash != local_content_hash | ||
1504 | 112 | updated_on_server = not deleted_on_server and \ | ||
1505 | 113 | old_remote_content_hash != remote_content_hash | ||
1506 | 114 | |||
1507 | 115 | has_merged_children = merged_children is not None and \ | ||
1508 | 116 | len(merged_children) > 0 | ||
1509 | 117 | |||
1510 | 118 | either_node_exists = local_node is not None or remote_node is not None | ||
1511 | 119 | should_delete = (locally_deleted and not updated_on_server) or \ | ||
1512 | 120 | (deleted_on_server and not locally_updated) | ||
1513 | 121 | |||
1514 | 122 | if (either_node_exists and not should_delete) or has_merged_children: | ||
1515 | 123 | if node_type != DIRECTORY and \ | ||
1516 | 124 | locally_updated and updated_on_server and \ | ||
1517 | 125 | local_content_hash != remote_content_hash: | ||
1518 | 126 | # local_content_hash will become the merged content_hash; | ||
1519 | 127 | # save remote_content_hash in conflict info | ||
1520 | 128 | conflict_info = (str(uuid.uuid4()), remote_content_hash) | ||
1521 | 129 | else: | ||
1522 | 130 | conflict_info = None | ||
1523 | 131 | node_uuid = remote_node.uuid if remote_node is not None else None | ||
1524 | 132 | if locally_updated: | ||
1525 | 133 | content_hash = local_content_hash or remote_content_hash | ||
1526 | 134 | else: | ||
1527 | 135 | content_hash = remote_content_hash or local_content_hash | ||
1528 | 136 | return MergeNode(node_type=node_type, uuid=node_uuid, | ||
1529 | 137 | children=merged_children, content_hash=content_hash, | ||
1530 | 138 | conflict_info=conflict_info) | ||
1531 | 139 | else: | ||
1532 | 140 | return None | ||
1533 | 141 | |||
1534 | 142 | |||
1535 | 143 | class ClobberServerMerge(object): | ||
1536 | 144 | """Clobber server to match local state.""" | ||
1537 | 145 | |||
1538 | 146 | def get_node_type(self, old_local_node, local_node, | ||
1539 | 147 | old_remote_node, remote_node, path): | ||
1540 | 148 | """Return local node type.""" | ||
1541 | 149 | if local_node is not None: | ||
1542 | 150 | return local_node.node_type | ||
1543 | 151 | else: | ||
1544 | 152 | return None | ||
1545 | 153 | |||
1546 | 154 | def merge_node(self, old_local_node, local_node, | ||
1547 | 155 | old_remote_node, remote_node, node_type, merged_children): | ||
1548 | 156 | """Copy local node and associate with remote uuid (if applicable).""" | ||
1549 | 157 | if local_node is None: | ||
1550 | 158 | return None | ||
1551 | 159 | if remote_node is not None: | ||
1552 | 160 | node_uuid = remote_node.uuid | ||
1553 | 161 | else: | ||
1554 | 162 | node_uuid = None | ||
1555 | 163 | return MergeNode(node_type=local_node.node_type, uuid=node_uuid, | ||
1556 | 164 | content_hash=local_node.content_hash, | ||
1557 | 165 | children=merged_children) | ||
1558 | 166 | |||
1559 | 167 | |||
1560 | 168 | class ClobberLocalMerge(object): | ||
1561 | 169 | """Clobber local state to match server.""" | ||
1562 | 170 | |||
1563 | 171 | def get_node_type(self, old_local_node, local_node, | ||
1564 | 172 | old_remote_node, remote_node, path): | ||
1565 | 173 | """Return remote node type.""" | ||
1566 | 174 | if remote_node is not None: | ||
1567 | 175 | return remote_node.node_type | ||
1568 | 176 | else: | ||
1569 | 177 | return None | ||
1570 | 178 | |||
1571 | 179 | def merge_node(self, old_local_node, local_node, | ||
1572 | 180 | old_remote_node, remote_node, node_type, merged_children): | ||
1573 | 181 | """Copy the remote node.""" | ||
1574 | 182 | if remote_node is None: | ||
1575 | 183 | return None | ||
1576 | 184 | return MergeNode(node_type=node_type, uuid=remote_node.uuid, | ||
1577 | 185 | content_hash=remote_node.content_hash, | ||
1578 | 186 | children=merged_children) | ||
1579 | 0 | 187 | ||
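All three merge strategies above expose the same two methods, `get_node_type` and `merge_node`, so `merge_trees` stays agnostic of the policy it applies. A small sketch of driving the four-tree merge directly (it assumes this branch and the ubuntuone storage protocol are importable; the file names and hashes are invented):

    from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY, FILE
    from u1sync.genericmerge import MergeNode
    from u1sync.merge import merge_trees, SyncMerge

    empty = MergeNode(node_type=DIRECTORY, children={})
    local = MergeNode(node_type=DIRECTORY, children={
        u"notes.txt": MergeNode(node_type=FILE, content_hash="sha1:aaa"),
    })
    remote = MergeNode(node_type=DIRECTORY, children={
        u"todo.txt": MergeNode(node_type=FILE, content_hash="sha1:bbb"),
    })

    # Neither side has seen the other's file before, so SyncMerge keeps both:
    # the merged tree contains notes.txt and todo.txt with no conflict info.
    merged = merge_trees(old_local_tree=empty, local_tree=local,
                         old_remote_tree=empty, remote_tree=remote,
                         merge_action=SyncMerge())
    print sorted(merged.children.keys())

Swapping `SyncMerge()` for `ClobberServerMerge()` or `ClobberLocalMerge()` changes only the policy object; the traversal in `merge_trees` is unchanged.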
1580 | === added file 'src/u1sync/metadata.py' | |||
1581 | --- src/u1sync/metadata.py 1970-01-01 00:00:00 +0000 | |||
1582 | +++ src/u1sync/metadata.py 2010-08-27 14:56:43 +0000 | |||
1583 | @@ -0,0 +1,145 @@ | |||
1584 | 1 | # ubuntuone.u1sync.metadata | ||
1585 | 2 | # | ||
1586 | 3 | # u1sync metadata routines | ||
1587 | 4 | # | ||
1588 | 5 | # Author: Tim Cole <tim.cole@canonical.com> | ||
1589 | 6 | # | ||
1590 | 7 | # Copyright 2009 Canonical Ltd. | ||
1591 | 8 | # | ||
1592 | 9 | # This program is free software: you can redistribute it and/or modify it | ||
1593 | 10 | # under the terms of the GNU General Public License version 3, as published | ||
1594 | 11 | # by the Free Software Foundation. | ||
1595 | 12 | # | ||
1596 | 13 | # This program is distributed in the hope that it will be useful, but | ||
1597 | 14 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
1598 | 15 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
1599 | 16 | # PURPOSE. See the GNU General Public License for more details. | ||
1600 | 17 | # | ||
1601 | 18 | # You should have received a copy of the GNU General Public License along | ||
1602 | 19 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
1603 | 20 | """Routines for loading/storing u1sync mirror metadata.""" | ||
1604 | 21 | |||
1605 | 22 | from __future__ import with_statement | ||
1606 | 23 | |||
1607 | 24 | import os | ||
1608 | 25 | import cPickle as pickle | ||
1609 | 26 | from errno import ENOENT | ||
1610 | 27 | from contextlib import contextmanager | ||
1611 | 28 | from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY | ||
1612 | 29 | from u1sync.merge import MergeNode | ||
1613 | 30 | from u1sync.utils import safe_unlink | ||
1614 | 31 | import uuid | ||
1615 | 32 | |||
1616 | 33 | class Metadata(object): | ||
1617 | 34 | """Object representing mirror metadata.""" | ||
1618 | 35 | def __init__(self, local_tree=None, remote_tree=None, share_uuid=None, | ||
1619 | 36 | root_uuid=None, path=None): | ||
1620 | 37 | """Populate fields.""" | ||
1621 | 38 | self.local_tree = local_tree | ||
1622 | 39 | self.remote_tree = remote_tree | ||
1623 | 40 | self.share_uuid = share_uuid | ||
1624 | 41 | self.root_uuid = root_uuid | ||
1625 | 42 | self.path = path | ||
1626 | 43 | |||
1627 | 44 | def read(metadata_dir): | ||
1628 | 45 | """Read metadata for a mirror rooted at directory.""" | ||
1629 | 46 | index_file = os.path.join(metadata_dir, "local-index") | ||
1630 | 47 | share_uuid_file = os.path.join(metadata_dir, "share-uuid") | ||
1631 | 48 | root_uuid_file = os.path.join(metadata_dir, "root-uuid") | ||
1632 | 49 | path_file = os.path.join(metadata_dir, "path") | ||
1633 | 50 | |||
1634 | 51 | index = read_pickle_file(index_file, {}) | ||
1635 | 52 | share_uuid = read_uuid_file(share_uuid_file) | ||
1636 | 53 | root_uuid = read_uuid_file(root_uuid_file) | ||
1637 | 54 | path = read_string_file(path_file, '/') | ||
1638 | 55 | |||
1639 | 56 | local_tree = index.get("tree", None) | ||
1640 | 57 | remote_tree = index.get("remote_tree", None) | ||
1641 | 58 | |||
1642 | 59 | if local_tree is None: | ||
1643 | 60 | local_tree = MergeNode(node_type=DIRECTORY, children={}) | ||
1644 | 61 | if remote_tree is None: | ||
1645 | 62 | remote_tree = MergeNode(node_type=DIRECTORY, children={}) | ||
1646 | 63 | |||
1647 | 64 | return Metadata(local_tree=local_tree, remote_tree=remote_tree, | ||
1648 | 65 | share_uuid=share_uuid, root_uuid=root_uuid, | ||
1649 | 66 | path=path) | ||
1650 | 67 | |||
1651 | 68 | def write(metadata_dir, info): | ||
1652 | 69 | """Writes all metadata for the mirror rooted at directory.""" | ||
1653 | 70 | share_uuid_file = os.path.join(metadata_dir, "share-uuid") | ||
1654 | 71 | root_uuid_file = os.path.join(metadata_dir, "root-uuid") | ||
1655 | 72 | index_file = os.path.join(metadata_dir, "local-index") | ||
1656 | 73 | path_file = os.path.join(metadata_dir, "path") | ||
1657 | 74 | if info.share_uuid is not None: | ||
1658 | 75 | write_uuid_file(share_uuid_file, info.share_uuid) | ||
1659 | 76 | else: | ||
1660 | 77 | safe_unlink(share_uuid_file) | ||
1661 | 78 | if info.root_uuid is not None: | ||
1662 | 79 | write_uuid_file(root_uuid_file, info.root_uuid) | ||
1663 | 80 | else: | ||
1664 | 81 | safe_unlink(root_uuid_file) | ||
1665 | 82 | write_string_file(path_file, info.path) | ||
1666 | 83 | write_pickle_file(index_file, {"tree": info.local_tree, | ||
1667 | 84 | "remote_tree": info.remote_tree}) | ||
1668 | 85 | |||
1669 | 86 | def write_pickle_file(filename, value): | ||
1670 | 87 | """Writes a pickled python object to a file.""" | ||
1671 | 88 | with atomic_update_file(filename) as stream: | ||
1672 | 89 | pickle.dump(value, stream, 2) | ||
1673 | 90 | |||
1674 | 91 | def write_string_file(filename, value): | ||
1675 | 92 | """Writes a string to a file with an added line feed, or | ||
1676 | 93 | deletes the file if value is None. | ||
1677 | 94 | """ | ||
1678 | 95 | if value is not None: | ||
1679 | 96 | with atomic_update_file(filename) as stream: | ||
1680 | 97 | stream.write(value) | ||
1681 | 98 | stream.write('\n') | ||
1682 | 99 | else: | ||
1683 | 100 | safe_unlink(filename) | ||
1684 | 101 | |||
1685 | 102 | def write_uuid_file(filename, value): | ||
1686 | 103 | """Writes a UUID to a file.""" | ||
1687 | 104 | write_string_file(filename, str(value)) | ||
1688 | 105 | |||
1689 | 106 | def read_pickle_file(filename, default_value=None): | ||
1690 | 107 | """Reads a pickled python object from a file.""" | ||
1691 | 108 | try: | ||
1692 | 109 | with open(filename, "rb") as stream: | ||
1693 | 110 | return pickle.load(stream) | ||
1694 | 111 | except IOError, e: | ||
1695 | 112 | if e.errno != ENOENT: | ||
1696 | 113 | raise | ||
1697 | 114 | return default_value | ||
1698 | 115 | |||
1699 | 116 | def read_string_file(filename, default_value=None): | ||
1700 | 117 | """Reads a string from a file, discarding the final character.""" | ||
1701 | 118 | try: | ||
1702 | 119 | with open(filename, "r") as stream: | ||
1703 | 120 | return stream.read()[:-1] | ||
1704 | 121 | except IOError, e: | ||
1705 | 122 | if e.errno != ENOENT: | ||
1706 | 123 | raise | ||
1707 | 124 | return default_value | ||
1708 | 125 | |||
1709 | 126 | def read_uuid_file(filename, default_value=None): | ||
1710 | 127 | """Reads a UUID from a file.""" | ||
1711 | 128 | try: | ||
1712 | 129 | with open(filename, "r") as stream: | ||
1713 | 130 | return uuid.UUID(stream.read()[:-1]) | ||
1714 | 131 | except IOError, e: | ||
1715 | 132 | if e.errno != ENOENT: | ||
1716 | 133 | raise | ||
1717 | 134 | return default_value | ||
1718 | 135 | |||
1719 | 136 | @contextmanager | ||
1720 | 137 | def atomic_update_file(filename): | ||
1721 | 138 | """Returns a context manager for atomically updating a file.""" | ||
1722 | 139 | temp_filename = "%s.%s" % (filename, uuid.uuid4()) | ||
1723 | 140 | try: | ||
1724 | 141 | with open(temp_filename, "w") as stream: | ||
1725 | 142 | yield stream | ||
1726 | 143 | os.rename(temp_filename, filename) | ||
1727 | 144 | finally: | ||
1728 | 145 | safe_unlink(temp_filename) | ||
1729 | 0 | 146 | ||
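Every write in this module goes through `atomic_update_file`: content lands in a uniquely named temporary file that is then renamed over the real one, so an interrupted write never leaves behind a truncated index or UUID file. One point worth noting for the Windows port (an observation, not something changed in this branch): `os.rename` on Windows raises `OSError` when the destination already exists, so updating an existing metadata file may need a remove-then-rename fallback along these lines:

    import os

    def replace_file(temp_filename, filename):
        """Move temp_filename over filename, tolerating Windows rename semantics."""
        try:
            os.rename(temp_filename, filename)
        except OSError:
            # Windows refuses to rename over an existing file; drop the old
            # copy and retry (strict atomicity is lost on that platform).
            os.remove(filename)
            os.rename(temp_filename, filename)

The helper name and the fallback are illustrative only; the branch currently relies on plain `os.rename`.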
1730 | === added file 'src/u1sync/scan.py' | |||
1731 | --- src/u1sync/scan.py 1970-01-01 00:00:00 +0000 | |||
1732 | +++ src/u1sync/scan.py 2010-08-27 14:56:43 +0000 | |||
1733 | @@ -0,0 +1,102 @@ | |||
1734 | 1 | # ubuntuone.u1sync.scan | ||
1735 | 2 | # | ||
1736 | 3 | # Directory scanning | ||
1737 | 4 | # | ||
1738 | 5 | # Author: Tim Cole <tim.cole@canonical.com> | ||
1739 | 6 | # | ||
1740 | 7 | # Copyright 2009 Canonical Ltd. | ||
1741 | 8 | # | ||
1742 | 9 | # This program is free software: you can redistribute it and/or modify it | ||
1743 | 10 | # under the terms of the GNU General Public License version 3, as published | ||
1744 | 11 | # by the Free Software Foundation. | ||
1745 | 12 | # | ||
1746 | 13 | # This program is distributed in the hope that it will be useful, but | ||
1747 | 14 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
1748 | 15 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
1749 | 16 | # PURPOSE. See the GNU General Public License for more details. | ||
1750 | 17 | # | ||
1751 | 18 | # You should have received a copy of the GNU General Public License along | ||
1752 | 19 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
1753 | 20 | """Code for scanning local directory state.""" | ||
1754 | 21 | |||
1755 | 22 | from __future__ import with_statement | ||
1756 | 23 | |||
1757 | 24 | import os | ||
1758 | 25 | import hashlib | ||
1759 | 26 | import shutil | ||
1760 | 27 | from errno import ENOTDIR, EINVAL | ||
1761 | 28 | import sys | ||
1762 | 29 | |||
1763 | 30 | EMPTY_HASH = "sha1:%s" % hashlib.sha1().hexdigest() | ||
1764 | 31 | |||
1765 | 32 | from ubuntuone.storageprotocol.dircontent_pb2 import \ | ||
1766 | 33 | DIRECTORY, FILE, SYMLINK | ||
1767 | 34 | from u1sync.genericmerge import MergeNode | ||
1768 | 35 | from u1sync.utils import should_sync | ||
1769 | 36 | |||
1770 | 37 | def scan_directory(path, display_path="", quiet=False): | ||
1771 | 38 | """Scans a local directory and builds an in-memory tree from it.""" | ||
1772 | 39 | if display_path != "" and not quiet: | ||
1773 | 40 | print display_path | ||
1774 | 41 | |||
1775 | 42 | link_target = None | ||
1776 | 43 | child_names = None | ||
1777 | 44 | try: | ||
1778 | 45 | print "Path is " + str(path) | ||
1779 | 46 | if sys.platform == "win32": | ||
1780 | 47 | if path.endswith(".lnk") or path.endswith(".url"): | ||
1781 | 48 | import win32com.client | ||
1782 | 49 | import pythoncom | ||
1783 | 50 | pythoncom.CoInitialize() | ||
1784 | 51 | shell = win32com.client.Dispatch("WScript.Shell") | ||
1785 | 52 | shortcut = shell.CreateShortCut(path) | ||
1786 | 53 | print(shortcut.Targetpath) | ||
1787 | 54 | link_target = shortcut.Targetpath | ||
1788 | 55 | else: | ||
1789 | 56 | link_target = None | ||
1790 | 57 | if os.path.isdir(path): | ||
1791 | 58 | child_names = os.listdir(path) | ||
1792 | 59 | else: | ||
1793 | 60 | link_target = os.readlink(path) | ||
1794 | 61 | except OSError, e: | ||
1795 | 62 | if e.errno != EINVAL: | ||
1796 | 63 | raise | ||
1797 | 64 | try: | ||
1798 | 65 | child_names = os.listdir(path) | ||
1799 | 66 | except OSError, e: | ||
1800 | 67 | if e.errno != ENOTDIR: | ||
1801 | 68 | raise | ||
1802 | 69 | |||
1803 | 70 | if link_target is not None: | ||
1804 | 71 | # symlink | ||
1805 | 72 | sum = hashlib.sha1() | ||
1806 | 73 | sum.update(link_target) | ||
1807 | 74 | content_hash = "sha1:%s" % sum.hexdigest() | ||
1808 | 75 | return MergeNode(node_type=SYMLINK, content_hash=content_hash) | ||
1809 | 76 | elif child_names is not None: | ||
1810 | 77 | # directory | ||
1811 | 78 | child_names = [n for n in child_names if should_sync(n.decode("utf-8"))] | ||
1812 | 79 | child_paths = [(os.path.join(path, child_name), | ||
1813 | 80 | os.path.join(display_path, child_name)) \ | ||
1814 | 81 | for child_name in child_names] | ||
1815 | 82 | children = [scan_directory(child_path, child_display_path, quiet) \ | ||
1816 | 83 | for (child_path, child_display_path) in child_paths] | ||
1817 | 84 | unicode_child_names = [n.decode("utf-8") for n in child_names] | ||
1818 | 85 | children = dict(zip(unicode_child_names, children)) | ||
1819 | 86 | return MergeNode(node_type=DIRECTORY, children=children) | ||
1820 | 87 | else: | ||
1821 | 88 | # regular file | ||
1822 | 89 | sum = hashlib.sha1() | ||
1823 | 90 | |||
1824 | 91 | |||
1825 | 92 | class HashStream(object): | ||
1826 | 93 | """Stream that computes hashes.""" | ||
1827 | 94 | def write(self, bytes): | ||
1828 | 95 | """Accumulate bytes.""" | ||
1829 | 96 | sum.update(bytes) | ||
1830 | 97 | |||
1831 | 98 | |||
1832 | 99 | with open(path, "rb") as stream: | ||
1833 | 100 | shutil.copyfileobj(stream, HashStream()) | ||
1834 | 101 | content_hash = "sha1:%s" % sum.hexdigest() | ||
1835 | 102 | return MergeNode(node_type=FILE, content_hash=content_hash) | ||
1836 | 0 | 103 | ||
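`scan_directory` builds the local `MergeNode` tree: directories recurse over their filtered children, Windows `.lnk`/`.url` shortcuts are resolved through `WScript.Shell` and treated like symlinks, and everything else is hashed into the `"sha1:<hexdigest>"` form the server compares against. The `HashStream`/`copyfileobj` pairing above is equivalent to a plain chunked read, sketched here as a standalone helper (the function name and chunk size are arbitrary):

    import hashlib

    def file_content_hash(path, chunk_size=64 * 1024):
        """Compute the same "sha1:<hexdigest>" string scan.py derives for a file."""
        digest = hashlib.sha1()
        with open(path, "rb") as stream:
            chunk = stream.read(chunk_size)
            while chunk:
                digest.update(chunk)
                chunk = stream.read(chunk_size)
        return "sha1:%s" % digest.hexdigest()

Reading in binary mode matters on Windows, where text mode would translate line endings and change the digest.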
1837 | === added file 'src/u1sync/sync.py' | |||
1838 | --- src/u1sync/sync.py 1970-01-01 00:00:00 +0000 | |||
1839 | +++ src/u1sync/sync.py 2010-08-27 14:56:43 +0000 | |||
1840 | @@ -0,0 +1,384 @@ | |||
1841 | 1 | # ubuntuone.u1sync.sync | ||
1842 | 2 | # | ||
1843 | 3 | # State update | ||
1844 | 4 | # | ||
1845 | 5 | # Author: Tim Cole <tim.cole@canonical.com> | ||
1846 | 6 | # | ||
1847 | 7 | # Copyright 2009 Canonical Ltd. | ||
1848 | 8 | # | ||
1849 | 9 | # This program is free software: you can redistribute it and/or modify it | ||
1850 | 10 | # under the terms of the GNU General Public License version 3, as published | ||
1851 | 11 | # by the Free Software Foundation. | ||
1852 | 12 | # | ||
1853 | 13 | # This program is distributed in the hope that it will be useful, but | ||
1854 | 14 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
1855 | 15 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
1856 | 16 | # PURPOSE. See the GNU General Public License for more details. | ||
1857 | 17 | # | ||
1858 | 18 | # You should have received a copy of the GNU General Public License along | ||
1859 | 19 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
1860 | 20 | """After merging, these routines are used to synchronize state locally and on | ||
1861 | 21 | the server to correspond to the merged result.""" | ||
1862 | 22 | |||
1863 | 23 | from __future__ import with_statement | ||
1864 | 24 | |||
1865 | 25 | import os | ||
1866 | 26 | |||
1867 | 27 | EMPTY_HASH = "" | ||
1868 | 28 | UPLOAD_SYMBOL = u"\u25b2".encode("utf-8") | ||
1869 | 29 | DOWNLOAD_SYMBOL = u"\u25bc".encode("utf-8") | ||
1870 | 30 | CONFLICT_SYMBOL = "!" | ||
1871 | 31 | DELETE_SYMBOL = "X" | ||
1872 | 32 | |||
1873 | 33 | from ubuntuone.storageprotocol import request | ||
1874 | 34 | from ubuntuone.storageprotocol.dircontent_pb2 import ( | ||
1875 | 35 | DIRECTORY, SYMLINK) | ||
1876 | 36 | from u1sync.genericmerge import ( | ||
1877 | 37 | MergeNode, generic_merge) | ||
1878 | 38 | from u1sync.utils import safe_mkdir | ||
1879 | 39 | from u1sync.client import UnsupportedOperationError | ||
1880 | 40 | |||
1881 | 41 | def get_conflict_path(path, conflict_info): | ||
1882 | 42 | """Returns path for conflict file corresponding to path.""" | ||
1883 | 43 | dir, name = os.path.split(path) | ||
1884 | 44 | unique_id = conflict_info[0] | ||
1885 | 45 | return os.path.join(dir, "conflict-%s-%s" % (unique_id, name)) | ||
1886 | 46 | |||
1887 | 47 | def name_from_path(path): | ||
1888 | 48 | """Returns unicode name from last path component.""" | ||
1889 | 49 | return os.path.split(path)[1].decode("UTF-8") | ||
1890 | 50 | |||
1891 | 51 | |||
1892 | 52 | class NodeSyncError(Exception): | ||
1893 | 53 | """Error syncing node.""" | ||
1894 | 54 | |||
1895 | 55 | |||
1896 | 56 | class NodeCreateError(NodeSyncError): | ||
1897 | 57 | """Error creating node.""" | ||
1898 | 58 | |||
1899 | 59 | |||
1900 | 60 | class NodeUpdateError(NodeSyncError): | ||
1901 | 61 | """Error updating node.""" | ||
1902 | 62 | |||
1903 | 63 | |||
1904 | 64 | class NodeDeleteError(NodeSyncError): | ||
1905 | 65 | """Error deleting node.""" | ||
1906 | 66 | |||
1907 | 67 | |||
1908 | 68 | def sync_tree(merged_tree, original_tree, sync_mode, path, quiet): | ||
1909 | 69 | """Performs actual synchronization.""" | ||
1910 | 70 | |||
1911 | 71 | def pre_merge(nodes, name, partial_parent): | ||
1912 | 72 | """Create nodes and write content as required.""" | ||
1913 | 73 | (merged_node, original_node) = nodes | ||
1914 | 74 | # pylint: disable-msg=W0612 | ||
1915 | 75 | (parent_path, parent_display_path, parent_uuid, parent_synced) \ | ||
1916 | 76 | = partial_parent | ||
1917 | 77 | |||
1918 | 78 | utf8_name = name.encode("utf-8") | ||
1919 | 79 | path = os.path.join(parent_path, utf8_name) | ||
1920 | 80 | display_path = os.path.join(parent_display_path, utf8_name) | ||
1921 | 81 | node_uuid = None | ||
1922 | 82 | |||
1923 | 83 | synced = False | ||
1924 | 84 | if merged_node is not None: | ||
1925 | 85 | if merged_node.node_type == DIRECTORY: | ||
1926 | 86 | if original_node is not None: | ||
1927 | 87 | synced = True | ||
1928 | 88 | node_uuid = original_node.uuid | ||
1929 | 89 | else: | ||
1930 | 90 | if not quiet: | ||
1931 | 91 | print "%s %s" % (sync_mode.symbol, display_path) | ||
1932 | 92 | try: | ||
1933 | 93 | create_dir = sync_mode.create_directory | ||
1934 | 94 | node_uuid = create_dir(parent_uuid=parent_uuid, | ||
1935 | 95 | path=path) | ||
1936 | 96 | synced = True | ||
1937 | 97 | except NodeCreateError, e: | ||
1938 | 98 | print e | ||
1939 | 99 | elif merged_node.content_hash is None: | ||
1940 | 100 | if not quiet: | ||
1941 | 101 | print "? %s" % display_path | ||
1942 | 102 | elif original_node is None or \ | ||
1943 | 103 | original_node.content_hash != merged_node.content_hash or \ | ||
1944 | 104 | merged_node.conflict_info is not None: | ||
1945 | 105 | conflict_info = merged_node.conflict_info | ||
1946 | 106 | if conflict_info is not None: | ||
1947 | 107 | conflict_symbol = CONFLICT_SYMBOL | ||
1948 | 108 | else: | ||
1949 | 109 | conflict_symbol = " " | ||
1950 | 110 | if not quiet: | ||
1951 | 111 | print "%s %s %s" % (sync_mode.symbol, conflict_symbol, | ||
1952 | 112 | display_path) | ||
1953 | 113 | if original_node is not None: | ||
1954 | 114 | node_uuid = original_node.uuid or merged_node.uuid | ||
1955 | 115 | original_hash = original_node.content_hash or EMPTY_HASH | ||
1956 | 116 | else: | ||
1957 | 117 | node_uuid = merged_node.uuid | ||
1958 | 118 | original_hash = EMPTY_HASH | ||
1959 | 119 | try: | ||
1960 | 120 | sync_mode.write_file(node_uuid=node_uuid, | ||
1961 | 121 | content_hash= | ||
1962 | 122 | merged_node.content_hash, | ||
1963 | 123 | old_content_hash=original_hash, | ||
1964 | 124 | path=path, | ||
1965 | 125 | parent_uuid=parent_uuid, | ||
1966 | 126 | conflict_info=conflict_info, | ||
1967 | 127 | node_type=merged_node.node_type) | ||
1968 | 128 | synced = True | ||
1969 | 129 | except NodeSyncError, e: | ||
1970 | 130 | print e | ||
1971 | 131 | else: | ||
1972 | 132 | synced = True | ||
1973 | 133 | |||
1974 | 134 | return (path, display_path, node_uuid, synced) | ||
1975 | 135 | |||
1976 | 136 | def post_merge(nodes, partial_result, child_results): | ||
1977 | 137 | """Delete nodes.""" | ||
1978 | 138 | (merged_node, original_node) = nodes | ||
1979 | 139 | # pylint: disable-msg=W0612 | ||
1980 | 140 | (path, display_path, node_uuid, synced) = partial_result | ||
1981 | 141 | |||
1982 | 142 | if merged_node is None: | ||
1983 | 143 | assert original_node is not None | ||
1984 | 144 | if not quiet: | ||
1985 | 145 | print "%s %s %s" % (sync_mode.symbol, DELETE_SYMBOL, | ||
1986 | 146 | display_path) | ||
1987 | 147 | try: | ||
1988 | 148 | if original_node.node_type == DIRECTORY: | ||
1989 | 149 | sync_mode.delete_directory(node_uuid=original_node.uuid, | ||
1990 | 150 | path=path) | ||
1991 | 151 | else: | ||
1992 | 152 | # files or symlinks | ||
1993 | 153 | sync_mode.delete_file(node_uuid=original_node.uuid, | ||
1994 | 154 | path=path) | ||
1995 | 155 | synced = True | ||
1996 | 156 | except NodeDeleteError, e: | ||
1997 | 157 | print e | ||
1998 | 158 | |||
1999 | 159 | if synced: | ||
2000 | 160 | model_node = merged_node | ||
2001 | 161 | else: | ||
2002 | 162 | model_node = original_node | ||
2003 | 163 | |||
2004 | 164 | if model_node is not None: | ||
2005 | 165 | if model_node.node_type == DIRECTORY: | ||
2006 | 166 | child_iter = child_results.iteritems() | ||
2007 | 167 | merged_children = dict([(name, child) for (name, child) | ||
2008 | 168 | in child_iter | ||
2009 | 169 | if child is not None]) | ||
2010 | 170 | else: | ||
2011 | 171 | # if there are children here it's because they failed to delete | ||
2012 | 172 | merged_children = None | ||
2013 | 173 | return MergeNode(node_type=model_node.node_type, | ||
2014 | 174 | uuid=model_node.uuid, | ||
2015 | 175 | children=merged_children, | ||
2016 | 176 | content_hash=model_node.content_hash) | ||
2017 | 177 | else: | ||
2018 | 178 | return None | ||
2019 | 179 | |||
2020 | 180 | return generic_merge(trees=[merged_tree, original_tree], | ||
2021 | 181 | pre_merge=pre_merge, post_merge=post_merge, | ||
2022 | 182 | partial_parent=(path, "", None, True), name=u"") | ||
2023 | 183 | |||
2024 | 184 | def download_tree(merged_tree, local_tree, client, share_uuid, path, dry_run, | ||
2025 | 185 | quiet): | ||
2026 | 186 | """Downloads a directory.""" | ||
2027 | 187 | if dry_run: | ||
2028 | 188 | downloader = DryRun(symbol=DOWNLOAD_SYMBOL) | ||
2029 | 189 | else: | ||
2030 | 190 | downloader = Downloader(client=client, share_uuid=share_uuid) | ||
2031 | 191 | return sync_tree(merged_tree=merged_tree, original_tree=local_tree, | ||
2032 | 192 | sync_mode=downloader, path=path, quiet=quiet) | ||
2033 | 193 | |||
2034 | 194 | def upload_tree(merged_tree, remote_tree, client, share_uuid, path, dry_run, | ||
2035 | 195 | quiet): | ||
2036 | 196 | """Uploads a directory.""" | ||
2037 | 197 | if dry_run: | ||
2038 | 198 | uploader = DryRun(symbol=UPLOAD_SYMBOL) | ||
2039 | 199 | else: | ||
2040 | 200 | uploader = Uploader(client=client, share_uuid=share_uuid) | ||
2041 | 201 | return sync_tree(merged_tree=merged_tree, original_tree=remote_tree, | ||
2042 | 202 | sync_mode=uploader, path=path, quiet=quiet) | ||
2043 | 203 | |||
2044 | 204 | |||
2045 | 205 | class DryRun(object): | ||
2046 | 206 | """A class which implements the sync interface but does nothing.""" | ||
2047 | 207 | def __init__(self, symbol): | ||
2048 | 208 | """Initializes a DryRun instance.""" | ||
2049 | 209 | self.symbol = symbol | ||
2050 | 210 | |||
2051 | 211 | def create_directory(self, parent_uuid, path): | ||
2052 | 212 | """Doesn't create a directory.""" | ||
2053 | 213 | return None | ||
2054 | 214 | |||
2055 | 215 | def write_file(self, node_uuid, old_content_hash, content_hash, | ||
2056 | 216 | parent_uuid, path, conflict_info, node_type): | ||
2057 | 217 | """Doesn't write a file.""" | ||
2058 | 218 | return None | ||
2059 | 219 | |||
2060 | 220 | def delete_directory(self, node_uuid, path): | ||
2061 | 221 | """Doesn't delete a directory.""" | ||
2062 | 222 | |||
2063 | 223 | def delete_file(self, node_uuid, path): | ||
2064 | 224 | """Doesn't delete a file.""" | ||
2065 | 225 | |||
2066 | 226 | |||
2067 | 227 | class Downloader(object): | ||
2068 | 228 | """A class which implements the download half of syncing.""" | ||
2069 | 229 | def __init__(self, client, share_uuid): | ||
2070 | 230 | """Initializes a Downloader instance.""" | ||
2071 | 231 | self.client = client | ||
2072 | 232 | self.share_uuid = share_uuid | ||
2073 | 233 | self.symbol = DOWNLOAD_SYMBOL | ||
2074 | 234 | |||
2075 | 235 | def create_directory(self, parent_uuid, path): | ||
2076 | 236 | """Creates a directory.""" | ||
2077 | 237 | try: | ||
2078 | 238 | safe_mkdir(path) | ||
2079 | 239 | except OSError, e: | ||
2080 | 240 | raise NodeCreateError("Error creating local directory %s: %s" % \ | ||
2081 | 241 | (path, e)) | ||
2082 | 242 | return None | ||
2083 | 243 | |||
2084 | 244 | def write_file(self, node_uuid, old_content_hash, content_hash, | ||
2085 | 245 | parent_uuid, path, conflict_info, node_type): | ||
2086 | 246 | """Creates a file and downloads new content for it.""" | ||
2087 | 247 | if conflict_info: | ||
2088 | 248 | # download to conflict file rather than overwriting local changes | ||
2089 | 249 | path = get_conflict_path(path, conflict_info) | ||
2090 | 250 | content_hash = conflict_info[1] | ||
2091 | 251 | try: | ||
2092 | 252 | if node_type == SYMLINK: | ||
2093 | 253 | self.client.download_string(share_uuid= | ||
2094 | 254 | self.share_uuid, | ||
2095 | 255 | node_uuid=node_uuid, | ||
2096 | 256 | content_hash=content_hash) | ||
2097 | 257 | else: | ||
2098 | 258 | self.client.download_file(share_uuid=self.share_uuid, | ||
2099 | 259 | node_uuid=node_uuid, | ||
2100 | 260 | content_hash=content_hash, | ||
2101 | 261 | filename=path) | ||
2102 | 262 | except (request.StorageRequestError, UnsupportedOperationError), e: | ||
2103 | 263 | if os.path.exists(path): | ||
2104 | 264 | raise NodeUpdateError("Error downloading content for %s: %s" %\ | ||
2105 | 265 | (path, e)) | ||
2106 | 266 | else: | ||
2107 | 267 | raise NodeCreateError("Error locally creating %s: %s" % \ | ||
2108 | 268 | (path, e)) | ||
2109 | 269 | |||
2110 | 270 | def delete_directory(self, node_uuid, path): | ||
2111 | 271 | """Deletes a directory.""" | ||
2112 | 272 | try: | ||
2113 | 273 | os.rmdir(path) | ||
2114 | 274 | except OSError, e: | ||
2115 | 275 | raise NodeDeleteError("Error locally deleting %s: %s" % (path, e)) | ||
2116 | 276 | |||
2117 | 277 | def delete_file(self, node_uuid, path): | ||
2118 | 278 | """Deletes a file.""" | ||
2119 | 279 | try: | ||
2120 | 280 | os.unlink(path) | ||
2121 | 281 | except OSError, e: | ||
2122 | 282 | raise NodeDeleteError("Error locally deleting %s: %s" % (path, e)) | ||
2123 | 283 | |||
2124 | 284 | |||
2125 | 285 | class Uploader(object): | ||
2126 | 286 | """A class which implements the upload half of syncing.""" | ||
2127 | 287 | def __init__(self, client, share_uuid): | ||
2128 | 288 | """Initializes an uploader instance.""" | ||
2129 | 289 | self.client = client | ||
2130 | 290 | self.share_uuid = share_uuid | ||
2131 | 291 | self.symbol = UPLOAD_SYMBOL | ||
2132 | 292 | |||
2133 | 293 | def create_directory(self, parent_uuid, path): | ||
2134 | 294 | """Creates a directory on the server.""" | ||
2135 | 295 | name = name_from_path(path) | ||
2136 | 296 | try: | ||
2137 | 297 | return self.client.create_directory(share_uuid=self.share_uuid, | ||
2138 | 298 | parent_uuid=parent_uuid, | ||
2139 | 299 | name=name) | ||
2140 | 300 | except (request.StorageRequestError, UnsupportedOperationError), e: | ||
2141 | 301 | raise NodeCreateError("Error remotely creating %s: %s" % \ | ||
2142 | 302 | (path, e)) | ||
2143 | 303 | |||
2144 | 304 | def write_file(self, node_uuid, old_content_hash, content_hash, | ||
2145 | 305 | parent_uuid, path, conflict_info, node_type): | ||
2146 | 306 | """Creates a file on the server and uploads new content for it.""" | ||
2147 | 307 | |||
2148 | 308 | if conflict_info: | ||
2149 | 309 | # move conflicting file out of the way on the server | ||
2150 | 310 | conflict_path = get_conflict_path(path, conflict_info) | ||
2151 | 311 | conflict_name = name_from_path(conflict_path) | ||
2152 | 312 | try: | ||
2153 | 313 | self.client.move(share_uuid=self.share_uuid, | ||
2154 | 314 | parent_uuid=parent_uuid, | ||
2155 | 315 | name=conflict_name, | ||
2156 | 316 | node_uuid=node_uuid) | ||
2157 | 317 | except (request.StorageRequestError, UnsupportedOperationError), e: | ||
2158 | 318 | raise NodeUpdateError("Error remotely renaming %s to %s: %s" %\ | ||
2159 | 319 | (path, conflict_path, e)) | ||
2160 | 320 | node_uuid = None | ||
2161 | 321 | old_content_hash = EMPTY_HASH | ||
2162 | 322 | |||
2163 | 323 | if node_type == SYMLINK: | ||
2164 | 324 | try: | ||
2165 | 325 | target = os.readlink(path) | ||
2166 | 326 | except OSError, e: | ||
2167 | 327 | raise NodeCreateError("Error retrieving link target " \ | ||
2168 | 328 | "for %s: %s" % (path, e)) | ||
2169 | 329 | else: | ||
2170 | 330 | target = None | ||
2171 | 331 | |||
2172 | 332 | name = name_from_path(path) | ||
2173 | 333 | if node_uuid is None: | ||
2174 | 334 | try: | ||
2175 | 335 | if node_type == SYMLINK: | ||
2176 | 336 | node_uuid = self.client.create_symlink(share_uuid= | ||
2177 | 337 | self.share_uuid, | ||
2178 | 338 | parent_uuid= | ||
2179 | 339 | parent_uuid, | ||
2180 | 340 | name=name, | ||
2181 | 341 | target=target) | ||
2182 | 342 | old_content_hash = content_hash | ||
2183 | 343 | else: | ||
2184 | 344 | node_uuid = self.client.create_file(share_uuid= | ||
2185 | 345 | self.share_uuid, | ||
2186 | 346 | parent_uuid= | ||
2187 | 347 | parent_uuid, | ||
2188 | 348 | name=name) | ||
2189 | 349 | except (request.StorageRequestError, UnsupportedOperationError), e: | ||
2190 | 350 | raise NodeCreateError("Error remotely creating %s: %s" % \ | ||
2191 | 351 | (path, e)) | ||
2192 | 352 | |||
2193 | 353 | if old_content_hash != content_hash: | ||
2194 | 354 | try: | ||
2195 | 355 | if node_type == SYMLINK: | ||
2196 | 356 | self.client.upload_string(share_uuid=self.share_uuid, | ||
2197 | 357 | node_uuid=node_uuid, | ||
2198 | 358 | content_hash=content_hash, | ||
2199 | 359 | old_content_hash= | ||
2200 | 360 | old_content_hash, | ||
2201 | 361 | content=target) | ||
2202 | 362 | else: | ||
2203 | 363 | self.client.upload_file(share_uuid=self.share_uuid, | ||
2204 | 364 | node_uuid=node_uuid, | ||
2205 | 365 | content_hash=content_hash, | ||
2206 | 366 | old_content_hash=old_content_hash, | ||
2207 | 367 | filename=path) | ||
2208 | 368 | except (request.StorageRequestError, UnsupportedOperationError), e: | ||
2209 | 369 | raise NodeUpdateError("Error uploading content for %s: %s" % \ | ||
2210 | 370 | (path, e)) | ||
2211 | 371 | |||
2212 | 372 | def delete_directory(self, node_uuid, path): | ||
2213 | 373 | """Deletes a directory.""" | ||
2214 | 374 | try: | ||
2215 | 375 | self.client.unlink(share_uuid=self.share_uuid, node_uuid=node_uuid) | ||
2216 | 376 | except (request.StorageRequestError, UnsupportedOperationError), e: | ||
2217 | 377 | raise NodeDeleteError("Error remotely deleting %s: %s" % (path, e)) | ||
2218 | 378 | |||
2219 | 379 | def delete_file(self, node_uuid, path): | ||
2220 | 380 | """Deletes a file.""" | ||
2221 | 381 | try: | ||
2222 | 382 | self.client.unlink(share_uuid=self.share_uuid, node_uuid=node_uuid) | ||
2223 | 383 | except (request.StorageRequestError, UnsupportedOperationError), e: | ||
2224 | 384 | raise NodeDeleteError("Error remotely deleting %s: %s" % (path, e)) | ||
2225 | 0 | 385 | ||
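`Downloader`, `Uploader` and `DryRun` are interchangeable because `sync_tree` only relies on the shared surface: a `symbol` attribute plus `create_directory`, `write_file`, `delete_directory` and `delete_file`. Conflicting edits are never merged in place; the losing copy is parked under a `conflict-<uuid>-<name>` sibling, as this small example of `get_conflict_path` shows (assuming the branch is importable; the UUID is an arbitrary example value):

    import os
    from u1sync.sync import get_conflict_path

    conflict_info = ("9f1c3c52-1111-4222-8333-444455556666", "sha1:remotehash")
    print get_conflict_path(os.path.join("docs", "report.txt"), conflict_info)
    # prints docs/conflict-9f1c3c52-1111-4222-8333-444455556666-report.txt
    # (with backslashes instead of slashes when run on Windows)

On download the conflict copy is written locally; on upload the server-side file is first moved aside to the conflict name before the new content is uploaded.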
2226 | === added file 'src/u1sync/ubuntuone_optparse.py' | |||
2227 | --- src/u1sync/ubuntuone_optparse.py 1970-01-01 00:00:00 +0000 | |||
2228 | +++ src/u1sync/ubuntuone_optparse.py 2010-08-27 14:56:43 +0000 | |||
2229 | @@ -0,0 +1,202 @@ | |||
2230 | 1 | # ubuntuone.u1sync.ubuntuone_optparse | ||
2231 | 2 | # | ||
2232 | 3 | # Prototype directory sync client | ||
2233 | 4 | # | ||
2234 | 5 | # Author: Manuel de la Pena <manuel.delapena@canonical.com> | ||
2235 | 6 | # | ||
2236 | 7 | # Copyright 2010 Canonical Ltd. | ||
2237 | 8 | # | ||
2238 | 9 | # This program is free software: you can redistribute it and/or modify it | ||
2239 | 10 | # under the terms of the GNU General Public License version 3, as published | ||
2240 | 11 | # by the Free Software Foundation. | ||
2241 | 12 | # | ||
2242 | 13 | # This program is distributed in the hope that it will be useful, but | ||
2243 | 14 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
2244 | 15 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
2245 | 16 | # PURPOSE. See the GNU General Public License for more details. | ||
2246 | 17 | # | ||
2247 | 18 | # You should have received a copy of the GNU General Public License along | ||
2248 | 19 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
2249 | 20 | import uuid | ||
2250 | 21 | from oauth.oauth import OAuthToken | ||
2251 | 22 | from optparse import OptionParser, SUPPRESS_HELP | ||
2252 | 23 | from u1sync.merge import ( | ||
2253 | 24 | SyncMerge, ClobberServerMerge, ClobberLocalMerge) | ||
2254 | 25 | |||
2255 | 26 | MERGE_ACTIONS = { | ||
2256 | 27 | # action: (merge_class, should_upload, should_download) | ||
2257 | 28 | 'sync': (SyncMerge, True, True), | ||
2258 | 29 | 'clobber-server': (ClobberServerMerge, True, False), | ||
2259 | 30 | 'clobber-local': (ClobberLocalMerge, False, True), | ||
2260 | 31 | 'upload': (SyncMerge, True, False), | ||
2261 | 32 | 'download': (SyncMerge, False, True), | ||
2262 | 33 | 'auto': None # special case | ||
2263 | 34 | } | ||
2264 | 35 | |||
2265 | 36 | DEFAULT_MERGE_ACTION = 'auto' | ||
2266 | 37 | |||
2267 | 38 | class NotParsedOptionsError(Exception): | ||
2268 | 39 | """Exception thrown when there options have not been parsed.""" | ||
2269 | 40 | |||
2270 | 41 | class NotValidatedOptionsError(Exception): | ||
2271 | 42 | """Exception thrown when the options have not been validated.""" | ||
2272 | 43 | |||
2273 | 44 | class UbuntuOneOptionsParser(OptionParser): | ||
2274 | 45 | """Parse for the options passed in the command line""" | ||
2275 | 46 | |||
2276 | 47 | def __init__(self): | ||
2277 | 48 | usage = "Usage: %prog [options] [DIRECTORY]\n" \ | ||
2278 | 49 | " %prog --authorize [options]\n" \ | ||
2279 | 50 | " %prog --list-shares [options]\n" \ | ||
2280 | 51 | " %prog --init [--share=SHARE_UUID] [options] DIRECTORY\n" \ | ||
2281 | 52 | " %prog --diff [--share=SHARE_UUID] [options] DIRECTORY" | ||
2282 | 53 | OptionParser.__init__(self, usage=usage) | ||
2283 | 54 | self._was_validated = False | ||
2284 | 55 | self._args = None | ||
2285 | 56 | # add the different options to be used | ||
2286 | 57 | self.add_option("--port", dest="port", metavar="PORT", | ||
2287 | 58 | default=443, | ||
2288 | 59 | help="The port on which to connect to the server") | ||
2289 | 60 | self.add_option("--host", dest="host", metavar="HOST", | ||
2290 | 61 | default='fs-1.one.ubuntu.com', | ||
2291 | 62 | help="The server address") | ||
2292 | 63 | self.add_option("--realm", dest="realm", metavar="REALM", | ||
2293 | 64 | default='https://ubuntuone.com', | ||
2294 | 65 | help="The oauth realm") | ||
2295 | 66 | self.add_option("--oauth", dest="oauth", metavar="KEY:SECRET", | ||
2296 | 67 | default=None, | ||
2297 | 68 | help="Explicitly provide OAuth credentials " | ||
2298 | 69 | "(default is to query keyring)") | ||
2299 | 70 | |||
2300 | 71 | action_list = ", ".join(sorted(MERGE_ACTIONS.keys())) | ||
2301 | 72 | self.add_option("--action", dest="action", metavar="ACTION", | ||
2302 | 73 | default=None, | ||
2303 | 74 | help="Select a sync action (%s; default is %s)" % \ | ||
2304 | 75 | (action_list, DEFAULT_MERGE_ACTION)) | ||
2305 | 76 | |||
2306 | 77 | self.add_option("--dry-run", action="store_true", dest="dry_run", | ||
2307 | 78 | default=False, help="Do a dry run without actually " | ||
2308 | 79 | "making changes") | ||
2309 | 80 | self.add_option("--quiet", action="store_true", dest="quiet", | ||
2310 | 81 | default=False, help="Produces less output") | ||
2311 | 82 | self.add_option("--authorize", action="store_const", dest="mode", | ||
2312 | 83 | const="authorize", | ||
2313 | 84 | help="Authorize this machine") | ||
2314 | 85 | self.add_option("--list-shares", action="store_const", dest="mode", | ||
2315 | 86 | const="list-shares", default="sync", | ||
2316 | 87 | help="List available shares") | ||
2317 | 88 | self.add_option("--init", action="store_const", dest="mode", | ||
2318 | 89 | const="init", | ||
2319 | 90 | help="Initialize a local directory for syncing") | ||
2320 | 91 | self.add_option("--no-ssl-verify", action="store_true", | ||
2321 | 92 | dest="no_ssl_verify", | ||
2322 | 93 | default=False, help=SUPPRESS_HELP) | ||
2323 | 94 | self.add_option("--diff", action="store_const", dest="mode", | ||
2324 | 95 | const="diff", | ||
2325 | 96 | help="Compare tree on server with local tree " \ | ||
2326 | 97 | "(does not require previous --init)") | ||
2327 | 98 | self.add_option("--share", dest="share", metavar="SHARE_UUID", | ||
2328 | 99 | default=None, | ||
2329 | 100 | help="Sync the directory with a share rather than the " \ | ||
2330 | 101 | "user's own volume") | ||
2331 | 102 | self.add_option("--subtree", dest="subtree", metavar="PATH", | ||
2332 | 103 | default=None, | ||
2333 | 104 | help="Mirror a subset of the share or volume") | ||
2334 | 105 | |||
2335 | 106 | def get_options(self, arguments): | ||
2336 | 107 | """Parses the arguments to from the command line.""" | ||
2337 | 108 | (self.options, self._args) = \ | ||
2338 | 109 | self.parse_args(arguments) | ||
2339 | 110 | self._validate_args() | ||
2340 | 111 | |||
2341 | 112 | def _validate_args(self): | ||
2342 | 113 | """Validates the args that have been parsed.""" | ||
2343 | 114 | self._is_only_share() | ||
2344 | 115 | self._get_directory() | ||
2345 | 116 | self._validate_action_usage() | ||
2346 | 117 | self._validate_authorize_usage() | ||
2347 | 118 | self._validate_subtree_usage() | ||
2348 | 119 | self._validate_action() | ||
2349 | 120 | self._validate_oauth() | ||
2350 | 121 | |||
2351 | 122 | def _is_only_share(self): | ||
2352 | 123 | """Ensures that the share options is not convined with any other.""" | ||
2353 | 124 | if self.options.share is not None and \ | ||
2354 | 125 | self.options.mode != "init" and \ | ||
2355 | 126 | self.options.mode != "diff": | ||
2356 | 127 | self.error("--share is only valid with --init or --diff") | ||
2357 | 128 | |||
2358 | 129 | def _get_directory(self): | ||
2359 | 130 | """Gets the directory to be used according to the paramenters.""" | ||
2360 | 131 | print self._args | ||
2361 | 132 | if self.options.mode == "sync" or self.options.mode == "init" or \ | ||
2362 | 133 | self.options.mode == "diff": | ||
2363 | 134 | if len(self._args) > 2: | ||
2364 | 135 | self.error("Too many arguments") | ||
2365 | 136 | elif len(self._args) < 1: | ||
2366 | 137 | if self.options.mode == "init" or self.options.mode == "diff": | ||
2367 | 138 | self.error("--%s requires a directory to " | ||
2368 | 139 | "be specified" % self.options.mode) | ||
2369 | 140 | else: | ||
2370 | 141 | self.options.directory = "." | ||
2371 | 142 | else: | ||
2372 | 143 | self.options.directory = self._args[0] | ||
2373 | 144 | |||
2374 | 145 | def _validate_action_usage(self): | ||
2375 | 146 | """Ensures that the --action option is correctly used""" | ||
2376 | 147 | if self.options.mode == "init" or \ | ||
2377 | 148 | self.options.mode == "list-shares" or \ | ||
2378 | 149 | self.options.mode == "diff" or \ | ||
2379 | 150 | self.options.mode == "authorize": | ||
2380 | 151 | if self.options.action is not None: | ||
2381 | 152 | self.error("--%s does not take the --action parameter" % \ | ||
2382 | 153 | self.options.mode) | ||
2383 | 154 | if self.options.dry_run: | ||
2384 | 155 | self.error("--%s does not take the --dry-run parameter" % \ | ||
2385 | 156 | self.options.mode) | ||
2386 | 157 | |||
2387 | 158 | def _validate_authorize_usage(self): | ||
2388 | 159 | """Validates the usage of the authorize option.""" | ||
2389 | 160 | if self.options.mode == "authorize": | ||
2390 | 161 | if self.options.oauth is not None: | ||
2391 | 162 | self.error("--authorize does not take the --oauth parameter") | ||
2392 | 163 | if self.options.mode == "list-shares" or \ | ||
2393 | 164 | self.options.mode == "authorize": | ||
2394 | 165 | if len(self._args) != 0: | ||
2395 | 166 | self.error("--list-shares does not take a directory") | ||
2396 | 167 | |||
2397 | 168 | def _validate_subtree_usage(self): | ||
2398 | 169 | """Validates the usage of the subtree option""" | ||
2399 | 170 | if self.options.mode != "init" and self.options.mode != "diff": | ||
2400 | 171 | if self.options.subtree is not None: | ||
2401 | 172 | self.error("--%s does not take the --subtree parameter" % \ | ||
2402 | 173 | self.options.mode) | ||
2403 | 174 | |||
2404 | 175 | def _validate_action(self): | ||
2405 | 176 | """Validates the actions passed to the options.""" | ||
2406 | 177 | if self.options.action is not None and \ | ||
2407 | 178 | self.options.action not in MERGE_ACTIONS: | ||
2408 | 179 | self.error("--action: Unknown action %s" % self.options.action) | ||
2409 | 180 | |||
2410 | 181 | if self.options.action is None: | ||
2411 | 182 | self.options.action = DEFAULT_MERGE_ACTION | ||
2412 | 183 | |||
2413 | 184 | def _validate_oauth(self): | ||
2414 | 185 | """Validates that the oatuh was passed.""" | ||
2415 | 186 | if self.options.oauth is None: | ||
2416 | 187 | self.error("--oauth is currently compulsery.") | ||
2417 | 188 | else: | ||
2418 | 189 | try: | ||
2419 | 190 | (key, secret) = self.options.oauth.split(':', 2) | ||
2420 | 191 | except ValueError: | ||
2421 | 192 | self.error("--oauth requires a key and secret together in the " | ||
2422 | 193 | " form KEY:SECRET") | ||
2423 | 194 | self.options.token = OAuthToken(key, secret) | ||
2424 | 195 | |||
2425 | 196 | def _validate_share(self): | ||
2426 | 197 | """Validates the share option""" | ||
2427 | 198 | if self.options.share is not None: | ||
2428 | 199 | try: | ||
2429 | 200 | uuid.UUID(self.options.share) | ||
2430 | 201 | except ValueError, e: | ||
2431 | 202 | self.error("Invalid --share argument: %s" % e) | ||
2432 | 0 | \ No newline at end of file | 203 | \ No newline at end of file |
2433 | 1 | 204 | ||
2434 | === added file 'src/u1sync/utils.py' | |||
2435 | --- src/u1sync/utils.py 1970-01-01 00:00:00 +0000 | |||
2436 | +++ src/u1sync/utils.py 2010-08-27 14:56:43 +0000 | |||
2437 | @@ -0,0 +1,50 @@ | |||
2438 | 1 | # ubuntuone.u1sync.utils | ||
2439 | 2 | # | ||
2440 | 3 | # Miscellaneous utility functions | ||
2441 | 4 | # | ||
2442 | 5 | # Author: Tim Cole <tim.cole@canonical.com> | ||
2443 | 6 | # | ||
2444 | 7 | # Copyright 2009 Canonical Ltd. | ||
2445 | 8 | # | ||
2446 | 9 | # This program is free software: you can redistribute it and/or modify it | ||
2447 | 10 | # under the terms of the GNU General Public License version 3, as published | ||
2448 | 11 | # by the Free Software Foundation. | ||
2449 | 12 | # | ||
2450 | 13 | # This program is distributed in the hope that it will be useful, but | ||
2451 | 14 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
2452 | 15 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
2453 | 16 | # PURPOSE. See the GNU General Public License for more details. | ||
2454 | 17 | # | ||
2455 | 18 | # You should have received a copy of the GNU General Public License along | ||
2456 | 19 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
2457 | 20 | """Miscellaneous utility functions.""" | ||
2458 | 21 | |||
2459 | 22 | import os | ||
2460 | 23 | from errno import EEXIST, ENOENT | ||
2461 | 24 | from u1sync.constants import ( | ||
2462 | 25 | METADATA_DIR_NAME, SPECIAL_FILE_RE) | ||
2463 | 26 | |||
2464 | 27 | def should_sync(filename): | ||
2465 | 28 | """Returns True if the filename should be synced. | ||
2466 | 29 | |||
2467 | 30 | @param filename: a unicode filename | ||
2468 | 31 | |||
2469 | 32 | """ | ||
2470 | 33 | return filename != METADATA_DIR_NAME and \ | ||
2471 | 34 | not SPECIAL_FILE_RE.match(filename) | ||
2472 | 35 | |||
2473 | 36 | def safe_mkdir(path): | ||
2474 | 37 | """Creates a directory iff it does not already exist.""" | ||
2475 | 38 | try: | ||
2476 | 39 | os.mkdir(path) | ||
2477 | 40 | except OSError, e: | ||
2478 | 41 | if e.errno != EEXIST: | ||
2479 | 42 | raise | ||
2480 | 43 | |||
2481 | 44 | def safe_unlink(path): | ||
2482 | 45 | """Unlinks a file iff it exists.""" | ||
2483 | 46 | try: | ||
2484 | 47 | os.unlink(path) | ||
2485 | 48 | except OSError, e: | ||
2486 | 49 | if e.errno != ENOENT: | ||
2487 | 50 | raise |
Tests Ok, build Ok!
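
As a side note for reviewers, the KEY:SECRET handling in _validate_oauth above reduces to the split shown below. This is only an illustration of the parsing rule, not code from the branch; the real method wraps the resulting pair in the OAuthToken imported earlier in the diff.

    # Illustration of how a "--oauth KEY:SECRET" value is split by _validate_oauth.
    value = "consumer_key:consumer_secret"
    try:
        (key, secret) = value.split(':', 2)
    except ValueError:
        # Raised when there is no ':' at all, or more than one, so the result
        # cannot be unpacked into exactly a key and a secret.
        raise SystemExit("--oauth requires a key and secret in the form KEY:SECRET")
    print key     # consumer_key
    print secret  # consumer_secret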
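
Similarly, here is a minimal usage sketch of the helpers added in src/u1sync/utils.py. It assumes only that the u1sync package from this branch is importable; the file and directory names are made up for the example.

    from u1sync.utils import safe_mkdir, safe_unlink, should_sync

    # safe_mkdir swallows EEXIST, so repeated calls on the same path are harmless.
    safe_mkdir("sandbox")
    safe_mkdir("sandbox")

    # safe_unlink swallows ENOENT, so removing an already-missing file is harmless.
    safe_unlink("sandbox/leftover.tmp")

    # should_sync is True for ordinary filenames and False for the metadata
    # directory and special files (the exact patterns come from u1sync.constants).
    print should_sync(u"photo.jpg")

Any OSError other than EEXIST or ENOENT is re-raised, which is what lets callers invoke these helpers unconditionally without hiding real failures.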