Merge lp:~mandel/ubuntuone-windows-installer/add_python_u1sync into lp:ubuntuone-windows-installer/beta
- add_python_u1sync
- Merge into beta
Proposed by
Manuel de la Peña
Status: Merged
Approved by: John Lenton
Approved revision: 56
Merged at revision: 72
Proposed branch: lp:~mandel/ubuntuone-windows-installer/add_python_u1sync
Merge into: lp:ubuntuone-windows-installer/beta
Diff against target: 2487 lines (+2397/-4), 14 files modified
  README.txt (+15/-2)
  main.build (+11/-2)
  src/setup.py (+56/-0)
  src/u1sync/__init__.py (+14/-0)
  src/u1sync/client.py (+754/-0)
  src/u1sync/constants.py (+30/-0)
  src/u1sync/genericmerge.py (+88/-0)
  src/u1sync/main.py (+360/-0)
  src/u1sync/merge.py (+186/-0)
  src/u1sync/metadata.py (+145/-0)
  src/u1sync/scan.py (+102/-0)
  src/u1sync/sync.py (+384/-0)
  src/u1sync/ubuntuone_optparse.py (+202/-0)
  src/u1sync/utils.py (+50/-0)
To merge this branch: bzr merge lp:~mandel/ubuntuone-windows-installer/add_python_u1sync
Related bugs:
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Rick McBride (community) | Approve | ||
Vincenzo Di Somma (community) | Approve | ||
Review via email: mp+33914@code.launchpad.net
Commit message
Description of the change
Modified u1sync to work on Windows and added its packaging with py2exe. This provides a temporary solution until we refactor certain parts of the syncdaemon.
Revision history for this message
Rick McBride (rmcbride):
review:
Approve
Preview Diff
1 | === modified file 'README.txt' |
2 | --- README.txt 2010-08-19 11:13:02 +0000 |
3 | +++ README.txt 2010-08-27 14:56:43 +0000 |
4 | @@ -7,8 +7,21 @@ |
5 | * UbuntuOne Windows Service: Windows service that allows starting, stopping, pausing and resuming the sync daemon. |
6 | * UbuntuOne Windows Service Broadcaster: Provides a WCF service hosted in a windows service that allows .Net languages |
7 | to communicate with the sync daemon and interact with it. |
8 | - |
9 | -2. Build |
10 | + |
11 | +2. Environment setup |
12 | + |
13 | +The ubuntuone windows port provides a port of the python code that is used on Linux to perform the u1 sync operations. We did not want |
14 | +users to have to hunt down the python runtime plus the different modules that are used, so to simplify the lives of |
15 | +windows users we have opted to use py2exe to create an executable that carries all the python dependencies of the code. As you |
16 | +may already know, py2exe is not perfect and does not support egg files. To make sure that easy_install does extract the egg files after |
17 | +the packages are installed, please use the following command: |
18 | + |
19 | +easy_install -Z %package_name% |
20 | + |
21 | +In order to build the solution you will need python, the win32 python extensions and the ubuntu one storage protocol installed on your system: |
22 | + |
23 | + |
24 | +3. Build |
25 | |
26 | In order to simplify the build as much as possible, all the required tools for the compilation of the project are provided in the source tree. The |
27 | compilation uses a nant project that allows running the following targets: |
28 | |
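Since py2exe cannot pull modules out of zipped .egg files, a quick way to check that a dependency was installed unzipped (as `easy_install -Z` does) is to look at where it imports from. A standalone sketch, not part of the branch:

```python
import os

def installed_unzipped(module_name):
    """Return True if module_name imports from a plain directory tree,
    i.e. not from a zipped .egg file (which py2exe cannot bundle)."""
    mod = __import__(module_name)
    path = getattr(mod, "__file__", "") or ""
    if ".egg" not in path:
        return True  # not egg-installed at all
    egg = path[:path.index(".egg") + len(".egg")]
    # easy_install -Z extracts the egg to a directory; a zipped
    # install leaves a single .egg file on disk instead
    return os.path.isdir(egg)
```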
29 | === modified file 'main.build' |
30 | --- main.build 2010-08-24 18:32:48 +0000 |
31 | +++ main.build 2010-08-27 14:56:43 +0000 |
32 | @@ -176,10 +176,19 @@ |
33 | program="nunit-console.exe" |
34 | commandline="UbuntuOneClient.Tests.dll /xml=../../../../test-results/UbuntuOneClient.Tests-Result.xml" /> |
35 | </target> |
36 | - |
37 | + <target name="package_python" |
38 | + description="Creates the exe binary that embeds the python libs that will be used to perform the sync operation in the windows platform"> |
39 | + |
40 | + <exec basedir="${python_path}" |
41 | + managed="true" |
42 | + workingdir="src" |
43 | + program="python.exe" |
44 | + commandline="setup.py py2exe" /> |
45 | + </target> |
46 | + |
47 | <target name="installer" |
48 | description="Compiles the solution and creates a merge installer that allows installing the solution and other related apps." |
49 | - depends="tests"> |
50 | + depends="tests, package_python"> |
51 | |
52 | <mkdir dir="${build_results}" /> |
53 | |
54 | |
55 | === added file 'src/setup.py' |
56 | --- src/setup.py 1970-01-01 00:00:00 +0000 |
57 | +++ src/setup.py 2010-08-27 14:56:43 +0000 |
58 | @@ -0,0 +1,56 @@ |
59 | +#!/usr/bin/env python |
60 | +# Copyright (C) 2010 Canonical - All Rights Reserved |
61 | + |
62 | +"""py2exe setup script that packages u1sync and its python dependencies into an executable.""" |
63 | +import sys |
64 | + |
65 | +# ModuleFinder can't handle runtime changes to __path__, but win32com uses them |
66 | +try: |
67 | + # py2exe 0.6.4 introduced a replacement modulefinder. |
68 | + # This means we have to add package paths there, not to the built-in |
69 | + # one. If this new modulefinder gets integrated into Python, then |
70 | + # we might be able to revert this some day. |
71 | + # if this doesn't work, try import modulefinder |
72 | + try: |
73 | + import py2exe.mf as modulefinder |
74 | + except ImportError: |
75 | + import modulefinder |
76 | + import win32com |
77 | + for p in win32com.__path__[1:]: |
78 | + modulefinder.AddPackagePath("win32com", p) |
79 | + for extra in ["win32com.shell"]: #,"win32com.mapi" |
80 | + __import__(extra) |
81 | + m = sys.modules[extra] |
82 | + for p in m.__path__[1:]: |
83 | + modulefinder.AddPackagePath(extra, p) |
84 | +except ImportError: |
85 | + # no build path setup, no worries. |
86 | + pass |
87 | + |
88 | +from distutils.core import setup |
89 | +import py2exe |
90 | + |
91 | +if __name__ == '__main__': |
92 | + |
93 | + setup( |
94 | + options = { |
95 | + "py2exe": { |
96 | + "compressed": 1, |
97 | + "optimize": 2}}, |
98 | + name='u1sync', |
99 | + version='0.0.1', |
100 | + author = "Canonical Online Services Hackers", |
101 | + description="""u1sync is a utility of Ubuntu One. |
102 | + Ubuntu One is a suite of on-line |
103 | + services. This package contains a synchronization client for the |
104 | + Ubuntu One file sharing service.""", |
105 | + license='GPLv3', |
106 | + console=['u1sync\\main.py'], |
107 | + requires=[ |
108 | + 'python (>= 2.5)', |
109 | + 'oauth', |
110 | + 'twisted.names', |
111 | + 'twisted.web', |
112 | + 'ubuntuone.storageprotocol (>= 1.3.0)', |
113 | + ], |
114 | + ) |
115 | |
116 | === added directory 'src/u1sync' |
117 | === added file 'src/u1sync/__init__.py' |
118 | --- src/u1sync/__init__.py 1970-01-01 00:00:00 +0000 |
119 | +++ src/u1sync/__init__.py 2010-08-27 14:56:43 +0000 |
120 | @@ -0,0 +1,14 @@ |
121 | +# Copyright 2009 Canonical Ltd. |
122 | +# |
123 | +# This program is free software: you can redistribute it and/or modify it |
124 | +# under the terms of the GNU General Public License version 3, as published |
125 | +# by the Free Software Foundation. |
126 | +# |
127 | +# This program is distributed in the hope that it will be useful, but |
128 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
129 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
130 | +# PURPOSE. See the GNU General Public License for more details. |
131 | +# |
132 | +# You should have received a copy of the GNU General Public License along |
133 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
134 | +"""The guts of the u1sync tool.""" |
135 | |
136 | === added file 'src/u1sync/client.py' |
137 | --- src/u1sync/client.py 1970-01-01 00:00:00 +0000 |
138 | +++ src/u1sync/client.py 2010-08-27 14:56:43 +0000 |
139 | @@ -0,0 +1,754 @@ |
140 | +# ubuntuone.u1sync.client |
141 | +# |
142 | +# Client/protocol end of u1sync |
143 | +# |
144 | +# Author: Lucio Torre <lucio.torre@canonical.com> |
145 | +# Author: Tim Cole <tim.cole@canonical.com> |
146 | +# |
147 | +# Copyright 2009 Canonical Ltd. |
148 | +# |
149 | +# This program is free software: you can redistribute it and/or modify it |
150 | +# under the terms of the GNU General Public License version 3, as published |
151 | +# by the Free Software Foundation. |
152 | +# |
153 | +# This program is distributed in the hope that it will be useful, but |
154 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
155 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
156 | +# PURPOSE. See the GNU General Public License for more details. |
157 | +# |
158 | +# You should have received a copy of the GNU General Public License along |
159 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
160 | +"""Pretty API for protocol client.""" |
161 | + |
162 | +from __future__ import with_statement |
163 | + |
164 | +import os |
165 | +import sys |
166 | +import shutil |
167 | +from Queue import Queue |
168 | +from threading import Lock |
169 | +import zlib |
170 | +import urlparse |
171 | +import ConfigParser |
172 | +from cStringIO import StringIO |
173 | +from twisted.internet import reactor, defer |
174 | +from twisted.internet.defer import inlineCallbacks, returnValue |
175 | +from ubuntuone.logger import LOGFOLDER |
176 | +from ubuntuone.storageprotocol.content_hash import crc32 |
177 | +from ubuntuone.storageprotocol.context import get_ssl_context |
178 | +from u1sync.genericmerge import MergeNode |
179 | +from u1sync.utils import should_sync |
180 | + |
181 | +CONSUMER_KEY = "ubuntuone" |
182 | + |
183 | +from oauth.oauth import OAuthConsumer |
184 | +from ubuntuone.storageprotocol.client import ( |
185 | + StorageClientFactory, StorageClient) |
186 | +from ubuntuone.storageprotocol import request, volumes |
187 | +from ubuntuone.storageprotocol.dircontent_pb2 import \ |
188 | + DirectoryContent, DIRECTORY |
189 | +import uuid |
190 | +import logging |
191 | +from logging.handlers import RotatingFileHandler |
192 | +import time |
193 | + |
194 | +def share_str(share_uuid): |
195 | + """Converts a share UUID to a form the protocol likes.""" |
196 | + return str(share_uuid) if share_uuid is not None else request.ROOT |
197 | + |
198 | +LOGFILENAME = os.path.join(LOGFOLDER, 'u1sync.log') |
199 | +u1_logger = logging.getLogger("u1sync.timing.log") |
200 | +handler = RotatingFileHandler(LOGFILENAME) |
201 | +u1_logger.addHandler(handler) |
202 | + |
203 | +def log_timing(func): |
204 | + def wrapper(*arg, **kwargs): |
205 | + start = time.time() |
206 | + ent = func(*arg, **kwargs) |
207 | + stop = time.time() |
208 | + u1_logger.debug('for %s %0.5f ms elapsed' % (func.func_name, \ |
209 | + (stop-start)*1000.0)) |
210 | + return ent |
211 | + return wrapper |
212 | + |
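The log_timing decorator above can be exercised on its own. This condensed version uses func.__name__ (portable across Python versions) in place of the py2-only func.func_name:

```python
import logging
import time

u1_logger = logging.getLogger("u1sync.timing.log")

def log_timing(func):
    """Log the elapsed wall-clock time of each call, in milliseconds."""
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        elapsed = (time.time() - start) * 1000.0
        u1_logger.debug('for %s %0.5f ms elapsed' % (func.__name__, elapsed))
        return result
    return wrapper

@log_timing
def add(a, b):
    """A trivial function to time."""
    return a + b
```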
213 | + |
214 | +class ForcedShutdown(Exception): |
215 | + """Client shutdown forced.""" |
216 | + |
217 | + |
218 | +class Waiter(object): |
219 | + """Wait object for blocking waits.""" |
220 | + |
221 | + def __init__(self): |
222 | + """Initializes the wait object.""" |
223 | + self.queue = Queue() |
224 | + |
225 | + def wake(self, result): |
226 | + """Wakes the waiter with a result.""" |
227 | + self.queue.put((result, None)) |
228 | + |
229 | + def wakeAndRaise(self, exc_info): |
230 | + """Wakes the waiter, raising the given exception in it.""" |
231 | + self.queue.put((None, exc_info)) |
232 | + |
233 | + def wakeWithResult(self, func, *args, **kw): |
234 | + """Wakes the waiter with the result of the given function.""" |
235 | + try: |
236 | + result = func(*args, **kw) |
237 | + except Exception: |
238 | + self.wakeAndRaise(sys.exc_info()) |
239 | + else: |
240 | + self.wake(result) |
241 | + |
242 | + def wait(self): |
243 | + """Waits for wakeup.""" |
244 | + (result, exc_info) = self.queue.get() |
245 | + if exc_info: |
246 | + try: |
247 | + raise exc_info[0], exc_info[1], exc_info[2] |
248 | + finally: |
249 | + exc_info = None |
250 | + else: |
251 | + return result |
252 | + |
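A condensed, standalone version of Waiter shows the intended cross-thread use: one thread blocks on wait() while another delivers the result (the py2 three-argument raise is simplified here to a plain raise):

```python
import threading
try:
    from queue import Queue   # Python 3
except ImportError:
    from Queue import Queue   # Python 2

class Waiter(object):
    """Blocking wait object (condensed from client.py above)."""

    def __init__(self):
        self.queue = Queue()

    def wake(self, result):
        """Wake the waiter with a result."""
        self.queue.put((result, None))

    def wakeAndRaise(self, exc):
        """Wake the waiter with an exception to raise."""
        self.queue.put((None, exc))

    def wait(self):
        """Block until woken; return the result or raise the exception."""
        result, exc = self.queue.get()
        if exc is not None:
            raise exc
        return result

# one thread produces the result while another blocks on wait()
w = Waiter()
threading.Thread(target=lambda: w.wake(42)).start()
result = w.wait()  # blocks briefly, then result == 42
```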
253 | + |
254 | +class SyncStorageClient(StorageClient): |
255 | + """Simple client that calls a callback on connection.""" |
256 | + |
257 | + @log_timing |
258 | + def connectionMade(self): |
259 | + """Setup and call callback.""" |
260 | + StorageClient.connectionMade(self) |
261 | + if self.factory.current_protocol not in (None, self): |
262 | + self.factory.current_protocol.transport.loseConnection() |
263 | + self.factory.current_protocol = self |
264 | + self.factory.observer.connected() |
265 | + |
266 | + @log_timing |
267 | + def connectionLost(self, reason=None): |
268 | + """Callback for established connection lost""" |
269 | + if self.factory.current_protocol is self: |
270 | + self.factory.current_protocol = None |
271 | + self.factory.observer.disconnected(reason) |
272 | + |
273 | + |
274 | +class SyncClientFactory(StorageClientFactory): |
275 | + """A cmd protocol factory.""" |
276 | + # no init: pylint: disable-msg=W0232 |
277 | + |
278 | + protocol = SyncStorageClient |
279 | + |
280 | + @log_timing |
281 | + def __init__(self, observer): |
282 | + """Create the factory""" |
283 | + self.observer = observer |
284 | + self.current_protocol = None |
285 | + |
286 | + @log_timing |
287 | + def clientConnectionFailed(self, connector, reason): |
288 | + """We failed at connecting.""" |
289 | + self.current_protocol = None |
290 | + self.observer.connection_failed(reason) |
291 | + |
292 | + |
293 | +class UnsupportedOperationError(Exception): |
294 | + """The operation is unsupported by the protocol version.""" |
295 | + |
296 | + |
297 | +class ConnectionError(Exception): |
298 | + """A connection error.""" |
299 | + |
300 | + |
301 | +class AuthenticationError(Exception): |
302 | + """An authentication error.""" |
303 | + |
304 | + |
305 | +class NoSuchShareError(Exception): |
306 | + """Error when there is no such share available.""" |
307 | + |
308 | + |
309 | +class CapabilitiesError(Exception): |
310 | + """A capabilities set/query related error.""" |
311 | + |
312 | +class Client(object): |
313 | + """U1 storage client facade.""" |
314 | + required_caps = frozenset(["no-content", "fix462230"]) |
315 | + |
316 | + def __init__(self, realm, reactor=reactor): |
317 | + """Create the instance.""" |
318 | + |
319 | + self.reactor = reactor |
320 | + self.factory = SyncClientFactory(self) |
321 | + |
322 | + self._status_lock = Lock() |
323 | + self._status = "disconnected" |
324 | + self._status_reason = None |
325 | + self._status_waiting = [] |
326 | + self._active_waiters = set() |
327 | + self.consumer_key = CONSUMER_KEY |
328 | + self.consumer_secret = "hammertime" |
329 | + |
330 | + def force_shutdown(self): |
331 | + """Forces the client to shut itself down.""" |
332 | + with self._status_lock: |
333 | + self._status = "forced_shutdown" |
334 | + self._status_reason = None |
335 | + for waiter in self._active_waiters: |
336 | + waiter.wakeAndRaise((ForcedShutdown("Forced shutdown"), |
337 | + None, None)) |
338 | + self._active_waiters.clear() |
339 | + |
340 | + def _get_waiter_locked(self): |
341 | + """Gets a wait object for blocking waits. Should be called with the |
342 | + status lock held. |
343 | + """ |
344 | + waiter = Waiter() |
345 | + if self._status == "forced_shutdown": |
346 | + raise ForcedShutdown("Forced shutdown") |
347 | + self._active_waiters.add(waiter) |
348 | + return waiter |
349 | + |
350 | + def _get_waiter(self): |
351 | + """Get a wait object for blocking waits. Acquires the status lock.""" |
352 | + with self._status_lock: |
353 | + return self._get_waiter_locked() |
354 | + |
355 | + def _wait(self, waiter): |
356 | + """Waits for the waiter.""" |
357 | + try: |
358 | + return waiter.wait() |
359 | + finally: |
360 | + with self._status_lock: |
361 | + if waiter in self._active_waiters: |
362 | + self._active_waiters.remove(waiter) |
363 | + |
364 | + @log_timing |
365 | + def _change_status(self, status, reason=None): |
366 | + """Changes the client status. Usually called from the reactor |
367 | + thread. |
368 | + |
369 | + """ |
370 | + with self._status_lock: |
371 | + if self._status == "forced_shutdown": |
372 | + return |
373 | + self._status = status |
374 | + self._status_reason = reason |
375 | + waiting = self._status_waiting |
376 | + self._status_waiting = [] |
377 | + for waiter in waiting: |
378 | + waiter.wake((status, reason)) |
379 | + |
380 | + @log_timing |
381 | + def _await_status_not(self, *ignore_statuses): |
382 | + """Blocks until the client status changes, returning the new status. |
383 | + Should never be called from the reactor thread. |
384 | + |
385 | + """ |
386 | + with self._status_lock: |
387 | + status = self._status |
388 | + reason = self._status_reason |
389 | + while status in ignore_statuses: |
390 | + waiter = self._get_waiter_locked() |
391 | + self._status_waiting.append(waiter) |
392 | + self._status_lock.release() |
393 | + try: |
394 | + status, reason = self._wait(waiter) |
395 | + finally: |
396 | + self._status_lock.acquire() |
397 | + if status == "forced_shutdown": |
398 | + raise ForcedShutdown("Forced shutdown.") |
399 | + return (status, reason) |
400 | + |
401 | + def connection_failed(self, reason): |
402 | + """Notification that connection failed.""" |
403 | + self._change_status("disconnected", reason) |
404 | + |
405 | + def connected(self): |
406 | + """Notification that connection succeeded.""" |
407 | + self._change_status("connected") |
408 | + |
409 | + def disconnected(self, reason): |
410 | + """Notification that we were disconnected.""" |
411 | + self._change_status("disconnected", reason) |
412 | + |
413 | + def defer_from_thread(self, function, *args, **kwargs): |
414 | + """Do twisted defer magic to get results and show exceptions.""" |
415 | + waiter = self._get_waiter() |
416 | + @log_timing |
417 | + def runner(): |
418 | + """inner.""" |
419 | + # we do want to catch all |
420 | + # no init: pylint: disable-msg=W0703 |
421 | + try: |
422 | + d = function(*args, **kwargs) |
423 | + if isinstance(d, defer.Deferred): |
424 | + d.addCallbacks(lambda r: waiter.wake((r, None, None)), |
425 | + lambda f: waiter.wake((None, None, f))) |
426 | + else: |
427 | + waiter.wake((d, None, None)) |
428 | + except Exception: |
429 | + waiter.wake((None, sys.exc_info(), None)) |
430 | + |
431 | + self.reactor.callFromThread(runner) |
432 | + result, exc_info, failure = self._wait(waiter) |
433 | + if exc_info: |
434 | + try: |
435 | + raise exc_info[0], exc_info[1], exc_info[2] |
436 | + finally: |
437 | + exc_info = None |
438 | + elif failure: |
439 | + failure.raiseException() |
440 | + else: |
441 | + return result |
442 | + |
443 | + @log_timing |
444 | + def connect(self, host, port): |
445 | + """Connect to host/port.""" |
446 | + def _connect(): |
447 | + """Deferred part.""" |
448 | + self.reactor.connectTCP(host, port, self.factory) |
449 | + self._connect_inner(_connect) |
450 | + |
451 | + @log_timing |
452 | + def connect_ssl(self, host, port, no_verify): |
453 | + """Connect to host/port using ssl.""" |
454 | + def _connect(): |
455 | + """deferred part.""" |
456 | + ctx = get_ssl_context(no_verify) |
457 | + self.reactor.connectSSL(host, port, self.factory, ctx) |
458 | + self._connect_inner(_connect) |
459 | + |
460 | + @log_timing |
461 | + def _connect_inner(self, _connect): |
462 | + """Helper function for connecting.""" |
463 | + self._change_status("connecting") |
464 | + self.reactor.callFromThread(_connect) |
465 | + status, reason = self._await_status_not("connecting") |
466 | + if status != "connected": |
467 | + raise ConnectionError(reason.value) |
468 | + |
469 | + @log_timing |
470 | + def disconnect(self): |
471 | + """Disconnect.""" |
472 | + if self.factory.current_protocol is not None: |
473 | + self.reactor.callFromThread( |
474 | + self.factory.current_protocol.transport.loseConnection) |
475 | + self._await_status_not("connecting", "connected", "authenticated") |
476 | + |
477 | + @log_timing |
478 | + def set_capabilities(self): |
479 | + """Set the capabilities with the server""" |
480 | + |
481 | + client = self.factory.current_protocol |
482 | + @log_timing |
483 | + def set_caps_callback(req): |
484 | + "Caps set succeeded" |
485 | + if not req.accepted: |
486 | + de = defer.fail("The server denied setting %s capabilities" % \ |
487 | + req.caps) |
488 | + return de |
489 | + |
490 | + @log_timing |
491 | + def query_caps_callback(req): |
492 | + "Caps query succeeded" |
493 | + if req.accepted: |
494 | + set_d = client.set_caps(self.required_caps) |
495 | + set_d.addCallback(set_caps_callback) |
496 | + return set_d |
497 | + else: |
498 | + # the server doesn't have the requested capabilities. |
499 | + # return a failure for now; in the future we might want |
500 | + # to reconnect to another server |
501 | + de = defer.fail("The server doesn't have the requested" |
502 | + " capabilities: %s" % str(req.caps)) |
503 | + return de |
504 | + |
505 | + @log_timing |
506 | + def _wrapped_set_capabilities(): |
507 | + """Wrapped set_capabilities """ |
508 | + d = client.query_caps(self.required_caps) |
509 | + d.addCallback(query_caps_callback) |
510 | + return d |
511 | + |
512 | + try: |
513 | + self.defer_from_thread(_wrapped_set_capabilities) |
514 | + except request.StorageProtocolError, e: |
515 | + raise CapabilitiesError(e) |
516 | + |
517 | + @log_timing |
518 | + def get_root_info(self, volume_uuid): |
519 | + """Returns the UUID of the applicable share root.""" |
520 | + if volume_uuid is None: |
521 | + _get_root = self.factory.current_protocol.get_root |
522 | + root = self.defer_from_thread(_get_root) |
523 | + return (uuid.UUID(root), True) |
524 | + else: |
525 | + str_volume_uuid = str(volume_uuid) |
526 | + volume = self._match_volume(lambda v: \ |
527 | + str(v.volume_id) == str_volume_uuid) |
528 | + if isinstance(volume, volumes.ShareVolume): |
529 | + modify = volume.access_level == "Modify" |
530 | + if isinstance(volume, volumes.UDFVolume): |
531 | + modify = True |
532 | + return (uuid.UUID(str(volume.node_id)), modify) |
533 | + |
534 | + @log_timing |
535 | + def resolve_path(self, share_uuid, root_uuid, path): |
536 | + """Resolve path relative to the given root node.""" |
537 | + |
538 | + @inlineCallbacks |
539 | + def _resolve_worker(): |
540 | + """Path resolution worker.""" |
541 | + node_uuid = root_uuid |
542 | + local_path = path.strip('/') |
543 | + |
544 | + while local_path != '': |
545 | + local_path, name = os.path.split(local_path) |
546 | + hashes = yield self._get_node_hashes(share_uuid, [root_uuid]) |
547 | + content_hash = hashes.get(root_uuid, None) |
548 | + if content_hash is None: |
549 | + raise KeyError, "Content hash not available" |
550 | + entries = yield self._get_raw_dir_entries(share_uuid, |
551 | + root_uuid, |
552 | + content_hash) |
553 | + match_name = name.decode('utf-8') |
554 | + match = None |
555 | + for entry in entries: |
556 | + if match_name == entry.name: |
557 | + match = entry |
558 | + break |
559 | + |
560 | + if match is None: |
561 | + raise KeyError, "Path not found" |
562 | + |
563 | + node_uuid = uuid.UUID(match.node) |
564 | + |
565 | + returnValue(node_uuid) |
566 | + |
567 | + return self.defer_from_thread(_resolve_worker) |
568 | + |
569 | + @log_timing |
570 | + def oauth_from_token(self, token): |
571 | + """Perform OAuth authorisation using an existing token.""" |
572 | + |
573 | + consumer = OAuthConsumer(self.consumer_key, self.consumer_secret) |
574 | + |
575 | + def _auth_successful(value): |
576 | + """Callback for successful auth. Changes status to |
577 | + authenticated.""" |
578 | + self._change_status("authenticated") |
579 | + return value |
580 | + |
581 | + def _auth_failed(value): |
582 | + """Callback for failed auth. Disconnects.""" |
583 | + self.factory.current_protocol.transport.loseConnection() |
584 | + return value |
585 | + |
586 | + def _wrapped_authenticate(): |
587 | + """Wrapped authenticate.""" |
588 | + d = self.factory.current_protocol.oauth_authenticate(consumer, |
589 | + token) |
590 | + d.addCallbacks(_auth_successful, _auth_failed) |
591 | + return d |
592 | + |
593 | + try: |
594 | + self.defer_from_thread(_wrapped_authenticate) |
595 | + except request.StorageProtocolError, e: |
596 | + raise AuthenticationError(e) |
597 | + status, reason = self._await_status_not("connected") |
598 | + if status != "authenticated": |
599 | + raise AuthenticationError(reason.value) |
600 | + |
601 | + @log_timing |
602 | + def find_volume(self, volume_spec): |
603 | + """Finds a share matching the given UUID. Looks at both share UUIDs |
604 | + and root node UUIDs.""" |
605 | + volume = self._match_volume(lambda s: \ |
606 | + str(s.volume_id) == volume_spec or \ |
607 | + str(s.node_id) == volume_spec) |
608 | + return uuid.UUID(str(volume.volume_id)) |
609 | + |
610 | + @log_timing |
611 | + def _match_volume(self, predicate): |
612 | + """Finds a volume matching the given predicate.""" |
613 | + _list_shares = self.factory.current_protocol.list_volumes |
614 | + r = self.defer_from_thread(_list_shares) |
615 | + for volume in r.volumes: |
616 | + if predicate(volume): |
617 | + return volume |
618 | + raise NoSuchShareError() |
619 | + |
620 | + @log_timing |
621 | + def build_tree(self, share_uuid, root_uuid): |
622 | + """Builds and returns a tree representing the metadata for the given |
623 | + subtree in the given share. |
624 | + |
625 | + @param share_uuid: the share UUID or None for the user's volume |
626 | + @param root_uuid: the root UUID of the subtree (must be a directory) |
627 | + @return: a MergeNode tree |
628 | + |
629 | + """ |
630 | + root = MergeNode(node_type=DIRECTORY, uuid=root_uuid) |
631 | + |
632 | + @log_timing |
633 | + @inlineCallbacks |
634 | + def _get_root_content_hash(): |
635 | + """Obtain the content hash for the root node.""" |
636 | + result = yield self._get_node_hashes(share_uuid, [root_uuid]) |
637 | + returnValue(result.get(root_uuid, None)) |
638 | + |
639 | + root.content_hash = self.defer_from_thread(_get_root_content_hash) |
640 | + if root.content_hash is None: |
641 | + raise ValueError("No content available for node %s" % root_uuid) |
642 | + |
643 | + @log_timing |
644 | + @inlineCallbacks |
645 | + def _get_children(parent_uuid, parent_content_hash): |
646 | + """Obtain a sequence of MergeNodes corresponding to a node's |
647 | + immediate children. |
648 | + |
649 | + """ |
650 | + entries = yield self._get_raw_dir_entries(share_uuid, |
651 | + parent_uuid, |
652 | + parent_content_hash) |
653 | + children = {} |
654 | + for entry in entries: |
655 | + if should_sync(entry.name): |
656 | + child = MergeNode(node_type=entry.node_type, |
657 | + uuid=uuid.UUID(entry.node)) |
658 | + children[entry.name] = child |
659 | + |
660 | + child_uuids = [child.uuid for child in children.itervalues()] |
661 | + content_hashes = yield self._get_node_hashes(share_uuid, |
662 | + child_uuids) |
663 | + for child in children.itervalues(): |
664 | + child.content_hash = content_hashes.get(child.uuid, None) |
665 | + |
666 | + returnValue(children) |
667 | + |
668 | + need_children = [root] |
669 | + while need_children: |
670 | + node = need_children.pop() |
671 | + if node.content_hash is not None: |
672 | + children = self.defer_from_thread(_get_children, node.uuid, |
673 | + node.content_hash) |
674 | + node.children = children |
675 | + for child in children.itervalues(): |
676 | + if child.node_type == DIRECTORY: |
677 | + need_children.append(child) |
678 | + |
679 | + return root |
680 | + |
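build_tree's loop is a standard iterative expansion: keep a stack of directories whose children have not been fetched yet. A minimal sketch, with get_children as a stand-in for the server round-trip that _get_children performs:

```python
class Node(object):
    """Minimal stand-in for MergeNode: a directory flag plus children."""
    def __init__(self, is_dir):
        self.is_dir = is_dir
        self.children = {}

def expand(root, get_children):
    """Expand a directory tree iteratively, as build_tree does above.

    get_children(node) returns a dict mapping names to child Nodes;
    child directories are queued so their children get fetched too."""
    need_children = [root]
    while need_children:
        node = need_children.pop()
        node.children = get_children(node)
        for child in node.children.values():
            if child.is_dir:
                need_children.append(child)
    return root
```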
681 | + @log_timing |
682 | + def _get_raw_dir_entries(self, share_uuid, node_uuid, content_hash): |
683 | + """Gets raw dir entries for the given directory.""" |
684 | + d = self.factory.current_protocol.get_content(share_str(share_uuid), |
685 | + str(node_uuid), |
686 | + content_hash) |
687 | + d.addCallback(lambda c: zlib.decompress(c.data)) |
688 | + |
689 | + def _parse_content(raw_content): |
690 | + """Parses directory content into a list of entry objects.""" |
691 | + unserialized_content = DirectoryContent() |
692 | + unserialized_content.ParseFromString(raw_content) |
693 | + return list(unserialized_content.entries) |
694 | + |
695 | + d.addCallback(_parse_content) |
696 | + return d |
697 | + |
698 | + @log_timing |
699 | + def download_string(self, share_uuid, node_uuid, content_hash): |
700 | + """Reads a file from the server into a string.""" |
701 | + output = StringIO() |
702 | + self._download_inner(share_uuid=share_uuid, node_uuid=node_uuid, |
703 | + content_hash=content_hash, output=output) |
704 | + return output.getvalue() |
705 | + |
706 | + @log_timing |
707 | + def download_file(self, share_uuid, node_uuid, content_hash, filename): |
708 | + """Downloads a file from the server.""" |
709 | + # file names have to be quoted to make sure that no issues occur when |
710 | + # spaces exist |
711 | + partial_filename = "%s.u1partial" % filename |
712 | + output = open(partial_filename, "w") |
713 | + |
714 | + @log_timing |
715 | + def rename_file(): |
716 | + """Renames the temporary file to the final name.""" |
717 | + output.close() |
718 | + print "Finished downloading %s" % filename |
719 | + os.rename(partial_filename, filename) |
720 | + |
721 | + @log_timing |
722 | + def delete_file(): |
723 | + """Deletes the temporary file.""" |
724 | + output.close() |
725 | + os.unlink(partial_filename) |
726 | + |
727 | + self._download_inner(share_uuid=share_uuid, node_uuid=node_uuid, |
728 | + content_hash=content_hash, output=output, |
729 | + on_success=rename_file, on_failure=delete_file) |
730 | + |
731 | + @log_timing |
732 | + def _download_inner(self, share_uuid, node_uuid, content_hash, output, |
733 | + on_success=lambda: None, on_failure=lambda: None): |
734 | + """Helper function for content downloads.""" |
735 | + dec = zlib.decompressobj() |
736 | + |
737 | + @log_timing |
738 | + def write_data(data): |
739 | + """Helper which writes data to the output file.""" |
740 | + uncompressed_data = dec.decompress(data) |
741 | + output.write(uncompressed_data) |
742 | + |
743 | + @log_timing |
744 | + def finish_download(value): |
745 | + """Helper which finishes the download.""" |
746 | + uncompressed_data = dec.flush() |
747 | + output.write(uncompressed_data) |
748 | + on_success() |
749 | + return value |
750 | + |
751 | + @log_timing |
752 | + def abort_download(value): |
753 | + """Helper which aborts the download.""" |
754 | + on_failure() |
755 | + return value |
756 | + |
757 | + @log_timing |
758 | + def _download(): |
759 | + """Async helper.""" |
760 | + _get_content = self.factory.current_protocol.get_content |
761 | + d = _get_content(share_str(share_uuid), str(node_uuid), |
762 | + content_hash, callback=write_data) |
763 | + d.addCallbacks(finish_download, abort_download) |
764 | + return d |
765 | + |
766 | + self.defer_from_thread(_download) |
767 | + |
768 | + @log_timing |
769 | + def create_directory(self, share_uuid, parent_uuid, name): |
770 | + """Creates a directory on the server.""" |
771 | + r = self.defer_from_thread(self.factory.current_protocol.make_dir, |
772 | + share_str(share_uuid), str(parent_uuid), |
773 | + name) |
774 | + return uuid.UUID(r.new_id) |
775 | + |
776 | + @log_timing |
777 | + def create_file(self, share_uuid, parent_uuid, name): |
778 | + """Creates a file on the server.""" |
779 | + r = self.defer_from_thread(self.factory.current_protocol.make_file, |
780 | + share_str(share_uuid), str(parent_uuid), |
781 | + name) |
782 | + return uuid.UUID(r.new_id) |
783 | + |
784 | + @log_timing |
785 | + def create_symlink(self, share_uuid, parent_uuid, name, target): |
786 | + """Creates a symlink on the server.""" |
787 | + raise UnsupportedOperationError("Protocol does not support symlinks") |
788 | + |
789 | + @log_timing |
790 | + def upload_string(self, share_uuid, node_uuid, old_content_hash, |
791 | + content_hash, content): |
792 | + """Uploads a string to the server as file content.""" |
793 | + crc = crc32(content, 0) |
794 | + compressed_content = zlib.compress(content, 9) |
795 | + compressed = StringIO(compressed_content) |
796 | + self.defer_from_thread(self.factory.current_protocol.put_content, |
797 | + share_str(share_uuid), str(node_uuid), |
798 | + old_content_hash, content_hash, |
799 | + crc, len(content), len(compressed_content), |
800 | + compressed) |
801 | + |
802 | + @log_timing |
803 | + def upload_file(self, share_uuid, node_uuid, old_content_hash, |
804 | + content_hash, filename): |
805 | + """Uploads a file to the server.""" |
806 | + parent_dir = os.path.split(filename)[0] |
807 | + unique_filename = os.path.join(parent_dir, "." + str(uuid.uuid4())) |
808 | + |
809 | + |
810 | + class StagingFile(object): |
811 | + """An object which tracks data being compressed for staging.""" |
812 | + def __init__(self, stream): |
813 | + """Initialize a compression object.""" |
814 | + self.crc32 = 0 |
815 | + self.enc = zlib.compressobj(9) |
816 | + self.size = 0 |
817 | + self.compressed_size = 0 |
818 | + self.stream = stream |
819 | + |
820 | + def write(self, bytes): |
821 | + """Compress bytes, keeping track of length and crc32.""" |
822 | + self.size += len(bytes) |
823 | + self.crc32 = crc32(bytes, self.crc32) |
824 | + compressed_bytes = self.enc.compress(bytes) |
825 | + self.compressed_size += len(compressed_bytes) |
826 | + self.stream.write(compressed_bytes) |
827 | + |
828 | + def finish(self): |
829 | + """Finish staging compressed data.""" |
830 | + compressed_bytes = self.enc.flush() |
831 | + self.compressed_size += len(compressed_bytes) |
832 | + self.stream.write(compressed_bytes) |
833 | + |
834 | + with open(unique_filename, "w+") as compressed: |
835 | + os.unlink(unique_filename) |
836 | + with open(filename, "r") as original: |
837 | + staging = StagingFile(compressed) |
838 | + shutil.copyfileobj(original, staging) |
839 | + staging.finish() |
840 | + compressed.seek(0) |
841 | + self.defer_from_thread(self.factory.current_protocol.put_content, |
842 | + share_str(share_uuid), str(node_uuid), |
843 | + old_content_hash, content_hash, |
844 | + staging.crc32, |
845 | + staging.size, staging.compressed_size, |
846 | + compressed) |
847 | + |
848 | + @log_timing |
849 | + def move(self, share_uuid, parent_uuid, name, node_uuid): |
850 | + """Moves a file on the server.""" |
851 | + self.defer_from_thread(self.factory.current_protocol.move, |
852 | + share_str(share_uuid), str(node_uuid), |
853 | + str(parent_uuid), name) |
854 | + |
855 | + @log_timing |
856 | + def unlink(self, share_uuid, node_uuid): |
857 | + """Unlinks a file on the server.""" |
858 | + self.defer_from_thread(self.factory.current_protocol.unlink, |
859 | + share_str(share_uuid), str(node_uuid)) |
860 | + |
861 | + @log_timing |
862 | + def _get_node_hashes(self, share_uuid, node_uuids): |
863 | + """Fetches hashes for the given nodes.""" |
864 | + share = share_str(share_uuid) |
865 | + queries = [(share, str(node_uuid), request.UNKNOWN_HASH) \ |
866 | + for node_uuid in node_uuids] |
867 | + d = self.factory.current_protocol.query(queries) |
868 | + |
869 | + @log_timing |
870 | + def _collect_hashes(multi_result): |
871 | + """Accumulate hashes from query replies.""" |
872 | + hashes = {} |
873 | + for (success, value) in multi_result: |
874 | + if success: |
875 | + for node_state in value.response: |
876 | + node_uuid = uuid.UUID(node_state.node) |
877 | + hashes[node_uuid] = node_state.hash |
878 | + return hashes |
879 | + |
880 | + d.addCallback(_collect_hashes) |
881 | + return d |
882 | + |
883 | + @log_timing |
884 | + def get_incoming_shares(self): |
885 | + """Returns a list of incoming shares as
886 | + (name, uuid, other_visible_name, accepted, access_level) tuples.
887 | +
888 | + """
889 | + _list_shares = self.factory.current_protocol.list_shares |
890 | + r = self.defer_from_thread(_list_shares) |
891 | + return [(s.name, s.id, s.other_visible_name, |
892 | + s.accepted, s.access_level) \ |
893 | + for s in r.shares if s.direction == "to_me"] |
894 | |
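The `StagingFile` helper above streams a file through zlib compression while tracking the running size and CRC32 that `put_content` needs. A minimal, standalone sketch of the same pattern (names mirror the diff; `BytesIO` stands in for the temporary file):

```python
import zlib
from io import BytesIO
from zlib import crc32

class StagingFile(object):
    """Compress bytes into a stream while tracking size and crc32,
    mirroring the helper nested inside upload_file."""
    def __init__(self, stream):
        self.crc32 = 0
        self.enc = zlib.compressobj(9)
        self.size = 0
        self.compressed_size = 0
        self.stream = stream

    def write(self, data):
        """Compress data, keeping track of length and crc32."""
        self.size += len(data)
        self.crc32 = crc32(data, self.crc32)
        out = self.enc.compress(data)
        self.compressed_size += len(out)
        self.stream.write(out)

    def finish(self):
        """Flush the compressor and account for the trailing bytes."""
        out = self.enc.flush()
        self.compressed_size += len(out)
        self.stream.write(out)

sink = BytesIO()
staging = StagingFile(sink)
staging.write(b"hello ")
staging.write(b"world")
staging.finish()
assert zlib.decompress(sink.getvalue()) == b"hello world"
```

Because the CRC is accumulated incrementally (`crc32(data, self.crc32)`), the final value equals the CRC of the whole uncompressed content, which is what the server-side integrity check expects.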
895 | === added file 'src/u1sync/constants.py' |
896 | --- src/u1sync/constants.py 1970-01-01 00:00:00 +0000 |
897 | +++ src/u1sync/constants.py 2010-08-27 14:56:43 +0000 |
898 | @@ -0,0 +1,30 @@ |
899 | +# ubuntuone.u1sync.constants |
900 | +# |
901 | +# u1sync constants |
902 | +# |
903 | +# Author: Tim Cole <tim.cole@canonical.com> |
904 | +# |
905 | +# Copyright 2009 Canonical Ltd. |
906 | +# |
907 | +# This program is free software: you can redistribute it and/or modify it |
908 | +# under the terms of the GNU General Public License version 3, as published |
909 | +# by the Free Software Foundation. |
910 | +# |
911 | +# This program is distributed in the hope that it will be useful, but |
912 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
913 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
914 | +# PURPOSE. See the GNU General Public License for more details. |
915 | +# |
916 | +# You should have received a copy of the GNU General Public License along |
917 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
918 | +"""Assorted constant definitions which don't fit anywhere else.""" |
919 | + |
920 | +import re |
921 | + |
922 | +# the name of the directory u1sync uses to keep metadata about a mirror |
923 | +METADATA_DIR_NAME = u".ubuntuone-sync" |
924 | + |
925 | +# filenames to ignore |
926 | +SPECIAL_FILE_RE = re.compile(".*\\.(" |
927 | + "(u1)?partial|part|" |
928 | + "(u1)?conflict(\\.[0-9]+)?)$") |
929 | |
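The `SPECIAL_FILE_RE` pattern above matches the temporary and conflict filenames that the sync client generates and u1sync must skip. A quick check of which names it catches:

```python
import re

# Same pattern as in constants.py: .partial/.u1partial/.part staging
# files and .conflict/.u1conflict markers (optionally numbered).
SPECIAL_FILE_RE = re.compile(".*\\.("
                             "(u1)?partial|part|"
                             "(u1)?conflict(\\.[0-9]+)?)$")

assert SPECIAL_FILE_RE.match("photo.jpg.u1partial") is not None
assert SPECIAL_FILE_RE.match("backup.part") is not None
assert SPECIAL_FILE_RE.match("notes.txt.conflict.2") is not None
assert SPECIAL_FILE_RE.match("photo.jpg") is None
```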
930 | === added file 'src/u1sync/genericmerge.py' |
931 | --- src/u1sync/genericmerge.py 1970-01-01 00:00:00 +0000 |
932 | +++ src/u1sync/genericmerge.py 2010-08-27 14:56:43 +0000 |
933 | @@ -0,0 +1,88 @@ |
934 | +# ubuntuone.u1sync.genericmerge |
935 | +# |
936 | +# Generic merge function |
937 | +# |
938 | +# Author: Tim Cole <tim.cole@canonical.com> |
939 | +# |
940 | +# Copyright 2009 Canonical Ltd. |
941 | +# |
942 | +# This program is free software: you can redistribute it and/or modify it |
943 | +# under the terms of the GNU General Public License version 3, as published |
944 | +# by the Free Software Foundation. |
945 | +# |
946 | +# This program is distributed in the hope that it will be useful, but |
947 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
948 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
949 | +# PURPOSE. See the GNU General Public License for more details. |
950 | +# |
951 | +# You should have received a copy of the GNU General Public License along |
952 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
953 | +"""A generic abstraction for merge operations on directory trees.""" |
954 | + |
955 | +from itertools import chain |
956 | +from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY |
957 | + |
958 | +class MergeNode(object): |
959 | + """A filesystem node. Should generally be treated as immutable.""" |
960 | + def __init__(self, node_type, content_hash=None, uuid=None, children=None, |
961 | + conflict_info=None): |
962 | + """Initializes a node instance.""" |
963 | + self.node_type = node_type |
964 | + self.children = children |
965 | + self.uuid = uuid |
966 | + self.content_hash = content_hash |
967 | + self.conflict_info = conflict_info |
968 | + |
969 | + def __eq__(self, other): |
970 | + """Equality test.""" |
971 | + if type(other) is not type(self): |
972 | + return False |
973 | + return self.node_type == other.node_type and \ |
974 | + self.children == other.children and \ |
975 | + self.uuid == other.uuid and \ |
976 | + self.content_hash == other.content_hash and \ |
977 | + self.conflict_info == other.conflict_info |
978 | + |
979 | + def __ne__(self, other): |
980 | + """Non-equality test.""" |
981 | + return not self.__eq__(other) |
982 | + |
983 | + |
984 | +def show_tree(tree, indent="", name="/"): |
985 | + """Prints a tree.""" |
986 | + # TODO: return a string instead of printing
987 | + if tree.node_type == DIRECTORY: |
988 | + type_str = "DIR " |
989 | + else: |
990 | + type_str = "FILE" |
991 | + print "%s%-36s %s %s %s" % (indent, tree.uuid, type_str, name, |
992 | + tree.content_hash) |
993 | + if tree.node_type == DIRECTORY and tree.children is not None: |
994 | + for name in sorted(tree.children.keys()): |
995 | + subtree = tree.children[name] |
996 | + show_tree(subtree, indent=" " + indent, name=name) |
997 | + |
998 | +def generic_merge(trees, pre_merge, post_merge, partial_parent, name): |
999 | + """Generic tree merging function.""" |
1000 | + |
1001 | + partial_result = pre_merge(nodes=trees, name=name, |
1002 | + partial_parent=partial_parent) |
1003 | + |
1004 | + def tree_children(tree): |
1005 | + """Returns children if tree is not None""" |
1006 | + return tree.children if tree is not None else None |
1007 | + |
1008 | + child_dicts = [tree_children(t) or {} for t in trees] |
1009 | + child_names = set(chain(*[cs.iterkeys() for cs in child_dicts])) |
1010 | + child_results = {} |
1011 | + for child_name in child_names: |
1012 | + subtrees = [cs.get(child_name, None) for cs in child_dicts] |
1013 | + child_result = generic_merge(trees=subtrees, |
1014 | + pre_merge=pre_merge, |
1015 | + post_merge=post_merge, |
1016 | + partial_parent=partial_result, |
1017 | + name=child_name) |
1018 | + child_results[child_name] = child_result |
1019 | + |
1020 | + return post_merge(nodes=trees, partial_result=partial_result, |
1021 | + child_results=child_results) |
1022 | |
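`generic_merge` walks any number of trees in parallel, calling `pre_merge` top-down and `post_merge` bottom-up. A portable sketch (adapted to avoid the Python-2-only `iterkeys`, with stand-in node-type constants instead of the protobuf enum) that uses it to count nodes present locally but missing remotely:

```python
from itertools import chain

DIRECTORY, FILE = 0, 1  # stand-ins for the protobuf node type enum

class MergeNode(object):
    """Minimal version of the node class from genericmerge.py."""
    def __init__(self, node_type, content_hash=None, children=None):
        self.node_type = node_type
        self.content_hash = content_hash
        self.children = children

def generic_merge(trees, pre_merge, post_merge, partial_parent, name):
    """Recursive parallel walk over several trees (as in the diff)."""
    partial_result = pre_merge(nodes=trees, name=name,
                               partial_parent=partial_parent)
    child_dicts = [(t.children if t is not None else None) or {}
                   for t in trees]
    child_names = set(chain(*[list(cs) for cs in child_dicts]))
    child_results = {}
    for child_name in child_names:
        subtrees = [cs.get(child_name) for cs in child_dicts]
        child_results[child_name] = generic_merge(
            trees=subtrees, pre_merge=pre_merge, post_merge=post_merge,
            partial_parent=partial_result, name=child_name)
    return post_merge(nodes=trees, partial_result=partial_result,
                      child_results=child_results)

local = MergeNode(DIRECTORY, children={
    u"a.txt": MergeNode(FILE, "hash-a"),
    u"b.txt": MergeNode(FILE, "hash-b")})
remote = MergeNode(DIRECTORY, children={
    u"a.txt": MergeNode(FILE, "hash-a")})

def pre(nodes, name, partial_parent):
    local_node, remote_node = nodes
    return 1 if (local_node is not None and remote_node is None) else 0

def post(nodes, partial_result, child_results):
    return partial_result + sum(child_results.values())

missing = generic_merge([local, remote], pre, post, 0, u"")
```

This is the same shape `do_diff` uses: its `pre_merge` reports each difference and its `post_merge` ORs the per-subtree "differs" flags together.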
1023 | === added file 'src/u1sync/main.py' |
1024 | --- src/u1sync/main.py 1970-01-01 00:00:00 +0000 |
1025 | +++ src/u1sync/main.py 2010-08-27 14:56:43 +0000 |
1026 | @@ -0,0 +1,360 @@ |
1027 | +# ubuntuone.u1sync.main |
1028 | +# |
1029 | +# Prototype directory sync client |
1030 | +# |
1031 | +# Author: Lucio Torre <lucio.torre@canonical.com> |
1032 | +# Author: Tim Cole <tim.cole@canonical.com> |
1033 | +# |
1034 | +# Copyright 2009 Canonical Ltd. |
1035 | +# |
1036 | +# This program is free software: you can redistribute it and/or modify it |
1037 | +# under the terms of the GNU General Public License version 3, as published |
1038 | +# by the Free Software Foundation. |
1039 | +# |
1040 | +# This program is distributed in the hope that it will be useful, but |
1041 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
1042 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
1043 | +# PURPOSE. See the GNU General Public License for more details. |
1044 | +# |
1045 | +# You should have received a copy of the GNU General Public License along |
1046 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
1047 | +"""A prototype directory sync client.""" |
1048 | + |
1049 | +from __future__ import with_statement |
1050 | + |
1051 | +import os |
1052 | +import sys |
1053 | +import uuid |
1054 | +import signal |
1055 | +import logging |
1056 | +from Queue import Queue |
1057 | +from errno import EEXIST |
1058 | +from twisted.internet import reactor |
1059 | +# import the storage protocol to allow communication with the server
1060 | +import ubuntuone.storageprotocol.dircontent_pb2 as dircontent_pb2 |
1061 | +from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY, SYMLINK |
1062 | +# import helper modules |
1063 | +from u1sync.genericmerge import ( |
1064 | + show_tree, generic_merge) |
1065 | +from u1sync.client import ( |
1066 | + ConnectionError, AuthenticationError, NoSuchShareError, |
1067 | + ForcedShutdown, Client) |
1068 | +from u1sync.scan import scan_directory |
1069 | +from u1sync.merge import ( |
1070 | + SyncMerge, ClobberServerMerge, ClobberLocalMerge, merge_trees) |
1071 | +from u1sync.sync import download_tree, upload_tree |
1072 | +from u1sync.utils import safe_mkdir |
1073 | +from u1sync import metadata |
1074 | +from u1sync.constants import METADATA_DIR_NAME |
1075 | +from u1sync.ubuntuone_optparse import UbuntuOneOptionsParser |
1076 | + |
1077 | +# pylint: disable-msg=W0212 |
1078 | +NODE_TYPE_ENUM = dircontent_pb2._NODETYPE |
1079 | +# pylint: enable-msg=W0212 |
1080 | +def node_type_str(node_type): |
1081 | + """Converts a numeric node type to a human-readable string.""" |
1082 | + return NODE_TYPE_ENUM.values_by_number[node_type].name |
1083 | + |
1084 | + |
1085 | +class ReadOnlyShareError(Exception): |
1086 | + """Share is read-only.""" |
1087 | + |
1088 | + |
1089 | +class DirectoryAlreadyInitializedError(Exception): |
1090 | + """The directory has already been initialized.""" |
1091 | + |
1092 | + |
1093 | +class DirectoryNotInitializedError(Exception): |
1094 | + """The directory has not been initialized.""" |
1095 | + |
1096 | + |
1097 | +class NoParentError(Exception): |
1098 | + """A node has no parent.""" |
1099 | + |
1100 | + |
1101 | +class TreesDiffer(Exception): |
1102 | + """Raised when the compared trees differ."""
1103 | + def __init__(self, quiet): |
1104 | + self.quiet = quiet |
1105 | + |
1106 | + |
1107 | +MERGE_ACTIONS = { |
1108 | + # action: (merge_class, should_upload, should_download) |
1109 | + 'sync': (SyncMerge, True, True), |
1110 | + 'clobber-server': (ClobberServerMerge, True, False), |
1111 | + 'clobber-local': (ClobberLocalMerge, False, True), |
1112 | + 'upload': (SyncMerge, True, False), |
1113 | + 'download': (SyncMerge, False, True), |
1114 | + 'auto': None # special case |
1115 | +} |
1116 | + |
1117 | +DEFAULT_MERGE_ACTION = 'auto' |
1118 | + |
1119 | +def do_init(client, share_spec, directory, quiet, subtree_path, |
1120 | + metadata=metadata): |
1121 | + """Initializes a directory for syncing, and syncs it.""" |
1122 | + info = metadata.Metadata() |
1123 | + |
1124 | + if share_spec is not None: |
1125 | + info.share_uuid = client.find_volume(share_spec) |
1126 | + else: |
1127 | + info.share_uuid = None |
1128 | + |
1129 | + if subtree_path is not None: |
1130 | + info.path = subtree_path |
1131 | + else: |
1132 | + info.path = "/" |
1133 | + |
1134 | + logging.info("Initializing directory...") |
1135 | + safe_mkdir(directory) |
1136 | + |
1137 | + metadata_dir = os.path.join(directory, METADATA_DIR_NAME) |
1138 | + try: |
1139 | + os.mkdir(metadata_dir) |
1140 | + except OSError, e: |
1141 | + if e.errno == EEXIST: |
1142 | + raise DirectoryAlreadyInitializedError(directory) |
1143 | + else: |
1144 | + raise |
1145 | + |
1146 | + logging.info("Writing mirror metadata...") |
1147 | + metadata.write(metadata_dir, info) |
1148 | + |
1149 | + logging.info("Done.") |
1150 | + |
1151 | +def do_sync(client, directory, action, dry_run, quiet): |
1152 | + """Synchronizes a directory with the given share.""" |
1153 | + absolute_path = os.path.abspath(directory) |
1154 | + while True: |
1155 | + metadata_dir = os.path.join(absolute_path, METADATA_DIR_NAME) |
1156 | + if os.path.exists(metadata_dir): |
1157 | + break |
1158 | + if absolute_path == "/": |
1159 | + raise DirectoryNotInitializedError(directory) |
1160 | + absolute_path = os.path.split(absolute_path)[0] |
1161 | + |
1162 | + logging.info("Reading mirror metadata...") |
1163 | + info = metadata.read(metadata_dir) |
1164 | + |
1165 | + top_uuid, writable = client.get_root_info(info.share_uuid) |
1166 | + |
1167 | + if info.root_uuid is None: |
1168 | + info.root_uuid = client.resolve_path(info.share_uuid, top_uuid, |
1169 | + info.path) |
1170 | + |
1171 | + if action == 'auto': |
1172 | + if writable: |
1173 | + action = 'sync' |
1174 | + else: |
1175 | + action = 'download' |
1176 | + merge_type, should_upload, should_download = MERGE_ACTIONS[action] |
1177 | + if should_upload and not writable: |
1178 | + raise ReadOnlyShareError(info.share_uuid) |
1179 | + |
1180 | + logging.info("Scanning directory...") |
1181 | + |
1182 | + local_tree = scan_directory(absolute_path, quiet=quiet) |
1183 | + |
1184 | + logging.info("Fetching metadata...") |
1185 | + |
1186 | + remote_tree = client.build_tree(info.share_uuid, info.root_uuid) |
1187 | + if not quiet: |
1188 | + show_tree(remote_tree) |
1189 | + |
1190 | + logging.info("Merging trees...") |
1191 | + merged_tree = merge_trees(old_local_tree=info.local_tree, |
1192 | + local_tree=local_tree, |
1193 | + old_remote_tree=info.remote_tree, |
1194 | + remote_tree=remote_tree, |
1195 | + merge_action=merge_type()) |
1196 | + if not quiet: |
1197 | + show_tree(merged_tree) |
1198 | + |
1199 | + logging.info("Syncing content...") |
1200 | + if should_download: |
1201 | + info.local_tree = download_tree(merged_tree=merged_tree, |
1202 | + local_tree=local_tree, |
1203 | + client=client, |
1204 | + share_uuid=info.share_uuid, |
1205 | + path=absolute_path, dry_run=dry_run, |
1206 | + quiet=quiet) |
1207 | + else: |
1208 | + info.local_tree = local_tree |
1209 | + if should_upload: |
1210 | + info.remote_tree = upload_tree(merged_tree=merged_tree, |
1211 | + remote_tree=remote_tree, |
1212 | + client=client, |
1213 | + share_uuid=info.share_uuid, |
1214 | + path=absolute_path, dry_run=dry_run, |
1215 | + quiet=quiet) |
1216 | + else: |
1217 | + info.remote_tree = remote_tree |
1218 | + |
1219 | + if not dry_run: |
1220 | + logging.info("Updating mirror metadata...") |
1221 | + metadata.write(metadata_dir, info) |
1222 | + |
1223 | + logging.info("Done.") |
1224 | + |
1225 | +def do_list_shares(client): |
1226 | + """Lists available (incoming) shares.""" |
1227 | + shares = client.get_incoming_shares() |
1228 | + for (name, id, user, accepted, access) in shares: |
1229 | + if not accepted: |
1230 | + status = " [not accepted]" |
1231 | + else: |
1232 | + status = "" |
1233 | + name = name.encode("utf-8") |
1234 | + user = user.encode("utf-8") |
1235 | + print "%s %s (from %s) [%s]%s" % (id, name, user, access, status) |
1236 | + |
1237 | +def do_diff(client, share_spec, directory, quiet, subtree_path, |
1238 | + ignore_symlinks=True): |
1239 | + """Diffs a local directory with the server.""" |
1240 | + if share_spec is not None: |
1241 | + share_uuid = client.find_volume(share_spec) |
1242 | + else: |
1243 | + share_uuid = None |
1244 | + if subtree_path is None: |
1245 | + subtree_path = '/' |
1246 | + # pylint: disable-msg=W0612 |
1247 | + root_uuid, writable = client.get_root_info(share_uuid) |
1248 | + subtree_uuid = client.resolve_path(share_uuid, root_uuid, subtree_path) |
1249 | + local_tree = scan_directory(directory, quiet=True) |
1250 | + remote_tree = client.build_tree(share_uuid, subtree_uuid) |
1251 | + |
1252 | + def pre_merge(nodes, name, partial_parent): |
1253 | + """Compares nodes and prints differences.""" |
1254 | + (local_node, remote_node) = nodes |
1255 | + # pylint: disable-msg=W0612 |
1256 | + (parent_display_path, parent_differs) = partial_parent |
1257 | + display_path = os.path.join(parent_display_path, name.encode("UTF-8")) |
1258 | + differs = True |
1259 | + if local_node is None: |
1260 | + logging.info("%s missing from client", display_path)
1261 | + elif remote_node is None: |
1262 | + if ignore_symlinks and local_node.node_type == SYMLINK: |
1263 | + differs = False |
1264 | + logging.info("%s missing from server", display_path)
1265 | + elif local_node.node_type != remote_node.node_type: |
1266 | + local_type = node_type_str(local_node.node_type) |
1267 | + remote_type = node_type_str(remote_node.node_type) |
1268 | + logging.info("%s node types differ (client: %s, server: %s)", |
1269 | + display_path, local_type, remote_type) |
1270 | + elif local_node.node_type != DIRECTORY and \ |
1271 | + local_node.content_hash != remote_node.content_hash: |
1272 | + local_content = local_node.content_hash |
1273 | + remote_content = remote_node.content_hash |
1274 | + logging.info("%s has different content (client: %s, server: %s)", |
1275 | + display_path, local_content, remote_content) |
1276 | + else: |
1277 | + differs = False |
1278 | + return (display_path, differs) |
1279 | + |
1280 | + def post_merge(nodes, partial_result, child_results): |
1281 | + """Aggregates 'differs' flags.""" |
1282 | + # pylint: disable-msg=W0612 |
1283 | + (display_path, differs) = partial_result |
1284 | + return differs or any(child_results.itervalues()) |
1285 | + |
1286 | + differs = generic_merge(trees=[local_tree, remote_tree], |
1287 | + pre_merge=pre_merge, post_merge=post_merge, |
1288 | + partial_parent=("", False), name=u"") |
1289 | + if differs: |
1290 | + raise TreesDiffer(quiet=quiet) |
1291 | + |
1292 | +def do_main(argv, options_parser): |
1293 | + """The main user-facing portion of the script.""" |
1294 | + # parse the command-line arguments before creating the client
1295 | + options_parser.get_options(argv) |
1296 | + client = Client(realm=options_parser.options.realm, reactor=reactor) |
1297 | + # set the logging level to INFO when the user passed --quiet
1298 | + if options_parser.options.quiet: |
1299 | + logging.basicConfig(level=logging.INFO) |
1300 | + |
1301 | + signal.signal(signal.SIGINT, lambda s, f: client.force_shutdown()) |
1302 | + signal.signal(signal.SIGTERM, lambda s, f: client.force_shutdown()) |
1303 | + |
1304 | + def run_client(): |
1305 | + """Run the blocking client.""" |
1306 | + token = options_parser.options.token |
1307 | + |
1308 | + client.connect_ssl(options_parser.options.host, |
1309 | + int(options_parser.options.port), |
1310 | + options_parser.options.no_ssl_verify) |
1311 | + |
1312 | + try: |
1313 | + client.set_capabilities() |
1314 | + client.oauth_from_token(options_parser.options.token) |
1315 | + |
1316 | + if options_parser.options.mode == "sync": |
1317 | + do_sync(client=client, directory=options_parser.options.directory, |
1318 | + action=options_parser.options.action, |
1319 | + dry_run=options_parser.options.dry_run, |
1320 | + quiet=options_parser.options.quiet) |
1321 | + elif options_parser.options.mode == "init": |
1322 | + do_init(client=client, share_spec=options_parser.options.share, |
1323 | + directory=options_parser.options.directory, |
1324 | + quiet=options_parser.options.quiet, subtree_path=options_parser.options.subtree) |
1325 | + elif options_parser.options.mode == "list-shares":
1326 | + do_list_shares(client=client)
1327 | + elif options_parser.options.mode == "diff":
1328 | + do_diff(client=client, share_spec=options_parser.options.share,
1329 | + directory=options_parser.options.directory,
1330 | + quiet=options_parser.options.quiet, |
1331 | + subtree_path=options_parser.options.subtree, |
1332 | + ignore_symlinks=False) |
1333 | + elif options_parser.options.mode == "authorize": |
1334 | + if not options_parser.options.quiet: |
1335 | + print "Authorized." |
1336 | + finally: |
1337 | + client.disconnect() |
1338 | + |
1339 | + def capture_exception(queue, func): |
1340 | + """Capture the exception from calling func.""" |
1341 | + try: |
1342 | + func() |
1343 | + except Exception: |
1344 | + queue.put(sys.exc_info()) |
1345 | + else: |
1346 | + queue.put(None) |
1347 | + finally: |
1348 | + reactor.callWhenRunning(reactor.stop) |
1349 | + |
1350 | + queue = Queue() |
1351 | + reactor.callInThread(capture_exception, queue, run_client) |
1352 | + reactor.run(installSignalHandlers=False) |
1353 | + exc_info = queue.get(True, 0.1) |
1354 | + if exc_info: |
1355 | + raise exc_info[0], exc_info[1], exc_info[2] |
1356 | + |
1357 | +def main(argv): |
1358 | + """Top-level main function.""" |
1359 | + try: |
1360 | + do_main(argv, UbuntuOneOptionsParser()) |
1361 | + except AuthenticationError, e: |
1362 | + print "Authentication failed: %s" % e |
1363 | + except ConnectionError, e: |
1364 | + print "Connection failed: %s" % e |
1365 | + except DirectoryNotInitializedError: |
1366 | + print "Directory not initialized; " \ |
1367 | + "use --init [DIRECTORY] to initialize it." |
1368 | + except DirectoryAlreadyInitializedError: |
1369 | + print "Directory already initialized." |
1370 | + except NoSuchShareError: |
1371 | + print "No matching share found." |
1372 | + except ReadOnlyShareError: |
1373 | + print "The selected action isn't possible on a read-only share." |
1374 | + except (ForcedShutdown, KeyboardInterrupt): |
1375 | + print "Interrupted!" |
1376 | + except TreesDiffer, e: |
1377 | + if not e.quiet: |
1378 | + print "Trees differ." |
1379 | + else: |
1380 | + return 0 |
1381 | + return 1 |
1382 | + |
1383 | +if __name__ == "__main__": |
1384 | + # set the process name; this only works on Windows
1385 | + sys.argv[0] = "Ubuntuone" |
1386 | + main(sys.argv[1:]) |
1387 | \ No newline at end of file |
1388 | |
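The `MERGE_ACTIONS` table in main.py maps each action name to a merge strategy plus upload/download flags, with `'auto'` resolved by `do_sync` from the share's writability. A small sketch of that dispatch (strategy class names are kept as strings here to stay self-contained):

```python
MERGE_ACTIONS = {
    # action: (merge_class, should_upload, should_download)
    'sync': ('SyncMerge', True, True),
    'clobber-server': ('ClobberServerMerge', True, False),
    'clobber-local': ('ClobberLocalMerge', False, True),
    'upload': ('SyncMerge', True, False),
    'download': ('SyncMerge', False, True),
}

def resolve_action(action, writable):
    """Mimic do_sync: 'auto' becomes 'sync' on writable shares,
    'download' on read-only ones; anything else is looked up as-is."""
    if action == 'auto':
        action = 'sync' if writable else 'download'
    return MERGE_ACTIONS[action]

auto_rw = resolve_action('auto', writable=True)
auto_ro = resolve_action('auto', writable=False)
```

`do_sync` then checks `should_upload and not writable` and raises `ReadOnlyShareError`, so an explicit `upload` or `sync` against a read-only share fails early instead of mid-transfer.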
1389 | === added file 'src/u1sync/merge.py' |
1390 | --- src/u1sync/merge.py 1970-01-01 00:00:00 +0000 |
1391 | +++ src/u1sync/merge.py 2010-08-27 14:56:43 +0000 |
1392 | @@ -0,0 +1,186 @@ |
1393 | +# ubuntuone.u1sync.merge |
1394 | +# |
1395 | +# Tree state merging |
1396 | +# |
1397 | +# Author: Tim Cole <tim.cole@canonical.com> |
1398 | +# |
1399 | +# Copyright 2009 Canonical Ltd. |
1400 | +# |
1401 | +# This program is free software: you can redistribute it and/or modify it |
1402 | +# under the terms of the GNU General Public License version 3, as published |
1403 | +# by the Free Software Foundation. |
1404 | +# |
1405 | +# This program is distributed in the hope that it will be useful, but |
1406 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
1407 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
1408 | +# PURPOSE. See the GNU General Public License for more details. |
1409 | +# |
1410 | +# You should have received a copy of the GNU General Public License along |
1411 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
1412 | +"""Code for merging changes between modified trees.""" |
1413 | + |
1414 | +from __future__ import with_statement |
1415 | + |
1416 | +import os |
1417 | + |
1418 | +from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY |
1419 | +from u1sync.genericmerge import ( |
1420 | + MergeNode, generic_merge) |
1421 | +import uuid |
1422 | + |
1423 | + |
1424 | +class NodeTypeMismatchError(Exception): |
1425 | + """Node types don't match.""" |
1426 | + |
1427 | + |
1428 | +def merge_trees(old_local_tree, local_tree, old_remote_tree, remote_tree, |
1429 | + merge_action): |
1430 | + """Performs a tree merge using the given merge action.""" |
1431 | + |
1432 | + def pre_merge(nodes, name, partial_parent): |
1433 | + """Accumulates path and determines merged node type.""" |
1434 | + old_local_node, local_node, old_remote_node, remote_node = nodes |
1435 | + # pylint: disable-msg=W0612 |
1436 | + (parent_path, parent_type) = partial_parent |
1437 | + path = os.path.join(parent_path, name.encode("utf-8")) |
1438 | + node_type = merge_action.get_node_type(old_local_node=old_local_node, |
1439 | + local_node=local_node, |
1440 | + old_remote_node=old_remote_node, |
1441 | + remote_node=remote_node, |
1442 | + path=path) |
1443 | + return (path, node_type) |
1444 | + |
1445 | + def post_merge(nodes, partial_result, child_results): |
1446 | + """Drops deleted children and merges node.""" |
1447 | + old_local_node, local_node, old_remote_node, remote_node = nodes |
1448 | + # pylint: disable-msg=W0612 |
1449 | + (path, node_type) = partial_result |
1450 | + if node_type == DIRECTORY: |
1451 | + merged_children = dict([(name, child) for (name, child) |
1452 | + in child_results.iteritems() |
1453 | + if child is not None]) |
1454 | + else: |
1455 | + merged_children = None |
1456 | + return merge_action.merge_node(old_local_node=old_local_node, |
1457 | + local_node=local_node, |
1458 | + old_remote_node=old_remote_node, |
1459 | + remote_node=remote_node, |
1460 | + node_type=node_type, |
1461 | + merged_children=merged_children) |
1462 | + |
1463 | + return generic_merge(trees=[old_local_tree, local_tree, |
1464 | + old_remote_tree, remote_tree], |
1465 | + pre_merge=pre_merge, post_merge=post_merge, |
1466 | + name=u"", partial_parent=("", None)) |
1467 | + |
1468 | + |
1469 | +class SyncMerge(object): |
1470 | + """Performs a bidirectional sync merge.""" |
1471 | + |
1472 | + def get_node_type(self, old_local_node, local_node, |
1473 | + old_remote_node, remote_node, path): |
1474 | + """Requires that all node types match.""" |
1475 | + node_type = None |
1476 | + for node in (old_local_node, local_node, remote_node): |
1477 | + if node is not None: |
1478 | + if node_type is not None: |
1479 | + if node.node_type != node_type: |
1480 | + message = "Node types don't match for %s" % path |
1481 | + raise NodeTypeMismatchError(message) |
1482 | + else: |
1483 | + node_type = node.node_type |
1484 | + return node_type |
1485 | + |
1486 | + def merge_node(self, old_local_node, local_node, |
1487 | + old_remote_node, remote_node, node_type, merged_children): |
1488 | + """Performs bidirectional merge of node state.""" |
1489 | + |
1490 | + def node_content_hash(node): |
1491 | + """Returns node content hash if node is not None""" |
1492 | + return node.content_hash if node is not None else None |
1493 | + |
1494 | + old_local_content_hash = node_content_hash(old_local_node) |
1495 | + local_content_hash = node_content_hash(local_node) |
1496 | + old_remote_content_hash = node_content_hash(old_remote_node) |
1497 | + remote_content_hash = node_content_hash(remote_node) |
1498 | + |
1499 | + locally_deleted = old_local_node is not None and local_node is None |
1500 | + deleted_on_server = old_remote_node is not None and remote_node is None |
1501 | + # updated means modified or created |
1502 | + locally_updated = not locally_deleted and \ |
1503 | + old_local_content_hash != local_content_hash |
1504 | + updated_on_server = not deleted_on_server and \ |
1505 | + old_remote_content_hash != remote_content_hash |
1506 | + |
1507 | + has_merged_children = merged_children is not None and \ |
1508 | + len(merged_children) > 0 |
1509 | + |
1510 | + either_node_exists = local_node is not None or remote_node is not None |
1511 | + should_delete = (locally_deleted and not updated_on_server) or \ |
1512 | + (deleted_on_server and not locally_updated) |
1513 | + |
1514 | + if (either_node_exists and not should_delete) or has_merged_children: |
1515 | + if node_type != DIRECTORY and \ |
1516 | + locally_updated and updated_on_server and \ |
1517 | + local_content_hash != remote_content_hash: |
1518 | + # local_content_hash will become the merged content_hash; |
1519 | + # save remote_content_hash in conflict info |
1520 | + conflict_info = (str(uuid.uuid4()), remote_content_hash) |
1521 | + else: |
1522 | + conflict_info = None |
1523 | + node_uuid = remote_node.uuid if remote_node is not None else None |
1524 | + if locally_updated: |
1525 | + content_hash = local_content_hash or remote_content_hash |
1526 | + else: |
1527 | + content_hash = remote_content_hash or local_content_hash |
1528 | + return MergeNode(node_type=node_type, uuid=node_uuid, |
1529 | + children=merged_children, content_hash=content_hash, |
1530 | + conflict_info=conflict_info) |
1531 | + else: |
1532 | + return None |
1533 | + |
1534 | + |
1535 | +class ClobberServerMerge(object): |
1536 | + """Clobber server to match local state.""" |
1537 | + |
1538 | + def get_node_type(self, old_local_node, local_node, |
1539 | + old_remote_node, remote_node, path): |
1540 | + """Return local node type.""" |
1541 | + if local_node is not None: |
1542 | + return local_node.node_type |
1543 | + else: |
1544 | + return None |
1545 | + |
1546 | + def merge_node(self, old_local_node, local_node, |
1547 | + old_remote_node, remote_node, node_type, merged_children): |
1548 | + """Copy local node and associate with remote uuid (if applicable).""" |
1549 | + if local_node is None: |
1550 | + return None |
1551 | + if remote_node is not None: |
1552 | + node_uuid = remote_node.uuid |
1553 | + else: |
1554 | + node_uuid = None |
1555 | + return MergeNode(node_type=local_node.node_type, uuid=node_uuid, |
1556 | + content_hash=local_node.content_hash, |
1557 | + children=merged_children) |
1558 | + |
1559 | + |
1560 | +class ClobberLocalMerge(object): |
1561 | + """Clobber local state to match server.""" |
1562 | + |
1563 | + def get_node_type(self, old_local_node, local_node, |
1564 | + old_remote_node, remote_node, path): |
1565 | + """Return remote node type.""" |
1566 | + if remote_node is not None: |
1567 | + return remote_node.node_type |
1568 | + else: |
1569 | + return None |
1570 | + |
1571 | + def merge_node(self, old_local_node, local_node, |
1572 | + old_remote_node, remote_node, node_type, merged_children): |
1573 | + """Copy the remote node.""" |
1574 | + if remote_node is None: |
1575 | + return None |
1576 | + return MergeNode(node_type=node_type, uuid=remote_node.uuid, |
1577 | + content_hash=remote_node.content_hash, |
1578 | + children=merged_children) |
1579 | |
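All three merge policies (`SyncMerge` earlier in this file, plus the two clobber variants above) expose the same pair of hooks, `get_node_type` and `merge_node`, so the generic merge walk can stay policy-agnostic. A toy reduction of the type-selection half, with simplified signatures and a stand-in `Node` class rather than u1sync's real `MergeNode`:

```python
class Node(object):
    """Simplified stand-in for u1sync's MergeNode."""
    def __init__(self, node_type):
        self.node_type = node_type

class ClobberLocal(object):
    """Local state is discarded, so the remote node's type wins."""
    def get_node_type(self, local_node, remote_node):
        return remote_node.node_type if remote_node is not None else None

class ClobberServer(object):
    """Server state is discarded, so the local node's type wins."""
    def get_node_type(self, local_node, remote_node):
        return local_node.node_type if local_node is not None else None

local, remote = Node("FILE"), Node("DIRECTORY")
print(ClobberLocal().get_node_type(local, remote))   # DIRECTORY
print(ClobberServer().get_node_type(local, remote))  # FILE
```

The real classes take the old local/remote snapshots as well, but the selection logic is exactly this one-sided preference.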
1580 | === added file 'src/u1sync/metadata.py' |
1581 | --- src/u1sync/metadata.py 1970-01-01 00:00:00 +0000 |
1582 | +++ src/u1sync/metadata.py 2010-08-27 14:56:43 +0000 |
1583 | @@ -0,0 +1,145 @@ |
1584 | +# ubuntuone.u1sync.metadata |
1585 | +# |
1586 | +# u1sync metadata routines |
1587 | +# |
1588 | +# Author: Tim Cole <tim.cole@canonical.com> |
1589 | +# |
1590 | +# Copyright 2009 Canonical Ltd. |
1591 | +# |
1592 | +# This program is free software: you can redistribute it and/or modify it |
1593 | +# under the terms of the GNU General Public License version 3, as published |
1594 | +# by the Free Software Foundation. |
1595 | +# |
1596 | +# This program is distributed in the hope that it will be useful, but |
1597 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
1598 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
1599 | +# PURPOSE. See the GNU General Public License for more details. |
1600 | +# |
1601 | +# You should have received a copy of the GNU General Public License along |
1602 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
1603 | +"""Routines for loading/storing u1sync mirror metadata.""" |
1604 | + |
1605 | +from __future__ import with_statement |
1606 | + |
1607 | +import os |
1608 | +import cPickle as pickle |
1609 | +from errno import ENOENT |
1610 | +from contextlib import contextmanager |
1611 | +from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY |
1612 | +from u1sync.merge import MergeNode |
1613 | +from u1sync.utils import safe_unlink |
1614 | +import uuid |
1615 | + |
1616 | +class Metadata(object): |
1617 | + """Object representing mirror metadata.""" |
1618 | + def __init__(self, local_tree=None, remote_tree=None, share_uuid=None, |
1619 | + root_uuid=None, path=None): |
1620 | + """Populate fields.""" |
1621 | + self.local_tree = local_tree |
1622 | + self.remote_tree = remote_tree |
1623 | + self.share_uuid = share_uuid |
1624 | + self.root_uuid = root_uuid |
1625 | + self.path = path |
1626 | + |
1627 | +def read(metadata_dir): |
1628 | + """Read metadata for a mirror rooted at directory.""" |
1629 | + index_file = os.path.join(metadata_dir, "local-index") |
1630 | + share_uuid_file = os.path.join(metadata_dir, "share-uuid") |
1631 | + root_uuid_file = os.path.join(metadata_dir, "root-uuid") |
1632 | + path_file = os.path.join(metadata_dir, "path") |
1633 | + |
1634 | + index = read_pickle_file(index_file, {}) |
1635 | + share_uuid = read_uuid_file(share_uuid_file) |
1636 | + root_uuid = read_uuid_file(root_uuid_file) |
1637 | + path = read_string_file(path_file, '/') |
1638 | + |
1639 | + local_tree = index.get("tree", None) |
1640 | + remote_tree = index.get("remote_tree", None) |
1641 | + |
1642 | + if local_tree is None: |
1643 | + local_tree = MergeNode(node_type=DIRECTORY, children={}) |
1644 | + if remote_tree is None: |
1645 | + remote_tree = MergeNode(node_type=DIRECTORY, children={}) |
1646 | + |
1647 | + return Metadata(local_tree=local_tree, remote_tree=remote_tree, |
1648 | + share_uuid=share_uuid, root_uuid=root_uuid, |
1649 | + path=path) |
1650 | + |
1651 | +def write(metadata_dir, info): |
1652 | + """Writes all metadata for the mirror rooted at directory.""" |
1653 | + share_uuid_file = os.path.join(metadata_dir, "share-uuid") |
1654 | + root_uuid_file = os.path.join(metadata_dir, "root-uuid") |
1655 | + index_file = os.path.join(metadata_dir, "local-index") |
1656 | + path_file = os.path.join(metadata_dir, "path") |
1657 | + if info.share_uuid is not None: |
1658 | + write_uuid_file(share_uuid_file, info.share_uuid) |
1659 | + else: |
1660 | + safe_unlink(share_uuid_file) |
1661 | + if info.root_uuid is not None: |
1662 | + write_uuid_file(root_uuid_file, info.root_uuid) |
1663 | + else: |
1664 | + safe_unlink(root_uuid_file) |
1665 | + write_string_file(path_file, info.path) |
1666 | + write_pickle_file(index_file, {"tree": info.local_tree, |
1667 | + "remote_tree": info.remote_tree}) |
1668 | + |
1669 | +def write_pickle_file(filename, value): |
1670 | + """Writes a pickled python object to a file.""" |
1671 | + with atomic_update_file(filename) as stream: |
1672 | + pickle.dump(value, stream, 2) |
1673 | + |
1674 | +def write_string_file(filename, value): |
1675 | + """Writes a string to a file with an added line feed, or |
1676 | + deletes the file if value is None. |
1677 | + """ |
1678 | + if value is not None: |
1679 | + with atomic_update_file(filename) as stream: |
1680 | + stream.write(value) |
1681 | + stream.write('\n') |
1682 | + else: |
1683 | + safe_unlink(filename) |
1684 | + |
1685 | +def write_uuid_file(filename, value): |
1686 | + """Writes a UUID to a file.""" |
1687 | + write_string_file(filename, str(value)) |
1688 | + |
1689 | +def read_pickle_file(filename, default_value=None): |
1690 | + """Reads a pickled python object from a file.""" |
1691 | + try: |
1692 | + with open(filename, "rb") as stream: |
1693 | + return pickle.load(stream) |
1694 | + except IOError, e: |
1695 | + if e.errno != ENOENT: |
1696 | + raise |
1697 | + return default_value |
1698 | + |
1699 | +def read_string_file(filename, default_value=None): |
1700 | + """Reads a string from a file, discarding the final character.""" |
1701 | + try: |
1702 | + with open(filename, "r") as stream: |
1703 | + return stream.read()[:-1] |
1704 | + except IOError, e: |
1705 | + if e.errno != ENOENT: |
1706 | + raise |
1707 | + return default_value |
1708 | + |
1709 | +def read_uuid_file(filename, default_value=None): |
1710 | + """Reads a UUID from a file.""" |
1711 | + try: |
1712 | + with open(filename, "r") as stream: |
1713 | + return uuid.UUID(stream.read()[:-1]) |
1714 | + except IOError, e: |
1715 | + if e.errno != ENOENT: |
1716 | + raise |
1717 | + return default_value |
1718 | + |
1719 | +@contextmanager |
1720 | +def atomic_update_file(filename): |
1721 | + """Returns a context manager for atomically updating a file.""" |
1722 | + temp_filename = "%s.%s" % (filename, uuid.uuid4()) |
1723 | + try: |
1724 | + with open(temp_filename, "w") as stream: |
1725 | + yield stream |
1726 | + os.rename(temp_filename, filename) |
1727 | + finally: |
1728 | + safe_unlink(temp_filename) |
1729 | |
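`atomic_update_file` above implements write-to-temp-then-rename, so a crash mid-write never leaves a truncated metadata file behind. A standalone sketch of the same pattern (the `atomic_update` name and `demo-path.txt` file are illustrative only, not part of u1sync):

```python
import os
import uuid
from contextlib import contextmanager

@contextmanager
def atomic_update(filename):
    """Write to a uniquely named temp file, then rename it into place."""
    temp = "%s.%s" % (filename, uuid.uuid4())
    try:
        with open(temp, "w") as stream:
            yield stream
        # os.rename is atomic on POSIX filesystems; on Windows it cannot
        # overwrite an existing target, so u1sync cleans up around it.
        os.rename(temp, filename)
    finally:
        if os.path.exists(temp):  # rename did not happen; clean up
            os.unlink(temp)

with atomic_update("demo-path.txt") as f:
    f.write("/some/path\n")
```

Readers of `demo-path.txt` see either the old contents or the complete new contents, never a partial write.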
1730 | === added file 'src/u1sync/scan.py' |
1731 | --- src/u1sync/scan.py 1970-01-01 00:00:00 +0000 |
1732 | +++ src/u1sync/scan.py 2010-08-27 14:56:43 +0000 |
1733 | @@ -0,0 +1,102 @@ |
1734 | +# ubuntuone.u1sync.scan |
1735 | +# |
1736 | +# Directory scanning |
1737 | +# |
1738 | +# Author: Tim Cole <tim.cole@canonical.com> |
1739 | +# |
1740 | +# Copyright 2009 Canonical Ltd. |
1741 | +# |
1742 | +# This program is free software: you can redistribute it and/or modify it |
1743 | +# under the terms of the GNU General Public License version 3, as published |
1744 | +# by the Free Software Foundation. |
1745 | +# |
1746 | +# This program is distributed in the hope that it will be useful, but |
1747 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
1748 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
1749 | +# PURPOSE. See the GNU General Public License for more details. |
1750 | +# |
1751 | +# You should have received a copy of the GNU General Public License along |
1752 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
1753 | +"""Code for scanning local directory state.""" |
1754 | + |
1755 | +from __future__ import with_statement |
1756 | + |
1757 | +import os |
1758 | +import hashlib |
1759 | +import shutil |
1760 | +from errno import ENOTDIR, EINVAL |
1761 | +import sys |
1762 | + |
1763 | +EMPTY_HASH = "sha1:%s" % hashlib.sha1().hexdigest() |
1764 | + |
1765 | +from ubuntuone.storageprotocol.dircontent_pb2 import \ |
1766 | + DIRECTORY, FILE, SYMLINK |
1767 | +from u1sync.genericmerge import MergeNode |
1768 | +from u1sync.utils import should_sync |
1769 | + |
1770 | +def scan_directory(path, display_path="", quiet=False): |
1771 | + """Scans a local directory and builds an in-memory tree from it.""" |
1772 | + if display_path != "" and not quiet: |
1773 | + print display_path |
1774 | + |
1775 | + link_target = None |
1776 | + child_names = None |
1777 | + try: |
1778 | + print "Path is " + str(path) |
1779 | + if sys.platform == "win32": |
1780 | + if path.endswith(".lnk") or path.endswith(".url"): |
1781 | + import win32com.client |
1782 | + import pythoncom |
1783 | + pythoncom.CoInitialize() |
1784 | + shell = win32com.client.Dispatch("WScript.Shell") |
1785 | + shortcut = shell.CreateShortCut(path) |
1786 | + print(shortcut.Targetpath) |
1787 | + link_target = shortcut.Targetpath |
1788 | + else: |
1789 | + link_target = None |
1790 | + if os.path.isdir(path): |
1791 | + child_names = os.listdir(path) |
1792 | + else: |
1793 | + link_target = os.readlink(path) |
1794 | + except OSError, e: |
1795 | + if e.errno != EINVAL: |
1796 | + raise |
1797 | + try: |
1798 | + child_names = os.listdir(path) |
1799 | + except OSError, e: |
1800 | + if e.errno != ENOTDIR: |
1801 | + raise |
1802 | + |
1803 | + if link_target is not None: |
1804 | + # symlink |
1805 | + sum = hashlib.sha1() |
1806 | + sum.update(link_target) |
1807 | + content_hash = "sha1:%s" % sum.hexdigest() |
1808 | + return MergeNode(node_type=SYMLINK, content_hash=content_hash) |
1809 | + elif child_names is not None: |
1810 | + # directory |
1811 | + child_names = [n for n in child_names if should_sync(n.decode("utf-8"))] |
1812 | + child_paths = [(os.path.join(path, child_name), |
1813 | + os.path.join(display_path, child_name)) \ |
1814 | + for child_name in child_names] |
1815 | + children = [scan_directory(child_path, child_display_path, quiet) \ |
1816 | + for (child_path, child_display_path) in child_paths] |
1817 | + unicode_child_names = [n.decode("utf-8") for n in child_names] |
1818 | + children = dict(zip(unicode_child_names, children)) |
1819 | + return MergeNode(node_type=DIRECTORY, children=children) |
1820 | + else: |
1821 | + # regular file |
1822 | + sum = hashlib.sha1() |
1823 | + |
1824 | + |
1825 | + class HashStream(object): |
1826 | + """Stream that computes hashes.""" |
1827 | + def write(self, bytes): |
1828 | + """Accumulate bytes.""" |
1829 | + sum.update(bytes) |
1830 | + |
1831 | + |
1832 | +        with open(path, "rb") as stream:
1833 | + shutil.copyfileobj(stream, HashStream()) |
1834 | + content_hash = "sha1:%s" % sum.hexdigest() |
1835 | + return MergeNode(node_type=FILE, content_hash=content_hash) |
1836 | |
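Throughout `scan.py` and the merge code, file content is identified by strings of the form `sha1:<hexdigest>`, and `EMPTY_HASH` is simply that string for empty input. A minimal sketch (the `content_hash` helper name is ours, not u1sync's):

```python
import hashlib

def content_hash(data):
    """Label a SHA-1 digest the way u1sync's trees store content hashes."""
    return "sha1:%s" % hashlib.sha1(data).hexdigest()

# The hash of empty input, matching scan.py's EMPTY_HASH constant.
print(content_hash(b""))  # sha1:da39a3ee5e6b4b0d3255bfef95601890afd80709
```

During merging, two files compare equal exactly when these strings match, so no byte-by-byte comparison is ever needed.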
1837 | === added file 'src/u1sync/sync.py' |
1838 | --- src/u1sync/sync.py 1970-01-01 00:00:00 +0000 |
1839 | +++ src/u1sync/sync.py 2010-08-27 14:56:43 +0000 |
1840 | @@ -0,0 +1,384 @@ |
1841 | +# ubuntuone.u1sync.sync |
1842 | +# |
1843 | +# State update |
1844 | +# |
1845 | +# Author: Tim Cole <tim.cole@canonical.com> |
1846 | +# |
1847 | +# Copyright 2009 Canonical Ltd. |
1848 | +# |
1849 | +# This program is free software: you can redistribute it and/or modify it |
1850 | +# under the terms of the GNU General Public License version 3, as published |
1851 | +# by the Free Software Foundation. |
1852 | +# |
1853 | +# This program is distributed in the hope that it will be useful, but |
1854 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
1855 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
1856 | +# PURPOSE. See the GNU General Public License for more details. |
1857 | +# |
1858 | +# You should have received a copy of the GNU General Public License along |
1859 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
1860 | +"""After merging, these routines are used to synchronize state locally and on |
1861 | +the server to correspond to the merged result.""" |
1862 | + |
1863 | +from __future__ import with_statement |
1864 | + |
1865 | +import os |
1866 | + |
1867 | +EMPTY_HASH = "" |
1868 | +UPLOAD_SYMBOL = u"\u25b2".encode("utf-8") |
1869 | +DOWNLOAD_SYMBOL = u"\u25bc".encode("utf-8") |
1870 | +CONFLICT_SYMBOL = "!" |
1871 | +DELETE_SYMBOL = "X" |
1872 | + |
1873 | +from ubuntuone.storageprotocol import request |
1874 | +from ubuntuone.storageprotocol.dircontent_pb2 import ( |
1875 | + DIRECTORY, SYMLINK) |
1876 | +from u1sync.genericmerge import ( |
1877 | + MergeNode, generic_merge) |
1878 | +from u1sync.utils import safe_mkdir |
1879 | +from u1sync.client import UnsupportedOperationError |
1880 | + |
1881 | +def get_conflict_path(path, conflict_info): |
1882 | + """Returns path for conflict file corresponding to path.""" |
1883 | + dir, name = os.path.split(path) |
1884 | + unique_id = conflict_info[0] |
1885 | + return os.path.join(dir, "conflict-%s-%s" % (unique_id, name)) |
1886 | + |
1887 | +def name_from_path(path): |
1888 | + """Returns unicode name from last path component.""" |
1889 | + return os.path.split(path)[1].decode("UTF-8") |
1890 | + |
1891 | + |
1892 | +class NodeSyncError(Exception): |
1893 | + """Error syncing node.""" |
1894 | + |
1895 | + |
1896 | +class NodeCreateError(NodeSyncError): |
1897 | + """Error creating node.""" |
1898 | + |
1899 | + |
1900 | +class NodeUpdateError(NodeSyncError): |
1901 | + """Error updating node.""" |
1902 | + |
1903 | + |
1904 | +class NodeDeleteError(NodeSyncError): |
1905 | + """Error deleting node.""" |
1906 | + |
1907 | + |
1908 | +def sync_tree(merged_tree, original_tree, sync_mode, path, quiet): |
1909 | + """Performs actual synchronization.""" |
1910 | + |
1911 | + def pre_merge(nodes, name, partial_parent): |
1912 | + """Create nodes and write content as required.""" |
1913 | + (merged_node, original_node) = nodes |
1914 | + # pylint: disable-msg=W0612 |
1915 | + (parent_path, parent_display_path, parent_uuid, parent_synced) \ |
1916 | + = partial_parent |
1917 | + |
1918 | + utf8_name = name.encode("utf-8") |
1919 | + path = os.path.join(parent_path, utf8_name) |
1920 | + display_path = os.path.join(parent_display_path, utf8_name) |
1921 | + node_uuid = None |
1922 | + |
1923 | + synced = False |
1924 | + if merged_node is not None: |
1925 | + if merged_node.node_type == DIRECTORY: |
1926 | + if original_node is not None: |
1927 | + synced = True |
1928 | + node_uuid = original_node.uuid |
1929 | + else: |
1930 | + if not quiet: |
1931 | + print "%s %s" % (sync_mode.symbol, display_path) |
1932 | + try: |
1933 | + create_dir = sync_mode.create_directory |
1934 | + node_uuid = create_dir(parent_uuid=parent_uuid, |
1935 | + path=path) |
1936 | + synced = True |
1937 | + except NodeCreateError, e: |
1938 | + print e |
1939 | + elif merged_node.content_hash is None: |
1940 | + if not quiet: |
1941 | + print "? %s" % display_path |
1942 | + elif original_node is None or \ |
1943 | + original_node.content_hash != merged_node.content_hash or \ |
1944 | + merged_node.conflict_info is not None: |
1945 | + conflict_info = merged_node.conflict_info |
1946 | + if conflict_info is not None: |
1947 | + conflict_symbol = CONFLICT_SYMBOL |
1948 | + else: |
1949 | + conflict_symbol = " " |
1950 | + if not quiet: |
1951 | + print "%s %s %s" % (sync_mode.symbol, conflict_symbol, |
1952 | + display_path) |
1953 | + if original_node is not None: |
1954 | + node_uuid = original_node.uuid or merged_node.uuid |
1955 | + original_hash = original_node.content_hash or EMPTY_HASH |
1956 | + else: |
1957 | + node_uuid = merged_node.uuid |
1958 | + original_hash = EMPTY_HASH |
1959 | + try: |
1960 | + sync_mode.write_file(node_uuid=node_uuid, |
1961 | + content_hash= |
1962 | + merged_node.content_hash, |
1963 | + old_content_hash=original_hash, |
1964 | + path=path, |
1965 | + parent_uuid=parent_uuid, |
1966 | + conflict_info=conflict_info, |
1967 | + node_type=merged_node.node_type) |
1968 | + synced = True |
1969 | + except NodeSyncError, e: |
1970 | + print e |
1971 | + else: |
1972 | + synced = True |
1973 | + |
1974 | + return (path, display_path, node_uuid, synced) |
1975 | + |
1976 | + def post_merge(nodes, partial_result, child_results): |
1977 | + """Delete nodes.""" |
1978 | + (merged_node, original_node) = nodes |
1979 | + # pylint: disable-msg=W0612 |
1980 | + (path, display_path, node_uuid, synced) = partial_result |
1981 | + |
1982 | + if merged_node is None: |
1983 | + assert original_node is not None |
1984 | + if not quiet: |
1985 | + print "%s %s %s" % (sync_mode.symbol, DELETE_SYMBOL, |
1986 | + display_path) |
1987 | + try: |
1988 | + if original_node.node_type == DIRECTORY: |
1989 | + sync_mode.delete_directory(node_uuid=original_node.uuid, |
1990 | + path=path) |
1991 | + else: |
1992 | + # files or symlinks |
1993 | + sync_mode.delete_file(node_uuid=original_node.uuid, |
1994 | + path=path) |
1995 | + synced = True |
1996 | + except NodeDeleteError, e: |
1997 | + print e |
1998 | + |
1999 | + if synced: |
2000 | + model_node = merged_node |
2001 | + else: |
2002 | + model_node = original_node |
2003 | + |
2004 | + if model_node is not None: |
2005 | + if model_node.node_type == DIRECTORY: |
2006 | + child_iter = child_results.iteritems() |
2007 | + merged_children = dict([(name, child) for (name, child) |
2008 | + in child_iter |
2009 | + if child is not None]) |
2010 | + else: |
2011 | + # if there are children here it's because they failed to delete |
2012 | + merged_children = None |
2013 | + return MergeNode(node_type=model_node.node_type, |
2014 | + uuid=model_node.uuid, |
2015 | + children=merged_children, |
2016 | + content_hash=model_node.content_hash) |
2017 | + else: |
2018 | + return None |
2019 | + |
2020 | + return generic_merge(trees=[merged_tree, original_tree], |
2021 | + pre_merge=pre_merge, post_merge=post_merge, |
2022 | + partial_parent=(path, "", None, True), name=u"") |
2023 | + |
2024 | +def download_tree(merged_tree, local_tree, client, share_uuid, path, dry_run, |
2025 | + quiet): |
2026 | + """Downloads a directory.""" |
2027 | + if dry_run: |
2028 | + downloader = DryRun(symbol=DOWNLOAD_SYMBOL) |
2029 | + else: |
2030 | + downloader = Downloader(client=client, share_uuid=share_uuid) |
2031 | + return sync_tree(merged_tree=merged_tree, original_tree=local_tree, |
2032 | + sync_mode=downloader, path=path, quiet=quiet) |
2033 | + |
2034 | +def upload_tree(merged_tree, remote_tree, client, share_uuid, path, dry_run, |
2035 | + quiet): |
2036 | + """Uploads a directory.""" |
2037 | + if dry_run: |
2038 | + uploader = DryRun(symbol=UPLOAD_SYMBOL) |
2039 | + else: |
2040 | + uploader = Uploader(client=client, share_uuid=share_uuid) |
2041 | + return sync_tree(merged_tree=merged_tree, original_tree=remote_tree, |
2042 | + sync_mode=uploader, path=path, quiet=quiet) |
2043 | + |
2044 | + |
2045 | +class DryRun(object): |
2046 | + """A class which implements the sync interface but does nothing.""" |
2047 | + def __init__(self, symbol): |
2048 | + """Initializes a DryRun instance.""" |
2049 | + self.symbol = symbol |
2050 | + |
2051 | + def create_directory(self, parent_uuid, path): |
2052 | + """Doesn't create a directory.""" |
2053 | + return None |
2054 | + |
2055 | + def write_file(self, node_uuid, old_content_hash, content_hash, |
2056 | + parent_uuid, path, conflict_info, node_type): |
2057 | + """Doesn't write a file.""" |
2058 | + return None |
2059 | + |
2060 | + def delete_directory(self, node_uuid, path): |
2061 | + """Doesn't delete a directory.""" |
2062 | + |
2063 | + def delete_file(self, node_uuid, path): |
2064 | + """Doesn't delete a file.""" |
2065 | + |
2066 | + |
2067 | +class Downloader(object): |
2068 | + """A class which implements the download half of syncing.""" |
2069 | + def __init__(self, client, share_uuid): |
2070 | + """Initializes a Downloader instance.""" |
2071 | + self.client = client |
2072 | + self.share_uuid = share_uuid |
2073 | + self.symbol = DOWNLOAD_SYMBOL |
2074 | + |
2075 | + def create_directory(self, parent_uuid, path): |
2076 | + """Creates a directory.""" |
2077 | + try: |
2078 | + safe_mkdir(path) |
2079 | + except OSError, e: |
2080 | + raise NodeCreateError("Error creating local directory %s: %s" % \ |
2081 | + (path, e)) |
2082 | + return None |
2083 | + |
2084 | + def write_file(self, node_uuid, old_content_hash, content_hash, |
2085 | + parent_uuid, path, conflict_info, node_type): |
2086 | + """Creates a file and downloads new content for it.""" |
2087 | + if conflict_info: |
2088 | + # download to conflict file rather than overwriting local changes |
2089 | + path = get_conflict_path(path, conflict_info) |
2090 | + content_hash = conflict_info[1] |
2091 | + try: |
2092 | + if node_type == SYMLINK: |
2093 | + self.client.download_string(share_uuid= |
2094 | + self.share_uuid, |
2095 | + node_uuid=node_uuid, |
2096 | + content_hash=content_hash) |
2097 | + else: |
2098 | + self.client.download_file(share_uuid=self.share_uuid, |
2099 | + node_uuid=node_uuid, |
2100 | + content_hash=content_hash, |
2101 | + filename=path) |
2102 | + except (request.StorageRequestError, UnsupportedOperationError), e: |
2103 | + if os.path.exists(path): |
2104 | + raise NodeUpdateError("Error downloading content for %s: %s" %\ |
2105 | + (path, e)) |
2106 | + else: |
2107 | + raise NodeCreateError("Error locally creating %s: %s" % \ |
2108 | + (path, e)) |
2109 | + |
2110 | + def delete_directory(self, node_uuid, path): |
2111 | + """Deletes a directory.""" |
2112 | + try: |
2113 | + os.rmdir(path) |
2114 | + except OSError, e: |
2115 | + raise NodeDeleteError("Error locally deleting %s: %s" % (path, e)) |
2116 | + |
2117 | + def delete_file(self, node_uuid, path): |
2118 | + """Deletes a file.""" |
2119 | + try: |
2120 | + os.unlink(path) |
2121 | + except OSError, e: |
2122 | + raise NodeDeleteError("Error locally deleting %s: %s" % (path, e)) |
2123 | + |
2124 | + |
2125 | +class Uploader(object): |
2126 | + """A class which implements the upload half of syncing.""" |
2127 | + def __init__(self, client, share_uuid): |
2128 | + """Initializes an uploader instance.""" |
2129 | + self.client = client |
2130 | + self.share_uuid = share_uuid |
2131 | + self.symbol = UPLOAD_SYMBOL |
2132 | + |
2133 | + def create_directory(self, parent_uuid, path): |
2134 | + """Creates a directory on the server.""" |
2135 | + name = name_from_path(path) |
2136 | + try: |
2137 | + return self.client.create_directory(share_uuid=self.share_uuid, |
2138 | + parent_uuid=parent_uuid, |
2139 | + name=name) |
2140 | + except (request.StorageRequestError, UnsupportedOperationError), e: |
2141 | + raise NodeCreateError("Error remotely creating %s: %s" % \ |
2142 | + (path, e)) |
2143 | + |
2144 | + def write_file(self, node_uuid, old_content_hash, content_hash, |
2145 | + parent_uuid, path, conflict_info, node_type): |
2146 | + """Creates a file on the server and uploads new content for it.""" |
2147 | + |
2148 | + if conflict_info: |
2149 | + # move conflicting file out of the way on the server |
2150 | + conflict_path = get_conflict_path(path, conflict_info) |
2151 | + conflict_name = name_from_path(conflict_path) |
2152 | + try: |
2153 | + self.client.move(share_uuid=self.share_uuid, |
2154 | + parent_uuid=parent_uuid, |
2155 | + name=conflict_name, |
2156 | + node_uuid=node_uuid) |
2157 | + except (request.StorageRequestError, UnsupportedOperationError), e: |
2158 | + raise NodeUpdateError("Error remotely renaming %s to %s: %s" %\ |
2159 | + (path, conflict_path, e)) |
2160 | + node_uuid = None |
2161 | + old_content_hash = EMPTY_HASH |
2162 | + |
2163 | + if node_type == SYMLINK: |
2164 | + try: |
2165 | + target = os.readlink(path) |
2166 | + except OSError, e: |
2167 | + raise NodeCreateError("Error retrieving link target " \ |
2168 | + "for %s: %s" % (path, e)) |
2169 | + else: |
2170 | + target = None |
2171 | + |
2172 | + name = name_from_path(path) |
2173 | + if node_uuid is None: |
2174 | + try: |
2175 | + if node_type == SYMLINK: |
2176 | + node_uuid = self.client.create_symlink(share_uuid= |
2177 | + self.share_uuid, |
2178 | + parent_uuid= |
2179 | + parent_uuid, |
2180 | + name=name, |
2181 | + target=target) |
2182 | + old_content_hash = content_hash |
2183 | + else: |
2184 | + node_uuid = self.client.create_file(share_uuid= |
2185 | + self.share_uuid, |
2186 | + parent_uuid= |
2187 | + parent_uuid, |
2188 | + name=name) |
2189 | + except (request.StorageRequestError, UnsupportedOperationError), e: |
2190 | + raise NodeCreateError("Error remotely creating %s: %s" % \ |
2191 | + (path, e)) |
2192 | + |
2193 | + if old_content_hash != content_hash: |
2194 | + try: |
2195 | + if node_type == SYMLINK: |
2196 | + self.client.upload_string(share_uuid=self.share_uuid, |
2197 | + node_uuid=node_uuid, |
2198 | + content_hash=content_hash, |
2199 | + old_content_hash= |
2200 | + old_content_hash, |
2201 | + content=target) |
2202 | + else: |
2203 | + self.client.upload_file(share_uuid=self.share_uuid, |
2204 | + node_uuid=node_uuid, |
2205 | + content_hash=content_hash, |
2206 | + old_content_hash=old_content_hash, |
2207 | + filename=path) |
2208 | + except (request.StorageRequestError, UnsupportedOperationError), e: |
2209 | + raise NodeUpdateError("Error uploading content for %s: %s" % \ |
2210 | + (path, e)) |
2211 | + |
2212 | + def delete_directory(self, node_uuid, path): |
2213 | + """Deletes a directory.""" |
2214 | + try: |
2215 | + self.client.unlink(share_uuid=self.share_uuid, node_uuid=node_uuid) |
2216 | + except (request.StorageRequestError, UnsupportedOperationError), e: |
2217 | + raise NodeDeleteError("Error remotely deleting %s: %s" % (path, e)) |
2218 | + |
2219 | + def delete_file(self, node_uuid, path): |
2220 | + """Deletes a file.""" |
2221 | + try: |
2222 | + self.client.unlink(share_uuid=self.share_uuid, node_uuid=node_uuid) |
2223 | + except (request.StorageRequestError, UnsupportedOperationError), e: |
2224 | + raise NodeDeleteError("Error remotely deleting %s: %s" % (path, e)) |
2225 | |
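`sync_tree` never checks the concrete type of its `sync_mode` argument: `DryRun`, `Downloader` and `Uploader` just share a duck-typed interface (`symbol`, `create_directory`, `write_file`, `delete_directory`, `delete_file`). A recording stand-in makes that contract explicit; this class is illustrative, not part of u1sync:

```python
class RecordingMode(object):
    """Records the operations sync_tree would request, touching nothing."""

    def __init__(self, symbol):
        self.symbol = symbol  # printed next to each path in the output
        self.calls = []

    def create_directory(self, parent_uuid, path):
        self.calls.append(("mkdir", path))
        return None  # a real uploader returns the new node's uuid

    def write_file(self, node_uuid, old_content_hash, content_hash,
                   parent_uuid, path, conflict_info, node_type):
        self.calls.append(("write", path))

    def delete_directory(self, node_uuid, path):
        self.calls.append(("rmdir", path))

    def delete_file(self, node_uuid, path):
        self.calls.append(("unlink", path))

mode = RecordingMode(symbol="?")
mode.create_directory(parent_uuid=None, path="photos")
mode.write_file(None, "", "sha1:abc", None, "photos/a.jpg", None, None)
print(mode.calls)  # [('mkdir', 'photos'), ('write', 'photos/a.jpg')]
```

This is the same trick `DryRun` uses: because only the interface matters, a do-nothing (or here, call-logging) mode can be swapped in without touching the traversal logic.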
2226 | === added file 'src/u1sync/ubuntuone_optparse.py' |
2227 | --- src/u1sync/ubuntuone_optparse.py 1970-01-01 00:00:00 +0000 |
2228 | +++ src/u1sync/ubuntuone_optparse.py 2010-08-27 14:56:43 +0000 |
2229 | @@ -0,0 +1,202 @@ |
2230 | +# ubuntuone.u1sync.ubuntuone_optparse |
2231 | +# |
2232 | +# Prototype directory sync client |
2233 | +# |
2234 | +# Author: Manuel de la Pena <manuel.delapena@canonical.com> |
2235 | +# |
2236 | +# Copyright 2010 Canonical Ltd. |
2237 | +# |
2238 | +# This program is free software: you can redistribute it and/or modify it |
2239 | +# under the terms of the GNU General Public License version 3, as published |
2240 | +# by the Free Software Foundation. |
2241 | +# |
2242 | +# This program is distributed in the hope that it will be useful, but |
2243 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
2244 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
2245 | +# PURPOSE. See the GNU General Public License for more details. |
2246 | +# |
2247 | +# You should have received a copy of the GNU General Public License along |
2248 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
2249 | +import uuid |
2250 | +from oauth.oauth import OAuthToken |
2251 | +from optparse import OptionParser, SUPPRESS_HELP |
2252 | +from u1sync.merge import ( |
2253 | + SyncMerge, ClobberServerMerge, ClobberLocalMerge) |
2254 | + |
2255 | +MERGE_ACTIONS = { |
2256 | + # action: (merge_class, should_upload, should_download) |
2257 | + 'sync': (SyncMerge, True, True), |
2258 | + 'clobber-server': (ClobberServerMerge, True, False), |
2259 | + 'clobber-local': (ClobberLocalMerge, False, True), |
2260 | + 'upload': (SyncMerge, True, False), |
2261 | + 'download': (SyncMerge, False, True), |
2262 | + 'auto': None # special case |
2263 | +} |
2264 | + |
2265 | +DEFAULT_MERGE_ACTION = 'auto' |
2266 | + |
2267 | +class NotParsedOptionsError(Exception): |
2268 | +    """Exception thrown when the options have not been parsed."""
2269 | + |
2270 | +class NotValidatedOptionsError(Exception): |
2271 | + """Exception thrown when the options have not been validated.""" |
2272 | + |
2273 | +class UbuntuOneOptionsParser(OptionParser): |
2274 | +    """Parser for the options passed in the command line."""
2275 | + |
2276 | + def __init__(self): |
2277 | + usage = "Usage: %prog [options] [DIRECTORY]\n" \ |
2278 | + " %prog --authorize [options]\n" \ |
2279 | + " %prog --list-shares [options]\n" \ |
2280 | + " %prog --init [--share=SHARE_UUID] [options] DIRECTORY\n" \ |
2281 | + " %prog --diff [--share=SHARE_UUID] [options] DIRECTORY" |
2282 | + OptionParser.__init__(self, usage=usage) |
2283 | + self._was_validated = False |
2284 | + self._args = None |
2285 | + # add the different options to be used |
2286 | + self.add_option("--port", dest="port", metavar="PORT", |
2287 | + default=443, |
2288 | + help="The port on which to connect to the server") |
2289 | + self.add_option("--host", dest="host", metavar="HOST", |
2290 | + default='fs-1.one.ubuntu.com', |
2291 | + help="The server address") |
2292 | + self.add_option("--realm", dest="realm", metavar="REALM", |
2293 | + default='https://ubuntuone.com', |
2294 | + help="The oauth realm") |
2295 | + self.add_option("--oauth", dest="oauth", metavar="KEY:SECRET", |
2296 | + default=None, |
2297 | + help="Explicitly provide OAuth credentials " |
2298 | + "(default is to query keyring)") |
2299 | + |
2300 | + action_list = ", ".join(sorted(MERGE_ACTIONS.keys())) |
2301 | + self.add_option("--action", dest="action", metavar="ACTION", |
2302 | + default=None, |
2303 | + help="Select a sync action (%s; default is %s)" % \ |
2304 | + (action_list, DEFAULT_MERGE_ACTION)) |
2305 | + |
2306 | + self.add_option("--dry-run", action="store_true", dest="dry_run", |
2307 | + default=False, help="Do a dry run without actually " |
2308 | + "making changes") |
2309 | + self.add_option("--quiet", action="store_true", dest="quiet", |
2310 | + default=False, help="Produces less output") |
2311 | + self.add_option("--authorize", action="store_const", dest="mode", |
2312 | + const="authorize", |
2313 | + help="Authorize this machine") |
2314 | + self.add_option("--list-shares", action="store_const", dest="mode", |
2315 | + const="list-shares", default="sync", |
2316 | + help="List available shares") |
2317 | + self.add_option("--init", action="store_const", dest="mode", |
2318 | + const="init", |
2319 | + help="Initialize a local directory for syncing") |
2320 | + self.add_option("--no-ssl-verify", action="store_true", |
2321 | + dest="no_ssl_verify", |
2322 | + default=False, help=SUPPRESS_HELP) |
2323 | + self.add_option("--diff", action="store_const", dest="mode", |
2324 | + const="diff", |
2325 | + help="Compare tree on server with local tree " \ |
2326 | + "(does not require previous --init)") |
2327 | + self.add_option("--share", dest="share", metavar="SHARE_UUID", |
2328 | + default=None, |
2329 | + help="Sync the directory with a share rather than the " \ |
2330 | + "user's own volume") |
2331 | + self.add_option("--subtree", dest="subtree", metavar="PATH", |
2332 | + default=None, |
2333 | + help="Mirror a subset of the share or volume") |
2334 | + |
2335 | + def get_options(self, arguments): |
2336 | +        """Parses the arguments from the command line."""
2337 | + (self.options, self._args) = \ |
2338 | + self.parse_args(arguments) |
2339 | + self._validate_args() |
2340 | + |
2341 | + def _validate_args(self): |
2342 | + """Validates the args that have been parsed.""" |
2343 | + self._is_only_share() |
2344 | + self._get_directory() |
2345 | + self._validate_action_usage() |
2346 | + self._validate_authorize_usage() |
2347 | + self._validate_subtree_usage() |
2348 | + self._validate_action() |
2349 | + self._validate_oauth() |
2350 | + |
2351 | + def _is_only_share(self): |
2352 | +        """Ensures that the share option is not combined with any other."""
2353 | + if self.options.share is not None and \ |
2354 | + self.options.mode != "init" and \ |
2355 | + self.options.mode != "diff": |
2356 | + self.error("--share is only valid with --init or --diff") |
2357 | + |
2358 | + def _get_directory(self): |
2359 | +        """Gets the directory to be used according to the parameters."""
2361 | + if self.options.mode == "sync" or self.options.mode == "init" or \ |
2362 | + self.options.mode == "diff": |
2363 | + if len(self._args) > 2: |
2364 | + self.error("Too many arguments") |
2365 | + elif len(self._args) < 1: |
2366 | + if self.options.mode == "init" or self.options.mode == "diff": |
2367 | + self.error("--%s requires a directory to " |
2368 | + "be specified" % self.options.mode) |
2369 | + else: |
2370 | + self.options.directory = "." |
2371 | + else: |
2372 | + self.options.directory = self._args[0] |
2373 | + |
2374 | + def _validate_action_usage(self): |
2375 | +        """Ensures that the --action option is correctly used."""
2376 | + if self.options.mode == "init" or \ |
2377 | + self.options.mode == "list-shares" or \ |
2378 | + self.options.mode == "diff" or \ |
2379 | + self.options.mode == "authorize": |
2380 | + if self.options.action is not None: |
2381 | + self.error("--%s does not take the --action parameter" % \ |
2382 | + self.options.mode) |
2383 | + if self.options.dry_run: |
2384 | + self.error("--%s does not take the --dry-run parameter" % \ |
2385 | + self.options.mode) |
2386 | + |
2387 | + def _validate_authorize_usage(self): |
2388 | + """Validates the usage of the authorize option.""" |
2389 | + if self.options.mode == "authorize": |
2390 | + if self.options.oauth is not None: |
2391 | + self.error("--authorize does not take the --oauth parameter") |
2392 | + if self.options.mode == "list-shares" or \ |
2393 | + self.options.mode == "authorize": |
2394 | + if len(self._args) != 0: |
2395 | +                self.error("--%s does not take a directory" % self.options.mode)
2396 | + |
2397 | + def _validate_subtree_usage(self): |
2398 | +        """Validates the usage of the subtree option."""
2399 | + if self.options.mode != "init" and self.options.mode != "diff": |
2400 | + if self.options.subtree is not None: |
2401 | + self.error("--%s does not take the --subtree parameter" % \ |
2402 | + self.options.mode) |
2403 | + |
2404 | + def _validate_action(self): |
2405 | + """Validates the actions passed to the options.""" |
2406 | + if self.options.action is not None and \ |
2407 | + self.options.action not in MERGE_ACTIONS: |
2408 | + self.error("--action: Unknown action %s" % self.options.action) |
2409 | + |
2410 | + if self.options.action is None: |
2411 | + self.options.action = DEFAULT_MERGE_ACTION |
2412 | + |
2413 | + def _validate_oauth(self): |
2414 | +        """Validates that the oauth token was passed."""
2415 | + if self.options.oauth is None: |
2416 | +            self.error("--oauth is currently compulsory.")
2417 | + else: |
2418 | + try: |
2419 | + (key, secret) = self.options.oauth.split(':', 2) |
2420 | + except ValueError: |
2421 | +                self.error("--oauth requires a key and secret together in "
2422 | +                           "the form KEY:SECRET")
2423 | + self.options.token = OAuthToken(key, secret) |
2424 | + |
2425 | + def _validate_share(self): |
2426 | + """Validates the share option""" |
2427 | + if self.options.share is not None: |
2428 | + try: |
2429 | + uuid.UUID(self.options.share) |
2430 | + except ValueError, e: |
2431 | + self.error("Invalid --share argument: %s" % e) |
2432 | \ No newline at end of file |
2433 | |
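The options above rely on optparse's `store_const` idiom: several mutually exclusive flags (`--authorize`, `--list-shares`, `--init`, `--diff`) all write a constant into the same `mode` attribute, so the last flag given wins and `sync` is the default. A minimal standalone sketch of that idiom using the stdlib `optparse` module (the parser here is a throwaway example, not the class from the diff):

```python
from optparse import OptionParser

parser = OptionParser()
# Each mode flag stores a constant into the shared "mode" destination;
# the default only needs to be declared on one of them.
parser.add_option("--list-shares", action="store_const", dest="mode",
                  const="list-shares", default="sync")
parser.add_option("--init", action="store_const", dest="mode", const="init")
parser.add_option("--diff", action="store_const", dest="mode", const="diff")
parser.add_option("--dry-run", action="store_true", dest="dry_run",
                  default=False)

options, args = parser.parse_args(["--init", "some/dir"])
print(options.mode)  # -> init
print(args)          # -> ['some/dir']
```

With no mode flag at all, `options.mode` falls back to `"sync"`, which is why the diff's `_get_directory` treats a bare invocation as a sync of the current directory.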
2434 | === added file 'src/u1sync/utils.py' |
2435 | --- src/u1sync/utils.py 1970-01-01 00:00:00 +0000 |
2436 | +++ src/u1sync/utils.py 2010-08-27 14:56:43 +0000 |
2437 | @@ -0,0 +1,50 @@ |
2438 | +# ubuntuone.u1sync.utils |
2439 | +# |
2440 | +# Miscellaneous utility functions |
2441 | +# |
2442 | +# Author: Tim Cole <tim.cole@canonical.com> |
2443 | +# |
2444 | +# Copyright 2009 Canonical Ltd. |
2445 | +# |
2446 | +# This program is free software: you can redistribute it and/or modify it |
2447 | +# under the terms of the GNU General Public License version 3, as published |
2448 | +# by the Free Software Foundation. |
2449 | +# |
2450 | +# This program is distributed in the hope that it will be useful, but |
2451 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
2452 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
2453 | +# PURPOSE. See the GNU General Public License for more details. |
2454 | +# |
2455 | +# You should have received a copy of the GNU General Public License along |
2456 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
2457 | +"""Miscellaneous utility functions.""" |
2458 | + |
2459 | +import os |
2460 | +from errno import EEXIST, ENOENT |
2461 | +from u1sync.constants import ( |
2462 | + METADATA_DIR_NAME, SPECIAL_FILE_RE) |
2463 | + |
2464 | +def should_sync(filename): |
2465 | + """Returns True if the filename should be synced. |
2466 | + |
2467 | + @param filename: a unicode filename |
2468 | + |
2469 | + """ |
2470 | + return filename != METADATA_DIR_NAME and \ |
2471 | + not SPECIAL_FILE_RE.match(filename) |
2472 | + |
2473 | +def safe_mkdir(path): |
2474 | + """Creates a directory iff it does not already exist.""" |
2475 | + try: |
2476 | + os.mkdir(path) |
2477 | + except OSError, e: |
2478 | + if e.errno != EEXIST: |
2479 | + raise |
2480 | + |
2481 | +def safe_unlink(path): |
2482 | + """Unlinks a file iff it exists.""" |
2483 | + try: |
2484 | + os.unlink(path) |
2485 | + except OSError, e: |
2486 | + if e.errno != ENOENT: |
2487 | + raise |
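`safe_mkdir` and `safe_unlink` above follow the EAFP pattern: attempt the operation and swallow only the one expected errno, re-raising everything else. A quick self-contained sketch of the same helpers in modern `except ... as` syntax (re-declared here for illustration; the diff itself uses Python 2's `except OSError, e` form):

```python
import os
import tempfile
from errno import EEXIST, ENOENT

def safe_mkdir(path):
    """Create a directory only if it does not already exist."""
    try:
        os.mkdir(path)
    except OSError as e:
        if e.errno != EEXIST:  # re-raise anything but "already exists"
            raise

def safe_unlink(path):
    """Remove a file only if it exists."""
    try:
        os.unlink(path)
    except OSError as e:
        if e.errno != ENOENT:  # re-raise anything but "no such file"
            raise

# Both helpers are idempotent, so repeated calls are harmless.
base = tempfile.mkdtemp()
target = os.path.join(base, "sub")
safe_mkdir(target)
safe_mkdir(target)                        # second call is a no-op
safe_unlink(os.path.join(base, "missing"))  # missing file: also a no-op
```

Checking first with `os.path.exists` and then acting would be racy; catching the specific errno makes each call atomic from the caller's point of view.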
Tests Ok, build Ok!