Merge lp:~mandel/ubuntuone-windows-installer/add_python_u1sync into lp:ubuntuone-windows-installer/beta

Proposed by Manuel de la Peña
Status: Merged
Approved by: John Lenton
Approved revision: 56
Merged at revision: 72
Proposed branch: lp:~mandel/ubuntuone-windows-installer/add_python_u1sync
Merge into: lp:ubuntuone-windows-installer/beta
Diff against target: 2487 lines (+2397/-4)
14 files modified
README.txt (+15/-2)
main.build (+11/-2)
src/setup.py (+56/-0)
src/u1sync/__init__.py (+14/-0)
src/u1sync/client.py (+754/-0)
src/u1sync/constants.py (+30/-0)
src/u1sync/genericmerge.py (+88/-0)
src/u1sync/main.py (+360/-0)
src/u1sync/merge.py (+186/-0)
src/u1sync/metadata.py (+145/-0)
src/u1sync/scan.py (+102/-0)
src/u1sync/sync.py (+384/-0)
src/u1sync/ubuntuone_optparse.py (+202/-0)
src/u1sync/utils.py (+50/-0)
To merge this branch: bzr merge lp:~mandel/ubuntuone-windows-installer/add_python_u1sync
Reviewer Review Type Date Requested Status
Rick McBride (community) Approve
Vincenzo Di Somma (community) Approve
Review via email: mp+33914@code.launchpad.net

Description of the change

Modified u1sync to work on Windows and added its packaging with py2exe. This provides a temporary solution until we refactor certain parts of the syncdaemon.
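The packaging is driven by a new nant target (package_python, see main.build below) that shells out to py2exe; assuming python.exe and the dependencies listed in the README are available, the equivalent manual step is:

    cd src
    python setup.py py2exe

By default py2exe leaves the resulting self-contained executable under a dist directory next to setup.py.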

Revision history for this message
Vincenzo Di Somma (vds) wrote :

Tests Ok, build Ok!

review: Approve
Revision history for this message
Rick McBride (rmcbride) :
review: Approve

Preview Diff

=== modified file 'README.txt'
--- README.txt 2010-08-19 11:13:02 +0000
+++ README.txt 2010-08-27 14:56:43 +0000
@@ -7,8 +7,21 @@
7 * UbuntuOne Windows Service: Windows service that allows starting, stopping, pausing and resuming the sync daemon.
8 * UbuntuOne Windows Service Broadcaster: Provides a WCF service hosted in a windows service that allows .Net languages
9 to communicate with the sync daemon and interact with it.
10
112. Environment setup
12
13The ubuntuone windows port provides a port of the python code that is used on Linux to perform the u1 sync operations. We did not want
14users to have to hunt down the python runtime plus the different modules that are used. To simplify the lives of
15windows users we have opted to use py2exe to create an executable that carries all the python dependencies of the code. As you
16may already know, py2exe is not perfect and does not support egg files. To make sure that easy_install does extract the egg files after
17the packages are installed, please use the following command:
18
19easy_install -Z %package_name%
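
For example, to re-extract an already installed egg (the package name below is illustrative only, not taken from the proposal):

easy_install -Z ubuntuone-storageprotocol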
20
21In order to be able to build the solution you will need to have python win32, the win32 python extensions and the ubuntu one storage protocol on your system:
22
23
243. Build
25
26In order to simplify the build as much as possible, all the required tools for the compilation of the project are provided in the source tree. The
27compilation uses a nant project that allows running the following targets:
28
=== modified file 'main.build'
--- main.build 2010-08-24 18:32:48 +0000
+++ main.build 2010-08-27 14:56:43 +0000
@@ -176,10 +176,19 @@
176 program="nunit-console.exe"
177 commandline="UbuntuOneClient.Tests.dll /xml=../../../../test-results/UbuntuOneClient.Tests-Result.xml" />
178 </target>
179 <target name="package_python"
180 description="Creates the exe binary that embeds the python libs that will be used to perform the sync operation in the windows platform">
181
182 <exec basedir="${python_path}"
183 managed="true"
184 workingdir="src"
185 program="python.exe"
186 commandline="setup.py py2exe" />
187 </target>
188
189 <target name="installer"
190 description="Compiles the solution and create a merge installer that allows to install the solution and other related apps."
191 depends="tests, package_python">
192
193 <mkdir dir="${build_results}" />
194
195
=== added file 'src/setup.py'
--- src/setup.py 1970-01-01 00:00:00 +0000
+++ src/setup.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,56 @@
1#!/usr/bin/env python
2# Copyright (C) 2010 Canonical - All Rights Reserved
3
4""" """
5import sys
6
7# ModuleFinder can't handle runtime changes to __path__, but win32com uses them
8try:
9 # py2exe 0.6.4 introduced a replacement modulefinder.
10 # This means we have to add package paths there, not to the built-in
11 # one. If this new modulefinder gets integrated into Python, then
12 # we might be able to revert this some day.
13 # if this doesn't work, try import modulefinder
14 try:
15 import py2exe.mf as modulefinder
16 except ImportError:
17 import modulefinder
18 import win32com
19 for p in win32com.__path__[1:]:
20 modulefinder.AddPackagePath("win32com", p)
21 for extra in ["win32com.shell"]: #,"win32com.mapi"
22 __import__(extra)
23 m = sys.modules[extra]
24 for p in m.__path__[1:]:
25 modulefinder.AddPackagePath(extra, p)
26except ImportError:
27 # no build path setup, no worries.
28 pass
29
30from distutils.core import setup
31import py2exe
32
33if __name__ == '__main__':
34
35 setup(
36 options = {
37 "py2exe": {
38 "compressed": 1,
39 "optimize": 2}},
40 name='u1sync',
41 version='0.0.1',
42 author = "Canonical Online Services Hackers",
43 description="""u1sync is a utility of Ubuntu One.
44 Ubuntu One is a suite of on-line
45 services. This package contains a synchronization client for the
46 Ubuntu One file sharing service.""",
47 license='GPLv3',
48 console=['u1sync\\main.py'],
49 requires=[
50 'python (>= 2.5)',
51 'oauth',
52 'twisted.names',
53 'twisted.web',
54 'ubuntuone.storageprotocol (>= 1.3.0)',
55 ],
56 )
=== added directory 'src/u1sync'
=== added file 'src/u1sync/__init__.py'
--- src/u1sync/__init__.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/__init__.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,14 @@
1# Copyright 2009 Canonical Ltd.
2#
3# This program is free software: you can redistribute it and/or modify it
4# under the terms of the GNU General Public License version 3, as published
5# by the Free Software Foundation.
6#
7# This program is distributed in the hope that it will be useful, but
8# WITHOUT ANY WARRANTY; without even the implied warranties of
9# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
10# PURPOSE. See the GNU General Public License for more details.
11#
12# You should have received a copy of the GNU General Public License along
13# with this program. If not, see <http://www.gnu.org/licenses/>.
14"""The guts of the u1sync tool."""
=== added file 'src/u1sync/client.py'
--- src/u1sync/client.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/client.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,754 @@
1# ubuntuone.u1sync.client
2#
3# Client/protocol end of u1sync
4#
5# Author: Lucio Torre <lucio.torre@canonical.com>
6# Author: Tim Cole <tim.cole@canonical.com>
7#
8# Copyright 2009 Canonical Ltd.
9#
10# This program is free software: you can redistribute it and/or modify it
11# under the terms of the GNU General Public License version 3, as published
12# by the Free Software Foundation.
13#
14# This program is distributed in the hope that it will be useful, but
15# WITHOUT ANY WARRANTY; without even the implied warranties of
16# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
17# PURPOSE. See the GNU General Public License for more details.
18#
19# You should have received a copy of the GNU General Public License along
20# with this program. If not, see <http://www.gnu.org/licenses/>.
21"""Pretty API for protocol client."""
22
23from __future__ import with_statement
24
25import os
26import sys
27import shutil
28from Queue import Queue
29from threading import Lock
30import zlib
31import urlparse
32import ConfigParser
33from cStringIO import StringIO
34from twisted.internet import reactor, defer
35from twisted.internet.defer import inlineCallbacks, returnValue
36from ubuntuone.logger import LOGFOLDER
37from ubuntuone.storageprotocol.content_hash import crc32
38from ubuntuone.storageprotocol.context import get_ssl_context
39from u1sync.genericmerge import MergeNode
40from u1sync.utils import should_sync
41
42CONSUMER_KEY = "ubuntuone"
43
44from oauth.oauth import OAuthConsumer
45from ubuntuone.storageprotocol.client import (
46 StorageClientFactory, StorageClient)
47from ubuntuone.storageprotocol import request, volumes
48from ubuntuone.storageprotocol.dircontent_pb2 import \
49 DirectoryContent, DIRECTORY
50import uuid
51import logging
52from logging.handlers import RotatingFileHandler
53import time
54
55def share_str(share_uuid):
56 """Converts a share UUID to a form the protocol likes."""
57 return str(share_uuid) if share_uuid is not None else request.ROOT
58
59LOGFILENAME = os.path.join(LOGFOLDER, 'u1sync.log')
60u1_logger = logging.getLogger("u1sync.timing.log")
61handler = RotatingFileHandler(LOGFILENAME)
62u1_logger.addHandler(handler)
63
64def log_timing(func):
65 def wrapper(*arg, **kwargs):
66 start = time.time()
67 ent = func(*arg, **kwargs)
68 stop = time.time()
69 u1_logger.debug('for %s %0.5f ms elapsed' % (func.func_name, \
70 (stop-start)*1000.0))
71 return ent
72 return wrapper
73
74
75class ForcedShutdown(Exception):
76 """Client shutdown forced."""
77
78
79class Waiter(object):
80 """Wait object for blocking waits."""
81
82 def __init__(self):
83 """Initializes the wait object."""
84 self.queue = Queue()
85
86 def wake(self, result):
87 """Wakes the waiter with a result."""
88 self.queue.put((result, None))
89
90 def wakeAndRaise(self, exc_info):
91 """Wakes the waiter, raising the given exception in it."""
92 self.queue.put((None, exc_info))
93
94 def wakeWithResult(self, func, *args, **kw):
95 """Wakes the waiter with the result of the given function."""
96 try:
97 result = func(*args, **kw)
98 except Exception:
99 self.wakeAndRaise(sys.exc_info())
100 else:
101 self.wake(result)
102
103 def wait(self):
104 """Waits for wakeup."""
105 (result, exc_info) = self.queue.get()
106 if exc_info:
107 try:
108 raise exc_info[0], exc_info[1], exc_info[2]
109 finally:
110 exc_info = None
111 else:
112 return result
113
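# Illustrative use of Waiter (a sketch, not part of the proposal): a worker
# thread blocks in wait() until the reactor thread hands over a value with
# wake(), or an exception via wakeAndRaise()/wakeWithResult():
#   waiter = Waiter()
#   reactor.callFromThread(lambda: waiter.wake("done"))
#   result = waiter.wait()  # blocks, then returns "done"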
114
115class SyncStorageClient(StorageClient):
116 """Simple client that calls a callback on connection."""
117
118 @log_timing
119 def connectionMade(self):
120 """Setup and call callback."""
121 StorageClient.connectionMade(self)
122 if self.factory.current_protocol not in (None, self):
123 self.factory.current_protocol.transport.loseConnection()
124 self.factory.current_protocol = self
125 self.factory.observer.connected()
126
127 @log_timing
128 def connectionLost(self, reason=None):
129 """Callback for established connection lost"""
130 if self.factory.current_protocol is self:
131 self.factory.current_protocol = None
132 self.factory.observer.disconnected(reason)
133
134
135class SyncClientFactory(StorageClientFactory):
136 """A cmd protocol factory."""
137 # no init: pylint: disable-msg=W0232
138
139 protocol = SyncStorageClient
140
141 @log_timing
142 def __init__(self, observer):
143 """Create the factory"""
144 self.observer = observer
145 self.current_protocol = None
146
147 @log_timing
148 def clientConnectionFailed(self, connector, reason):
149 """We failed at connecting."""
150 self.current_protocol = None
151 self.observer.connection_failed(reason)
152
153
154class UnsupportedOperationError(Exception):
155 """The operation is unsupported by the protocol version."""
156
157
158class ConnectionError(Exception):
159 """A connection error."""
160
161
162class AuthenticationError(Exception):
163 """An authentication error."""
164
165
166class NoSuchShareError(Exception):
167 """Error when there is no such share available."""
168
169
170class CapabilitiesError(Exception):
171 """A capabilities set/query related error."""
172
173class Client(object):
174 """U1 storage client facade."""
175 required_caps = frozenset(["no-content", "fix462230"])
176
177 def __init__(self, realm, reactor=reactor):
178 """Create the instance."""
179
180 self.reactor = reactor
181 self.factory = SyncClientFactory(self)
182
183 self._status_lock = Lock()
184 self._status = "disconnected"
185 self._status_reason = None
186 self._status_waiting = []
187 self._active_waiters = set()
188 self.consumer_key = CONSUMER_KEY
189 self.consumer_secret = "hammertime"
190
191 def force_shutdown(self):
192 """Forces the client to shut itself down."""
193 with self._status_lock:
194 self._status = "forced_shutdown"
195 self._reason = None
196 for waiter in self._active_waiters:
197 waiter.wakeAndRaise((ForcedShutdown("Forced shutdown"),
198 None, None))
199 self._active_waiters.clear()
200
201 def _get_waiter_locked(self):
202 """Gets a wait object for blocking waits. Should be called with the
203 status lock held.
204 """
205 waiter = Waiter()
206 if self._status == "forced_shutdown":
207 raise ForcedShutdown("Forced shutdown")
208 self._active_waiters.add(waiter)
209 return waiter
210
211 def _get_waiter(self):
212 """Get a wait object for blocking waits. Acquires the status lock."""
213 with self._status_lock:
214 return self._get_waiter_locked()
215
216 def _wait(self, waiter):
217 """Waits for the waiter."""
218 try:
219 return waiter.wait()
220 finally:
221 with self._status_lock:
222 if waiter in self._active_waiters:
223 self._active_waiters.remove(waiter)
224
225 @log_timing
226 def _change_status(self, status, reason=None):
227 """Changes the client status. Usually called from the reactor
228 thread.
229
230 """
231 with self._status_lock:
232 if self._status == "forced_shutdown":
233 return
234 self._status = status
235 self._status_reason = reason
236 waiting = self._status_waiting
237 self._status_waiting = []
238 for waiter in waiting:
239 waiter.wake((status, reason))
240
241 @log_timing
242 def _await_status_not(self, *ignore_statuses):
243 """Blocks until the client status changes, returning the new status.
244 Should never be called from the reactor thread.
245
246 """
247 with self._status_lock:
248 status = self._status
249 reason = self._status_reason
250 while status in ignore_statuses:
251 waiter = self._get_waiter_locked()
252 self._status_waiting.append(waiter)
253 self._status_lock.release()
254 try:
255 status, reason = self._wait(waiter)
256 finally:
257 self._status_lock.acquire()
258 if status == "forced_shutdown":
259 raise ForcedShutdown("Forced shutdown.")
260 return (status, reason)
261
262 def connection_failed(self, reason):
263 """Notification that connection failed."""
264 self._change_status("disconnected", reason)
265
266 def connected(self):
267 """Notification that connection succeeded."""
268 self._change_status("connected")
269
270 def disconnected(self, reason):
271 """Notification that we were disconnected."""
272 self._change_status("disconnected", reason)
273
274 def defer_from_thread(self, function, *args, **kwargs):
275 """Do twisted defer magic to get results and show exceptions."""
276 waiter = self._get_waiter()
277 @log_timing
278 def runner():
279 """inner."""
280 # we do want to catch all
281 # no init: pylint: disable-msg=W0703
282 try:
283 d = function(*args, **kwargs)
284 if isinstance(d, defer.Deferred):
285 d.addCallbacks(lambda r: waiter.wake((r, None, None)),
286 lambda f: waiter.wake((None, None, f)))
287 else:
288 waiter.wake((d, None, None))
289 except Exception:
290 waiter.wake((None, sys.exc_info(), None))
291
292 self.reactor.callFromThread(runner)
293 result, exc_info, failure = self._wait(waiter)
294 if exc_info:
295 try:
296 raise exc_info[0], exc_info[1], exc_info[2]
297 finally:
298 exc_info = None
299 elif failure:
300 failure.raiseException()
301 else:
302 return result
303
304 @log_timing
305 def connect(self, host, port):
306 """Connect to host/port."""
307 def _connect():
308 """Deferred part."""
309 self.reactor.connectTCP(host, port, self.factory)
310 self._connect_inner(_connect)
311
312 @log_timing
313 def connect_ssl(self, host, port, no_verify):
314 """Connect to host/port using ssl."""
315 def _connect():
316 """deferred part."""
317 ctx = get_ssl_context(no_verify)
318 self.reactor.connectSSL(host, port, self.factory, ctx)
319 self._connect_inner(_connect)
320
321 @log_timing
322 def _connect_inner(self, _connect):
323 """Helper function for connecting."""
324 self._change_status("connecting")
325 self.reactor.callFromThread(_connect)
326 status, reason = self._await_status_not("connecting")
327 if status != "connected":
328 raise ConnectionError(reason.value)
329
330 @log_timing
331 def disconnect(self):
332 """Disconnect."""
333 if self.factory.current_protocol is not None:
334 self.reactor.callFromThread(
335 self.factory.current_protocol.transport.loseConnection)
336 self._await_status_not("connecting", "connected", "authenticated")
337
338 @log_timing
339 def set_capabilities(self):
340 """Set the capabilities with the server"""
341
342 client = self.factory.current_protocol
343 @log_timing
344 def set_caps_callback(req):
345 "Caps query succeeded"
346 if not req.accepted:
347 de = defer.fail("The server denied setting %s capabilities" % \
348 req.caps)
349 return de
350
351 @log_timing
352 def query_caps_callback(req):
353 "Caps query succeeded"
354 if req.accepted:
355 set_d = client.set_caps(self.required_caps)
356 set_d.addCallback(set_caps_callback)
357 return set_d
358 else:
359 # the server doesn't have the requested capabilities.
360 # return a failure for now; in the future we might want
361 # to reconnect to another server
362 de = defer.fail("The server doesn't have the requested"
363 " capabilities: %s" % str(req.caps))
364 return de
365
366 @log_timing
367 def _wrapped_set_capabilities():
368 """Wrapped set_capabilities """
369 d = client.query_caps(self.required_caps)
370 d.addCallback(query_caps_callback)
371 return d
372
373 try:
374 self.defer_from_thread(_wrapped_set_capabilities)
375 except request.StorageProtocolError, e:
376 raise CapabilitiesError(e)
377
378 @log_timing
379 def get_root_info(self, volume_uuid):
380 """Returns the UUID of the applicable share root."""
381 if volume_uuid is None:
382 _get_root = self.factory.current_protocol.get_root
383 root = self.defer_from_thread(_get_root)
384 return (uuid.UUID(root), True)
385 else:
386 str_volume_uuid = str(volume_uuid)
387 volume = self._match_volume(lambda v: \
388 str(v.volume_id) == str_volume_uuid)
389 if isinstance(volume, volumes.ShareVolume):
390 modify = volume.access_level == "Modify"
391 if isinstance(volume, volumes.UDFVolume):
392 modify = True
393 return (uuid.UUID(str(volume.node_id)), modify)
394
395 @log_timing
396 def resolve_path(self, share_uuid, root_uuid, path):
397 """Resolve path relative to the given root node."""
398
399 @inlineCallbacks
400 def _resolve_worker():
401 """Path resolution worker."""
402 node_uuid = root_uuid
403 local_path = path.strip('/')
404
405 while local_path != '':
406 local_path, name = os.path.split(local_path)
407 hashes = yield self._get_node_hashes(share_uuid, [root_uuid])
408 content_hash = hashes.get(root_uuid, None)
409 if content_hash is None:
410 raise KeyError, "Content hash not available"
411 entries = yield self._get_raw_dir_entries(share_uuid,
412 root_uuid,
413 content_hash)
414 match_name = name.decode('utf-8')
415 match = None
416 for entry in entries:
417 if match_name == entry.name:
418 match = entry
419 break
420
421 if match is None:
422 raise KeyError, "Path not found"
423
424 node_uuid = uuid.UUID(match.node)
425
426 returnValue(node_uuid)
427
428 return self.defer_from_thread(_resolve_worker)
429
430 @log_timing
431 def oauth_from_token(self, token):
432 """Perform OAuth authorisation using an existing token."""
433
434 consumer = OAuthConsumer(self.consumer_key, self.consumer_secret)
435
436 def _auth_successful(value):
437 """Callback for successful auth. Changes status to
438 authenticated."""
439 self._change_status("authenticated")
440 return value
441
442 def _auth_failed(value):
443 """Callback for failed auth. Disconnects."""
444 self.factory.current_protocol.transport.loseConnection()
445 return value
446
447 def _wrapped_authenticate():
448 """Wrapped authenticate."""
449 d = self.factory.current_protocol.oauth_authenticate(consumer,
450 token)
451 d.addCallbacks(_auth_successful, _auth_failed)
452 return d
453
454 try:
455 self.defer_from_thread(_wrapped_authenticate)
456 except request.StorageProtocolError, e:
457 raise AuthenticationError(e)
458 status, reason = self._await_status_not("connected")
459 if status != "authenticated":
460 raise AuthenticationError(reason.value)
461
462 @log_timing
463 def find_volume(self, volume_spec):
464 """Finds a share matching the given UUID. Looks at both share UUIDs
465 and root node UUIDs."""
466 volume = self._match_volume(lambda s: \
467 str(s.volume_id) == volume_spec or \
468 str(s.node_id) == volume_spec)
469 return uuid.UUID(str(volume.volume_id))
470
471 @log_timing
472 def _match_volume(self, predicate):
473 """Finds a volume matching the given predicate."""
474 _list_shares = self.factory.current_protocol.list_volumes
475 r = self.defer_from_thread(_list_shares)
476 for volume in r.volumes:
477 if predicate(volume):
478 return volume
479 raise NoSuchShareError()
480
481 @log_timing
482 def build_tree(self, share_uuid, root_uuid):
483 """Builds and returns a tree representing the metadata for the given
484 subtree in the given share.
485
486 @param share_uuid: the share UUID or None for the user's volume
487 @param root_uuid: the root UUID of the subtree (must be a directory)
488 @return: a MergeNode tree
489
490 """
491 root = MergeNode(node_type=DIRECTORY, uuid=root_uuid)
492
493 @log_timing
494 @inlineCallbacks
495 def _get_root_content_hash():
496 """Obtain the content hash for the root node."""
497 result = yield self._get_node_hashes(share_uuid, [root_uuid])
498 returnValue(result.get(root_uuid, None))
499
500 root.content_hash = self.defer_from_thread(_get_root_content_hash)
501 if root.content_hash is None:
502 raise ValueError("No content available for node %s" % root_uuid)
503
504 @log_timing
505 @inlineCallbacks
506 def _get_children(parent_uuid, parent_content_hash):
507 """Obtain a sequence of MergeNodes corresponding to a node's
508 immediate children.
509
510 """
511 entries = yield self._get_raw_dir_entries(share_uuid,
512 parent_uuid,
513 parent_content_hash)
514 children = {}
515 for entry in entries:
516 if should_sync(entry.name):
517 child = MergeNode(node_type=entry.node_type,
518 uuid=uuid.UUID(entry.node))
519 children[entry.name] = child
520
521 child_uuids = [child.uuid for child in children.itervalues()]
522 content_hashes = yield self._get_node_hashes(share_uuid,
523 child_uuids)
524 for child in children.itervalues():
525 child.content_hash = content_hashes.get(child.uuid, None)
526
527 returnValue(children)
528
529 need_children = [root]
530 while need_children:
531 node = need_children.pop()
532 if node.content_hash is not None:
533 children = self.defer_from_thread(_get_children, node.uuid,
534 node.content_hash)
535 node.children = children
536 for child in children.itervalues():
537 if child.node_type == DIRECTORY:
538 need_children.append(child)
539
540 return root
541
542 @log_timing
543 def _get_raw_dir_entries(self, share_uuid, node_uuid, content_hash):
544 """Gets raw dir entries for the given directory."""
545 d = self.factory.current_protocol.get_content(share_str(share_uuid),
546 str(node_uuid),
547 content_hash)
548 d.addCallback(lambda c: zlib.decompress(c.data))
549
550 def _parse_content(raw_content):
551 """Parses directory content into a list of entry objects."""
552 unserialized_content = DirectoryContent()
553 unserialized_content.ParseFromString(raw_content)
554 return list(unserialized_content.entries)
555
556 d.addCallback(_parse_content)
557 return d
558
559 @log_timing
560 def download_string(self, share_uuid, node_uuid, content_hash):
561 """Reads a file from the server into a string."""
562 output = StringIO()
563 self._download_inner(share_uuid=share_uuid, node_uuid=node_uuid,
564 content_hash=content_hash, output=output)
565 return output.getvalue()
566
567 @log_timing
568 def download_file(self, share_uuid, node_uuid, content_hash, filename):
569 """Downloads a file from the server."""
570 # file names have to be quoted to make sure that no issues occur when
571 # spaces exist
572 partial_filename = "%s.u1partial" % filename
573 output = open(partial_filename, "w")
574
575 @log_timing
576 def rename_file():
577 """Renames the temporary file to the final name."""
578 output.close()
579 print "Finished downloading %s" % filename
580 os.rename(partial_filename, filename)
581
582 @log_timing
583 def delete_file():
584 """Deletes the temporary file."""
585 output.close()
586 os.unlink(partial_filename)
587
588 self._download_inner(share_uuid=share_uuid, node_uuid=node_uuid,
589 content_hash=content_hash, output=output,
590 on_success=rename_file, on_failure=delete_file)
591
592 @log_timing
593 def _download_inner(self, share_uuid, node_uuid, content_hash, output,
594 on_success=lambda: None, on_failure=lambda: None):
595 """Helper function for content downloads."""
596 dec = zlib.decompressobj()
597
598 @log_timing
599 def write_data(data):
600 """Helper which writes data to the output file."""
601 uncompressed_data = dec.decompress(data)
602 output.write(uncompressed_data)
603
604 @log_timing
605 def finish_download(value):
606 """Helper which finishes the download."""
607 uncompressed_data = dec.flush()
608 output.write(uncompressed_data)
609 on_success()
610 return value
611
612 @log_timing
613 def abort_download(value):
614 """Helper which aborts the download."""
615 on_failure()
616 return value
617
618 @log_timing
619 def _download():
620 """Async helper."""
621 _get_content = self.factory.current_protocol.get_content
622 d = _get_content(share_str(share_uuid), str(node_uuid),
623 content_hash, callback=write_data)
624 d.addCallbacks(finish_download, abort_download)
625 return d
626
627 self.defer_from_thread(_download)
628
629 @log_timing
630 def create_directory(self, share_uuid, parent_uuid, name):
631 """Creates a directory on the server."""
632 r = self.defer_from_thread(self.factory.current_protocol.make_dir,
633 share_str(share_uuid), str(parent_uuid),
634 name)
635 return uuid.UUID(r.new_id)
636
637 @log_timing
638 def create_file(self, share_uuid, parent_uuid, name):
639 """Creates a file on the server."""
640 r = self.defer_from_thread(self.factory.current_protocol.make_file,
641 share_str(share_uuid), str(parent_uuid),
642 name)
643 return uuid.UUID(r.new_id)
644
645 @log_timing
646 def create_symlink(self, share_uuid, parent_uuid, name, target):
647 """Creates a symlink on the server."""
648 raise UnsupportedOperationError("Protocol does not support symlinks")
649
650 @log_timing
651 def upload_string(self, share_uuid, node_uuid, old_content_hash,
652 content_hash, content):
653 """Uploads a string to the server as file content."""
654 crc = crc32(content, 0)
655 compressed_content = zlib.compress(content, 9)
656 compressed = StringIO(compressed_content)
657 self.defer_from_thread(self.factory.current_protocol.put_content,
658 share_str(share_uuid), str(node_uuid),
659 old_content_hash, content_hash,
660 crc, len(content), len(compressed_content),
661 compressed)
662
663 @log_timing
664 def upload_file(self, share_uuid, node_uuid, old_content_hash,
665 content_hash, filename):
666 """Uploads a file to the server."""
667 parent_dir = os.path.split(filename)[0]
668 unique_filename = os.path.join(parent_dir, "." + str(uuid.uuid4()))
669
670
671 class StagingFile(object):
672 """An object which tracks data being compressed for staging."""
673 def __init__(self, stream):
674 """Initialize a compression object."""
675 self.crc32 = 0
676 self.enc = zlib.compressobj(9)
677 self.size = 0
678 self.compressed_size = 0
679 self.stream = stream
680
681 def write(self, bytes):
682 """Compress bytes, keeping track of length and crc32."""
683 self.size += len(bytes)
684 self.crc32 = crc32(bytes, self.crc32)
685 compressed_bytes = self.enc.compress(bytes)
686 self.compressed_size += len(compressed_bytes)
687 self.stream.write(compressed_bytes)
688
689 def finish(self):
690 """Finish staging compressed data."""
691 compressed_bytes = self.enc.flush()
692 self.compressed_size += len(compressed_bytes)
693 self.stream.write(compressed_bytes)
694
695 with open(unique_filename, "w+") as compressed:
696 os.unlink(unique_filename)
697 with open(filename, "r") as original:
698 staging = StagingFile(compressed)
699 shutil.copyfileobj(original, staging)
700 staging.finish()
701 compressed.seek(0)
702 self.defer_from_thread(self.factory.current_protocol.put_content,
703 share_str(share_uuid), str(node_uuid),
704 old_content_hash, content_hash,
705 staging.crc32,
706 staging.size, staging.compressed_size,
707 compressed)
708
709 @log_timing
710 def move(self, share_uuid, parent_uuid, name, node_uuid):
711 """Moves a file on the server."""
712 self.defer_from_thread(self.factory.current_protocol.move,
713 share_str(share_uuid), str(node_uuid),
714 str(parent_uuid), name)
715
716 @log_timing
717 def unlink(self, share_uuid, node_uuid):
718 """Unlinks a file on the server."""
719 self.defer_from_thread(self.factory.current_protocol.unlink,
720 share_str(share_uuid), str(node_uuid))
721
722 @log_timing
723 def _get_node_hashes(self, share_uuid, node_uuids):
724 """Fetches hashes for the given nodes."""
725 share = share_str(share_uuid)
726 queries = [(share, str(node_uuid), request.UNKNOWN_HASH) \
727 for node_uuid in node_uuids]
728 d = self.factory.current_protocol.query(queries)
729
730 @log_timing
731 def _collect_hashes(multi_result):
732 """Accumulate hashes from query replies."""
733 hashes = {}
734 for (success, value) in multi_result:
735 if success:
736 for node_state in value.response:
737 node_uuid = uuid.UUID(node_state.node)
738 hashes[node_uuid] = node_state.hash
739 return hashes
740
741 d.addCallback(_collect_hashes)
742 return d
743
744 @log_timing
745 def get_incoming_shares(self):
746 """Returns a list of incoming shares as (name, uuid, accepted)
747 tuples.
748
749 """
750 _list_shares = self.factory.current_protocol.list_shares
751 r = self.defer_from_thread(_list_shares)
752 return [(s.name, s.id, s.other_visible_name,
753 s.accepted, s.access_level) \
754 for s in r.shares if s.direction == "to_me"]
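Client is a blocking facade over the twisted protocol, so it must be driven from a thread other than the reactor thread; main.py below arranges this with reactor.callInThread. A minimal sketch of the call sequence, with host, port and token as placeholders:

    from u1sync.client import Client

    client = Client(realm=None)  # realm comes from the command line in main.py
    client.connect_ssl(host, port, no_verify=False)
    try:
        client.set_capabilities()
        client.oauth_from_token(token)  # an existing OAuth access token
        for share in client.get_incoming_shares():
            print share
    finally:
        client.disconnect()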
=== added file 'src/u1sync/constants.py'
--- src/u1sync/constants.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/constants.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,30 @@
1# ubuntuone.u1sync.constants
2#
3# u1sync constants
4#
5# Author: Tim Cole <tim.cole@canonical.com>
6#
7# Copyright 2009 Canonical Ltd.
8#
9# This program is free software: you can redistribute it and/or modify it
10# under the terms of the GNU General Public License version 3, as published
11# by the Free Software Foundation.
12#
13# This program is distributed in the hope that it will be useful, but
14# WITHOUT ANY WARRANTY; without even the implied warranties of
15# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
16# PURPOSE. See the GNU General Public License for more details.
17#
18# You should have received a copy of the GNU General Public License along
19# with this program. If not, see <http://www.gnu.org/licenses/>.
20"""Assorted constant definitions which don't fit anywhere else."""
21
22import re
23
24# the name of the directory u1sync uses to keep metadata about a mirror
25METADATA_DIR_NAME = u".ubuntuone-sync"
26
27# filenames to ignore
28SPECIAL_FILE_RE = re.compile(".*\\.("
29 "(u1)?partial|part|"
30 "(u1)?conflict(\\.[0-9]+)?)$")
=== added file 'src/u1sync/genericmerge.py'
--- src/u1sync/genericmerge.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/genericmerge.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,88 @@
1# ubuntuone.u1sync.genericmerge
2#
3# Generic merge function
4#
5# Author: Tim Cole <tim.cole@canonical.com>
6#
7# Copyright 2009 Canonical Ltd.
8#
9# This program is free software: you can redistribute it and/or modify it
10# under the terms of the GNU General Public License version 3, as published
11# by the Free Software Foundation.
12#
13# This program is distributed in the hope that it will be useful, but
14# WITHOUT ANY WARRANTY; without even the implied warranties of
15# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
16# PURPOSE. See the GNU General Public License for more details.
17#
18# You should have received a copy of the GNU General Public License along
19# with this program. If not, see <http://www.gnu.org/licenses/>.
20"""A generic abstraction for merge operations on directory trees."""
21
22from itertools import chain
23from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY
24
25class MergeNode(object):
26 """A filesystem node. Should generally be treated as immutable."""
27 def __init__(self, node_type, content_hash=None, uuid=None, children=None,
28 conflict_info=None):
29 """Initializes a node instance."""
30 self.node_type = node_type
31 self.children = children
32 self.uuid = uuid
33 self.content_hash = content_hash
34 self.conflict_info = conflict_info
35
36 def __eq__(self, other):
37 """Equality test."""
38 if type(other) is not type(self):
39 return False
40 return self.node_type == other.node_type and \
41 self.children == other.children and \
42 self.uuid == other.uuid and \
43 self.content_hash == other.content_hash and \
44 self.conflict_info == other.conflict_info
45
46 def __ne__(self, other):
47 """Non-equality test."""
48 return not self.__eq__(other)
49
50
51def show_tree(tree, indent="", name="/"):
52 """Prints a tree."""
53 # TODO: return a string instead of printing
54 if tree.node_type == DIRECTORY:
55 type_str = "DIR "
56 else:
57 type_str = "FILE"
58 print "%s%-36s %s %s %s" % (indent, tree.uuid, type_str, name,
59 tree.content_hash)
60 if tree.node_type == DIRECTORY and tree.children is not None:
61 for name in sorted(tree.children.keys()):
62 subtree = tree.children[name]
63 show_tree(subtree, indent=" " + indent, name=name)
64
65def generic_merge(trees, pre_merge, post_merge, partial_parent, name):
66 """Generic tree merging function."""
67
68 partial_result = pre_merge(nodes=trees, name=name,
69 partial_parent=partial_parent)
70
71 def tree_children(tree):
72 """Returns children if tree is not None"""
73 return tree.children if tree is not None else None
74
75 child_dicts = [tree_children(t) or {} for t in trees]
76 child_names = set(chain(*[cs.iterkeys() for cs in child_dicts]))
77 child_results = {}
78 for child_name in child_names:
79 subtrees = [cs.get(child_name, None) for cs in child_dicts]
80 child_result = generic_merge(trees=subtrees,
81 pre_merge=pre_merge,
82 post_merge=post_merge,
83 partial_parent=partial_result,
84 name=child_name)
85 child_results[child_name] = child_result
86
87 return post_merge(nodes=trees, partial_result=partial_result,
88 child_results=child_results)
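generic_merge walks any number of trees in lockstep, calling pre_merge on the way down and post_merge on the way up. A toy sketch (illustrative only, assuming tree_a and tree_b are MergeNode trees) that counts how many positions are occupied in either tree:

    def pre_merge(nodes, name, partial_parent):
        return None  # no per-node state needed for counting

    def post_merge(nodes, partial_result, child_results):
        present = sum(1 for node in nodes if node is not None)
        return present + sum(child_results.itervalues())

    total = generic_merge(trees=[tree_a, tree_b],
                          pre_merge=pre_merge, post_merge=post_merge,
                          partial_parent=None, name=u"")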
=== added file 'src/u1sync/main.py'
--- src/u1sync/main.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/main.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,360 @@
1# ubuntuone.u1sync.main
2#
3# Prototype directory sync client
4#
5# Author: Lucio Torre <lucio.torre@canonical.com>
6# Author: Tim Cole <tim.cole@canonical.com>
7#
8# Copyright 2009 Canonical Ltd.
9#
10# This program is free software: you can redistribute it and/or modify it
11# under the terms of the GNU General Public License version 3, as published
12# by the Free Software Foundation.
13#
14# This program is distributed in the hope that it will be useful, but
15# WITHOUT ANY WARRANTY; without even the implied warranties of
16# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
17# PURPOSE. See the GNU General Public License for more details.
18#
19# You should have received a copy of the GNU General Public License along
20# with this program. If not, see <http://www.gnu.org/licenses/>.
21"""A prototype directory sync client."""
22
23from __future__ import with_statement
24
25import os
26import sys
27import uuid
28import signal
29import logging
30from Queue import Queue
31from errno import EEXIST
32from twisted.internet import reactor
33# import the storage protocol to allow communication with the server
34import ubuntuone.storageprotocol.dircontent_pb2 as dircontent_pb2
35from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY, SYMLINK
36# import helper modules
37from u1sync.genericmerge import (
38 show_tree, generic_merge)
39from u1sync.client import (
40 ConnectionError, AuthenticationError, NoSuchShareError,
41 ForcedShutdown, Client)
42from u1sync.scan import scan_directory
43from u1sync.merge import (
44 SyncMerge, ClobberServerMerge, ClobberLocalMerge, merge_trees)
45from u1sync.sync import download_tree, upload_tree
46from u1sync.utils import safe_mkdir
47from u1sync import metadata
48from u1sync.constants import METADATA_DIR_NAME
49from u1sync.ubuntuone_optparse import UbuntuOneOptionsParser
50
51# pylint: disable-msg=W0212
52NODE_TYPE_ENUM = dircontent_pb2._NODETYPE
53# pylint: enable-msg=W0212
54def node_type_str(node_type):
55 """Converts a numeric node type to a human-readable string."""
56 return NODE_TYPE_ENUM.values_by_number[node_type].name
57
58
59class ReadOnlyShareError(Exception):
60 """Share is read-only."""
61
62
63class DirectoryAlreadyInitializedError(Exception):
64 """The directory has already been initialized."""
65
66
67class DirectoryNotInitializedError(Exception):
68 """The directory has not been initialized."""
69
70
71class NoParentError(Exception):
72 """A node has no parent."""
73
74
75class TreesDiffer(Exception):
76 """Raised when diff tree differs."""
77 def __init__(self, quiet):
78 self.quiet = quiet
79
80
81MERGE_ACTIONS = {
82 # action: (merge_class, should_upload, should_download)
83 'sync': (SyncMerge, True, True),
84 'clobber-server': (ClobberServerMerge, True, False),
85 'clobber-local': (ClobberLocalMerge, False, True),
86 'upload': (SyncMerge, True, False),
87 'download': (SyncMerge, False, True),
88 'auto': None # special case
89}
90
91DEFAULT_MERGE_ACTION = 'auto'
92
93def do_init(client, share_spec, directory, quiet, subtree_path,
94 metadata=metadata):
95 """Initializes a directory for syncing, and syncs it."""
96 info = metadata.Metadata()
97
98 if share_spec is not None:
99 info.share_uuid = client.find_volume(share_spec)
100 else:
101 info.share_uuid = None
102
103 if subtree_path is not None:
104 info.path = subtree_path
105 else:
106 info.path = "/"
107
108 logging.info("Initializing directory...")
109 safe_mkdir(directory)
110
111 metadata_dir = os.path.join(directory, METADATA_DIR_NAME)
112 try:
113 os.mkdir(metadata_dir)
114 except OSError, e:
115 if e.errno == EEXIST:
116 raise DirectoryAlreadyInitializedError(directory)
117 else:
118 raise
119
120 logging.info("Writing mirror metadata...")
121 metadata.write(metadata_dir, info)
122
123 logging.info("Done.")
124
125def do_sync(client, directory, action, dry_run, quiet):
126 """Synchronizes a directory with the given share."""
127 absolute_path = os.path.abspath(directory)
128 while True:
129 metadata_dir = os.path.join(absolute_path, METADATA_DIR_NAME)
130 if os.path.exists(metadata_dir):
131 break
132 if absolute_path == "/":
133 raise DirectoryNotInitializedError(directory)
134 absolute_path = os.path.split(absolute_path)[0]
135
136 logging.info("Reading mirror metadata...")
137 info = metadata.read(metadata_dir)
138
139 top_uuid, writable = client.get_root_info(info.share_uuid)
140
141 if info.root_uuid is None:
142 info.root_uuid = client.resolve_path(info.share_uuid, top_uuid,
143 info.path)
144
145 if action == 'auto':
146 if writable:
147 action = 'sync'
148 else:
149 action = 'download'
150 merge_type, should_upload, should_download = MERGE_ACTIONS[action]
151 if should_upload and not writable:
152 raise ReadOnlyShareError(info.share_uuid)
153
154 logging.info("Scanning directory...")
155
156 local_tree = scan_directory(absolute_path, quiet=quiet)
157
158 logging.info("Fetching metadata...")
159
160 remote_tree = client.build_tree(info.share_uuid, info.root_uuid)
161 if not quiet:
162 show_tree(remote_tree)
163
164 logging.info("Merging trees...")
165 merged_tree = merge_trees(old_local_tree=info.local_tree,
166 local_tree=local_tree,
167 old_remote_tree=info.remote_tree,
168 remote_tree=remote_tree,
169 merge_action=merge_type())
170 if not quiet:
171 show_tree(merged_tree)
172
173 logging.info("Syncing content...")
174 if should_download:
175 info.local_tree = download_tree(merged_tree=merged_tree,
176 local_tree=local_tree,
177 client=client,
178 share_uuid=info.share_uuid,
179 path=absolute_path, dry_run=dry_run,
180 quiet=quiet)
181 else:
182 info.local_tree = local_tree
183 if should_upload:
184 info.remote_tree = upload_tree(merged_tree=merged_tree,
185 remote_tree=remote_tree,
186 client=client,
187 share_uuid=info.share_uuid,
188 path=absolute_path, dry_run=dry_run,
189 quiet=quiet)
190 else:
191 info.remote_tree = remote_tree
192
193 if not dry_run:
194 logging.info("Updating mirror metadata...")
195 metadata.write(metadata_dir, info)
196
197 logging.info("Done.")
198
199def do_list_shares(client):
200 """Lists available (incoming) shares."""
201 shares = client.get_incoming_shares()
202 for (name, id, user, accepted, access) in shares:
203 if not accepted:
204 status = " [not accepted]"
205 else:
206 status = ""
207 name = name.encode("utf-8")
208 user = user.encode("utf-8")
209 print "%s %s (from %s) [%s]%s" % (id, name, user, access, status)
210
211def do_diff(client, share_spec, directory, quiet, subtree_path,
212 ignore_symlinks=True):
213 """Diffs a local directory with the server."""
214 if share_spec is not None:
215 share_uuid = client.find_volume(share_spec)
216 else:
217 share_uuid = None
218 if subtree_path is None:
219 subtree_path = '/'
220 # pylint: disable-msg=W0612
221 root_uuid, writable = client.get_root_info(share_uuid)
222 subtree_uuid = client.resolve_path(share_uuid, root_uuid, subtree_path)
223 local_tree = scan_directory(directory, quiet=True)
224 remote_tree = client.build_tree(share_uuid, subtree_uuid)
225
226 def pre_merge(nodes, name, partial_parent):
227 """Compares nodes and prints differences."""
228 (local_node, remote_node) = nodes
229 # pylint: disable-msg=W0612
230 (parent_display_path, parent_differs) = partial_parent
231 display_path = os.path.join(parent_display_path, name.encode("UTF-8"))
232 differs = True
233 if local_node is None:
234 logging.info("%s missing from client", display_path)
235 elif remote_node is None:
236 if ignore_symlinks and local_node.node_type == SYMLINK:
237 differs = False
238 logging.info("%s missing from server", display_path)
239 elif local_node.node_type != remote_node.node_type:
240 local_type = node_type_str(local_node.node_type)
241 remote_type = node_type_str(remote_node.node_type)
242 logging.info("%s node types differ (client: %s, server: %s)",
243 display_path, local_type, remote_type)
244 elif local_node.node_type != DIRECTORY and \
245 local_node.content_hash != remote_node.content_hash:
246 local_content = local_node.content_hash
247 remote_content = remote_node.content_hash
248 logging.info("%s has different content (client: %s, server: %s)",
249 display_path, local_content, remote_content)
250 else:
251 differs = False
252 return (display_path, differs)
253
254 def post_merge(nodes, partial_result, child_results):
255 """Aggregates 'differs' flags."""
256 # pylint: disable-msg=W0612
257 (display_path, differs) = partial_result
258 return differs or any(child_results.itervalues())
259
260 differs = generic_merge(trees=[local_tree, remote_tree],
261 pre_merge=pre_merge, post_merge=post_merge,
262 partial_parent=("", False), name=u"")
263 if differs:
264 raise TreesDiffer(quiet=quiet)
265
266def do_main(argv, options_parser):
267 """The main user-facing portion of the script."""
268 # parse the command-line arguments and populate the options
269 options_parser.get_options(argv)
270 client = Client(realm=options_parser.options.realm, reactor=reactor)
271 # set the logging level to INFO when the user passed quiet
272 if options_parser.options.quiet:
273 logging.basicConfig(level=logging.INFO)
274
275 signal.signal(signal.SIGINT, lambda s, f: client.force_shutdown())
276 signal.signal(signal.SIGTERM, lambda s, f: client.force_shutdown())
277
278 def run_client():
279 """Run the blocking client."""
280 token = options_parser.options.token
281
282 client.connect_ssl(options_parser.options.host,
283 int(options_parser.options.port),
284 options_parser.options.no_ssl_verify)
285
286 try:
287 client.set_capabilities()
288 client.oauth_from_token(options_parser.options.token)
289
290 if options_parser.options.mode == "sync":
291 do_sync(client=client, directory=options_parser.options.directory,
292 action=options_parser.options.action,
293 dry_run=options_parser.options.dry_run,
294 quiet=options_parser.options.quiet)
295 elif options_parser.options.mode == "init":
296 do_init(client=client, share_spec=options_parser.options.share,
297 directory=options_parser.options.directory,
298 quiet=options_parser.options.quiet, subtree_path=options_parser.options.subtree)
299 elif options_parser.options.mode == "list-shares":
300 do_list_shares(client=client)
301 elif options_parser.options.mode == "diff":
302 do_diff(client=client, share_spec=options_parser.options.share,
303 directory=options_parser.options.directory,
304 quiet=options_parser.options.quiet,
305 subtree_path=options_parser.options.subtree,
306 ignore_symlinks=False)
307 elif options_parser.options.mode == "authorize":
308 if not options_parser.options.quiet:
309 print "Authorized."
310 finally:
311 client.disconnect()
312
313 def capture_exception(queue, func):
314 """Capture the exception from calling func."""
315 try:
316 func()
317 except Exception:
318 queue.put(sys.exc_info())
319 else:
320 queue.put(None)
321 finally:
322 reactor.callWhenRunning(reactor.stop)
323
324 queue = Queue()
325 reactor.callInThread(capture_exception, queue, run_client)
326 reactor.run(installSignalHandlers=False)
327 exc_info = queue.get(True, 0.1)
328 if exc_info:
329 raise exc_info[0], exc_info[1], exc_info[2]
330
331def main(argv):
332 """Top-level main function."""
333 try:
334 do_main(argv, UbuntuOneOptionsParser())
335 except AuthenticationError, e:
336 print "Authentication failed: %s" % e
337 except ConnectionError, e:
338 print "Connection failed: %s" % e
339 except DirectoryNotInitializedError:
340 print "Directory not initialized; " \
341 "use --init [DIRECTORY] to initialize it."
342 except DirectoryAlreadyInitializedError:
343 print "Directory already initialized."
344 except NoSuchShareError:
345 print "No matching share found."
346 except ReadOnlyShareError:
347 print "The selected action isn't possible on a read-only share."
348 except (ForcedShutdown, KeyboardInterrupt):
349 print "Interrupted!"
350 except TreesDiffer, e:
351 if not e.quiet:
352 print "Trees differ."
353 else:
354 return 0
355 return 1
356
357if __name__ == "__main__":
358 # set the name of the process; this only works on windows
359 sys.argv[0] = "Ubuntuone"
360 main(sys.argv[1:])
\ No newline at end of file
=== added file 'src/u1sync/merge.py'
--- src/u1sync/merge.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/merge.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,186 @@
1# ubuntuone.u1sync.merge
2#
3# Tree state merging
4#
5# Author: Tim Cole <tim.cole@canonical.com>
6#
7# Copyright 2009 Canonical Ltd.
8#
9# This program is free software: you can redistribute it and/or modify it
10# under the terms of the GNU General Public License version 3, as published
11# by the Free Software Foundation.
12#
13# This program is distributed in the hope that it will be useful, but
14# WITHOUT ANY WARRANTY; without even the implied warranties of
15# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
16# PURPOSE. See the GNU General Public License for more details.
17#
18# You should have received a copy of the GNU General Public License along
19# with this program. If not, see <http://www.gnu.org/licenses/>.
20"""Code for merging changes between modified trees."""
21
22from __future__ import with_statement
23
24import os
25
26from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY
27from u1sync.genericmerge import (
28 MergeNode, generic_merge)
29import uuid
30
31
32class NodeTypeMismatchError(Exception):
33 """Node types don't match."""
34
35
36def merge_trees(old_local_tree, local_tree, old_remote_tree, remote_tree,
37 merge_action):
38 """Performs a tree merge using the given merge action."""
39
40 def pre_merge(nodes, name, partial_parent):
41 """Accumulates path and determines merged node type."""
42 old_local_node, local_node, old_remote_node, remote_node = nodes
43 # pylint: disable-msg=W0612
44 (parent_path, parent_type) = partial_parent
45 path = os.path.join(parent_path, name.encode("utf-8"))
46 node_type = merge_action.get_node_type(old_local_node=old_local_node,
47 local_node=local_node,
48 old_remote_node=old_remote_node,
49 remote_node=remote_node,
50 path=path)
51 return (path, node_type)
52
53 def post_merge(nodes, partial_result, child_results):
54 """Drops deleted children and merges node."""
55 old_local_node, local_node, old_remote_node, remote_node = nodes
56 # pylint: disable-msg=W0612
57 (path, node_type) = partial_result
58 if node_type == DIRECTORY:
59 merged_children = dict([(name, child) for (name, child)
60 in child_results.iteritems()
61 if child is not None])
62 else:
63 merged_children = None
64 return merge_action.merge_node(old_local_node=old_local_node,
65 local_node=local_node,
66 old_remote_node=old_remote_node,
67 remote_node=remote_node,
68 node_type=node_type,
69 merged_children=merged_children)
70
71 return generic_merge(trees=[old_local_tree, local_tree,
72 old_remote_tree, remote_tree],
73 pre_merge=pre_merge, post_merge=post_merge,
74 name=u"", partial_parent=("", None))
75
76
77class SyncMerge(object):
78 """Performs a bidirectional sync merge."""
79
80 def get_node_type(self, old_local_node, local_node,
81 old_remote_node, remote_node, path):
82 """Requires that all node types match."""
83 node_type = None
84 for node in (old_local_node, local_node, remote_node):
85 if node is not None:
86 if node_type is not None:
87 if node.node_type != node_type:
88 message = "Node types don't match for %s" % path
89 raise NodeTypeMismatchError(message)
90 else:
91 node_type = node.node_type
92 return node_type
93
94 def merge_node(self, old_local_node, local_node,
95 old_remote_node, remote_node, node_type, merged_children):
96 """Performs bidirectional merge of node state."""
97
98 def node_content_hash(node):
99 """Returns node content hash if node is not None"""
100 return node.content_hash if node is not None else None
101
102 old_local_content_hash = node_content_hash(old_local_node)
103 local_content_hash = node_content_hash(local_node)
104 old_remote_content_hash = node_content_hash(old_remote_node)
105 remote_content_hash = node_content_hash(remote_node)
106
107 locally_deleted = old_local_node is not None and local_node is None
108 deleted_on_server = old_remote_node is not None and remote_node is None
109 # updated means modified or created
110 locally_updated = not locally_deleted and \
111 old_local_content_hash != local_content_hash
112 updated_on_server = not deleted_on_server and \
113 old_remote_content_hash != remote_content_hash
114
115 has_merged_children = merged_children is not None and \
116 len(merged_children) > 0
117
118 either_node_exists = local_node is not None or remote_node is not None
119 should_delete = (locally_deleted and not updated_on_server) or \
120 (deleted_on_server and not locally_updated)
121
122 if (either_node_exists and not should_delete) or has_merged_children:
123 if node_type != DIRECTORY and \
124 locally_updated and updated_on_server and \
125 local_content_hash != remote_content_hash:
126 # local_content_hash will become the merged content_hash;
127 # save remote_content_hash in conflict info
128 conflict_info = (str(uuid.uuid4()), remote_content_hash)
129 else:
130 conflict_info = None
131 node_uuid = remote_node.uuid if remote_node is not None else None
132 if locally_updated:
133 content_hash = local_content_hash or remote_content_hash
134 else:
135 content_hash = remote_content_hash or local_content_hash
136 return MergeNode(node_type=node_type, uuid=node_uuid,
137 children=merged_children, content_hash=content_hash,
138 conflict_info=conflict_info)
139 else:
140 return None
141
142
143class ClobberServerMerge(object):
144 """Clobber server to match local state."""
145
146 def get_node_type(self, old_local_node, local_node,
147 old_remote_node, remote_node, path):
148 """Return local node type."""
149 if local_node is not None:
150 return local_node.node_type
151 else:
152 return None
153
154 def merge_node(self, old_local_node, local_node,
155 old_remote_node, remote_node, node_type, merged_children):
156 """Copy local node and associate with remote uuid (if applicable)."""
157 if local_node is None:
158 return None
159 if remote_node is not None:
160 node_uuid = remote_node.uuid
161 else:
162 node_uuid = None
163 return MergeNode(node_type=local_node.node_type, uuid=node_uuid,
164 content_hash=local_node.content_hash,
165 children=merged_children)
166
167
168class ClobberLocalMerge(object):
169 """Clobber local state to match server."""
170
171 def get_node_type(self, old_local_node, local_node,
172 old_remote_node, remote_node, path):
173 """Return remote node type."""
174 if remote_node is not None:
175 return remote_node.node_type
176 else:
177 return None
178
179 def merge_node(self, old_local_node, local_node,
180 old_remote_node, remote_node, node_type, merged_children):
181 """Copy the remote node."""
182 if remote_node is None:
183 return None
184 return MergeNode(node_type=node_type, uuid=remote_node.uuid,
185 content_hash=remote_node.content_hash,
186 children=merged_children)
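All three merge policies plug into merge_trees through the same two-method interface; do_sync in main.py invokes it like this (condensed, with SyncMerge standing in for the action chosen by the user):

    merged_tree = merge_trees(old_local_tree=info.local_tree,
                              local_tree=local_tree,
                              old_remote_tree=info.remote_tree,
                              remote_tree=remote_tree,
                              merge_action=SyncMerge())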
=== added file 'src/u1sync/metadata.py'
--- src/u1sync/metadata.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/metadata.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,145 @@
1# ubuntuone.u1sync.metadata
2#
3# u1sync metadata routines
4#
5# Author: Tim Cole <tim.cole@canonical.com>
6#
7# Copyright 2009 Canonical Ltd.
8#
9# This program is free software: you can redistribute it and/or modify it
10# under the terms of the GNU General Public License version 3, as published
11# by the Free Software Foundation.
12#
13# This program is distributed in the hope that it will be useful, but
14# WITHOUT ANY WARRANTY; without even the implied warranties of
15# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
16# PURPOSE. See the GNU General Public License for more details.
17#
18# You should have received a copy of the GNU General Public License along
19# with this program. If not, see <http://www.gnu.org/licenses/>.
20"""Routines for loading/storing u1sync mirror metadata."""
21
22from __future__ import with_statement
23
24import os
25import cPickle as pickle
26from errno import ENOENT
27from contextlib import contextmanager
28from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY
29from u1sync.merge import MergeNode
30from u1sync.utils import safe_unlink
31import uuid
32
33class Metadata(object):
34 """Object representing mirror metadata."""
35 def __init__(self, local_tree=None, remote_tree=None, share_uuid=None,
36 root_uuid=None, path=None):
37 """Populate fields."""
38 self.local_tree = local_tree
39 self.remote_tree = remote_tree
40 self.share_uuid = share_uuid
41 self.root_uuid = root_uuid
42 self.path = path
43
44def read(metadata_dir):
45 """Read metadata for a mirror rooted at directory."""
46 index_file = os.path.join(metadata_dir, "local-index")
47 share_uuid_file = os.path.join(metadata_dir, "share-uuid")
48 root_uuid_file = os.path.join(metadata_dir, "root-uuid")
49 path_file = os.path.join(metadata_dir, "path")
50
51 index = read_pickle_file(index_file, {})
52 share_uuid = read_uuid_file(share_uuid_file)
53 root_uuid = read_uuid_file(root_uuid_file)
54 path = read_string_file(path_file, '/')
55
56 local_tree = index.get("tree", None)
57 remote_tree = index.get("remote_tree", None)
58
59 if local_tree is None:
60 local_tree = MergeNode(node_type=DIRECTORY, children={})
61 if remote_tree is None:
62 remote_tree = MergeNode(node_type=DIRECTORY, children={})
63
64 return Metadata(local_tree=local_tree, remote_tree=remote_tree,
65 share_uuid=share_uuid, root_uuid=root_uuid,
66 path=path)
67
68def write(metadata_dir, info):
69 """Writes all metadata for the mirror rooted at directory."""
70 share_uuid_file = os.path.join(metadata_dir, "share-uuid")
71 root_uuid_file = os.path.join(metadata_dir, "root-uuid")
72 index_file = os.path.join(metadata_dir, "local-index")
73 path_file = os.path.join(metadata_dir, "path")
74 if info.share_uuid is not None:
75 write_uuid_file(share_uuid_file, info.share_uuid)
76 else:
77 safe_unlink(share_uuid_file)
78 if info.root_uuid is not None:
79 write_uuid_file(root_uuid_file, info.root_uuid)
80 else:
81 safe_unlink(root_uuid_file)
82 write_string_file(path_file, info.path)
83 write_pickle_file(index_file, {"tree": info.local_tree,
84 "remote_tree": info.remote_tree})
85
86def write_pickle_file(filename, value):
87 """Writes a pickled python object to a file."""
88 with atomic_update_file(filename) as stream:
89 pickle.dump(value, stream, 2)
90
91def write_string_file(filename, value):
92 """Writes a string to a file with an added line feed, or
93 deletes the file if value is None.
94 """
95 if value is not None:
96 with atomic_update_file(filename) as stream:
97 stream.write(value)
98 stream.write('\n')
99 else:
100 safe_unlink(filename)
101
102def write_uuid_file(filename, value):
103 """Writes a UUID to a file."""
104 write_string_file(filename, str(value))
105
106def read_pickle_file(filename, default_value=None):
107 """Reads a pickled python object from a file."""
108 try:
109 with open(filename, "rb") as stream:
110 return pickle.load(stream)
111 except IOError, e:
112 if e.errno != ENOENT:
113 raise
114 return default_value
115
116def read_string_file(filename, default_value=None):
117 """Reads a string from a file, discarding the final character."""
118 try:
119 with open(filename, "r") as stream:
120 return stream.read()[:-1]
121 except IOError, e:
122 if e.errno != ENOENT:
123 raise
124 return default_value
125
126def read_uuid_file(filename, default_value=None):
127 """Reads a UUID from a file."""
128 try:
129 with open(filename, "r") as stream:
130 return uuid.UUID(stream.read()[:-1])
131 except IOError, e:
132 if e.errno != ENOENT:
133 raise
134 return default_value
135
136@contextmanager
137def atomic_update_file(filename):
138 """Returns a context manager for atomically updating a file."""
139 temp_filename = "%s.%s" % (filename, uuid.uuid4())
140 try:
141 with open(temp_filename, "w") as stream:
142 yield stream
143 # os.rename() cannot overwrite an existing file on Windows, so
144 # drop the destination first (sacrifices atomicity on win32).
145 if os.name == "nt":
146 safe_unlink(filename)
147 os.rename(temp_filename, filename)
148 finally:
149 safe_unlink(temp_filename)
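The metadata module keeps four flat files per mirror (local-index, share-uuid, root-uuid, path) and rewrites each through atomic_update_file. A hedged round-trip sketch, not part of this branch; the ".ubuntuone-sync" directory name is an assumption standing in for the real METADATA_DIR_NAME from u1sync.constants:

    import os
    from u1sync import metadata

    # Read state from a previous run; missing files fall back to the
    # defaults built into read() (empty trees, path "/").
    metadata_dir = os.path.join("my-mirror", ".ubuntuone-sync")
    info = metadata.read(metadata_dir)
    print "share:", info.share_uuid, "root:", info.root_uuid

    # Change one field and persist everything again; each file is
    # replaced via a uniquely named temp file plus os.rename().
    info.path = "/documents"
    metadata.write(metadata_dir, info)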
=== added file 'src/u1sync/scan.py'
--- src/u1sync/scan.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/scan.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,102 @@
1# ubuntuone.u1sync.scan
2#
3# Directory scanning
4#
5# Author: Tim Cole <tim.cole@canonical.com>
6#
7# Copyright 2009 Canonical Ltd.
8#
9# This program is free software: you can redistribute it and/or modify it
10# under the terms of the GNU General Public License version 3, as published
11# by the Free Software Foundation.
12#
13# This program is distributed in the hope that it will be useful, but
14# WITHOUT ANY WARRANTY; without even the implied warranties of
15# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
16# PURPOSE. See the GNU General Public License for more details.
17#
18# You should have received a copy of the GNU General Public License along
19# with this program. If not, see <http://www.gnu.org/licenses/>.
20"""Code for scanning local directory state."""
21
22from __future__ import with_statement
23
24import os
25import hashlib
26import shutil
27from errno import ENOTDIR, EINVAL
28import sys
29
30EMPTY_HASH = "sha1:%s" % hashlib.sha1().hexdigest()
31
32from ubuntuone.storageprotocol.dircontent_pb2 import \
33 DIRECTORY, FILE, SYMLINK
34from u1sync.genericmerge import MergeNode
35from u1sync.utils import should_sync
36
37def scan_directory(path, display_path="", quiet=False):
38 """Scans a local directory and builds an in-memory tree from it."""
39 if display_path != "" and not quiet:
40 print display_path
41
42 link_target = None
43 child_names = None
44 try:
45 print "Path is " + str(path)
46 if sys.platform == "win32":
47 if path.endswith(".lnk") or path.endswith(".url"):
48 import win32com.client
49 import pythoncom
50 pythoncom.CoInitialize()
51 shell = win32com.client.Dispatch("WScript.Shell")
52 shortcut = shell.CreateShortCut(path)
53 # the shortcut's Targetpath becomes the symlink target
54 link_target = shortcut.Targetpath
55 else:
56 link_target = None
57 if os.path.isdir(path):
58 child_names = os.listdir(path)
59 else:
60 link_target = os.readlink(path)
61 except OSError, e:
62 if e.errno != EINVAL:
63 raise
64 try:
65 child_names = os.listdir(path)
66 except OSError, e:
67 if e.errno != ENOTDIR:
68 raise
69
70 if link_target is not None:
71 # symlink
72 digest = hashlib.sha1()
73 digest.update(link_target)
74 content_hash = "sha1:%s" % digest.hexdigest()
75 return MergeNode(node_type=SYMLINK, content_hash=content_hash)
76 elif child_names is not None:
77 # directory
78 child_names = [n for n in child_names if should_sync(n.decode("utf-8"))]
79 child_paths = [(os.path.join(path, child_name),
80 os.path.join(display_path, child_name)) \
81 for child_name in child_names]
82 children = [scan_directory(child_path, child_display_path, quiet) \
83 for (child_path, child_display_path) in child_paths]
84 unicode_child_names = [n.decode("utf-8") for n in child_names]
85 children = dict(zip(unicode_child_names, children))
86 return MergeNode(node_type=DIRECTORY, children=children)
87 else:
88 # regular file
89 digest = hashlib.sha1()
90
91
92 class HashStream(object):
93 """File-like object that feeds everything written to it into digest."""
94 def write(self, data):
95 """Accumulate bytes."""
96 digest.update(data)
97
98
99 # binary mode, so Windows newline translation cannot corrupt the hash
100 with open(path, "rb") as stream:
101 shutil.copyfileobj(stream, HashStream())
102 content_hash = "sha1:%s" % digest.hexdigest()
102 return MergeNode(node_type=FILE, content_hash=content_hash)
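scan_directory hashes every regular file into the "sha1:&lt;hexdigest&gt;" form the server expects, streaming the content through shutil.copyfileobj rather than loading it whole. The same convention in a standalone chunked form, a sketch that is not part of this branch:

    from __future__ import with_statement
    import hashlib

    def file_content_hash(path):
        """Return the "sha1:<hexdigest>" content hash u1sync uses."""
        digest = hashlib.sha1()
        # binary mode, so the result is identical on Windows and Linux
        with open(path, "rb") as stream:
            for chunk in iter(lambda: stream.read(64 * 1024), ""):
                digest.update(chunk)
        return "sha1:%s" % digest.hexdigest()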
=== added file 'src/u1sync/sync.py'
--- src/u1sync/sync.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/sync.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,384 @@
1# ubuntuone.u1sync.sync
2#
3# State update
4#
5# Author: Tim Cole <tim.cole@canonical.com>
6#
7# Copyright 2009 Canonical Ltd.
8#
9# This program is free software: you can redistribute it and/or modify it
10# under the terms of the GNU General Public License version 3, as published
11# by the Free Software Foundation.
12#
13# This program is distributed in the hope that it will be useful, but
14# WITHOUT ANY WARRANTY; without even the implied warranties of
15# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
16# PURPOSE. See the GNU General Public License for more details.
17#
18# You should have received a copy of the GNU General Public License along
19# with this program. If not, see <http://www.gnu.org/licenses/>.
20"""After merging, these routines are used to synchronize state locally and on
21the server to correspond to the merged result."""
22
23from __future__ import with_statement
24
25import os
26
27EMPTY_HASH = ""
28UPLOAD_SYMBOL = u"\u25b2".encode("utf-8")
29DOWNLOAD_SYMBOL = u"\u25bc".encode("utf-8")
30CONFLICT_SYMBOL = "!"
31DELETE_SYMBOL = "X"
32
33from ubuntuone.storageprotocol import request
34from ubuntuone.storageprotocol.dircontent_pb2 import (
35 DIRECTORY, SYMLINK)
36from u1sync.genericmerge import (
37 MergeNode, generic_merge)
38from u1sync.utils import safe_mkdir
39from u1sync.client import UnsupportedOperationError
40
41def get_conflict_path(path, conflict_info):
42 """Returns path for conflict file corresponding to path."""
43 dirname, name = os.path.split(path)
44 unique_id = conflict_info[0]
45 return os.path.join(dirname, "conflict-%s-%s" % (unique_id, name))
46
47def name_from_path(path):
48 """Returns unicode name from last path component."""
49 return os.path.split(path)[1].decode("UTF-8")
50
51
52class NodeSyncError(Exception):
53 """Error syncing node."""
54
55
56class NodeCreateError(NodeSyncError):
57 """Error creating node."""
58
59
60class NodeUpdateError(NodeSyncError):
61 """Error updating node."""
62
63
64class NodeDeleteError(NodeSyncError):
65 """Error deleting node."""
66
67
68def sync_tree(merged_tree, original_tree, sync_mode, path, quiet):
69 """Performs actual synchronization."""
70
71 def pre_merge(nodes, name, partial_parent):
72 """Create nodes and write content as required."""
73 (merged_node, original_node) = nodes
74 # pylint: disable-msg=W0612
75 (parent_path, parent_display_path, parent_uuid, parent_synced) \
76 = partial_parent
77
78 utf8_name = name.encode("utf-8")
79 path = os.path.join(parent_path, utf8_name)
80 display_path = os.path.join(parent_display_path, utf8_name)
81 node_uuid = None
82
83 synced = False
84 if merged_node is not None:
85 if merged_node.node_type == DIRECTORY:
86 if original_node is not None:
87 synced = True
88 node_uuid = original_node.uuid
89 else:
90 if not quiet:
91 print "%s %s" % (sync_mode.symbol, display_path)
92 try:
93 create_dir = sync_mode.create_directory
94 node_uuid = create_dir(parent_uuid=parent_uuid,
95 path=path)
96 synced = True
97 except NodeCreateError, e:
98 print e
99 elif merged_node.content_hash is None:
100 if not quiet:
101 print "? %s" % display_path
102 elif original_node is None or \
103 original_node.content_hash != merged_node.content_hash or \
104 merged_node.conflict_info is not None:
105 conflict_info = merged_node.conflict_info
106 if conflict_info is not None:
107 conflict_symbol = CONFLICT_SYMBOL
108 else:
109 conflict_symbol = " "
110 if not quiet:
111 print "%s %s %s" % (sync_mode.symbol, conflict_symbol,
112 display_path)
113 if original_node is not None:
114 node_uuid = original_node.uuid or merged_node.uuid
115 original_hash = original_node.content_hash or EMPTY_HASH
116 else:
117 node_uuid = merged_node.uuid
118 original_hash = EMPTY_HASH
119 try:
120 sync_mode.write_file(node_uuid=node_uuid,
121 content_hash=
122 merged_node.content_hash,
123 old_content_hash=original_hash,
124 path=path,
125 parent_uuid=parent_uuid,
126 conflict_info=conflict_info,
127 node_type=merged_node.node_type)
128 synced = True
129 except NodeSyncError, e:
130 print e
131 else:
132 synced = True
133
134 return (path, display_path, node_uuid, synced)
135
136 def post_merge(nodes, partial_result, child_results):
137 """Delete nodes."""
138 (merged_node, original_node) = nodes
139 # pylint: disable-msg=W0612
140 (path, display_path, node_uuid, synced) = partial_result
141
142 if merged_node is None:
143 assert original_node is not None
144 if not quiet:
145 print "%s %s %s" % (sync_mode.symbol, DELETE_SYMBOL,
146 display_path)
147 try:
148 if original_node.node_type == DIRECTORY:
149 sync_mode.delete_directory(node_uuid=original_node.uuid,
150 path=path)
151 else:
152 # files or symlinks
153 sync_mode.delete_file(node_uuid=original_node.uuid,
154 path=path)
155 synced = True
156 except NodeDeleteError, e:
157 print e
158
159 if synced:
160 model_node = merged_node
161 else:
162 model_node = original_node
163
164 if model_node is not None:
165 if model_node.node_type == DIRECTORY:
166 child_iter = child_results.iteritems()
167 merged_children = dict([(name, child) for (name, child)
168 in child_iter
169 if child is not None])
170 else:
171 # if there are children here it's because they failed to delete
172 merged_children = None
173 return MergeNode(node_type=model_node.node_type,
174 uuid=model_node.uuid,
175 children=merged_children,
176 content_hash=model_node.content_hash)
177 else:
178 return None
179
180 return generic_merge(trees=[merged_tree, original_tree],
181 pre_merge=pre_merge, post_merge=post_merge,
182 partial_parent=(path, "", None, True), name=u"")
183
184def download_tree(merged_tree, local_tree, client, share_uuid, path, dry_run,
185 quiet):
186 """Downloads a directory."""
187 if dry_run:
188 downloader = DryRun(symbol=DOWNLOAD_SYMBOL)
189 else:
190 downloader = Downloader(client=client, share_uuid=share_uuid)
191 return sync_tree(merged_tree=merged_tree, original_tree=local_tree,
192 sync_mode=downloader, path=path, quiet=quiet)
193
194def upload_tree(merged_tree, remote_tree, client, share_uuid, path, dry_run,
195 quiet):
196 """Uploads a directory."""
197 if dry_run:
198 uploader = DryRun(symbol=UPLOAD_SYMBOL)
199 else:
200 uploader = Uploader(client=client, share_uuid=share_uuid)
201 return sync_tree(merged_tree=merged_tree, original_tree=remote_tree,
202 sync_mode=uploader, path=path, quiet=quiet)
203
204
205class DryRun(object):
206 """A class which implements the sync interface but does nothing."""
207 def __init__(self, symbol):
208 """Initializes a DryRun instance."""
209 self.symbol = symbol
210
211 def create_directory(self, parent_uuid, path):
212 """Doesn't create a directory."""
213 return None
214
215 def write_file(self, node_uuid, old_content_hash, content_hash,
216 parent_uuid, path, conflict_info, node_type):
217 """Doesn't write a file."""
218 return None
219
220 def delete_directory(self, node_uuid, path):
221 """Doesn't delete a directory."""
222
223 def delete_file(self, node_uuid, path):
224 """Doesn't delete a file."""
225
226
227class Downloader(object):
228 """A class which implements the download half of syncing."""
229 def __init__(self, client, share_uuid):
230 """Initializes a Downloader instance."""
231 self.client = client
232 self.share_uuid = share_uuid
233 self.symbol = DOWNLOAD_SYMBOL
234
235 def create_directory(self, parent_uuid, path):
236 """Creates a directory."""
237 try:
238 safe_mkdir(path)
239 except OSError, e:
240 raise NodeCreateError("Error creating local directory %s: %s" % \
241 (path, e))
242 return None
243
244 def write_file(self, node_uuid, old_content_hash, content_hash,
245 parent_uuid, path, conflict_info, node_type):
246 """Creates a file and downloads new content for it."""
247 if conflict_info:
248 # download to conflict file rather than overwriting local changes
249 path = get_conflict_path(path, conflict_info)
250 content_hash = conflict_info[1]
251 try:
252 if node_type == SYMLINK:
253 self.client.download_string(share_uuid=
254 self.share_uuid,
255 node_uuid=node_uuid,
256 content_hash=content_hash)
257 else:
258 self.client.download_file(share_uuid=self.share_uuid,
259 node_uuid=node_uuid,
260 content_hash=content_hash,
261 filename=path)
262 except (request.StorageRequestError, UnsupportedOperationError), e:
263 if os.path.exists(path):
264 raise NodeUpdateError("Error downloading content for %s: %s" %\
265 (path, e))
266 else:
267 raise NodeCreateError("Error locally creating %s: %s" % \
268 (path, e))
269
270 def delete_directory(self, node_uuid, path):
271 """Deletes a directory."""
272 try:
273 os.rmdir(path)
274 except OSError, e:
275 raise NodeDeleteError("Error locally deleting %s: %s" % (path, e))
276
277 def delete_file(self, node_uuid, path):
278 """Deletes a file."""
279 try:
280 os.unlink(path)
281 except OSError, e:
282 raise NodeDeleteError("Error locally deleting %s: %s" % (path, e))
283
284
285class Uploader(object):
286 """A class which implements the upload half of syncing."""
287 def __init__(self, client, share_uuid):
288 """Initializes an uploader instance."""
289 self.client = client
290 self.share_uuid = share_uuid
291 self.symbol = UPLOAD_SYMBOL
292
293 def create_directory(self, parent_uuid, path):
294 """Creates a directory on the server."""
295 name = name_from_path(path)
296 try:
297 return self.client.create_directory(share_uuid=self.share_uuid,
298 parent_uuid=parent_uuid,
299 name=name)
300 except (request.StorageRequestError, UnsupportedOperationError), e:
301 raise NodeCreateError("Error remotely creating %s: %s" % \
302 (path, e))
303
304 def write_file(self, node_uuid, old_content_hash, content_hash,
305 parent_uuid, path, conflict_info, node_type):
306 """Creates a file on the server and uploads new content for it."""
307
308 if conflict_info:
309 # move conflicting file out of the way on the server
310 conflict_path = get_conflict_path(path, conflict_info)
311 conflict_name = name_from_path(conflict_path)
312 try:
313 self.client.move(share_uuid=self.share_uuid,
314 parent_uuid=parent_uuid,
315 name=conflict_name,
316 node_uuid=node_uuid)
317 except (request.StorageRequestError, UnsupportedOperationError), e:
318 raise NodeUpdateError("Error remotely renaming %s to %s: %s" %\
319 (path, conflict_path, e))
320 node_uuid = None
321 old_content_hash = EMPTY_HASH
322
323 if node_type == SYMLINK:
324 try:
325 target = os.readlink(path)
326 except OSError, e:
327 raise NodeCreateError("Error retrieving link target " \
328 "for %s: %s" % (path, e))
329 else:
330 target = None
331
332 name = name_from_path(path)
333 if node_uuid is None:
334 try:
335 if node_type == SYMLINK:
336 node_uuid = self.client.create_symlink(share_uuid=
337 self.share_uuid,
338 parent_uuid=
339 parent_uuid,
340 name=name,
341 target=target)
342 old_content_hash = content_hash
343 else:
344 node_uuid = self.client.create_file(share_uuid=
345 self.share_uuid,
346 parent_uuid=
347 parent_uuid,
348 name=name)
349 except (request.StorageRequestError, UnsupportedOperationError), e:
350 raise NodeCreateError("Error remotely creating %s: %s" % \
351 (path, e))
352
353 if old_content_hash != content_hash:
354 try:
355 if node_type == SYMLINK:
356 self.client.upload_string(share_uuid=self.share_uuid,
357 node_uuid=node_uuid,
358 content_hash=content_hash,
359 old_content_hash=
360 old_content_hash,
361 content=target)
362 else:
363 self.client.upload_file(share_uuid=self.share_uuid,
364 node_uuid=node_uuid,
365 content_hash=content_hash,
366 old_content_hash=old_content_hash,
367 filename=path)
368 except (request.StorageRequestError, UnsupportedOperationError), e:
369 raise NodeUpdateError("Error uploading content for %s: %s" % \
370 (path, e))
371
372 def delete_directory(self, node_uuid, path):
373 """Deletes a directory."""
374 try:
375 self.client.unlink(share_uuid=self.share_uuid, node_uuid=node_uuid)
376 except (request.StorageRequestError, UnsupportedOperationError), e:
377 raise NodeDeleteError("Error remotely deleting %s: %s" % (path, e))
378
379 def delete_file(self, node_uuid, path):
380 """Deletes a file."""
381 try:
382 self.client.unlink(share_uuid=self.share_uuid, node_uuid=node_uuid)
383 except (request.StorageRequestError, UnsupportedOperationError), e:
384 raise NodeDeleteError("Error remotely deleting %s: %s" % (path, e))
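sync_tree only ever talks to the object passed as sync_mode, so DryRun doubles as documentation of that five-method interface. A sketch, not part of this branch, that previews uploading one new file without touching disk or server; it assumes the MergeNode keyword defaults already relied on in scan.py:

    from ubuntuone.storageprotocol.dircontent_pb2 import DIRECTORY, FILE
    from u1sync.genericmerge import MergeNode
    from u1sync.sync import sync_tree, DryRun, UPLOAD_SYMBOL

    # The merged tree holds one new file; the original tree is empty.
    new_file = MergeNode(node_type=FILE,
                         content_hash="sha1:da39a3ee5e6b4b0d3255bfef95601890afd80709")
    merged = MergeNode(node_type=DIRECTORY, children={u"a.txt": new_file})
    original = MergeNode(node_type=DIRECTORY, children={})

    # Prints the upload symbol next to a.txt but performs no operation.
    sync_tree(merged_tree=merged, original_tree=original,
              sync_mode=DryRun(symbol=UPLOAD_SYMBOL),
              path="mirror-root", quiet=False)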
=== added file 'src/u1sync/ubuntuone_optparse.py'
--- src/u1sync/ubuntuone_optparse.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/ubuntuone_optparse.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,202 @@
1# ubuntuone.u1sync.ubuntuone_optparse
2#
3# Prototype directory sync client
4#
5# Author: Manuel de la Pena <manuel.delapena@canonical.com>
6#
7# Copyright 2010 Canonical Ltd.
8#
9# This program is free software: you can redistribute it and/or modify it
10# under the terms of the GNU General Public License version 3, as published
11# by the Free Software Foundation.
12#
13# This program is distributed in the hope that it will be useful, but
14# WITHOUT ANY WARRANTY; without even the implied warranties of
15# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
16# PURPOSE. See the GNU General Public License for more details.
17#
18# You should have received a copy of the GNU General Public License along
19# with this program. If not, see <http://www.gnu.org/licenses/>.
20import uuid
21from oauth.oauth import OAuthToken
22from optparse import OptionParser, SUPPRESS_HELP
23from u1sync.merge import (
24 SyncMerge, ClobberServerMerge, ClobberLocalMerge)
25
26MERGE_ACTIONS = {
27 # action: (merge_class, should_upload, should_download)
28 'sync': (SyncMerge, True, True),
29 'clobber-server': (ClobberServerMerge, True, False),
30 'clobber-local': (ClobberLocalMerge, False, True),
31 'upload': (SyncMerge, True, False),
32 'download': (SyncMerge, False, True),
33 'auto': None # special case
34}
35
36DEFAULT_MERGE_ACTION = 'auto'
37
38class NotParsedOptionsError(Exception):
39 """Exception thrown when there options have not been parsed."""
40
41class NotValidatedOptionsError(Exception):
42 """Exception thrown when the options have not been validated."""
43
44class UbuntuOneOptionsParser(OptionParser):
45 """Parse for the options passed in the command line"""
46
47 def __init__(self):
48 usage = "Usage: %prog [options] [DIRECTORY]\n" \
49 " %prog --authorize [options]\n" \
50 " %prog --list-shares [options]\n" \
51 " %prog --init [--share=SHARE_UUID] [options] DIRECTORY\n" \
52 " %prog --diff [--share=SHARE_UUID] [options] DIRECTORY"
53 OptionParser.__init__(self, usage=usage)
54 self._was_validated = False
55 self._args = None
56 # add the different options to be used
57 self.add_option("--port", dest="port", metavar="PORT",
58 default=443,
59 help="The port on which to connect to the server")
60 self.add_option("--host", dest="host", metavar="HOST",
61 default='fs-1.one.ubuntu.com',
62 help="The server address")
63 self.add_option("--realm", dest="realm", metavar="REALM",
64 default='https://ubuntuone.com',
65 help="The oauth realm")
66 self.add_option("--oauth", dest="oauth", metavar="KEY:SECRET",
67 default=None,
68 help="Explicitly provide OAuth credentials "
69 "(default is to query keyring)")
70
71 action_list = ", ".join(sorted(MERGE_ACTIONS.keys()))
72 self.add_option("--action", dest="action", metavar="ACTION",
73 default=None,
74 help="Select a sync action (%s; default is %s)" % \
75 (action_list, DEFAULT_MERGE_ACTION))
76
77 self.add_option("--dry-run", action="store_true", dest="dry_run",
78 default=False, help="Do a dry run without actually "
79 "making changes")
80 self.add_option("--quiet", action="store_true", dest="quiet",
81 default=False, help="Produces less output")
82 self.add_option("--authorize", action="store_const", dest="mode",
83 const="authorize",
84 help="Authorize this machine")
85 self.add_option("--list-shares", action="store_const", dest="mode",
86 const="list-shares", default="sync",
87 help="List available shares")
88 self.add_option("--init", action="store_const", dest="mode",
89 const="init",
90 help="Initialize a local directory for syncing")
91 self.add_option("--no-ssl-verify", action="store_true",
92 dest="no_ssl_verify",
93 default=False, help=SUPPRESS_HELP)
94 self.add_option("--diff", action="store_const", dest="mode",
95 const="diff",
96 help="Compare tree on server with local tree " \
97 "(does not require previous --init)")
98 self.add_option("--share", dest="share", metavar="SHARE_UUID",
99 default=None,
100 help="Sync the directory with a share rather than the " \
101 "user's own volume")
102 self.add_option("--subtree", dest="subtree", metavar="PATH",
103 default=None,
104 help="Mirror a subset of the share or volume")
105
106 def get_options(self, arguments):
107 """Parses the arguments to from the command line."""
108 (self.options, self._args) = \
109 self.parse_args(arguments)
110 self._validate_args()
111
112 def _validate_args(self):
113 """Validates the args that have been parsed."""
114 self._is_only_share()
115 self._get_directory()
116 self._validate_action_usage()
117 self._validate_authorize_usage()
118 self._validate_subtree_usage()
119 self._validate_action()
120 self._validate_oauth()
121 self._validate_share()
122 def _is_only_share(self):
123 """Ensures that the share options is not convined with any other."""
124 if self.options.share is not None and \
125 self.options.mode != "init" and \
126 self.options.mode != "diff":
127 self.error("--share is only valid with --init or --diff")
128
129 def _get_directory(self):
130 """Gets the directory to be used according to the paramenters."""
131 print self._args
132 if self.options.mode == "sync" or self.options.mode == "init" or \
133 self.options.mode == "diff":
134 if len(self._args) > 2:
135 self.error("Too many arguments")
136 elif len(self._args) < 1:
137 if self.options.mode == "init" or self.options.mode == "diff":
138 self.error("--%s requires a directory to "
139 "be specified" % self.options.mode)
140 else:
141 self.options.directory = "."
142 else:
143 self.options.directory = self._args[0]
144
145 def _validate_action_usage(self):
146 """Ensures that the --action option is correctly used"""
147 if self.options.mode == "init" or \
148 self.options.mode == "list-shares" or \
149 self.options.mode == "diff" or \
150 self.options.mode == "authorize":
151 if self.options.action is not None:
152 self.error("--%s does not take the --action parameter" % \
153 self.options.mode)
154 if self.options.dry_run:
155 self.error("--%s does not take the --dry-run parameter" % \
156 self.options.mode)
157
158 def _validate_authorize_usage(self):
159 """Validates the usage of the authorize option."""
160 if self.options.mode == "authorize":
161 if self.options.oauth is not None:
162 self.error("--authorize does not take the --oauth parameter")
163 if self.options.mode == "list-shares" or \
164 self.options.mode == "authorize":
165 if len(self._args) != 0:
166 self.error("--list-shares does not take a directory")
167
168 def _validate_subtree_usage(self):
169 """Validates the usage of the subtree option"""
170 if self.options.mode != "init" and self.options.mode != "diff":
171 if self.options.subtree is not None:
172 self.error("--%s does not take the --subtree parameter" % \
173 self.options.mode)
174
175 def _validate_action(self):
176 """Validates the actions passed to the options."""
177 if self.options.action is not None and \
178 self.options.action not in MERGE_ACTIONS:
179 self.error("--action: Unknown action %s" % self.options.action)
180
181 if self.options.action is None:
182 self.options.action = DEFAULT_MERGE_ACTION
183
184 def _validate_oauth(self):
185 """Validates that the oatuh was passed."""
186 if self.options.oauth is None:
187 self.error("--oauth is currently compulsery.")
188 else:
189 try:
190 (key, secret) = self.options.oauth.split(':', 2)
191 except ValueError:
192 self.error("--oauth requires a key and secret together in the "
193 " form KEY:SECRET")
194 self.options.token = OAuthToken(key, secret)
195
196 def _validate_share(self):
197 """Validates the share option"""
198 if self.options.share is not None:
199 try:
200 uuid.UUID(self.options.share)
201 except ValueError, e:
202 self.error("Invalid --share argument: %s" % e)
\ No newline at end of file
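Driving the parser from code instead of a console entry point looks like this; a sketch, not part of this branch, with KEY:SECRET standing in for real OAuth credentials:

    from u1sync.ubuntuone_optparse import UbuntuOneOptionsParser

    parser = UbuntuOneOptionsParser()
    # Equivalent to: u1sync --oauth KEY:SECRET --action upload my-mirror
    parser.get_options(["--oauth", "KEY:SECRET",
                        "--action", "upload", "my-mirror"])

    print parser.options.mode       # "sync" (the default)
    print parser.options.action    # "upload"
    print parser.options.directory # "my-mirror"
    print parser.options.token     # OAuthToken built from KEY:SECRET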
=== added file 'src/u1sync/utils.py'
--- src/u1sync/utils.py 1970-01-01 00:00:00 +0000
+++ src/u1sync/utils.py 2010-08-27 14:56:43 +0000
@@ -0,0 +1,50 @@
1# ubuntuone.u1sync.utils
2#
3# Miscellaneous utility functions
4#
5# Author: Tim Cole <tim.cole@canonical.com>
6#
7# Copyright 2009 Canonical Ltd.
8#
9# This program is free software: you can redistribute it and/or modify it
10# under the terms of the GNU General Public License version 3, as published
11# by the Free Software Foundation.
12#
13# This program is distributed in the hope that it will be useful, but
14# WITHOUT ANY WARRANTY; without even the implied warranties of
15# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
16# PURPOSE. See the GNU General Public License for more details.
17#
18# You should have received a copy of the GNU General Public License along
19# with this program. If not, see <http://www.gnu.org/licenses/>.
20"""Miscellaneous utility functions."""
21
22import os
23from errno import EEXIST, ENOENT
24from u1sync.constants import (
25 METADATA_DIR_NAME, SPECIAL_FILE_RE)
26
27def should_sync(filename):
28 """Returns True if the filename should be synced.
29
30 @param filename: a unicode filename
31
32 """
33 return filename != METADATA_DIR_NAME and \
34 not SPECIAL_FILE_RE.match(filename)
35
36def safe_mkdir(path):
37 """Creates a directory iff it does not already exist."""
38 try:
39 os.mkdir(path)
40 except OSError, e:
41 if e.errno != EEXIST:
42 raise
43
44def safe_unlink(path):
45 """Unlinks a file iff it exists."""
46 try:
47 os.unlink(path)
48 except OSError, e:
49 if e.errno != ENOENT:
50 raise
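Finally, the helpers above in use; a sketch, not part of this branch, where ".ubuntuone-sync" is assumed to be the METADATA_DIR_NAME defined in u1sync.constants:

    from u1sync.utils import should_sync, safe_mkdir, safe_unlink

    safe_mkdir("mirror")                 # creating it twice is a no-op
    safe_mkdir("mirror")
    safe_unlink("mirror/missing.txt")    # no ENOENT raised either

    # The metadata directory (and the special temp/conflict names matched
    # by SPECIAL_FILE_RE) must never be synced.
    for name in [u"photo.png", u".ubuntuone-sync"]:
        print name, should_sync(name)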
