Merge lp:~alecu/ubuntuone-client/ziggy-for-filesync into lp:ubuntuone-client

Proposed by Alejandro J. Cura
Status: Merged
Approved by: dobey
Approved revision: 779
Merged at revision: 774
Proposed branch: lp:~alecu/ubuntuone-client/ziggy-for-filesync
Merge into: lp:ubuntuone-client
Diff against target: 1754 lines (+1401/-83)
8 files modified
contrib/testing/testcase.py (+1/-1)
tests/syncdaemon/test_event_logging.py (+930/-74)
tests/syncdaemon/test_fsm.py (+36/-0)
tests/syncdaemon/test_sync.py (+55/-3)
ubuntuone/syncdaemon/event_logging.py (+359/-1)
ubuntuone/syncdaemon/event_queue.py (+6/-0)
ubuntuone/syncdaemon/filesystem_manager.py (+4/-1)
ubuntuone/syncdaemon/sync.py (+10/-3)
To merge this branch: bzr merge lp:~alecu/ubuntuone-client/ziggy-for-filesync
Reviewer Review Type Date Requested Status
Facundo Batista (community) Approve
Natalia Bidart (community) Approve
Review via email: mp+43495@code.launchpad.net

Commit message

Remaining SyncDaemon events to be logged into Zeitgeist.

Description of the change

Remaining SyncDaemon events to be logged into Zeitgeist.
https://wiki.ubuntu.com/UbuntuOne/Specs/ZeitgeistIntegration/EventsSpec

Directory and file synchronization and conflicts, UDF creation/deletion/(un)subscription, and file publishing.
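The pattern used throughout this branch (visible in the diff below) is an event-queue listener whose `handle_<EVENT_NAME>` methods build a Zeitgeist event and hand it to a logger. The following sketch illustrates that shape using plain dicts and hypothetical stand-in names (`MockZeitgeistLogger`, `FileSyncListener`, the actor URI) rather than the real `zeitgeist.datamodel` classes:

```python
class MockZeitgeistLogger(object):
    """Collects events instead of sending them to the Zeitgeist daemon."""

    def __init__(self):
        self.events = []

    def log(self, event):
        """Record the event locally."""
        self.events.append(event)


class FileSyncListener(object):
    """Listens on the SyncDaemon event queue and logs upload completions."""

    def __init__(self, logger):
        self.zg = logger

    def handle_AQ_UPLOAD_FINISHED(self, share_id, node_id, hash, new_generation):
        """Translate a finished upload into a Zeitgeist-style record."""
        # One subject per event: the remote copy of the uploaded file.
        # Field names and values here are schematic, not the real datamodel.
        event = {
            "interpretation": "MODIFY_EVENT",
            "manifestation": "SCHEDULED_ACTIVITY",
            "actor": "application://ubuntuone-syncdaemon.desktop",
            "subject_uri": "ubuntuone:%s" % node_id,
        }
        self.zg.log(event)


logger = MockZeitgeistLogger()
listener = FileSyncListener(logger)
listener.handle_AQ_UPLOAD_FINISHED("", "a_node_id", "yadda", 13)
```

In the branch itself the listener is subscribed to the SyncDaemon event queue, and the real logger forwards the events to the Zeitgeist daemon; the tests below swap that logger for a mock, as in this sketch.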

Revision history for this message
Natalia Bidart (nataliabidart) wrote :

Great work. Really.

review: Approve
Revision history for this message
Facundo Batista (facundo) wrote :

Like it

review: Approve
Revision history for this message
Ubuntu One Auto Pilot (otto-pilot) wrote :

There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.

Revision history for this message
Ubuntu One Auto Pilot (otto-pilot) wrote :

The attempt to merge lp:~alecu/ubuntuone-client/ziggy-for-filesync into lp:ubuntuone-client failed. Below is the output from the failed tests.

/usr/bin/gnome-autogen.sh
checking for autoconf >= 2.53...
  testing autoconf2.50... not found.
  testing autoconf... found 2.67
checking for automake >= 1.10...
  testing automake-1.11... found 1.11.1
checking for libtool >= 1.5...
  testing libtoolize... found 2.2.6b
checking for intltool >= 0.30...
  testing intltoolize... found 0.41.1
checking for pkg-config >= 0.14.0...
  testing pkg-config... found 0.25
checking for gtk-doc >= 1.0...
  testing gtkdocize... found 1.15
Checking for required M4 macros...
Checking for forbidden M4 macros...
Processing ./configure.ac
Running libtoolize...
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
Running intltoolize...
Running gtkdocize...
Running aclocal-1.11...
Running autoconf...
Running autoheader...
Running automake-1.11...
Running ./configure --enable-gtk-doc --enable-debug ...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for library containing strerror... none required
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking dependency style of gcc... (cached) gcc3
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking for a sed that does not truncate output... /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... yes
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent lib...

Preview Diff

=== modified file 'contrib/testing/testcase.py'
--- contrib/testing/testcase.py 2010-12-02 17:50:57 +0000
+++ contrib/testing/testcase.py 2010-12-13 23:42:15 +0000
@@ -143,7 +143,7 @@
 
     cancel_download = cancel_upload = download = upload = make_dir = disconnect
     make_file = move = unlink = list_shares = disconnect
-    list_volumes = create_share = inquire_free_space = disconnect
+    list_volumes = create_share = create_udf = inquire_free_space = disconnect
     inquire_account_info = delete_volume = change_public_access = disconnect
     query_volumes = get_delta = rescan_from_scratch = disconnect
     node_is_with_queued_move = disconnect
 
=== modified file 'tests/syncdaemon/test_event_logging.py'
--- tests/syncdaemon/test_event_logging.py 2010-12-09 14:28:21 +0000
+++ tests/syncdaemon/test_event_logging.py 2010-12-13 23:42:15 +0000
@@ -1,3 +1,5 @@
+# -*- coding: utf-8 -*-
+#
 # tests.syncdaemon.test_event_logging - test logging ZG events
 #
 # Author: Alejandro J. Cura <alecu@canonical.com>
@@ -17,23 +19,41 @@
 # with this program. If not, see <http://www.gnu.org/licenses/>.
 """Test the event logging from SyncDaemon into Zeitgeist."""
 
+import logging
 import os
+import shutil
 import uuid
 
 from twisted.internet import defer
 from zeitgeist.datamodel import Interpretation, Manifestation
 
-from contrib.testing.testcase import FakeMainTestCase
+from contrib.testing.testcase import (
+    FakeMain, FakeMainTestCase, BaseTwistedTestCase, MementoHandler)
+from ubuntuone.platform.linux import get_udf_path
+from ubuntuone.storageprotocol import client, delta
+from ubuntuone.storageprotocol.request import ROOT
 from ubuntuone.storageprotocol.sharersp import NotifyShareHolder
+from ubuntuone.syncdaemon.action_queue import (
+    RequestQueue, Upload, MakeFile, MakeDir)
 from ubuntuone.syncdaemon.event_logging import (
     zglog, ZeitgeistListener, ACTOR_UBUNTUONE,
     EVENT_INTERPRETATION_U1_FOLDER_SHARED,
     EVENT_INTERPRETATION_U1_FOLDER_UNSHARED,
     EVENT_INTERPRETATION_U1_SHARE_ACCEPTED,
     EVENT_INTERPRETATION_U1_SHARE_UNACCEPTED,
+    EVENT_INTERPRETATION_U1_CONFLICT_RENAME,
+    EVENT_INTERPRETATION_U1_UDF_CREATED,
+    EVENT_INTERPRETATION_U1_UDF_DELETED,
+    EVENT_INTERPRETATION_U1_UDF_SUBSCRIBED,
+    EVENT_INTERPRETATION_U1_UDF_UNSUBSCRIBED,
     MANIFESTATION_U1_CONTACT_DATA_OBJECT, DIRECTORY_MIMETYPE,
-    INTERPRETATION_U1_CONTACT, URI_PROTOCOL_U1, STORAGE_NETWORK)
-from ubuntuone.syncdaemon.volume_manager import Share, Shared
+    INTERPRETATION_U1_CONTACT, URI_PROTOCOL_U1,
+    STORAGE_DELETED, STORAGE_NETWORK, STORAGE_LOCAL)
+from ubuntuone.syncdaemon.sync import Sync
+from ubuntuone.syncdaemon.volume_manager import Share, Shared, UDF
+from test_action_queue import ConnectedBaseTestCase
+
+VOLUME = uuid.UUID('12345678-1234-1234-1234-123456789abc')
 
 
 class MockLogger(object):
@@ -47,6 +67,33 @@
47 """Log the event."""67 """Log the event."""
48 self.events.append(event)68 self.events.append(event)
4969
70def listen_for(event_q, event, callback, count=1, collect=False):
71 """Setup a EQ listener for the specified event."""
72 class Listener(object):
73 """A basic listener to handle the pushed event."""
74
75 def __init__(self):
76 self.hits = 0
77 self.events = []
78
79 def _handle_event(self, *args, **kwargs):
80 self.hits += 1
81 if collect:
82 self.events.append((args, kwargs))
83 if self.hits == count:
84 event_q.unsubscribe(self)
85 if collect:
86 callback(self.events)
87 elif kwargs:
88 callback((args, kwargs))
89 else:
90 callback(args)
91
92 listener = Listener()
93 setattr(listener, 'handle_'+event, listener._handle_event)
94 event_q.subscribe(listener)
95 return listener
96
5097
51class ZeitgeistListenerTestCase(FakeMainTestCase):98class ZeitgeistListenerTestCase(FakeMainTestCase):
52 """Tests for ZeitgeistListener."""99 """Tests for ZeitgeistListener."""
@@ -58,33 +105,12 @@
         self.listener = ZeitgeistListener(self.fs, self.vm)
         self.event_q.subscribe(self.listener)
 
-    def _listen_for(self, event, callback, count=1, collect=False):
-        """Setup a EQ listener for the specified event."""
-        event_q = self.main.event_q
-        class Listener(object):
-            """A basic listener to handle the pushed event."""
-
-            def __init__(self):
-                self.hits = 0
-                self.events = []
-
-            def _handle_event(self, *args, **kwargs):
-                self.hits += 1
-                if collect:
-                    self.events.append((args, kwargs))
-                if self.hits == count:
-                    event_q.unsubscribe(self)
-                    if collect:
-                        callback(self.events)
-                    elif kwargs:
-                        callback((args, kwargs))
-                    else:
-                        callback(args)
-
-        listener = Listener()
-        setattr(listener, 'handle_'+event, listener._handle_event)
-        event_q.subscribe(listener)
-        return listener
+    def _listen_for(self, *args, **kwargs):
+        return listen_for(self.main.event_q, *args, **kwargs)
+
+
+class ZeitgeistSharesTestCase(ZeitgeistListenerTestCase):
+    """Tests for all Share-related zeitgeist events."""
 
     def test_share_created_with_username_is_logged(self):
         """A ShareCreated event is logged."""
@@ -129,28 +155,28 @@
     def assert_folder_shared_is_logged(self, path, fake_username):
         """Assert that the FolderShared event was logged."""
 
-        self.assertEquals(len(self.listener.zg.events), 1)
+        self.assertEqual(len(self.listener.zg.events), 1)
         event = self.listener.zg.events[0]
 
-        self.assertEquals(event.interpretation,
+        self.assertEqual(event.interpretation,
                          EVENT_INTERPRETATION_U1_FOLDER_SHARED)
-        self.assertEquals(event.manifestation,
+        self.assertEqual(event.manifestation,
                          Manifestation.USER_ACTIVITY)
-        self.assertEquals(event.actor, ACTOR_UBUNTUONE)
+        self.assertEqual(event.actor, ACTOR_UBUNTUONE)
 
         folder = event.subjects[0]
         self.assertTrue(folder.uri.startswith(URI_PROTOCOL_U1))
-        self.assertEquals(folder.interpretation, Interpretation.FOLDER)
-        self.assertEquals(folder.manifestation,
+        self.assertEqual(folder.interpretation, Interpretation.FOLDER)
+        self.assertEqual(folder.manifestation,
                          Manifestation.REMOTE_DATA_OBJECT)
         self.assertTrue(folder.origin.endswith(path))
-        self.assertEquals(folder.mimetype, DIRECTORY_MIMETYPE)
-        self.assertEquals(folder.storage, STORAGE_NETWORK)
+        self.assertEqual(folder.mimetype, DIRECTORY_MIMETYPE)
+        self.assertEqual(folder.storage, STORAGE_NETWORK)
 
         other_user = event.subjects[1]
-        self.assertEquals(other_user.uri, "mailto:" + fake_username)
-        self.assertEquals(other_user.interpretation, INTERPRETATION_U1_CONTACT)
-        self.assertEquals(other_user.manifestation,
+        self.assertEqual(other_user.uri, "mailto:" + fake_username)
+        self.assertEqual(other_user.interpretation, INTERPRETATION_U1_CONTACT)
+        self.assertEqual(other_user.manifestation,
                          MANIFESTATION_U1_CONTACT_DATA_OBJECT)
 
 
@@ -178,28 +204,28 @@
         self.vm.delete_share(share.volume_id)
         yield d
 
-        self.assertEquals(len(self.listener.zg.events), 1)
+        self.assertEqual(len(self.listener.zg.events), 1)
         event = self.listener.zg.events[0]
 
-        self.assertEquals(event.interpretation,
+        self.assertEqual(event.interpretation,
                          EVENT_INTERPRETATION_U1_FOLDER_UNSHARED)
-        self.assertEquals(event.manifestation,
+        self.assertEqual(event.manifestation,
                          Manifestation.USER_ACTIVITY)
-        self.assertEquals(event.actor, ACTOR_UBUNTUONE)
+        self.assertEqual(event.actor, ACTOR_UBUNTUONE)
 
         folder = event.subjects[0]
         self.assertTrue(folder.uri.startswith(URI_PROTOCOL_U1))
-        self.assertEquals(folder.interpretation, Interpretation.FOLDER)
-        self.assertEquals(folder.manifestation,
+        self.assertEqual(folder.interpretation, Interpretation.FOLDER)
+        self.assertEqual(folder.manifestation,
                          Manifestation.REMOTE_DATA_OBJECT)
         self.assertTrue(folder.origin.endswith(path))
-        self.assertEquals(folder.mimetype, DIRECTORY_MIMETYPE)
-        self.assertEquals(folder.storage, STORAGE_NETWORK)
+        self.assertEqual(folder.mimetype, DIRECTORY_MIMETYPE)
+        self.assertEqual(folder.storage, STORAGE_NETWORK)
 
         other_user = event.subjects[1]
-        self.assertEquals(other_user.uri, "mailto:" + fake_username)
-        self.assertEquals(other_user.interpretation, INTERPRETATION_U1_CONTACT)
-        self.assertEquals(other_user.manifestation,
+        self.assertEqual(other_user.uri, "mailto:" + fake_username)
+        self.assertEqual(other_user.interpretation, INTERPRETATION_U1_CONTACT)
+        self.assertEqual(other_user.manifestation,
                          MANIFESTATION_U1_CONTACT_DATA_OBJECT)
 
     def test_share_accepted_is_logged(self):
@@ -214,28 +240,28 @@
                           other_username=fake_username)
         self.vm.add_share(share)
 
-        self.assertEquals(len(self.listener.zg.events), 1)
+        self.assertEqual(len(self.listener.zg.events), 1)
         event = self.listener.zg.events[0]
 
-        self.assertEquals(event.interpretation,
+        self.assertEqual(event.interpretation,
                          EVENT_INTERPRETATION_U1_SHARE_ACCEPTED)
-        self.assertEquals(event.manifestation,
+        self.assertEqual(event.manifestation,
                          Manifestation.USER_ACTIVITY)
-        self.assertEquals(event.actor, ACTOR_UBUNTUONE)
+        self.assertEqual(event.actor, ACTOR_UBUNTUONE)
 
         folder = event.subjects[0]
         self.assertTrue(folder.uri.startswith(URI_PROTOCOL_U1))
-        self.assertEquals(folder.interpretation, Interpretation.FOLDER)
-        self.assertEquals(folder.manifestation,
+        self.assertEqual(folder.interpretation, Interpretation.FOLDER)
+        self.assertEqual(folder.manifestation,
                          Manifestation.REMOTE_DATA_OBJECT)
         self.assertTrue(folder.origin.endswith(share_path))
-        self.assertEquals(folder.mimetype, DIRECTORY_MIMETYPE)
-        self.assertEquals(folder.storage, STORAGE_NETWORK)
+        self.assertEqual(folder.mimetype, DIRECTORY_MIMETYPE)
+        self.assertEqual(folder.storage, STORAGE_NETWORK)
 
         other_user = event.subjects[1]
-        self.assertEquals(other_user.uri, "mailto:" + fake_username)
-        self.assertEquals(other_user.interpretation, INTERPRETATION_U1_CONTACT)
-        self.assertEquals(other_user.manifestation,
+        self.assertEqual(other_user.uri, "mailto:" + fake_username)
+        self.assertEqual(other_user.interpretation, INTERPRETATION_U1_CONTACT)
+        self.assertEqual(other_user.manifestation,
                          MANIFESTATION_U1_CONTACT_DATA_OBJECT)
 
 
@@ -257,26 +283,856 @@
         self.main.event_q.push('SV_SHARE_DELETED', holder.share_id)
         yield d
 
-        self.assertEquals(len(self.listener.zg.events), 2)
+        self.assertEqual(len(self.listener.zg.events), 2)
         event = self.listener.zg.events[1]
 
-        self.assertEquals(event.interpretation,
+        self.assertEqual(event.interpretation,
                          EVENT_INTERPRETATION_U1_SHARE_UNACCEPTED)
-        self.assertEquals(event.manifestation,
+        self.assertEqual(event.manifestation,
                          Manifestation.USER_ACTIVITY)
-        self.assertEquals(event.actor, ACTOR_UBUNTUONE)
+        self.assertEqual(event.actor, ACTOR_UBUNTUONE)
 
         folder = event.subjects[0]
         self.assertTrue(folder.uri.startswith(URI_PROTOCOL_U1))
-        self.assertEquals(folder.interpretation, Interpretation.FOLDER)
-        self.assertEquals(folder.manifestation,
+        self.assertEqual(folder.interpretation, Interpretation.FOLDER)
+        self.assertEqual(folder.manifestation,
                          Manifestation.REMOTE_DATA_OBJECT)
         self.assertTrue(folder.origin.endswith(share_path))
-        self.assertEquals(folder.mimetype, DIRECTORY_MIMETYPE)
-        self.assertEquals(folder.storage, STORAGE_NETWORK)
+        self.assertEqual(folder.mimetype, DIRECTORY_MIMETYPE)
+        self.assertEqual(folder.storage, STORAGE_NETWORK)
 
         other_user = event.subjects[1]
-        self.assertEquals(other_user.uri, "mailto:" + fake_username)
-        self.assertEquals(other_user.interpretation, INTERPRETATION_U1_CONTACT)
-        self.assertEquals(other_user.manifestation,
+        self.assertEqual(other_user.uri, "mailto:" + fake_username)
+        self.assertEqual(other_user.interpretation, INTERPRETATION_U1_CONTACT)
+        self.assertEqual(other_user.manifestation,
                          MANIFESTATION_U1_CONTACT_DATA_OBJECT)
+
+
+class ZeitgeistUDFsTestCase(ZeitgeistListenerTestCase):
+    """Tests for all UDFs-related zeitgeist events."""
+
+    def setUp(self):
+        """Initialize this test instance."""
+        super(ZeitgeistUDFsTestCase, self).setUp()
+        self.home_dir = self.mktemp('ubuntuonehacker')
+        self._old_home = os.environ['HOME']
+        os.environ['HOME'] = self.home_dir
+
+    def tearDown(self):
+        """Finalize this test instance."""
+        self.rmtree(self.home_dir)
+        os.environ['HOME'] = self._old_home
+        super(ZeitgeistUDFsTestCase, self).tearDown()
+
+    def _create_udf(self, id, node_id, suggested_path, subscribed=True):
+        """Create an UDF and returns it and the volume."""
+        path = get_udf_path(suggested_path)
+        # make sure suggested_path is unicode
+        if isinstance(suggested_path, str):
+            suggested_path = suggested_path.decode('utf-8')
+        udf = UDF(str(id), str(node_id), suggested_path, path, subscribed)
+        return udf
+
+    @defer.inlineCallbacks
+    def test_udf_create_is_logged(self):
+        """Test for Folders.create."""
+        path = os.path.join(self.home_dir, u'ñoño'.encode('utf-8'))
+        id = uuid.uuid4()
+        node_id = uuid.uuid4()
+
+        def create_udf(path, name, marker):
+            """Fake create_udf."""
+            # check that the marker is the full path to the udf
+            expanded_path = os.path.expanduser(path.encode('utf-8'))
+            udf_path = os.path.join(expanded_path, name.encode('utf-8'))
+            if str(marker) != udf_path:
+                d.errback(ValueError("marker != path - "
+                                     "marker: %r path: %r" % (marker, udf_path)))
+            self.main.event_q.push("AQ_CREATE_UDF_OK", **dict(volume_id=id,
+                                                              node_id=node_id,
+                                                              marker=marker))
+
+        self.patch(self.main.action_q, "create_udf", create_udf)
+
+        d = defer.Deferred()
+        self._listen_for('VM_UDF_CREATED', d.callback, 1, collect=True)
+        self.vm.create_udf(path.encode('utf-8'))
+        yield d
+
+        self.assertEqual(len(self.listener.zg.events), 1)
+        event = self.listener.zg.events[0]
+
+        self.assertEqual(event.interpretation,
+                         EVENT_INTERPRETATION_U1_UDF_CREATED)
+        self.assertEqual(event.manifestation,
+                         Manifestation.USER_ACTIVITY)
+        self.assertEqual(event.actor, ACTOR_UBUNTUONE)
+
+        folder = event.subjects[0]
+        self.assertTrue(folder.uri.startswith(URI_PROTOCOL_U1))
+        self.assertEqual(folder.interpretation, Interpretation.FOLDER)
+        self.assertEqual(folder.manifestation,
+                         Manifestation.REMOTE_DATA_OBJECT)
+        self.assertTrue(folder.origin.endswith(path))
+        self.assertEqual(folder.mimetype, DIRECTORY_MIMETYPE)
+        self.assertEqual(folder.storage, STORAGE_NETWORK)
+
+    @defer.inlineCallbacks
+    def test_udf_delete_is_logged(self):
+        """Test for Folders.delete."""
+        id = uuid.uuid4()
+        node_id = uuid.uuid4()
+        path = os.path.join(self.home_dir, u'ñoño'.encode('utf-8'))
+
+        def create_udf(path, name, marker):
+            """Fake create_udf."""
+            # check that the marker is the full path to the udf
+            expanded_path = os.path.expanduser(path.encode('utf-8'))
+            udf_path = os.path.join(expanded_path, name.encode('utf-8'))
+            if str(marker) != udf_path:
+                d.errback(ValueError("marker != path - "
+                                     "marker: %r path: %r" % (marker, udf_path)))
+            self.main.event_q.push("AQ_CREATE_UDF_OK", **dict(volume_id=id,
+                                                              node_id=node_id,
+                                                              marker=marker))
+
+        self.patch(self.main.action_q, "create_udf", create_udf)
+
+        d = defer.Deferred()
+        self._listen_for('VM_UDF_CREATED', d.callback, 1, collect=True)
+        self.vm.create_udf(path.encode('utf-8'))
+        yield d
+
+        def delete_volume(path):
+            """Fake delete_volume."""
+            self.main.event_q.push("AQ_DELETE_VOLUME_OK", volume_id=id)
+
+        self.patch(self.main.action_q, "delete_volume", delete_volume)
+
+        self.vm.delete_volume(str(id))
+
+        self.assertEqual(len(self.listener.zg.events), 1)
+        event = self.listener.zg.events[0]
+
+        d = defer.Deferred()
+        self._listen_for('VM_VOLUME_DELETED', d.callback, 1, collect=True)
+        yield d
+
+        self.assertEqual(len(self.listener.zg.events), 2)
+        event = self.listener.zg.events[1]
+
+        self.assertEqual(event.interpretation,
+                         EVENT_INTERPRETATION_U1_UDF_DELETED)
+        self.assertEqual(event.manifestation,
+                         Manifestation.USER_ACTIVITY)
+        self.assertEqual(event.actor, ACTOR_UBUNTUONE)
+
+        folder = event.subjects[0]
+        self.assertTrue(folder.uri.startswith(URI_PROTOCOL_U1))
+        self.assertEqual(folder.interpretation, Interpretation.FOLDER)
+        self.assertEqual(folder.manifestation,
+                         Manifestation.DELETED_RESOURCE)
+        self.assertTrue(folder.origin.endswith(path))
+        self.assertEqual(folder.mimetype, DIRECTORY_MIMETYPE)
+        self.assertEqual(folder.storage, STORAGE_DELETED)
+
+    @defer.inlineCallbacks
+    def test_udf_subscribe_is_logged(self):
+        """Test for Folders.subscribe."""
+        suggested_path = u'~/ñoño'
+        udf = self._create_udf(uuid.uuid4(), 'node_id', suggested_path,
+                               subscribed=False)
+        yield self.main.vm.add_udf(udf)
+        d = defer.Deferred()
+        self._listen_for('VM_UDF_SUBSCRIBED', d.callback, 1, collect=True)
+        self.vm.subscribe_udf(udf.volume_id)
+        yield d
+
+        self.assertEqual(len(self.listener.zg.events), 2)
+        event = self.listener.zg.events[1]
+
+        self.assertEqual(event.interpretation,
+                         EVENT_INTERPRETATION_U1_UDF_SUBSCRIBED)
+        self.assertEqual(event.manifestation,
+                         Manifestation.USER_ACTIVITY)
+        self.assertEqual(event.actor, ACTOR_UBUNTUONE)
+
+        folder = event.subjects[0]
+        self.assertTrue(folder.uri.endswith(udf.path))
+        self.assertEqual(folder.interpretation, Interpretation.FOLDER)
+        self.assertEqual(folder.manifestation,
+                         Manifestation.FILE_DATA_OBJECT)
+        self.assertTrue(folder.origin.startswith(URI_PROTOCOL_U1))
+        self.assertEqual(folder.mimetype, DIRECTORY_MIMETYPE)
+        self.assertEqual(folder.storage, STORAGE_LOCAL)
+
+    @defer.inlineCallbacks
+    def test_udf_unsubscribe_is_logged(self):
+        """Test for Folders.unsubscribe."""
+        suggested_path = u'~/ñoño'
+        udf = self._create_udf(uuid.uuid4(), 'node_id', suggested_path,
+                               subscribed=True)
+        yield self.main.vm.add_udf(udf)
+        d = defer.Deferred()
+        self._listen_for('VM_UDF_UNSUBSCRIBED', d.callback, 1, collect=True)
+        self.vm.unsubscribe_udf(udf.volume_id)
+        yield d
+
+        self.assertEqual(len(self.listener.zg.events), 2)
+        event = self.listener.zg.events[1]
+
+        self.assertEqual(event.interpretation,
+                         EVENT_INTERPRETATION_U1_UDF_UNSUBSCRIBED)
+        self.assertEqual(event.manifestation,
+                         Manifestation.USER_ACTIVITY)
+        self.assertEqual(event.actor, ACTOR_UBUNTUONE)
+
+        folder = event.subjects[0]
+        self.assertTrue(folder.uri.endswith(udf.path))
+        self.assertEqual(folder.interpretation, Interpretation.FOLDER)
+        self.assertEqual(folder.manifestation,
+                         Manifestation.DELETED_RESOURCE)
+        self.assertTrue(folder.origin.startswith(URI_PROTOCOL_U1))
+        self.assertEqual(folder.mimetype, DIRECTORY_MIMETYPE)
+        self.assertEqual(folder.storage, STORAGE_DELETED)
+
+
500class ZeitgeistRemoteFileSyncTestCase(ConnectedBaseTestCase):
501 """File sync events are logged into Zeitgeist."""
502
503 def setUp(self):
504 """Initialize this test instance."""
505 ConnectedBaseTestCase.setUp(self)
506 self.rq = request_queue = RequestQueue(name='FOO',
507 action_queue=self.action_queue)
508
509 class MyUpload(Upload):
510 """Just to allow monkeypatching."""
511
512 self.share_id = ""
513 self.command = MyUpload(request_queue, share_id=self.share_id,
514 node_id='a_node_id', previous_hash='prev_hash',
515 hash='yadda', crc32=0, size=0,
516 fileobj_factory=lambda: None,
517 tempfile_factory=lambda: None)
518 self.command.pre_queue_setup() # create the logger
519 self.fsm = self.action_queue.main.fs
520 self.vm = self.action_queue.main.vm
521 self.patch(zglog, "ZeitgeistLogger", MockLogger)
522 self.listener = ZeitgeistListener(self.fsm, self.vm)
523 self.action_queue.event_queue.subscribe(self.listener)
524 self.root_id = "roootid"
525 self.sync = Sync(main=self.main)
526
527 def tearDown(self):
528 """Finalize this test instance."""
529 ConnectedBaseTestCase.tearDown(self)
530
531 def test_syncdaemon_creates_file_on_server_is_logged(self):
532 """Files created by SyncDaemon on the server are logged."""
533 filename = "filename.mp3"
534 path = os.path.join(self.vm.root.path, filename)
535 self.fsm.create(path, "")
536 self.fsm.set_node_id(path, "a_node_id")
537
538 request = client.MakeFile(self.action_queue.client, self.share_id,
539 'parent', filename)
540 request.new_id = 'a_node_id'
541 request.new_generation = 13
542
543 # create a command and trigger it success
544 cmd = MakeFile(self.rq, self.share_id, 'parent', filename, 'marker')
545 res = cmd.handle_success(request)
546 assert res is request
547
548 # create a request and fill it with succesful information
549 request = client.PutContent(self.action_queue.client, self.share_id,
550 'node', 'prvhash', 'newhash', 'crc32',
551 'size', 'deflated', 'fd')
552 request.new_generation = 13
553
554 # trigger success in the command
555 self.command.handle_success(request)
556
557 # check for successful event
558 kwargs = dict(share_id=self.command.share_id, node_id='a_node_id',
559 hash='yadda', new_generation=13)
560
561 info = dict(marker='marker', new_id='a_node_id', new_generation=13,
562 volume_id=self.share_id)
563 events = [
564 ('AQ_FILE_NEW_OK', (), info),
565 ('AQ_UPLOAD_FINISHED', (), kwargs)
566 ]
567 self.assertEqual(events, self.command.action_queue.event_queue.events)
568
569 self.assertEqual(len(self.listener.zg.events), 1)
570 event = self.listener.zg.events[0]
571
572 self.assertEqual(event.interpretation,
573 Interpretation.CREATE_EVENT)
574 self.assertEqual(event.manifestation,
575 Manifestation.SCHEDULED_ACTIVITY)
576 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
577
578 remote_file = event.subjects[0]
579 self.assertTrue(remote_file.uri.startswith(URI_PROTOCOL_U1))
580 self.assertEqual(remote_file.interpretation, Interpretation.AUDIO)
581 self.assertEqual(remote_file.manifestation,
582 Manifestation.REMOTE_DATA_OBJECT)
583 self.assertTrue(remote_file.origin.endswith(filename))
584 self.assertEqual(remote_file.mimetype, "audio/mpeg")
585 self.assertEqual(remote_file.storage, STORAGE_NETWORK)
586
587 def test_syncdaemon_creates_dir_on_server_is_logged(self):
588 """Dirs created by SyncDaemon on the server are logged."""
589 dirname = "dirname"
590 path = os.path.join(self.vm.root.path, dirname)
591 self.fsm.create(path, "")
592 self.fsm.set_node_id(path, "a_node_id")
593
594 request = client.MakeDir(self.action_queue.client, self.share_id,
595 'parent', dirname)
596 request.new_id = 'a_node_id'
597 request.new_generation = 13
598
599 # create a command and trigger it success
600 cmd = MakeDir(self.rq, self.share_id, 'parent', dirname, 'marker')
601 res = cmd.handle_success(request)
602 assert res is request
603
604 # check for successful event
605 info = dict(marker='marker', new_id='a_node_id', new_generation=13,
606 volume_id=self.share_id)
607 events = [
608 ('AQ_DIR_NEW_OK', (), info),
609 ]
610 self.assertEqual(events, self.command.action_queue.event_queue.events)
611
612 self.assertEqual(len(self.listener.zg.events), 1)
613 event = self.listener.zg.events[0]
614
615 self.assertEqual(event.interpretation,
616 Interpretation.CREATE_EVENT)
617 self.assertEqual(event.manifestation,
618 Manifestation.SCHEDULED_ACTIVITY)
619 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
620
621 remote_file = event.subjects[0]
622 self.assertTrue(remote_file.uri.startswith(URI_PROTOCOL_U1))
623 self.assertEqual(remote_file.interpretation, Interpretation.FOLDER)
624 self.assertEqual(remote_file.manifestation,
625 Manifestation.REMOTE_DATA_OBJECT)
626 self.assertTrue(remote_file.origin.endswith(dirname))
627 self.assertEqual(remote_file.mimetype, DIRECTORY_MIMETYPE)
628 self.assertEqual(remote_file.storage, STORAGE_NETWORK)
629
630 def test_syncdaemon_modifies_on_server_is_logged(self):
631 """Files modified by SyncDaemon on the server are logged."""
632 filename = "filename.mp3"
633 path = os.path.join(self.vm.root.path, filename)
634 self.fsm.create(path, "")
635 self.fsm.set_node_id(path, "a_node_id")
636
637 # create a request and fill it with successful information
638 request = client.PutContent(self.action_queue.client, self.share_id,
639 'node', 'prvhash', 'newhash', 'crc32',
640 'size', 'deflated', 'fd')
641 request.new_generation = 13
642
643 # trigger success in the command
644 self.command.handle_success(request)
645
646 # check for successful event
647 kwargs = dict(share_id=self.command.share_id, node_id='a_node_id',
648 hash='yadda', new_generation=13)
649
650 events = [('AQ_UPLOAD_FINISHED', (), kwargs)]
651 self.assertEqual(events, self.command.action_queue.event_queue.events)
652
653 self.assertEqual(len(self.listener.zg.events), 1)
654 event = self.listener.zg.events[0]
655
656 self.assertEqual(event.interpretation,
657 Interpretation.MODIFY_EVENT)
658 self.assertEqual(event.manifestation,
659 Manifestation.SCHEDULED_ACTIVITY)
660 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
661
662 remote_file = event.subjects[0]
663 self.assertTrue(remote_file.uri.startswith(URI_PROTOCOL_U1))
664 self.assertEqual(remote_file.interpretation, Interpretation.AUDIO)
665 self.assertEqual(remote_file.manifestation,
666 Manifestation.REMOTE_DATA_OBJECT)
667 self.assertTrue(remote_file.origin.endswith(filename))
668 self.assertEqual(remote_file.mimetype, "audio/mpeg")
669 self.assertEqual(remote_file.storage, STORAGE_NETWORK)
670
671 @defer.inlineCallbacks
672 def test_syncdaemon_deletes_file_on_server_is_logged(self):
673 """Files deleted by SD on the server are logged."""
674 d = defer.Deferred()
675 listen_for(self.main.event_q, 'AQ_UNLINK_OK', d.callback)
676
677 path = os.path.join(self.main.vm.root.path, "filename.mp3")
678 self.main.fs.create(path, "")
679 self.main.fs.set_node_id(path, "node_id")
680 self.main.event_q.push("AQ_UNLINK_OK", share_id="",
681 parent_id="parent_id",
682 node_id="node_id", new_generation=13)
683 yield d
684
685 self.assertEqual(len(self.listener.zg.events), 1)
686 event = self.listener.zg.events[0]
687
688 self.assertEqual(event.interpretation,
689 Interpretation.DELETE_EVENT)
690 self.assertEqual(event.manifestation,
691 Manifestation.SCHEDULED_ACTIVITY)
692 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
693
694 remote_folder = event.subjects[0]
695 self.assertTrue(remote_folder.uri.startswith(URI_PROTOCOL_U1))
696 self.assertEqual(remote_folder.interpretation, Interpretation.AUDIO)
697 self.assertEqual(remote_folder.manifestation,
698 Manifestation.DELETED_RESOURCE)
699 self.assertTrue(remote_folder.origin.endswith("filename.mp3"))
700 self.assertEqual(remote_folder.mimetype, "audio/mpeg")
701 self.assertEqual(remote_folder.storage, STORAGE_DELETED)
702
703 @defer.inlineCallbacks
704 def test_syncdaemon_deletes_dir_on_server_is_logged(self):
705 """Files deleted by SD on the server are logged."""
706 d = defer.Deferred()
707 listen_for(self.main.event_q, 'AQ_UNLINK_OK', d.callback)
708
709 path = os.path.join(self.main.vm.root.path, "folder name")
710 self.main.fs.create(path, "", is_dir=True)
711 self.main.fs.set_node_id(path, "node_id")
712 self.main.event_q.push("AQ_UNLINK_OK", share_id="",
713 parent_id="parent_id",
714 node_id="node_id", new_generation=13)
715 yield d
716
717 self.assertEqual(len(self.listener.zg.events), 1)
718 event = self.listener.zg.events[0]
719
720 self.assertEqual(event.interpretation,
721 Interpretation.DELETE_EVENT)
722 self.assertEqual(event.manifestation,
723 Manifestation.SCHEDULED_ACTIVITY)
724 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
725
726 remote_folder = event.subjects[0]
727 self.assertTrue(remote_folder.uri.startswith(URI_PROTOCOL_U1))
728 self.assertEqual(remote_folder.interpretation, Interpretation.FOLDER)
729 self.assertEqual(remote_folder.manifestation,
730 Manifestation.DELETED_RESOURCE)
731 self.assertTrue(remote_folder.origin.endswith("folder name"))
732 self.assertEqual(remote_folder.mimetype, DIRECTORY_MIMETYPE)
733 self.assertEqual(remote_folder.storage, STORAGE_DELETED)
734
735
736class ZeitgeistLocalFileSyncTestCase(BaseTwistedTestCase):
737 """Zeitgeist events coming from the server."""
738 timeout = 5
739
740 def setUp(self):
741 """Initialize this instance."""
742 BaseTwistedTestCase.setUp(self)
743 self.root = self.mktemp('root')
744 self.shares = self.mktemp('shares')
745 self.data = self.mktemp('data')
746 self.partials_dir = self.mktemp('partials_dir')
747 self.handler = MementoHandler()
748 self.handler.setLevel(logging.ERROR)
749 FakeMain._sync_class = Sync
750 self.main = FakeMain(root_dir=self.root, shares_dir=self.shares,
751 data_dir=self.data,
752 partials_dir=self.partials_dir)
753 self._logger = logging.getLogger('ubuntuone.SyncDaemon')
754 self._logger.addHandler(self.handler)
755
756 self.root_id = root_id = "roootid"
757 self.main.vm._got_root(root_id)
758 self.filemp3delta = delta.FileInfoDelta(
759 generation=5, is_live=True, file_type=delta.FILE,
760 parent_id=self.root_id, share_id=ROOT, node_id=uuid.uuid4(),
761 name=u"fileñ.mp3", is_public=False, content_hash="hash",
762 crc32=1, size=10, last_modified=0)
763
764 self.dirdelta = delta.FileInfoDelta(
765 generation=6, is_live=True, file_type=delta.DIRECTORY,
766 parent_id=root_id, share_id=ROOT, node_id=uuid.uuid4(),
767 name=u"directory_ñ", is_public=False, content_hash="hash",
768 crc32=1, size=10, last_modified=0)
769
770 self.patch(zglog, "ZeitgeistLogger", MockLogger)
771 self.listener = ZeitgeistListener(self.main.fs, self.main.vm)
772 self.main.event_q.subscribe(self.listener)
773
774 def tearDown(self):
775 """Clean up this instance."""
776 self._logger.removeHandler(self.handler)
777 self.main.shutdown()
778 FakeMain._sync_class = None
779 shutil.rmtree(self.root)
780 shutil.rmtree(self.shares)
781 shutil.rmtree(self.data)
782 for record in self.handler.records:
783 exc_info = getattr(record, 'exc_info', None)
784 if exc_info is not None:
785 raise exc_info[0], exc_info[1], exc_info[2]
786 BaseTwistedTestCase.tearDown(self)
787
788 @defer.inlineCallbacks
789 def test_syncdaemon_creates_file_locally_is_logged(self):
790 """Files created locally by SyncDaemon are logged."""
791 d = defer.Deferred()
792 d2 = defer.Deferred()
793 listen_for(self.main.event_q, 'SV_FILE_NEW', d.callback)
794 listen_for(self.main.event_q, 'AQ_DOWNLOAD_FINISHED', d2.callback)
795
796 deltas = [self.filemp3delta]
797 kwargs = dict(volume_id=ROOT, delta_content=deltas, end_generation=11,
798 full=True, free_bytes=10)
799 self.main.sync.handle_AQ_DELTA_OK(**kwargs)
800
801 # check that the file is created
802 node = self.main.fs.get_by_node_id(ROOT, self.filemp3delta.node_id)
803 self.assertEqual(node.path, self.filemp3delta.name.encode('utf8'))
804 self.assertEqual(node.is_dir, False)
805 self.assertEqual(node.generation, self.filemp3delta.generation)
806
807 yield d # wait for SV_FILE_NEW
808
809 dlargs = dict(
810 share_id=self.filemp3delta.share_id,
811 node_id=self.filemp3delta.node_id,
812 server_hash="server hash")
813 self.main.event_q.push("AQ_DOWNLOAD_FINISHED", **dlargs)
814
815 yield d2 # wait for AQ_DOWNLOAD_FINISHED
816
817 self.assertEqual(len(self.listener.zg.events), 1)
818 event = self.listener.zg.events[0]
819
820 self.assertEqual(event.interpretation,
821 Interpretation.CREATE_EVENT)
822 self.assertEqual(event.manifestation,
823 Manifestation.WORLD_ACTIVITY)
824 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
825
826 local_file = event.subjects[0]
827 self.assertTrue(local_file.uri.endswith(self.filemp3delta.name))
828 self.assertEqual(local_file.interpretation, Interpretation.AUDIO)
829 self.assertEqual(local_file.manifestation,
830 Manifestation.FILE_DATA_OBJECT)
831 self.assertTrue(local_file.origin.startswith(URI_PROTOCOL_U1))
832 self.assertEqual(local_file.mimetype, "audio/mpeg")
833 self.assertEqual(local_file.storage, STORAGE_LOCAL)
834
835 @defer.inlineCallbacks
836 def test_syncdaemon_creates_dir_locally_is_logged(self):
837 """Dirs created locally by SyncDaemon are logged."""
838 d = defer.Deferred()
839 listen_for(self.main.event_q, 'SV_DIR_NEW', d.callback)
840
841 deltas = [self.dirdelta]
842 kwargs = dict(volume_id=ROOT, delta_content=deltas, end_generation=11,
843 full=True, free_bytes=10)
844 self.main.sync.handle_AQ_DELTA_OK(**kwargs)
845
846 # check that the dir is created
847 node = self.main.fs.get_by_node_id(ROOT, self.dirdelta.node_id)
848 self.assertEqual(node.path, self.dirdelta.name.encode('utf8'))
849 self.assertEqual(node.is_dir, True)
850 self.assertEqual(node.generation, self.dirdelta.generation)
851
852 yield d # wait for SV_DIR_NEW
853
854 self.assertEqual(len(self.listener.zg.events), 1)
855 event = self.listener.zg.events[0]
856
857 self.assertEqual(event.interpretation,
858 Interpretation.CREATE_EVENT)
859 self.assertEqual(event.manifestation,
860 Manifestation.WORLD_ACTIVITY)
861 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
862
863 local_file = event.subjects[0]
864 self.assertTrue(local_file.uri.endswith(self.dirdelta.name))
865 self.assertEqual(local_file.interpretation, Interpretation.FOLDER)
866 self.assertEqual(local_file.manifestation,
867 Manifestation.FILE_DATA_OBJECT)
868 self.assertTrue(local_file.origin.startswith(URI_PROTOCOL_U1))
869 self.assertEqual(local_file.mimetype, DIRECTORY_MIMETYPE)
870 self.assertEqual(local_file.storage, STORAGE_LOCAL)
871
872 @defer.inlineCallbacks
873 def test_syncdaemon_modifies_locally_is_logged(self):
874 """Files modified locally by SyncDaemon are logged."""
875 d = defer.Deferred()
876 d2 = defer.Deferred()
877 listen_for(self.main.event_q, 'SV_FILE_NEW', d.callback)
878 listen_for(self.main.event_q, 'AQ_DOWNLOAD_FINISHED', d2.callback)
879
880 deltas = [self.filemp3delta]
881 kwargs = dict(volume_id=ROOT, delta_content=deltas, end_generation=11,
882 full=True, free_bytes=10)
883 self.main.sync.handle_AQ_DELTA_OK(**kwargs)
884
885 # check that the file is modified
886 node = self.main.fs.get_by_node_id(ROOT, self.filemp3delta.node_id)
887 self.assertEqual(node.path, self.filemp3delta.name.encode('utf8'))
888 self.assertEqual(node.is_dir, False)
889 self.assertEqual(node.generation, self.filemp3delta.generation)
890
891 yield d # wait for SV_FILE_NEW
892
893 # remove from the recent list
894 local_file_id = (self.filemp3delta.share_id, self.filemp3delta.node_id)
895 self.listener.newly_created_local_files.remove(local_file_id)
896
897 dlargs = dict(
898 share_id=self.filemp3delta.share_id,
899 node_id=self.filemp3delta.node_id,
900 server_hash="server hash")
901 self.main.event_q.push("AQ_DOWNLOAD_FINISHED", **dlargs)
902
903 yield d2 # wait for AQ_DOWNLOAD_FINISHED
904
905 self.assertEqual(len(self.listener.zg.events), 1)
906 event = self.listener.zg.events[0]
907
908 self.assertEqual(event.interpretation,
909 Interpretation.MODIFY_EVENT)
910 self.assertEqual(event.manifestation,
911 Manifestation.WORLD_ACTIVITY)
912 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
913
914 local_file = event.subjects[0]
915 self.assertTrue(local_file.uri.endswith(self.filemp3delta.name))
916 self.assertEqual(local_file.interpretation, Interpretation.AUDIO)
917 self.assertEqual(local_file.manifestation,
918 Manifestation.FILE_DATA_OBJECT)
919 self.assertTrue(local_file.origin.startswith(URI_PROTOCOL_U1))
920 self.assertEqual(local_file.mimetype, "audio/mpeg")
921 self.assertEqual(local_file.storage, STORAGE_LOCAL)
922
923 @defer.inlineCallbacks
924 def test_syncdaemon_deletes_file_locally_is_logged(self):
925 """Files deleted locally by SyncDaemon are logged."""
926 d = defer.Deferred()
927 listen_for(self.main.event_q, 'SV_FILE_DELETED', d.callback)
928
929 filename = self.filemp3delta.name.encode("utf-8")
930 path = os.path.join(self.main.vm.root.path, filename)
931 self.main.fs.create(path, "")
932 self.main.fs.set_node_id(path, "node_id")
933 self.main.event_q.push("SV_FILE_DELETED", volume_id="",
934 node_id="node_id", is_dir=False)
935
936 yield d
937
938 self.assertEqual(len(self.listener.zg.events), 1)
939 event = self.listener.zg.events[0]
940
941 self.assertEqual(event.interpretation,
942 Interpretation.DELETE_EVENT)
943 self.assertEqual(event.manifestation,
944 Manifestation.WORLD_ACTIVITY)
945 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
946
947 local_file = event.subjects[0]
948 self.assertTrue(local_file.uri.endswith(self.filemp3delta.name))
949 self.assertEqual(local_file.interpretation, Interpretation.AUDIO)
950 self.assertEqual(local_file.manifestation,
951 Manifestation.DELETED_RESOURCE)
952 self.assertTrue(local_file.origin.startswith(URI_PROTOCOL_U1))
953 self.assertEqual(local_file.mimetype, "audio/mpeg")
954 self.assertEqual(local_file.storage, STORAGE_DELETED)
955
956 @defer.inlineCallbacks
957 def test_syncdaemon_deletes_dir_locally_is_logged(self):
958 """Dirs deleted locally by SyncDaemon are logged."""
959 d = defer.Deferred()
960 listen_for(self.main.event_q, 'SV_FILE_DELETED', d.callback)
961
962 path = os.path.join(self.main.vm.root.path, "folder name")
963 self.main.fs.create(path, "", is_dir=True)
964 self.main.fs.set_node_id(path, "node_id")
965 self.main.event_q.push("SV_FILE_DELETED", volume_id="",
966 node_id="node_id", is_dir=True)
967
968 yield d
969
970 self.assertEqual(len(self.listener.zg.events), 1)
971 event = self.listener.zg.events[0]
972
973 self.assertEqual(event.interpretation,
974 Interpretation.DELETE_EVENT)
975 self.assertEqual(event.manifestation,
976 Manifestation.WORLD_ACTIVITY)
977 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
978
979 local_folder = event.subjects[0]
980 self.assertTrue(local_folder.uri.endswith("folder name"))
981 self.assertEqual(local_folder.interpretation, Interpretation.FOLDER)
982 self.assertEqual(local_folder.manifestation,
983 Manifestation.DELETED_RESOURCE)
984 self.assertTrue(local_folder.origin.startswith(URI_PROTOCOL_U1))
985 self.assertEqual(local_folder.mimetype, DIRECTORY_MIMETYPE)
986 self.assertEqual(local_folder.storage, STORAGE_DELETED)
987
988 @defer.inlineCallbacks
989 def test_file_sync_conflict_is_logged(self):
990 """Files renamed because of conflict are logged."""
991 d = defer.Deferred()
992 listen_for(self.main.event_q, 'FSM_FILE_CONFLICT', d.callback)
993
994 testfile = os.path.join(self.main.vm.root.path, 'sample.mp3')
995 mdid = self.main.fs.create(testfile, "")
996 self.main.fs.set_node_id(testfile, "uuid")
997 with open(testfile, "w") as fh:
998 fh.write("this is music!")
999
1000 self.main.fs.move_to_conflict(mdid)
1001
1002 yield d
1003
1004 self.assertEqual(len(self.listener.zg.events), 1)
1005 event = self.listener.zg.events[0]
1006
1007 self.assertEqual(event.interpretation,
1008 EVENT_INTERPRETATION_U1_CONFLICT_RENAME)
1009 self.assertEqual(event.manifestation,
1010 Manifestation.WORLD_ACTIVITY)
1011 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
1012
1013 local_file = event.subjects[0]
1014 new_name = testfile + self.main.fs.CONFLICT_SUFFIX
1015 self.assertTrue(local_file.uri.endswith(new_name))
1016 self.assertEqual(local_file.interpretation, Interpretation.AUDIO)
1017 self.assertEqual(local_file.manifestation,
1018 Manifestation.FILE_DATA_OBJECT)
1019 self.assertTrue(local_file.origin.endswith(testfile))
1020 self.assertEqual(local_file.mimetype, "audio/mpeg")
1021 self.assertEqual(local_file.storage, STORAGE_LOCAL)
1022
1023 @defer.inlineCallbacks
1024 def test_dir_sync_conflict_is_logged(self):
1025 """Dirs renamed because of conflict are logged."""
1026 d = defer.Deferred()
1027 listen_for(self.main.event_q, 'FSM_DIR_CONFLICT', d.callback)
1028
1029 testdir = os.path.join(self.main.vm.root.path, 'sampledir')
1030 mdid = self.main.fs.create(testdir, "", is_dir=True)
1031 self.main.fs.set_node_id(testdir, "uuid")
1032 os.mkdir(testdir)
1033
1034 self.main.fs.move_to_conflict(mdid)
1035
1036 yield d
1037
1038 self.assertEqual(len(self.listener.zg.events), 1)
1039 event = self.listener.zg.events[0]
1040
1041 self.assertEqual(event.interpretation,
1042 EVENT_INTERPRETATION_U1_CONFLICT_RENAME)
1043 self.assertEqual(event.manifestation,
1044 Manifestation.WORLD_ACTIVITY)
1045 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
1046
1047 local_file = event.subjects[0]
1048 new_name = testdir + self.main.fs.CONFLICT_SUFFIX
1049 self.assertTrue(local_file.uri.endswith(new_name))
1050 self.assertEqual(local_file.interpretation, Interpretation.FOLDER)
1051 self.assertEqual(local_file.manifestation,
1052 Manifestation.FILE_DATA_OBJECT)
1053 self.assertTrue(local_file.origin.endswith(testdir))
1054 self.assertEqual(local_file.mimetype, DIRECTORY_MIMETYPE)
1055 self.assertEqual(local_file.storage, STORAGE_LOCAL)
1056
1057class ZeitgeistPublicFilesTestCase(ZeitgeistListenerTestCase):
1058 """Public files events are logged into Zeitgeist."""
1059
1060 @defer.inlineCallbacks
1061 def test_publish_url_is_logged(self):
1062 """Publishing a file with a url is logged."""
1063 share_id = "share"
1064 node_id = "node_id"
1065 is_public = True
1066 public_url = 'http://example.com/foo.mp3'
1067
1068 share_path = os.path.join(self.shares_dir, 'share')
1069 self.main.vm.add_share(Share(path=share_path, volume_id='share',
1070 other_username='other username'))
1071 path = os.path.join(share_path, "foo.mp3")
1072 self.main.fs.create(path, str(share_id))
1073 self.main.fs.set_node_id(path, str(node_id))
1074
1075 d = defer.Deferred()
1076 self._listen_for('AQ_CHANGE_PUBLIC_ACCESS_OK', d.callback)
1077 self.main.event_q.push('AQ_CHANGE_PUBLIC_ACCESS_OK',
1078 share_id=share_id, node_id=node_id,
1079 is_public=is_public, public_url=public_url)
1080 yield d
1081
1082 self.assertEqual(len(self.listener.zg.events), 2)
1083 event = self.listener.zg.events[1]
1084
1085 self.assertEqual(event.interpretation,
1086 Interpretation.CREATE_EVENT)
1087 self.assertEqual(event.manifestation,
1088 Manifestation.USER_ACTIVITY)
1089 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
1090
1091 public_file = event.subjects[0]
1092 self.assertEqual(public_file.uri, public_url)
1093 self.assertEqual(public_file.interpretation, Interpretation.AUDIO)
1094 self.assertEqual(public_file.manifestation,
1095 Manifestation.REMOTE_DATA_OBJECT)
1096 self.assertTrue(public_file.origin.endswith(node_id))
1097 self.assertEqual(public_file.mimetype, "audio/mpeg")
1098 self.assertEqual(public_file.storage, STORAGE_NETWORK)
1099
1100 @defer.inlineCallbacks
1101 def test_unpublish_url_is_logged(self):
1102 """Unpublishing a file with a url is logged."""
1103 share_id = "share"
1104 node_id = "node_id"
1105 is_public = False
1106 public_url = 'http://example.com/foo.mp3'
1107
1108 share_path = os.path.join(self.shares_dir, 'share')
1109 self.main.vm.add_share(Share(path=share_path, volume_id='share',
1110 other_username='other username'))
1111 path = os.path.join(share_path, "foo.mp3")
1112 self.main.fs.create(path, str(share_id))
1113 self.main.fs.set_node_id(path, str(node_id))
1114
1115 d = defer.Deferred()
1116 self._listen_for('AQ_CHANGE_PUBLIC_ACCESS_OK', d.callback)
1117 self.main.event_q.push('AQ_CHANGE_PUBLIC_ACCESS_OK',
1118 share_id=share_id, node_id=node_id,
1119 is_public=is_public, public_url=public_url)
1120 yield d
1121
1122 self.assertEqual(len(self.listener.zg.events), 2)
1123 event = self.listener.zg.events[1]
1124
1125 self.assertEqual(event.interpretation,
1126 Interpretation.DELETE_EVENT)
1127 self.assertEqual(event.manifestation,
1128 Manifestation.USER_ACTIVITY)
1129 self.assertEqual(event.actor, ACTOR_UBUNTUONE)
1130
1131 public_file = event.subjects[0]
1132 self.assertEqual(public_file.uri, public_url)
1133 self.assertEqual(public_file.interpretation, Interpretation.AUDIO)
1134 self.assertEqual(public_file.manifestation,
1135 Manifestation.DELETED_RESOURCE)
1136 self.assertTrue(public_file.origin.endswith(node_id))
1137 self.assertEqual(public_file.mimetype, "audio/mpeg")
1138 self.assertEqual(public_file.storage, STORAGE_DELETED)
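The create-vs-modify distinction these upload and download tests exercise reduces to a small set-based tracker: a node seen in a "new" event gets a CREATE on its first transfer completion, and MODIFY afterwards. A minimal standalone sketch of that logic (class and method names here are hypothetical, not the branch's API):

```python
class CreateModifyTracker:
    """Track nodes whose first finished transfer should log CREATE, later ones MODIFY."""

    def __init__(self):
        self.newly_created = set()

    def file_created(self, volume_id, node_id):
        # analogous to handle_AQ_FILE_NEW_OK: remember the node until it transfers
        self.newly_created.add((volume_id, node_id))

    def transfer_finished(self, volume_id, node_id):
        # first transfer after creation -> CREATE; anything else -> MODIFY
        key = (volume_id, node_id)
        if key in self.newly_created:
            self.newly_created.remove(key)
            return "CREATE_EVENT"
        return "MODIFY_EVENT"

tracker = CreateModifyTracker()
tracker.file_created("root", "a_node_id")
print(tracker.transfer_finished("root", "a_node_id"))  # CREATE_EVENT
print(tracker.transfer_finished("root", "a_node_id"))  # MODIFY_EVENT
```

This is also why test_syncdaemon_modifies_locally_is_logged removes the node from the recent list before pushing AQ_DOWNLOAD_FINISHED: it forces the MODIFY branch.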
=== modified file 'tests/syncdaemon/test_fsm.py'
--- tests/syncdaemon/test_fsm.py 2010-11-30 19:06:17 +0000
+++ tests/syncdaemon/test_fsm.py 2010-12-13 23:42:15 +0000
@@ -29,6 +29,7 @@
 from contrib.testing.testcase import (
     FakeVolumeManager,
     FakeMain,
+    Listener,
     MementoHandler
 )

@@ -1874,6 +1875,41 @@
         # invalid uuid
         self.assertRaises(KeyError, self.fsm.move_to_conflict, "no-such-mdid")

+    def test_conflict_file_pushes_event(self):
+        """A conflict with a file pushes FSM_FILE_CONFLICT."""
+        listener = Listener()
+        self.eq.subscribe(listener)
+
+        testfile = os.path.join(self.share_path, "path")
+        mdid = self.fsm.create(testfile, "share")
+        self.fsm.set_node_id(testfile, "uuid")
+        with open(testfile, "w") as fh:
+            fh.write("test!")
+
+        self.fsm.move_to_conflict(mdid)
+
+        new_name = testfile + self.fsm.CONFLICT_SUFFIX
+        kwargs = dict(old_name=testfile, new_name=new_name)
+
+        self.assertTrue(("FSM_FILE_CONFLICT", (), kwargs) in listener.events)
+
+    def test_conflict_dir_pushes_event(self):
+        """A conflict with a dir pushes FSM_DIR_CONFLICT."""
+        listener = Listener()
+        self.eq.subscribe(listener)
+
+        testdir = os.path.join(self.share_path, "path")
+        mdid = self.fsm.create(testdir, "share", is_dir=True)
+        self.fsm.set_node_id(testdir, "uuid")
+        os.mkdir(testdir)
+
+        self.fsm.move_to_conflict(mdid)
+
+        new_name = testdir + self.fsm.CONFLICT_SUFFIX
+        kwargs = dict(old_name=testdir, new_name=new_name)
+
+        self.assertTrue(("FSM_DIR_CONFLICT", (), kwargs) in listener.events)
+
     def test_upload_finished(self):
         """Test upload finished."""
         path = os.path.join(self.share_path, "path")

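The conflict rename these tests assert on is simply the original path plus fsm.CONFLICT_SUFFIX. A rough sketch of that naming rule; the suffix value and the numbered fallback for an already-taken name are assumptions for illustration, not taken from this diff:

```python
CONFLICT_SUFFIX = ".u1conflict"  # assumed value; the tests only reference fsm.CONFLICT_SUFFIX

def conflict_name(path, exists):
    """Return path + suffix, numbering the candidate if it already exists.

    `exists` is a predicate over candidate paths (e.g. os.path.exists).
    """
    candidate = path + CONFLICT_SUFFIX
    number = 0
    while exists(candidate):
        number += 1
        candidate = "%s%s.%d" % (path, CONFLICT_SUFFIX, number)
    return candidate

print(conflict_name("/root/sample.mp3", lambda p: False))
# -> /root/sample.mp3.u1conflict
```

With this naming, old_name and new_name in the pushed FSM_FILE_CONFLICT / FSM_DIR_CONFLICT kwargs differ only by the suffix, which is exactly what the tests above check.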
=== modified file 'tests/syncdaemon/test_sync.py'
--- tests/syncdaemon/test_sync.py 2010-11-30 17:53:14 +0000
+++ tests/syncdaemon/test_sync.py 2010-12-13 23:42:15 +0000
@@ -1518,7 +1518,8 @@
         self.sync.handle_AQ_RESCAN_FROM_SCRATCH_OK(ROOT, [self.rootdt],
                                                    100, 100)

-        self.assertEqual(called, [((ROOT, self.filetxtdelta.node_id), {})])
+        args = (ROOT, self.filetxtdelta.node_id, False)
+        self.assertEqual(called, [(args, {})])

     def test_deletes_file_in_delta(self):
         """Files in delta should not be deleted."""
@@ -1569,5 +1570,56 @@
                                                  [self.rootdt], 100, 100)

         self.assertEqual(called, [
-            (ROOT, dt.node_id),
-            (ROOT, self.dirdelta.node_id)])
+            (ROOT, dt.node_id, False),
+            (ROOT, self.dirdelta.node_id, True)])
+
+
+class TestSyncEvents(BaseSync):
+    """Testing sync stuff related to events."""
+
+    def setUp(self):
+        """Do the setUp."""
+        super(TestSyncEvents, self).setUp()
+        self.sync = Sync(main=self.main)
+        self.handler.setLevel(logging.DEBUG)
+
+        key = FSKey(self.main.fs, share_id='', node_id='node_id')
+        self.ssmr = SyncStateMachineRunner(fsm=self.main.fs, main=self.main,
+                                           key=key, logger=None)
+        self.vm = self.main.vm
+        self.listener = Listener()
+        self.main.event_q.subscribe(self.listener)
+
+    def test_server_new_file_sends_event(self):
+        """When a new file is created on the server, an event is sent."""
+        # create the fake file
+        self.main.vm._got_root("parent_id")
+        self.sync._handle_SV_FILE_NEW(ROOT, "node_id", "parent_id", "file")
+
+        # check event
+        kwargs = dict(volume_id=ROOT, node_id='node_id', parent_id="parent_id",
+                      name="file")
+        self.assertIn(("SV_FILE_NEW", (), kwargs), self.listener.events)
+
+    def test_server_new_dir_sends_event(self):
+        """When a new directory is created on the server, an event is sent."""
+        # create the fake dir
+        self.main.vm._got_root("parent_id")
+        self.sync._handle_SV_DIR_NEW(ROOT, "node_id", "parent_id", "file")
+
+        # check event
+        kwargs = dict(volume_id=ROOT, node_id='node_id', parent_id="parent_id",
+                      name="file")
+        self.assertIn(("SV_DIR_NEW", (), kwargs), self.listener.events)
+
+    def test_server_file_deleted_sends_event(self):
+        """When a file is deleted, an event is sent."""
+        # delete the fake file
+        self.main.vm._got_root("parent_id")
+        self.sync._handle_SV_FILE_DELETED(ROOT, "node_id", True)
+
+        # check event
+        kwargs = dict(volume_id=ROOT, node_id='node_id', is_dir=True)
+        self.assertIn(("SV_FILE_DELETED", (), kwargs), self.listener.events)

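The listener in event_logging.py (next file in this diff) guesses a mimetype from the path and falls back to application/octet-stream when the extension is unknown. The stdlib half of that lookup can be sketched on its own; the Interpretation lookup is left out because it needs the zeitgeist library:

```python
import mimetypes

DEFAULT_MIME = "application/octet-stream"

def guess_mime(filepath):
    """Guess a mimetype from the file name, mirroring the listener's fallback."""
    mime, _encoding = mimetypes.guess_type(filepath)
    if mime is None:
        # unknown extension: same default the listener uses
        return DEFAULT_MIME
    return mime

print(guess_mime("sample.mp3"))  # audio/mpeg
print(guess_mime("README"))      # application/octet-stream
```

This is why the tests above consistently use .mp3 names and assert mimetype "audio/mpeg" with Interpretation.AUDIO: the extension alone drives the guess.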
=== modified file 'ubuntuone/syncdaemon/event_logging.py'
--- ubuntuone/syncdaemon/event_logging.py 2010-12-09 14:28:21 +0000
+++ ubuntuone/syncdaemon/event_logging.py 2010-12-13 23:42:15 +0000
@@ -17,21 +17,33 @@
17# with this program. If not, see <http://www.gnu.org/licenses/>.17# with this program. If not, see <http://www.gnu.org/licenses/>.
18"""Event logging from SyncDaemon into Zeitgeist."""18"""Event logging from SyncDaemon into Zeitgeist."""
1919
20import mimetypes
21
20from zeitgeist.datamodel import Event, Interpretation, Manifestation, Subject22from zeitgeist.datamodel import Event, Interpretation, Manifestation, Subject
23from zeitgeist.mimetypes import get_interpretation_for_mimetype
2124
22from ubuntuone.eventlog import zglog25from ubuntuone.eventlog import zglog
23from ubuntuone.syncdaemon.volume_manager import Share26from ubuntuone.syncdaemon.volume_manager import Share, UDF
2427
25ACTOR_UBUNTUONE = "dbus://com.ubuntuone.SyncDaemon.service"28ACTOR_UBUNTUONE = "dbus://com.ubuntuone.SyncDaemon.service"
26DIRECTORY_MIMETYPE = "inode/directory"29DIRECTORY_MIMETYPE = "inode/directory"
30DEFAULT_MIME = "application/octet-stream"
31DEFAULT_INTERPRETATION = Interpretation.DOCUMENT
27EVENT_INTERPRETATION_U1_FOLDER_SHARED = "u1://FolderShared"32EVENT_INTERPRETATION_U1_FOLDER_SHARED = "u1://FolderShared"
28EVENT_INTERPRETATION_U1_FOLDER_UNSHARED = "u1://FolderUnshared"33EVENT_INTERPRETATION_U1_FOLDER_UNSHARED = "u1://FolderUnshared"
29EVENT_INTERPRETATION_U1_SHARE_ACCEPTED = "u1://ShareAccepted"34EVENT_INTERPRETATION_U1_SHARE_ACCEPTED = "u1://ShareAccepted"
30EVENT_INTERPRETATION_U1_SHARE_UNACCEPTED = "u1://ShareUnaccepted"35EVENT_INTERPRETATION_U1_SHARE_UNACCEPTED = "u1://ShareUnaccepted"
36EVENT_INTERPRETATION_U1_CONFLICT_RENAME = "u1://ConflictRename"
37EVENT_INTERPRETATION_U1_UDF_CREATED = "u1://UserFolderCreated"
38EVENT_INTERPRETATION_U1_UDF_DELETED = "u1://UserFolderDeleted"
39EVENT_INTERPRETATION_U1_UDF_SUBSCRIBED = "u1://UserFolderSubscribed"
40EVENT_INTERPRETATION_U1_UDF_UNSUBSCRIBED = "u1://UserFolderUnsubscribed"
31MANIFESTATION_U1_CONTACT_DATA_OBJECT = "u1://ContactDataObject"41MANIFESTATION_U1_CONTACT_DATA_OBJECT = "u1://ContactDataObject"
32INTERPRETATION_U1_CONTACT = "u1://Contact"42INTERPRETATION_U1_CONTACT = "u1://Contact"
33URI_PROTOCOL_U1 = "ubuntuone:"43URI_PROTOCOL_U1 = "ubuntuone:"
44STORAGE_LOCAL = ""
34STORAGE_NETWORK = "net"45STORAGE_NETWORK = "net"
46STORAGE_DELETED = "deleted"
3547
36class ZeitgeistListener(object):48class ZeitgeistListener(object):
37 """An Event Queue listener that logs into ZG."""49 """An Event Queue listener that logs into ZG."""
@@ -41,6 +53,8 @@
41 self.fsm = fsm53 self.fsm = fsm
42 self.vm = vm54 self.vm = vm
43 self.zg = zglog.ZeitgeistLogger()55 self.zg = zglog.ZeitgeistLogger()
56 self.newly_created_server_files = set()
57 self.newly_created_local_files = set()
4458
45 def handle_AQ_CREATE_SHARE_OK(self, share_id=None, marker=None):59 def handle_AQ_CREATE_SHARE_OK(self, share_id=None, marker=None):
46 """Log the 'directory shared thru the server' event."""60 """Log the 'directory shared thru the server' event."""
@@ -153,7 +167,351 @@
153167
154 self.zg.log(event)168 self.zg.log(event)
155169
170 def log_udf_deleted(self, volume):
171 """Log the udf deleted event."""
172 folder = Subject.new_for_values(
173 uri=URI_PROTOCOL_U1 + str(volume.node_id),
174 interpretation=Interpretation.FOLDER,
175 manifestation=Manifestation.DELETED_RESOURCE,
176 origin="file:///" + volume.path,
177 mimetype=DIRECTORY_MIMETYPE,
178 storage=STORAGE_DELETED)
179
180 event = Event.new_for_values(
181 interpretation=EVENT_INTERPRETATION_U1_UDF_DELETED,
182 manifestation=Manifestation.USER_ACTIVITY,
183 actor=ACTOR_UBUNTUONE,
184 subjects=[folder])
185
186 self.zg.log(event)
187
156 def handle_VM_VOLUME_DELETED(self, volume):188 def handle_VM_VOLUME_DELETED(self, volume):
157 """Log the share/UDF unaccepted event."""189 """Log the share/UDF unaccepted event."""
158 if isinstance(volume, Share):190 if isinstance(volume, Share):
159 self.log_share_unaccepted(volume)191 self.log_share_unaccepted(volume)
192 if isinstance(volume, UDF):
193 self.log_udf_deleted(volume)
194
195 def handle_VM_UDF_CREATED(self, udf):
196 """An udf was created. Log it into Zeitgeist."""
197 folder = Subject.new_for_values(
198 uri=URI_PROTOCOL_U1 + str(udf.node_id),
199 interpretation=Interpretation.FOLDER,
200 manifestation=Manifestation.REMOTE_DATA_OBJECT,
201 origin="file:///" + udf.path,
202 mimetype=DIRECTORY_MIMETYPE,
203 storage=STORAGE_NETWORK)
204
205 event = Event.new_for_values(
206 interpretation=EVENT_INTERPRETATION_U1_UDF_CREATED,
207 manifestation=Manifestation.USER_ACTIVITY,
208 actor=ACTOR_UBUNTUONE,
209 subjects=[folder])
210
211 self.zg.log(event)
212
213 def handle_VM_UDF_SUBSCRIBED(self, udf):
214 """An udf was subscribed."""
215
216 folder = Subject.new_for_values(
217 uri="file:///" + udf.path,
218 interpretation=Interpretation.FOLDER,
219 manifestation=Manifestation.FILE_DATA_OBJECT,
220 origin=URI_PROTOCOL_U1 + str(udf.node_id),
221 mimetype=DIRECTORY_MIMETYPE,
222 storage=STORAGE_LOCAL)
223
224 event = Event.new_for_values(
225 interpretation=EVENT_INTERPRETATION_U1_UDF_SUBSCRIBED,
226 manifestation=Manifestation.USER_ACTIVITY,
227 actor=ACTOR_UBUNTUONE,
228 subjects=[folder])
229
230 self.zg.log(event)
231
232 def handle_VM_UDF_UNSUBSCRIBED(self, udf):
233 """An udf was unsubscribed."""
234
235 folder = Subject.new_for_values(
236 uri="file:///" + udf.path,
237 interpretation=Interpretation.FOLDER,
238 manifestation=Manifestation.DELETED_RESOURCE,
239 origin=URI_PROTOCOL_U1 + str(udf.node_id),
240 mimetype=DIRECTORY_MIMETYPE,
241 storage=STORAGE_DELETED)
242
243 event = Event.new_for_values(
244 interpretation=EVENT_INTERPRETATION_U1_UDF_UNSUBSCRIBED,
245 manifestation=Manifestation.USER_ACTIVITY,
246 actor=ACTOR_UBUNTUONE,
247 subjects=[folder])
248
249 self.zg.log(event)
250
251 def handle_AQ_FILE_NEW_OK(self, volume_id, marker, new_id, new_generation):
252 """A new file was created on server. Store and wait till it uploads."""
253 self.newly_created_server_files.add((volume_id, new_id))
254
255 def get_mime_and_interpretation_for_filepath(self, filepath):
256 """Try to guess the mime and the interpretation from the path."""
257 mime, encoding = mimetypes.guess_type(filepath)
258 if mime is None:
259 return DEFAULT_MIME, DEFAULT_INTERPRETATION
260 interpret = get_interpretation_for_mimetype(mime)
261 if interpret is None:
262 return DEFAULT_MIME, Interpretation.DOCUMENT
263 return mime, interpret
264
    def handle_AQ_UPLOAD_FINISHED(self, share_id, node_id, hash,
                                  new_generation):
        """A file finished uploading to the server."""

        mdo = self.fsm.get_by_node_id(share_id, node_id)
        path = self.fsm.get_abspath(share_id, mdo.path)

        if (share_id, node_id) in self.newly_created_server_files:
            self.newly_created_server_files.remove((share_id, node_id))
            event_interpretation = Interpretation.CREATE_EVENT
        else:
            event_interpretation = Interpretation.MODIFY_EVENT

        mime, interp = self.get_mime_and_interpretation_for_filepath(path)

        file_subject = Subject.new_for_values(
            uri=URI_PROTOCOL_U1 + str(node_id),
            interpretation=interp,
            manifestation=Manifestation.REMOTE_DATA_OBJECT,
            origin="file:///" + path,
            mimetype=mime,
            storage=STORAGE_NETWORK)

        event = Event.new_for_values(
            interpretation=event_interpretation,
            manifestation=Manifestation.SCHEDULED_ACTIVITY,
            actor=ACTOR_UBUNTUONE,
            subjects=[file_subject])

        self.zg.log(event)
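The create-vs-modify decision relies on a small bookkeeping pattern: `AQ_FILE_NEW_OK` records the node in a set, and the later `AQ_UPLOAD_FINISHED` consumes that marker. A minimal sketch, with plain strings standing in for the Zeitgeist `Interpretation` constants:

```python
# Sketch of the newly_created_server_files bookkeeping.
newly_created = set()

def file_new_ok(volume_id, node_id):
    """Remember that this node was just created on the server."""
    newly_created.add((volume_id, node_id))

def upload_finished(volume_id, node_id):
    """Return the event interpretation to log for this upload."""
    if (volume_id, node_id) in newly_created:
        newly_created.remove((volume_id, node_id))  # consume the marker
        return "CREATE_EVENT"
    return "MODIFY_EVENT"

file_new_ok("vol", "n1")
print(upload_finished("vol", "n1"))  # first upload of a new file
print(upload_finished("vol", "n1"))  # later uploads are modifications
```

Consuming the marker on first use is what makes every subsequent upload of the same node log as a modification.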
    def handle_AQ_DIR_NEW_OK(self, volume_id, marker, new_id, new_generation):
        """A dir was created on the server."""

        mdo = self.fsm.get_by_node_id(volume_id, new_id)
        path = self.fsm.get_abspath(volume_id, mdo.path)

        file_subject = Subject.new_for_values(
            uri=URI_PROTOCOL_U1 + str(new_id),
            interpretation=Interpretation.FOLDER,
            manifestation=Manifestation.REMOTE_DATA_OBJECT,
            origin="file:///" + path,
            mimetype=DIRECTORY_MIMETYPE,
            storage=STORAGE_NETWORK)

        event = Event.new_for_values(
            interpretation=Interpretation.CREATE_EVENT,
            manifestation=Manifestation.SCHEDULED_ACTIVITY,
            actor=ACTOR_UBUNTUONE,
            subjects=[file_subject])

        self.zg.log(event)
    def handle_SV_FILE_NEW(self, volume_id, node_id, parent_id, name):
        """A file was created locally by Syncdaemon."""
        self.newly_created_local_files.add((volume_id, node_id))
    def handle_AQ_DOWNLOAD_FINISHED(self, share_id, node_id, server_hash):
        """A file finished downloading from the server."""

        mdo = self.fsm.get_by_node_id(share_id, node_id)
        path = self.fsm.get_abspath(share_id, mdo.path)

        if (share_id, node_id) in self.newly_created_local_files:
            self.newly_created_local_files.remove((share_id, node_id))
            event_interpretation = Interpretation.CREATE_EVENT
        else:
            event_interpretation = Interpretation.MODIFY_EVENT

        mime, interp = self.get_mime_and_interpretation_for_filepath(path)

        file_subject = Subject.new_for_values(
            uri="file:///" + path,
            interpretation=interp,
            manifestation=Manifestation.FILE_DATA_OBJECT,
            origin=URI_PROTOCOL_U1 + str(node_id),
            mimetype=mime,
            storage=STORAGE_LOCAL)

        event = Event.new_for_values(
            interpretation=event_interpretation,
            manifestation=Manifestation.WORLD_ACTIVITY,
            actor=ACTOR_UBUNTUONE,
            subjects=[file_subject])

        self.zg.log(event)
    def handle_SV_DIR_NEW(self, volume_id, node_id, parent_id, name):
        """A dir was created locally by Syncdaemon."""

        mdo = self.fsm.get_by_node_id(volume_id, node_id)
        path = self.fsm.get_abspath(volume_id, mdo.path)

        file_subject = Subject.new_for_values(
            uri="file:///" + path,
            interpretation=Interpretation.FOLDER,
            manifestation=Manifestation.FILE_DATA_OBJECT,
            origin=URI_PROTOCOL_U1 + str(node_id),
            mimetype=DIRECTORY_MIMETYPE,
            storage=STORAGE_LOCAL)

        event = Event.new_for_values(
            interpretation=Interpretation.CREATE_EVENT,
            manifestation=Manifestation.WORLD_ACTIVITY,
            actor=ACTOR_UBUNTUONE,
            subjects=[file_subject])

        self.zg.log(event)
    def handle_SV_FILE_DELETED(self, volume_id, node_id, is_dir):
        """A file or folder was deleted locally by Syncdaemon."""
        mdo = self.fsm.get_by_node_id(volume_id, node_id)
        path = self.fsm.get_abspath(volume_id, mdo.path)

        if is_dir:
            mime, interp = DIRECTORY_MIMETYPE, Interpretation.FOLDER
        else:
            mime, interp = self.get_mime_and_interpretation_for_filepath(path)

        file_subject = Subject.new_for_values(
            uri="file:///" + path,
            interpretation=interp,
            manifestation=Manifestation.DELETED_RESOURCE,
            origin=URI_PROTOCOL_U1 + str(node_id),
            mimetype=mime,
            storage=STORAGE_DELETED)

        event = Event.new_for_values(
            interpretation=Interpretation.DELETE_EVENT,
            manifestation=Manifestation.WORLD_ACTIVITY,
            actor=ACTOR_UBUNTUONE,
            subjects=[file_subject])

        self.zg.log(event)
    def handle_AQ_UNLINK_OK(self, share_id, parent_id, node_id,
                            new_generation):
        """A file or folder was deleted on the server by Syncdaemon."""
        mdo = self.fsm.get_by_node_id(share_id, node_id)
        path = self.fsm.get_abspath(share_id, mdo.path)

        if mdo.is_dir:
            mime, interp = DIRECTORY_MIMETYPE, Interpretation.FOLDER
        else:
            mime, interp = self.get_mime_and_interpretation_for_filepath(path)

        file_subject = Subject.new_for_values(
            uri=URI_PROTOCOL_U1 + str(node_id),
            interpretation=interp,
            manifestation=Manifestation.DELETED_RESOURCE,
            origin="file:///" + path,
            mimetype=mime,
            storage=STORAGE_DELETED)

        event = Event.new_for_values(
            interpretation=Interpretation.DELETE_EVENT,
            manifestation=Manifestation.SCHEDULED_ACTIVITY,
            actor=ACTOR_UBUNTUONE,
            subjects=[file_subject])

        self.zg.log(event)
    def handle_FSM_FILE_CONFLICT(self, old_name, new_name):
        """A file was renamed because of a conflict."""
        mime, interp = self.get_mime_and_interpretation_for_filepath(old_name)

        file_subject = Subject.new_for_values(
            uri="file:///" + new_name,
            interpretation=interp,
            manifestation=Manifestation.FILE_DATA_OBJECT,
            origin="file:///" + old_name,
            mimetype=mime,
            storage=STORAGE_LOCAL)

        event = Event.new_for_values(
            interpretation=EVENT_INTERPRETATION_U1_CONFLICT_RENAME,
            manifestation=Manifestation.WORLD_ACTIVITY,
            actor=ACTOR_UBUNTUONE,
            subjects=[file_subject])

        self.zg.log(event)
    def handle_FSM_DIR_CONFLICT(self, old_name, new_name):
        """A dir was renamed because of a conflict."""
        folder_subject = Subject.new_for_values(
            uri="file:///" + new_name,
            interpretation=Interpretation.FOLDER,
            manifestation=Manifestation.FILE_DATA_OBJECT,
            origin="file:///" + old_name,
            mimetype=DIRECTORY_MIMETYPE,
            storage=STORAGE_LOCAL)

        event = Event.new_for_values(
            interpretation=EVENT_INTERPRETATION_U1_CONFLICT_RENAME,
            manifestation=Manifestation.WORLD_ACTIVITY,
            actor=ACTOR_UBUNTUONE,
            subjects=[folder_subject])

        self.zg.log(event)
    def handle_AQ_CHANGE_PUBLIC_ACCESS_OK(self, share_id, node_id, is_public,
                                          public_url):
        """The status of a published resource changed. Log it!"""
        if is_public:
            self.log_publishing(share_id, node_id, is_public, public_url)
        else:
            self.log_unpublishing(share_id, node_id, is_public, public_url)

    def log_publishing(self, share_id, node_id, is_public, public_url):
        """Log the publishing of a resource."""
        mime, interp = self.get_mime_and_interpretation_for_filepath(
            public_url)

        origin = "" if node_id is None else URI_PROTOCOL_U1 + str(node_id)

        public_file = Subject.new_for_values(
            uri=public_url,
            interpretation=interp,
            manifestation=Manifestation.REMOTE_DATA_OBJECT,
            origin=origin,
            mimetype=mime,
            storage=STORAGE_NETWORK)

        event = Event.new_for_values(
            interpretation=Interpretation.CREATE_EVENT,
            manifestation=Manifestation.USER_ACTIVITY,
            actor=ACTOR_UBUNTUONE,
            subjects=[public_file])

        self.zg.log(event)

    def log_unpublishing(self, share_id, node_id, is_public, public_url):
        """Log the unpublishing of a resource."""
        mime, interp = self.get_mime_and_interpretation_for_filepath(
            public_url)

        origin = "" if node_id is None else URI_PROTOCOL_U1 + str(node_id)

        public_file = Subject.new_for_values(
            uri=public_url,
            interpretation=interp,
            manifestation=Manifestation.DELETED_RESOURCE,
            origin=origin,
            mimetype=mime,
            storage=STORAGE_DELETED)

        event = Event.new_for_values(
            interpretation=Interpretation.DELETE_EVENT,
            manifestation=Manifestation.USER_ACTIVITY,
            actor=ACTOR_UBUNTUONE,
            subjects=[public_file])

        self.zg.log(event)
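The publishing handlers share two small decisions: a single AQ callback routes to publish or unpublish on the `is_public` flag, and the subject's origin is left empty when no `node_id` is available. A reduced sketch; the `URI_PROTOCOL_U1` value here is an assumption for the example, not necessarily the branch's constant:

```python
# Sketch of the dispatch and origin fallback in the publishing handlers.
URI_PROTOCOL_U1 = "ubuntuone:"  # assumed value for illustration

def subject_origin(node_id):
    """Return the u1 origin URI, or "" when the node id is unknown."""
    return "" if node_id is None else URI_PROTOCOL_U1 + str(node_id)

def change_public_access_ok(node_id, is_public, public_url):
    """Route to the matching logger, like the handler above."""
    action = "publish" if is_public else "unpublish"
    return action, subject_origin(node_id), public_url

print(change_public_access_ok("abc", True, "http://example.com/p/xyz/"))
```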
=== modified file 'ubuntuone/syncdaemon/event_queue.py'
--- ubuntuone/syncdaemon/event_queue.py 2010-12-09 21:05:56 +0000
+++ ubuntuone/syncdaemon/event_queue.py 2010-12-13 23:42:15 +0000
@@ -101,6 +101,9 @@
     'SV_VOLUME_CREATED': ('volume',),
     'SV_VOLUME_DELETED': ('volume_id',),
     'SV_VOLUME_NEW_GENERATION': ('volume_id', 'generation'),
+    'SV_FILE_NEW': ('volume_id', 'node_id', 'parent_id', 'name'),
+    'SV_DIR_NEW': ('volume_id', 'node_id', 'parent_id', 'name'),
+    'SV_FILE_DELETED': ('volume_id', 'node_id', 'is_dir'),
 
     'HQ_HASH_NEW': ('path', 'hash', 'crc32', 'size', 'stat'),
     'HQ_HASH_ERROR': ('mdid',),
@@ -138,6 +141,9 @@
     'SYS_QUOTA_EXCEEDED': ('volume_id', 'free_bytes'),
     'SYS_BROKEN_NODE': ('volume_id', 'node_id', 'path', 'mdid'),
 
+    'FSM_FILE_CONFLICT': ('old_name', 'new_name'),
+    'FSM_DIR_CONFLICT': ('old_name', 'new_name'),
+
     'VM_UDF_SUBSCRIBED': ('udf',),
     'VM_UDF_SUBSCRIBE_ERROR': ('udf_id', 'error'),
     'VM_UDF_UNSUBSCRIBED': ('udf',),
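The new entries in event_queue.py declare which keyword arguments each event must carry. A simplified model of that contract (this is a sketch, not SyncDaemon's actual `EventQueue`):

```python
# Simplified model of the event-name -> required-kwargs schema.
EVENTS = {
    'SV_FILE_NEW': ('volume_id', 'node_id', 'parent_id', 'name'),
    'SV_DIR_NEW': ('volume_id', 'node_id', 'parent_id', 'name'),
    'SV_FILE_DELETED': ('volume_id', 'node_id', 'is_dir'),
    'FSM_FILE_CONFLICT': ('old_name', 'new_name'),
    'FSM_DIR_CONFLICT': ('old_name', 'new_name'),
}

def push(event_name, **kwargs):
    """Check the kwargs against the declared schema before dispatching."""
    expected = set(EVENTS[event_name])
    if set(kwargs) != expected:
        raise TypeError("%s needs %s, got %s"
                        % (event_name, sorted(expected), sorted(kwargs)))
    return event_name, kwargs

push('FSM_FILE_CONFLICT', old_name='a.txt', new_name='a.txt.conflict')
```

Declaring the argument names up front is what lets listeners like the Zeitgeist logger define `handle_<EVENT>` methods with matching signatures.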
=== modified file 'ubuntuone/syncdaemon/filesystem_manager.py'
--- ubuntuone/syncdaemon/filesystem_manager.py 2010-12-02 21:27:56 +0000
+++ ubuntuone/syncdaemon/filesystem_manager.py 2010-12-13 23:42:15 +0000
@@ -840,7 +840,8 @@
         while path_exists(to_path):
             ind += 1
             to_path = base_to_path + "." + str(ind)
-        if mdobj["is_dir"]:
+        is_dir = mdobj["is_dir"]
+        if is_dir:
             expected_event = "FS_DIR_MOVE"
         else:
             expected_event = "FS_FILE_MOVE"
@@ -848,6 +849,8 @@
         try:
             self.eq.add_to_mute_filter(expected_event, path, to_path)
             rename(path, to_path)
+            event = "FSM_DIR_CONFLICT" if is_dir else "FSM_FILE_CONFLICT"
+            self.eq.push(event, old_name=path, new_name=to_path)
         except OSError, e:
             self.eq.rm_from_mute_filter(expected_event, path, to_path)
             if e.errno == errno.ENOENT:
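The hunk above sits inside the conflict-rename path: a suffix is appended, then ".1", ".2", and so on until a free name is found, and only then is the new FSM_*_CONFLICT event pushed. The naming loop can be sketched with `path_exists` stubbed by an in-memory set (the ".u1conflict" suffix is assumed here for illustration):

```python
# Sketch of the conflict-rename naming loop from filesystem_manager.
existing = {"/u1/doc.txt",
            "/u1/doc.txt.u1conflict",
            "/u1/doc.txt.u1conflict.1"}

def path_exists(p):
    """Stand-in for the real filesystem check."""
    return p in existing

def conflict_name(path, suffix=".u1conflict"):
    """Return the first non-existing conflict name for path."""
    base_to_path = path + suffix
    to_path = base_to_path
    ind = 0
    while path_exists(to_path):
        ind += 1
        to_path = base_to_path + "." + str(ind)
    return to_path

print(conflict_name("/u1/doc.txt"))  # earlier conflicts already exist
print(conflict_name("/u1/new.txt"))  # plain suffix is free
```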
=== modified file 'ubuntuone/syncdaemon/sync.py'
--- ubuntuone/syncdaemon/sync.py 2010-11-08 16:45:50 +0000
+++ ubuntuone/syncdaemon/sync.py 2010-12-13 23:42:15 +0000
@@ -830,6 +830,8 @@
         log = FileLogger(self.logger, key)
         ssmr = SyncStateMachineRunner(self.fsm, self.m, key, log)
         ssmr.on_event("SV_FILE_NEW", {}, share_id, node_id, parent_id, name)
+        self.m.event_q.push('SV_FILE_NEW', volume_id=share_id,
+                            node_id=node_id, parent_id=parent_id, name=name)
 
     def _handle_SV_DIR_NEW(self, share_id, node_id, parent_id, name):
         """on SV_DIR_NEW"""
@@ -839,13 +841,17 @@
         log = FileLogger(self.logger, key)
         ssmr = SyncStateMachineRunner(self.fsm, self.m, key, log)
         ssmr.on_event("SV_DIR_NEW", {}, share_id, node_id, parent_id, name)
+        self.m.event_q.push('SV_DIR_NEW', volume_id=share_id,
+                            node_id=node_id, parent_id=parent_id, name=name)
 
-    def _handle_SV_FILE_DELETED(self, share_id, node_id):
+    def _handle_SV_FILE_DELETED(self, share_id, node_id, is_dir):
         """on SV_FILE_DELETED. Not called by EQ anymore."""
         key = FSKey(self.m.fs, share_id=share_id, node_id=node_id)
         log = FileLogger(self.logger, key)
         ssmr = SyncStateMachineRunner(self.fsm, self.m, key, log)
         ssmr.on_event("SV_FILE_DELETED", {})
+        self.m.event_q.push('SV_FILE_DELETED', volume_id=share_id,
+                            node_id=node_id, is_dir=is_dir)
 
     def handle_AQ_DOWNLOAD_FINISHED(self, share_id, node_id, server_hash):
         """on AQ_DOWNLOAD_FINISHED"""
@@ -1078,7 +1084,8 @@
             # about it
             if not dt.is_live:
                 to_delete.append(dt)
-                self._handle_SV_FILE_DELETED(dt.share_id, dt.node_id)
+                self._handle_SV_FILE_DELETED(dt.share_id, dt.node_id,
+                                             is_dir)
             # nothing else should happen with this
             continue
 
10841091
@@ -1194,7 +1201,7 @@
                 continue
 
             if not node_id in live_nodes:
-                self._handle_SV_FILE_DELETED(volume_id, node_id)
+                self._handle_SV_FILE_DELETED(volume_id, node_id, node.is_dir)
                 deletes += 1
 
         self.logger.info(
