Merge lp:~nataliabidart/ubuntuone-client/fix-tests into lp:ubuntuone-client

Proposed by Natalia Bidart
Status: Merged
Approved by: Natalia Bidart
Approved revision: 1100
Merged at revision: 1086
Proposed branch: lp:~nataliabidart/ubuntuone-client/fix-tests
Merge into: lp:ubuntuone-client
Diff against target: 3882 lines (+781/-678)
13 files modified
contrib/testing/testcase.py (+33/-13)
tests/platform/linux/test_dbus.py (+3/-2)
tests/syncdaemon/test_eq_inotify.py (+271/-199)
tests/syncdaemon/test_fsm.py (+23/-11)
tests/syncdaemon/test_localrescan.py (+12/-17)
tests/syncdaemon/test_main.py (+5/-6)
tests/syncdaemon/test_sync.py (+29/-24)
tests/syncdaemon/test_vm.py (+326/-331)
tests/syncdaemon/test_vm_helper.py (+32/-40)
ubuntuone/platform/windows/filesystem_notifications.py (+5/-3)
ubuntuone/platform/windows/os_helper.py (+6/-2)
ubuntuone/syncdaemon/vm_helper.py (+19/-16)
ubuntuone/syncdaemon/volume_manager.py (+17/-14)
To merge this branch: bzr merge lp:~nataliabidart/ubuntuone-client/fix-tests
Reviewer Review Type Date Requested Status
Alejandro J. Cura (community) Approve
Roberto Alsina (community) Approve
Review via email: mp+70373@code.launchpad.net

Commit message

- Lots of cleanup to have more tests passing in windows. Part of LP: #817543.

Description of the change

Among other changes, this branch does the following:

* it fixes UDF creation (not tested IRL, but the tests covering it now pass).
* it replaces calls to os.path.exists with path_exists.
* it removes all the unneeded calls to os.path.abspath in test_vm.py.
* it removes the per-test os.environ tweaks that set a custom HOME, doing that once in the root testcase setUp instead.
* it replaces all the calls to os.symlink with make_link (see the sketch after this list).
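
A rough sketch of the resulting pattern in a test case follows; path_exists and make_link are the real helpers imported from ubuntuone.platform (see the diff below), while the test class, method and self.root_dir attribute are made up for illustration:

  import os

  from contrib.testing.testcase import BaseTwistedTestCase
  from ubuntuone.platform import make_link, path_exists


  class ExampleHelpersTestCase(BaseTwistedTestCase):
      """Hypothetical test using the platform helpers instead of os.*."""

      def test_link_is_created(self):
          # HOME already points at the per-test sandbox thanks to the root
          # testcase setUp, so the test never touches os.environ itself.
          target = os.path.join(self.root_dir, 'target')
          link = os.path.join(self.root_dir, 'link')
          make_link(target, link)             # instead of os.symlink(target, link)
          self.assertTrue(path_exists(link))  # instead of os.path.exists(link)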

I was trying to compare test passes vs. failures between trunk and this branch, and I realized the test suite does not finish in trunk because test_eq_inotify gets stuck.

So you'll have to trust me that tons of tests are fixed ;-).

To run the suite on linux:

./autogen.sh
make check

On windows:

set PYTHONPATH=<ubuntu sso client trunk>;.
run-tests.bat

Running this branch's test suite on windows returns:

FAILED (skips=13, failures=40, errors=146, successes=1982)

Revision history for this message
Roberto Alsina (ralsina) :
review: Approve
Revision history for this message
Alejandro J. Cura (alecu) wrote :

And yet again, nessita delivers with surgical precision a huge and very needed branch with lots of fixes, without missing a comma, and with no extra spaces.
Bravo! Great! Impressive!

---

Just a few style changes, and I'll approve:

I really don't like abusing the "or" operator in lambdas, like this:
  lambda p: called.append(p) or defer.succeed(True)
What should be the result of "None or Deferred()"???
I'd like to see the new cases of this changed into proper functions.
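
For reference, here is the shape of the pattern being objected to and the explicit function it becomes (mirroring the fake add_watch in the diff below; the surrounding names are only illustrative):

  from twisted.internet import defer

  called = []

  # The "or" trick: append() returns None, so the expression evaluates to
  # the deferred. It works, but the intent is easy to misread.
  fake_add_watch = lambda path: called.append(path) or defer.succeed(True)

  # The explicit form the review asks for: record the call, then return.
  def add_watch(path):
      """Fake add_watch: remember the path and succeed."""
      called.append(path)
      return defer.succeed(True)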

Please change:
 'we can\'t create filenames with invalid utf8 bytes sequence.'
to:
 "we can't create filenames with invalid utf8 bytes sequence."

There's a dangling: "#[] " in a comment

review: Needs Fixing
Revision history for this message
Alejandro J. Cura (alecu) wrote :

Approve!!!! \Q/

review: Approve
Revision history for this message
Ubuntu One Auto Pilot (otto-pilot) wrote :

The attempt to merge lp:~nataliabidart/ubuntuone-client/fix-tests into lp:ubuntuone-client failed. Below is the output from the failed tests.

/usr/bin/gnome-autogen.sh
checking for autoconf >= 2.53...
  testing autoconf2.50... not found.
  testing autoconf... found 2.67
checking for automake >= 1.10...
  testing automake-1.11... found 1.11.1
checking for libtool >= 1.5...
  testing libtoolize... found 2.2.6b
checking for intltool >= 0.30...
  testing intltoolize... found 0.41.1
checking for pkg-config >= 0.14.0...
  testing pkg-config... found 0.25
checking for gtk-doc >= 1.0...
  testing gtkdocize... found 1.17
Checking for required M4 macros...
Checking for forbidden M4 macros...
Processing ./configure.ac
Running libtoolize...
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
Running intltoolize...
Running gtkdocize...
Running aclocal-1.11...
Running autoconf...
Running autoheader...
Running automake-1.11...
Running ./configure --enable-gtk-doc --enable-debug ...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for library containing strerror... none required
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking dependency style of gcc... (cached) gcc3
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking for a sed that does not truncate output... /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... yes
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libr...

1100. By Natalia Bidart

Using bytes when calling os.* in linux.
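
A rough sketch of the convention that revision message refers to, using a hypothetical helper (the win32 branch mirrors the rmtree change in the diff below):

  import os
  import sys

  def to_os_path(path):
      """Hypothetical normalizer for paths handed to os.* calls.

      On linux the convention is utf-8 encoded byte strings; on windows
      the stdlib prefers unicode paths.
      """
      if sys.platform == 'win32':
          return path.decode('utf8') if isinstance(path, str) else path
      return path.encode('utf8') if isinstance(path, unicode) else path

  # usage: os.stat(to_os_path(some_path)) instead of os.stat(some_path)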

Revision history for this message
Alejandro J. Cura (alecu) wrote :

Tests now pass when LANG and LANGUAGE are unset.

review: Approve

Preview Diff

1=== modified file 'contrib/testing/testcase.py'
2--- contrib/testing/testcase.py 2011-07-27 17:31:25 +0000
3+++ contrib/testing/testcase.py 2011-08-04 19:19:24 +0000
4@@ -18,10 +18,11 @@
5 from __future__ import with_statement
6
7 import contextlib
8+import itertools
9 import logging
10 import os
11 import shutil
12-import itertools
13+import sys
14
15 from ubuntuone.syncdaemon import (
16 config,
17@@ -35,7 +36,14 @@
18 tritcask,
19 )
20 from ubuntuone.syncdaemon import logger
21-from ubuntuone.platform import make_dir, set_dir_readwrite, can_write
22+from ubuntuone.platform import (
23+ can_write,
24+ make_dir,
25+ path_exists,
26+ set_dir_readonly,
27+ set_dir_readwrite,
28+ stat_path,
29+)
30 logger.init()
31 from twisted.internet import defer
32 from twisted.trial.unittest import TestCase as TwistedTestCase
33@@ -82,7 +90,7 @@
34 def insert(self, path, mdid):
35 """Fake insert."""
36 self.eq.push('HQ_HASH_NEW', path=path, hash='',
37- crc32='', size=0, stat=os.stat(path))
38+ crc32='', size=0, stat=stat_path(path))
39
40
41 class FakeMark(object):
42@@ -185,7 +193,7 @@
43 _sync_class = None
44
45 # don't call Main.__init__ we take care of creating a fake main and
46- # all its attributes. pylint: disable-msg=W0231
47+ # all its attributes. pylint: disable=W0231
48 def __init__(self, root_dir, shares_dir, data_dir, partials_dir):
49 """ create the instance. """
50 self.logger = logging.getLogger('ubuntuone.SyncDaemon.FakeMain')
51@@ -261,10 +269,11 @@
52 def mktemp(self, name='temp'):
53 """ Customized mktemp that accepts an optional name argument. """
54 tempdir = os.path.join(self.tmpdir, name)
55- if os.path.exists(tempdir):
56+ if path_exists(tempdir):
57 self.rmtree(tempdir)
58 self.makedirs(tempdir)
59 self.addCleanup(self.rmtree, tempdir)
60+ assert isinstance(tempdir, str)
61 return tempdir
62
63 @property
64@@ -278,33 +287,44 @@
65 self.__class__.__name__,
66 self._testMethodName)
67 # use _trial_temp dir, it should be os.gwtcwd()
68- # define the root temp dir of the testcase, pylint: disable-msg=W0201
69- self.__root = os.path.join(os.getcwd(), base)
70+ # define the root temp dir of the testcase, pylint: disable=W0201
71+ root_tmp = os.environ.get('TRIAL_TEMP_DIR', os.getcwd())
72+ self.__root = os.path.join(root_tmp, '_trial_temp', base)
73 return self.__root
74
75 def rmtree(self, path):
76 """Custom rmtree that handle ro parent(s) and childs."""
77+ assert isinstance(path, str)
78 # on windows the paths cannot be removed because the process running
79 # them has the ownership and therefore are locked.
80- if not os.path.exists(path):
81+ if not path_exists(path):
82 return
83 # change perms to rw, so we can delete the temp dir
84 if path != getattr(self, '__root', None):
85 set_dir_readwrite(os.path.dirname(path))
86 if not can_write(path):
87 set_dir_readwrite(path)
88- # pylint: disable-msg=W0612
89+
90+ if sys.platform == 'win32':
91+ # path is a byte sequence encoded with utf8. If we pass this to
92+ # os.walk, in windows, we'll get results encoded with mbcs
93+ path = path.decode('utf8')
94+
95 for dirpath, dirs, files in os.walk(path):
96 for adir in dirs:
97 adir = os.path.join(dirpath, adir)
98+ if sys.platform == 'win32':
99+ assert isinstance(adir, unicode)
100+ adir = adir.encode('utf8')
101 if not can_write(adir):
102 set_dir_readwrite(adir)
103+ # XXX: We should not be ignoring errors
104 shutil.rmtree(path, ignore_errors=True)
105
106 def makedirs(self, path):
107 """Custom makedirs that handle ro parent."""
108 parent = os.path.dirname(path)
109- if os.path.exists(parent):
110+ if path_exists(parent):
111 set_dir_readwrite(parent)
112 make_dir(path, recursive=True)
113
114@@ -385,11 +405,11 @@
115 """Add share to the shares dict."""
116 self.shares[share.id] = share
117 # if the share don't exists, create it
118- if not os.path.exists(share.path):
119- os.mkdir(share.path)
120+ if not path_exists(share.path):
121+ make_dir(share.path)
122 # if it's a ro share, change the perms
123 if not share.can_write():
124- os.chmod(share.path, 0555)
125+ set_dir_readonly(share.path)
126
127 def add_udf(self, udf):
128 """Add udf to the udfs dict."""
129
130=== modified file 'tests/platform/linux/test_dbus.py'
131--- tests/platform/linux/test_dbus.py 2011-07-27 20:10:33 +0000
132+++ tests/platform/linux/test_dbus.py 2011-08-04 19:19:24 +0000
133@@ -1333,8 +1333,9 @@
134 public_files = []
135 expected = []
136 udf_id = str(uuid.uuid4())
137- udf_path = get_udf_path('~/foo/bar')
138- udf = UDF(str(udf_id), str('udf_node_id'), u'~/foo/bar', udf_path, True)
139+ suggested_path = u'~/foo/bar'
140+ udf_path = get_udf_path(suggested_path)
141+ udf = UDF(udf_id, 'udf_node_id', suggested_path, udf_path, True)
142 self.main.vm.udfs[udf_id] = udf
143 for i in xrange(5):
144 if i % 2:
145
146=== modified file 'tests/syncdaemon/test_eq_inotify.py'
147--- tests/syncdaemon/test_eq_inotify.py 2011-07-27 19:30:37 +0000
148+++ tests/syncdaemon/test_eq_inotify.py 2011-08-04 19:19:24 +0000
149@@ -23,12 +23,13 @@
150 import os
151
152 from twisted.internet import defer, reactor
153+from ubuntuone.devtools.testcase import skipIfOS
154+from ubuntuone.devtools.handlers import MementoHandler
155
156 from contrib.testing.testcase import BaseTwistedTestCase, FakeMain, Listener
157+from tests.syncdaemon.test_eventqueue import BaseEQTestCase
158+from ubuntuone.platform import make_dir, make_link, open_file, path_exists
159 from ubuntuone.syncdaemon import volume_manager
160-from tests.syncdaemon.test_eventqueue import BaseEQTestCase
161-
162-from ubuntuone.devtools.handlers import MementoHandler
163
164 # our logging level
165 TRACE = logging.getLevelName('TRACE')
166@@ -42,19 +43,27 @@
167
168 def handle_default(self, *a, **k):
169 """Something here? Error!"""
170- self.test_instance.finished_error("Don't hit me! Received %s %s" % (a, k))
171+ self.test_instance.finished_error("Don't hit me! Received %s %s" %
172+ (a, k))
173
174
175 class WatchTests(BaseEQTestCase):
176 """Test the EQ API to add and remove watchs."""
177
178+ @defer.inlineCallbacks
179 def test_add_watch(self):
180 """Test that watchs can be added."""
181 called = []
182 method_resp = object()
183 method_arg = object()
184- self.eq.monitor.add_watch = lambda a: called.append(a) or method_resp
185- res = self.eq.add_watch(method_arg)
186+
187+ def add_watch(path):
188+ """Fake it."""
189+ called.append(path)
190+ return defer.succeed(method_resp)
191+
192+ self.eq.monitor.add_watch = add_watch
193+ res = yield self.eq.add_watch(method_arg)
194 self.assertEqual(called, [method_arg])
195 self.assertEqual(res, method_resp)
196
197@@ -221,6 +230,7 @@
198 self.assertTrue(self.handler.check_debug("Dirty by", "otherpth",
199 "event"))
200
201+ @defer.inlineCallbacks
202 def test_commit_no_middle_events(self):
203 """Commit behaviour when nothing happened in the middle."""
204 testdir = os.path.join(self.root_dir, "foo")
205@@ -246,13 +256,14 @@
206 d.addCallback(check)
207
208 # set up everything and freeze
209- self.eq.add_watch(testdir)
210+ yield self.eq.add_watch(testdir)
211 self.eq.subscribe(HitMe())
212 self.eq.freeze_begin(testdir)
213
214 reactor.callLater(.1, freeze_commit)
215- return self._deferred
216+ yield self._deferred
217
218+ @defer.inlineCallbacks
219 def test_commit_middle_events(self):
220 """Commit behaviour when something happened in the middle."""
221 testdir = os.path.join(self.root_dir, "foo")
222@@ -271,14 +282,15 @@
223 d.addCallback(check)
224
225 # set up everything and freeze
226- self.eq.add_watch(testdir)
227+ yield self.eq.add_watch(testdir)
228 self.eq.subscribe(DontHitMe(self))
229 self.eq.freeze_begin(testdir)
230
231- open(testfile, "w").close()
232+ open_file(testfile, "w").close()
233 reactor.callLater(.1, freeze_commit)
234- return self._deferred
235+ yield self._deferred
236
237+ @defer.inlineCallbacks
238 def test_rollback(self):
239 """Check rollback."""
240 testdir = os.path.join(self.root_dir, "foo")
241@@ -303,15 +315,16 @@
242 self.eq.freeze_commit, [("FS_DIR_DELETE", "foobar")])
243
244 # set up everything and freeze
245- self.eq.add_watch(testdir)
246+ yield self.eq.add_watch(testdir)
247 self.eq.subscribe(HitMe())
248 self.eq.freeze_begin(testdir)
249
250 # don't matter if had changes, rollback cleans them
251- open(testfile, "w").close()
252+ open_file(testfile, "w").close()
253 reactor.callLater(.1, freeze_rollback)
254- return self._deferred
255+ yield self._deferred
256
257+ @defer.inlineCallbacks
258 def test_selective(self):
259 """Check that it's frozen only for a path."""
260 testdir = os.path.join(self.root_dir, "foo")
261@@ -342,17 +355,17 @@
262 self.finished_error(msg)
263
264 # set up everything
265- self.eq.add_watch(self.root_dir)
266- self.eq.add_watch(testdir)
267+ yield self.eq.add_watch(self.root_dir)
268+ yield self.eq.add_watch(testdir)
269 self.eq.subscribe(HitMe())
270
271 # only freeze one path
272 self.eq.freeze_begin(testdir)
273
274 # generate events in the nonfrozen path
275- open(testfile, "w").close()
276+ open_file(testfile, "w").close()
277
278- return self._deferred
279+ yield self._deferred
280
281
282 class MutedSignalsTests(BaseTwisted):
283@@ -362,36 +375,39 @@
284 self.assertFalse(self.eq._processor._to_mute._cnt)
285 self.finished_ok()
286
287+ @defer.inlineCallbacks
288 def test_file_open(self):
289 """Test receiving the open signal on files."""
290 testfile = os.path.join(self.root_dir, "foo")
291- open(testfile, "w").close()
292+ open_file(testfile, "w").close()
293 self.eq.add_to_mute_filter("FS_FILE_OPEN", path=testfile)
294 self.eq.add_to_mute_filter("FS_FILE_CLOSE_NOWRITE", path=testfile)
295
296- self.eq.add_watch(self.root_dir)
297+ yield self.eq.add_watch(self.root_dir)
298 self.eq.subscribe(DontHitMe(self))
299
300 # generate the event
301- open(testfile)
302+ open_file(testfile)
303 reactor.callLater(.1, self._deferred.callback, True)
304- return self._deferred
305+ yield self._deferred
306
307+ @defer.inlineCallbacks
308 def test_file_close_nowrite(self):
309 """Test receiving the close_nowrite signal on files."""
310 testfile = os.path.join(self.root_dir, "foo")
311- open(testfile, "w").close()
312- fh = open(testfile)
313+ open_file(testfile, "w").close()
314+ fh = open_file(testfile)
315 self.eq.add_to_mute_filter("FS_FILE_CLOSE_NOWRITE", path=testfile)
316
317- self.eq.add_watch(self.root_dir)
318+ yield self.eq.add_watch(self.root_dir)
319 self.eq.subscribe(DontHitMe(self))
320
321 # generate the event
322 fh.close()
323 reactor.callLater(.1, self._deferred.callback, True)
324- return self._deferred
325+ yield self._deferred
326
327+ @defer.inlineCallbacks
328 def test_file_create_close_write(self):
329 """Test receiving the create and close_write signals on files."""
330 testfile = os.path.join(self.root_dir, "foo")
331@@ -399,55 +415,59 @@
332 self.eq.add_to_mute_filter("FS_FILE_OPEN", path=testfile)
333 self.eq.add_to_mute_filter("FS_FILE_CLOSE_WRITE", path=testfile)
334
335- self.eq.add_watch(self.root_dir)
336+ yield self.eq.add_watch(self.root_dir)
337 self.eq.subscribe(DontHitMe(self))
338
339 # generate the event
340- open(testfile, "w").close()
341+ open_file(testfile, "w").close()
342 reactor.callLater(.1, self._deferred.callback, True)
343- return self._deferred
344+ yield self._deferred
345
346+ @defer.inlineCallbacks
347 def test_dir_create(self):
348 """Test receiving the create signal on dirs."""
349 testdir = os.path.join(self.root_dir, "foo")
350 self.eq.add_to_mute_filter("FS_DIR_CREATE", path=testdir)
351
352- self.eq.add_watch(self.root_dir)
353+ yield self.eq.add_watch(self.root_dir)
354 self.eq.subscribe(DontHitMe(self))
355
356 # generate the event
357 os.mkdir(testdir)
358 reactor.callLater(.1, self._deferred.callback, True)
359- return self._deferred
360+ yield self._deferred
361
362+ @defer.inlineCallbacks
363 def test_file_delete(self):
364 """Test the delete signal on a file."""
365 testfile = os.path.join(self.root_dir, "foo")
366- open(testfile, "w").close()
367+ open_file(testfile, "w").close()
368 self.eq.add_to_mute_filter("FS_FILE_DELETE", path=testfile)
369
370- self.eq.add_watch(self.root_dir)
371+ yield self.eq.add_watch(self.root_dir)
372 self.eq.subscribe(DontHitMe(self))
373
374 # generate the event
375 os.remove(testfile)
376 reactor.callLater(.1, self._deferred.callback, True)
377- return self._deferred
378+ yield self._deferred
379
380+ @defer.inlineCallbacks
381 def test_dir_delete(self):
382 """Test the delete signal on a dir."""
383 testdir = os.path.join(self.root_dir, "foo")
384 os.mkdir(testdir)
385 self.eq.add_to_mute_filter("FS_DIR_DELETE", path=testdir)
386
387- self.eq.add_watch(self.root_dir)
388+ yield self.eq.add_watch(self.root_dir)
389 self.eq.subscribe(DontHitMe(self))
390
391 # generate the event
392 os.rmdir(testdir)
393 reactor.callLater(.1, self._deferred.callback, True)
394- return self._deferred
395+ yield self._deferred
396
397+ @defer.inlineCallbacks
398 def test_file_moved_inside(self):
399 """Test the synthesis of the FILE_MOVE event."""
400 fromfile = os.path.join(self.root_dir, "foo")
401@@ -456,18 +476,19 @@
402 tofile = os.path.join(self.root_dir, "bar")
403 self.fs.create(tofile, "")
404 self.fs.set_node_id(tofile, "to_node_id")
405- open(fromfile, "w").close()
406+ open_file(fromfile, "w").close()
407 self.eq.add_to_mute_filter("FS_FILE_MOVE",
408 path_from=fromfile, path_to=tofile)
409
410- self.eq.add_watch(self.root_dir)
411+ yield self.eq.add_watch(self.root_dir)
412 self.eq.subscribe(DontHitMe(self))
413
414 # generate the event
415 os.rename(fromfile, tofile)
416 reactor.callLater(.1, self._deferred.callback, True)
417- return self._deferred
418+ yield self._deferred
419
420+ @defer.inlineCallbacks
421 def test_dir_moved_inside(self):
422 """Test the synthesis of the DIR_MOVE event."""
423 fromdir = os.path.join(self.root_dir, "foo")
424@@ -480,14 +501,15 @@
425 self.eq.add_to_mute_filter("FS_DIR_MOVE",
426 path_from=fromdir, path_to=todir)
427
428- self.eq.add_watch(self.root_dir)
429+ yield self.eq.add_watch(self.root_dir)
430 self.eq.subscribe(DontHitMe(self))
431
432 # generate the event
433 os.rename(fromdir, todir)
434 reactor.callLater(.1, self._deferred.callback, True)
435- return self._deferred
436+ yield self._deferred
437
438+ @defer.inlineCallbacks
439 def test_file_moved_to_conflict(self):
440 """Test the handling of the FILE_MOVE event when dest is conflict."""
441 fromfile = os.path.join(self.root_dir, "foo")
442@@ -496,34 +518,35 @@
443 tofile = os.path.join(self.root_dir, "foo.u1conflict")
444 self.fs.create(tofile, "")
445 self.fs.set_node_id(tofile, "to_node_id")
446- open(fromfile, "w").close()
447+ open_file(fromfile, "w").close()
448 self.eq.add_to_mute_filter("FS_FILE_DELETE", path=fromfile)
449
450- self.eq.add_watch(self.root_dir)
451+ yield self.eq.add_watch(self.root_dir)
452 self.eq.subscribe(DontHitMe(self))
453
454 # generate the event
455 os.rename(fromfile, tofile)
456 reactor.callLater(.1, self._deferred.callback, True)
457- return self._deferred
458+ yield self._deferred
459
460+ @defer.inlineCallbacks
461 def test_file_moved_from_partial(self):
462 """Test the handling of the FILE_MOVE event when source is partial."""
463 fromfile = os.path.join(self.root_dir, "mdid.u1partial.foo")
464 root_dir = os.path.join(self.root_dir, "my_files")
465 tofile = os.path.join(root_dir, "foo")
466 os.mkdir(root_dir)
467- open(fromfile, "w").close()
468+ open_file(fromfile, "w").close()
469 self.eq.add_to_mute_filter("FS_FILE_CREATE", path=tofile)
470 self.eq.add_to_mute_filter("FS_FILE_CLOSE_WRITE", path=tofile)
471
472- self.eq.add_watch(root_dir)
473+ yield self.eq.add_watch(root_dir)
474 self.eq.subscribe(DontHitMe(self))
475
476 # generate the event
477 os.rename(fromfile, tofile)
478 reactor.callLater(.1, self._deferred.callback, True)
479- return self._deferred
480+ yield self._deferred
481
482
483 class AncestorsUDFTestCase(BaseTwistedTestCase):
484@@ -556,29 +579,29 @@
485 # create UDF
486 suggested_path = u'~/Documents/Reading/Books/PDFs'
487 udf_id, node_id = 'udf_id', 'node_id'
488- path = os.path.expanduser(suggested_path).encode("utf8")
489+ path = volume_manager.get_udf_path(suggested_path)
490 self.udf = volume_manager.UDF(udf_id, node_id,
491 suggested_path, path, True)
492- os.makedirs(path)
493+ make_dir(path, recursive=True)
494 yield self.eq.fs.vm.add_udf(self.udf)
495
496 # create a second UDF
497 suggested_path = u'~/Documents/Reading/Magazines/Text'
498 udf_id2, node_id2 = 'udf_id_2', 'node_id_2'
499- path = os.path.expanduser(suggested_path).encode("utf8")
500+ path = volume_manager.get_udf_path(suggested_path)
501 self.udf2 = volume_manager.UDF(udf_id2, node_id2,
502 suggested_path, path, True)
503- os.makedirs(path)
504+ make_dir(path, recursive=True)
505 yield self.eq.fs.vm.add_udf(self.udf2)
506
507 # every ancestor has a watch already, added by LocalRescan. Copy that.
508- self.eq.add_watch(self.udf.path)
509+ yield self.eq.add_watch(self.udf.path)
510 for path in self.udf.ancestors:
511- self.eq.add_watch(path)
512+ yield self.eq.add_watch(path)
513
514- self.eq.add_watch(self.udf2.path)
515+ yield self.eq.add_watch(self.udf2.path)
516 for path in self.udf2.ancestors:
517- self.eq.add_watch(path)
518+ yield self.eq.add_watch(path)
519
520 # reset events up to now
521 self.listener.events = []
522@@ -615,6 +638,7 @@
523 return first
524 assertEqual = failUnlessEqual
525
526+ @defer.inlineCallbacks
527 def test_file_events_are_ignored_on_udf_ancestor(self):
528 """Events on UDF ancestors are ignored."""
529 for path in self.udf.ancestors:
530@@ -622,10 +646,11 @@
531
532 fname = os.path.join(path, 'testit')
533 # generate FS_FILE_CREATE, FS_FILE_OPEN, FS_FILE_CLOSE_WRITE
534- open(fname, 'w').close()
535+ open_file(fname, 'w').close()
536 # generate FS_FILE_CLOSE_NOWRITE
537- with open(fname) as f:
538- f.read()
539+ f = open_file(fname)
540+ f.read()
541+ f.close()
542 # generate FS_FILE_DELETE
543 os.remove(fname)
544
545@@ -641,8 +666,9 @@
546 self._deferred.callback(True)
547
548 reactor.callLater(.1, check)
549- return self._deferred
550+ yield self._deferred
551
552+ @defer.inlineCallbacks
553 def test_file_events_are_not_ignored_on_others(self):
554 """Events in the UDF are not ignored."""
555 path = self.udf.path
556@@ -650,10 +676,11 @@
557
558 fname = os.path.join(path, 'testit')
559 # generate FS_FILE_CREATE, FS_FILE_OPEN, FS_FILE_CLOSE_WRITE
560- open(fname, 'w').close()
561+ open_file(fname, 'w').close()
562 # generate FS_FILE_CLOSE_NOWRITE
563- with open(fname) as f:
564- f.read()
565+ f = open_file(fname)
566+ f.read()
567+ f.close()
568 # generate FS_FILE_DELETE
569 os.remove(fname)
570
571@@ -678,8 +705,9 @@
572 self._deferred.callback(True)
573
574 reactor.callLater(.1, check)
575- return self._deferred
576+ yield self._deferred
577
578+ @defer.inlineCallbacks
579 def test_file_events_are_not_ignored_on_common_prefix_name(self):
580 """Events in a UDF with similar name to ancestor are not ignored."""
581 fname = os.path.join(self.udf2.path, 'testit')
582@@ -695,14 +723,15 @@
583 self._deferred.callback(True)
584
585 # generate FS_FILE_CREATE, FS_FILE_OPEN, FS_FILE_CLOSE_WRITE
586- open(fname, 'w').close()
587+ open_file(fname, 'w').close()
588
589 # generate FS_FILE_CLOSE_NOWRITE
590- with open(fname) as f:
591- f.read()
592+ f = open_file(fname)
593+ f.read()
594+ f.close()
595
596 reactor.callLater(.1, check)
597- return self._deferred
598+ yield self._deferred
599
600 def test_move_udf_ancestor(self):
601 """UDF is unsubscribed on ancestor move."""
602@@ -711,7 +740,7 @@
603 # generate IN_MOVED_FROM and IN_MOVED_TO
604 newpath = path + u'.old'
605 os.rename(path, newpath)
606- assert os.path.exists(newpath)
607+ assert path_exists(newpath)
608
609 unsubscribed = []
610 def check():
611@@ -746,7 +775,7 @@
612 original = self.eq.fs.vm.unsubscribe_udf
613 newpath = self.udf.path + u'.old'
614 os.rename(self.udf.path, newpath)
615- assert os.path.exists(newpath)
616+ assert path_exists(newpath)
617
618 unsubscribed = []
619 def check():
620@@ -913,17 +942,24 @@
621 return self._deferred
622
623
624+@skipIfOS('win32', "we can't create files with invalid utf8 byte sequences.")
625 class NonUTF8NamesTests(BaseTwisted):
626 """Test the non-utf8 name handling."""
627
628 invalid_name = "invalid \xff\xff name"
629
630+ @defer.inlineCallbacks
631+ def setUp(self):
632+ yield super(NonUTF8NamesTests, self).setUp()
633+ self.invalid_path = os.path.join(self.root_dir, self.invalid_name)
634+
635+ @defer.inlineCallbacks
636 def test_file_open(self):
637 """Test invalid_filename after open a file."""
638- testfile = os.path.join(self.root_dir, self.invalid_name)
639- open(testfile, "w").close()
640+ open_file(self.invalid_path, "w").close()
641+ self.addCleanup(os.remove, self.invalid_path)
642
643- self.eq.add_watch(self.root_dir)
644+ yield self.eq.add_watch(self.root_dir)
645 should_events = [
646 ('FS_INVALID_NAME', dict(dirname=self.root_dir,
647 filename=self.invalid_name)), # open
648@@ -933,16 +969,17 @@
649 self.eq.subscribe(DynamicHitMe(should_events, self))
650
651 # generate the event
652- open(testfile)
653- return self._deferred
654+ open_file(self.invalid_path)
655+ yield self._deferred
656
657+ @defer.inlineCallbacks
658 def test_file_close_nowrite(self):
659 """Test invalid_filename after a close no write."""
660- testfile = os.path.join(self.root_dir, self.invalid_name)
661- open(testfile, "w").close()
662- fh = open(testfile)
663+ open_file(self.invalid_path, "w").close()
664+ self.addCleanup(os.remove, self.invalid_path)
665+ fh = open_file(self.invalid_path)
666
667- self.eq.add_watch(self.root_dir)
668+ yield self.eq.add_watch(self.root_dir)
669 should_events = [
670 ('FS_INVALID_NAME', dict(dirname=self.root_dir,
671 filename=self.invalid_name)), # close no w
672@@ -951,13 +988,12 @@
673
674 # generate the event
675 fh.close()
676- return self._deferred
677+ yield self._deferred
678
679+ @defer.inlineCallbacks
680 def test_file_create_close_write(self):
681 """Test invalid_filename after a create, open and close write."""
682- testfile = os.path.join(self.root_dir, self.invalid_name)
683-
684- self.eq.add_watch(self.root_dir)
685+ yield self.eq.add_watch(self.root_dir)
686 should_events = [
687 ('FS_INVALID_NAME', dict(dirname=self.root_dir,
688 filename=self.invalid_name)), # create
689@@ -969,14 +1005,14 @@
690 self.eq.subscribe(DynamicHitMe(should_events, self))
691
692 # generate the event
693- open(testfile, "w").close()
694- return self._deferred
695+ open_file(self.invalid_path, "w").close()
696+ self.addCleanup(os.remove, self.invalid_path)
697+ yield self._deferred
698
699+ @defer.inlineCallbacks
700 def test_dir_create(self):
701 """Test invalid_filename after a dir create."""
702- testdir = os.path.join(self.root_dir, self.invalid_name)
703-
704- self.eq.add_watch(self.root_dir)
705+ yield self.eq.add_watch(self.root_dir)
706 should_events = [
707 ('FS_INVALID_NAME', dict(dirname=self.root_dir,
708 filename=self.invalid_name)), # create
709@@ -984,15 +1020,16 @@
710 self.eq.subscribe(DynamicHitMe(should_events, self))
711
712 # generate the event
713- os.mkdir(testdir)
714- return self._deferred
715+ os.mkdir(self.invalid_path)
716+ self.addCleanup(os.rmdir, self.invalid_path)
717+ yield self._deferred
718
719+ @defer.inlineCallbacks
720 def test_file_delete(self):
721 """Test invalid_filename after a file delete."""
722- testfile = os.path.join(self.root_dir, self.invalid_name)
723- open(testfile, "w").close()
724+ open_file(self.invalid_path, "w").close()
725
726- self.eq.add_watch(self.root_dir)
727+ yield self.eq.add_watch(self.root_dir)
728 should_events = [
729 ('FS_INVALID_NAME', dict(dirname=self.root_dir,
730 filename=self.invalid_name)), # delete
731@@ -1000,15 +1037,15 @@
732 self.eq.subscribe(DynamicHitMe(should_events, self))
733
734 # generate the event
735- os.remove(testfile)
736- return self._deferred
737+ os.remove(self.invalid_path)
738+ yield self._deferred
739
740+ @defer.inlineCallbacks
741 def test_dir_delete(self):
742 """Test invalid_filename after a dir delete."""
743- testdir = os.path.join(self.root_dir, self.invalid_name)
744- os.mkdir(testdir)
745+ os.mkdir(self.invalid_path)
746
747- self.eq.add_watch(self.root_dir)
748+ yield self.eq.add_watch(self.root_dir)
749 should_events = [
750 ('FS_INVALID_NAME', dict(dirname=self.root_dir,
751 filename=self.invalid_name)), # delete
752@@ -1016,18 +1053,18 @@
753 self.eq.subscribe(DynamicHitMe(should_events, self))
754
755 # generate the event
756- os.rmdir(testdir)
757- return self._deferred
758+ os.rmdir(self.invalid_path)
759+ yield self._deferred
760
761+ @defer.inlineCallbacks
762 def test_file_move_to(self):
763 """Test invalid_filename after moving a file into a watched dir."""
764- fromfile = os.path.join(self.root_dir, self.invalid_name)
765- open(fromfile, "w").close()
766+ open_file(self.invalid_path, "w").close()
767 destdir = os.path.join(self.root_dir, "watched_dir")
768 os.mkdir(destdir)
769 destfile = os.path.join(destdir, self.invalid_name)
770
771- self.eq.add_watch(destdir)
772+ yield self.eq.add_watch(destdir)
773 should_events = [
774 ('FS_INVALID_NAME', dict(dirname=destdir,
775 filename=self.invalid_name)), # move to
776@@ -1035,18 +1072,20 @@
777 self.eq.subscribe(DynamicHitMe(should_events, self))
778
779 # generate the event
780- os.rename(fromfile, destfile)
781- return self._deferred
782+ os.rename(self.invalid_path, destfile)
783+ self.addCleanup(os.remove, destfile)
784+ yield self._deferred
785
786+ @defer.inlineCallbacks
787 def test_file_move_from(self):
788 """Test invalid_filename after moving a file from a watched dir."""
789 fromdir = os.path.join(self.root_dir, "watched_dir")
790 os.mkdir(fromdir)
791 fromfile = os.path.join(fromdir, self.invalid_name)
792- open(fromfile, "w").close()
793+ open_file(fromfile, "w").close()
794 destfile = os.path.join(self.root_dir, self.invalid_name)
795
796- self.eq.add_watch(fromdir)
797+ yield self.eq.add_watch(fromdir)
798 should_events = [
799 ('FS_INVALID_NAME', dict(dirname=fromdir,
800 filename=self.invalid_name)), # move from
801@@ -1055,12 +1094,14 @@
802
803 # generate the event
804 os.rename(fromfile, destfile)
805- return self._deferred
806+ self.addCleanup(os.remove, destfile)
807+ yield self._deferred
808
809
810 class SignalingTests(BaseTwisted):
811 """Test the whole stuff to receive signals."""
812
813+ @defer.inlineCallbacks
814 def test_file_open(self):
815 """Test receiving the open signal on files."""
816 testfile = os.path.join(self.root_dir, "foo")
817@@ -1075,18 +1116,19 @@
818 os.remove(testfile)
819 self.finished_ok()
820
821- self.eq.add_watch(self.root_dir)
822+ yield self.eq.add_watch(self.root_dir)
823 self.eq.subscribe(HitMe())
824
825 # generate the event
826- open(testfile, "w")
827- return self._deferred
828+ open_file(testfile, "w")
829+ yield self._deferred
830
831+ @defer.inlineCallbacks
832 def test_file_close_nowrite(self):
833 """Test receiving the close_nowrite signal on files."""
834 testfile = os.path.join(self.root_dir, "foo")
835- open(testfile, "w").close()
836- fh = open(testfile)
837+ open_file(testfile, "w").close()
838+ fh = open_file(testfile)
839
840 # helper class, pylint: disable-msg=C0111
841 class HitMe(object):
842@@ -1098,13 +1140,14 @@
843 os.remove(testfile)
844 self.finished_ok()
845
846- self.eq.add_watch(self.root_dir)
847+ yield self.eq.add_watch(self.root_dir)
848 self.eq.subscribe(HitMe())
849
850 # generate the event
851 fh.close()
852- return self._deferred
853+ yield self._deferred
854
855+ @defer.inlineCallbacks
856 def test_file_create_close_write(self):
857 """Test receiving the create and close_write signals on files."""
858 testfile = os.path.join(self.root_dir, "foo")
859@@ -1132,13 +1175,14 @@
860 msg = "Finished in bad condition: %s" % innerself.hist
861 self.finished_error(msg)
862
863- self.eq.add_watch(self.root_dir)
864+ yield self.eq.add_watch(self.root_dir)
865 self.eq.subscribe(HitMe())
866
867 # generate the event
868- open(testfile, "w").close()
869- return self._deferred
870+ open_file(testfile, "w").close()
871+ yield self._deferred
872
873+ @defer.inlineCallbacks
874 def test_dir_create(self):
875 """Test receiving the create signal on dirs."""
876 testdir = os.path.join(self.root_dir, "foo")
877@@ -1153,17 +1197,18 @@
878 os.rmdir(testdir)
879 self.finished_ok()
880
881- self.eq.add_watch(self.root_dir)
882+ yield self.eq.add_watch(self.root_dir)
883 self.eq.subscribe(HitMe())
884
885 # generate the event
886 os.mkdir(testdir)
887- return self._deferred
888+ yield self._deferred
889
890+ @defer.inlineCallbacks
891 def test_file_delete(self):
892 """Test the delete signal on a file."""
893 testfile = os.path.join(self.root_dir, "foo")
894- open(testfile, "w").close()
895+ open_file(testfile, "w").close()
896
897 # helper class, pylint: disable-msg=C0111
898 class HitMe(object):
899@@ -1179,13 +1224,14 @@
900
901 self.finished_ok()
902
903- self.eq.add_watch(self.root_dir)
904+ yield self.eq.add_watch(self.root_dir)
905 self.eq.subscribe(HitMe())
906
907 # generate the event
908 os.remove(testfile)
909- return self._deferred
910+ yield self._deferred
911
912+ @defer.inlineCallbacks
913 def test_dir_delete(self):
914 """Test the delete signal on a dir."""
915 testdir = os.path.join(self.root_dir, "foo")
916@@ -1208,19 +1254,20 @@
917
918 self.finished_ok()
919
920- self.eq.add_watch(self.root_dir)
921+ yield self.eq.add_watch(self.root_dir)
922 self.eq.subscribe(HitMe())
923
924 # generate the event
925 os.rmdir(testdir)
926- return self._deferred
927+ yield self._deferred
928
929+ @defer.inlineCallbacks
930 def test_symlink(self):
931 """Test that symlinks are ignored."""
932 testdir = os.path.join(self.root_dir, "foo")
933 os.mkdir(testdir)
934 fromfile = os.path.join(self.root_dir, "from")
935- open(fromfile, "w").close()
936+ open_file(fromfile, "w").close()
937 symlpath = os.path.join(testdir, "syml")
938
939 def confirm():
940@@ -1228,19 +1275,20 @@
941 self.finished_ok()
942
943 # set up everything and freeze
944- self.eq.add_watch(testdir)
945+ yield self.eq.add_watch(testdir)
946 self.eq.subscribe(DontHitMe(self))
947
948- os.symlink(fromfile, symlpath)
949+ make_link(fromfile, symlpath)
950 reactor.callLater(.1, confirm)
951- return self._deferred
952+ yield self._deferred
953
954+ @defer.inlineCallbacks
955 def test_file_moved_from(self):
956 """Test receiving the delete signal on a file when moved_from."""
957 fromfile = os.path.join(self.root_dir, "foo")
958 helpdir = os.path.join(self.root_dir, "dir")
959 tofile = os.path.join(helpdir, "foo")
960- open(fromfile, "w").close()
961+ open_file(fromfile, "w").close()
962 os.mkdir(helpdir)
963
964 # helper class, pylint: disable-msg=C0111
965@@ -1254,13 +1302,14 @@
966 os.rmdir(helpdir)
967 self.finished_ok()
968
969- self.eq.add_watch(self.root_dir)
970+ yield self.eq.add_watch(self.root_dir)
971 self.eq.subscribe(HitMe())
972
973 # generate the event
974 os.rename(fromfile, tofile)
975- return self._deferred
976+ yield self._deferred
977
978+ @defer.inlineCallbacks
979 def test_dir_moved_from(self):
980 """Test receiving the delete signal on a dir when it's moved_from."""
981 fromdir = os.path.join(self.root_dir, "foo")
982@@ -1280,20 +1329,21 @@
983 os.rmdir(helpdir)
984 self.finished_ok()
985
986- self.eq.add_watch(self.root_dir)
987+ yield self.eq.add_watch(self.root_dir)
988 self.eq.subscribe(HitMe())
989
990 # generate the event
991 os.rename(fromdir, todir)
992- return self._deferred
993+ yield self._deferred
994
995+ @defer.inlineCallbacks
996 def test_file_moved_to(self):
997 """Test receiving the create signal on a file when it's moved_to."""
998 fromfile = os.path.join(self.root_dir, "dir", "foo")
999 tofile = os.path.join(self.root_dir, "foo")
1000 helpdir = os.path.join(self.root_dir, "dir")
1001 os.mkdir(helpdir)
1002- open(fromfile, "w").close()
1003+ open_file(fromfile, "w").close()
1004
1005 # helper class, pylint: disable-msg=C0111
1006 class HitMe(object):
1007@@ -1306,13 +1356,14 @@
1008 os.rmdir(helpdir)
1009 self.finished_ok()
1010
1011- self.eq.add_watch(self.root_dir)
1012+ yield self.eq.add_watch(self.root_dir)
1013 self.eq.subscribe(HitMe())
1014
1015 # generate the event
1016 os.rename(fromfile, tofile)
1017- return self._deferred
1018+ yield self._deferred
1019
1020+ @defer.inlineCallbacks
1021 def test_dir_moved_to(self):
1022 """Test receiving the create signal on a file when it's moved_to."""
1023 fromdir = os.path.join(self.root_dir, "dir", "foo")
1024@@ -1332,13 +1383,14 @@
1025 os.rmdir(helpdir)
1026 self.finished_ok()
1027
1028- self.eq.add_watch(self.root_dir)
1029+ yield self.eq.add_watch(self.root_dir)
1030 self.eq.subscribe(HitMe())
1031
1032 # generate the event
1033 os.rename(fromdir, todir)
1034- return self._deferred
1035+ yield self._deferred
1036
1037+ @defer.inlineCallbacks
1038 def test_dir_moved_from_ignored(self):
1039 """Test moving a dir from ignored name."""
1040 fromdir = os.path.join(self.root_dir, "bar")
1041@@ -1351,13 +1403,14 @@
1042
1043 should_events = [("FS_DIR_CREATE", dict(path=todir))]
1044
1045- self.eq.add_watch(self.root_dir)
1046+ yield self.eq.add_watch(self.root_dir)
1047 self.eq.subscribe(DynamicHitMe(should_events, self))
1048
1049 # generate the event
1050 os.rename(fromdir, todir)
1051- return self._deferred
1052+ yield self._deferred
1053
1054+ @defer.inlineCallbacks
1055 def test_dir_moved_to_ignored(self):
1056 """Test moving a dir to ignored name."""
1057 fromdir = os.path.join(self.root_dir, "foo")
1058@@ -1370,18 +1423,19 @@
1059
1060 should_events = [("FS_DIR_DELETE", dict(path=fromdir))]
1061
1062- self.eq.add_watch(self.root_dir)
1063+ yield self.eq.add_watch(self.root_dir)
1064 self.eq.subscribe(DynamicHitMe(should_events, self))
1065
1066 # generate the event
1067 os.rename(fromdir, todir)
1068- return self._deferred
1069+ yield self._deferred
1070
1071+ @defer.inlineCallbacks
1072 def test_file_moved_from_ignored(self):
1073 """Test moving a file from ignored name."""
1074 fromfile = os.path.join(self.root_dir, "bar")
1075 tofile = os.path.join(self.root_dir, "foo")
1076- open(fromfile, 'w').close()
1077+ open_file(fromfile, 'w').close()
1078
1079 # patch the general processor to ignore all that ends with bar
1080 self.patch(self.eq.monitor._processor.general_processor, "is_ignored",
1081@@ -1391,18 +1445,19 @@
1082 ("FS_FILE_CREATE", dict(path=tofile)),
1083 ("FS_FILE_CLOSE_WRITE", dict(path=tofile)),
1084 ]
1085- self.eq.add_watch(self.root_dir)
1086+ yield self.eq.add_watch(self.root_dir)
1087 self.eq.subscribe(DynamicHitMe(should_events, self))
1088
1089 # generate the event
1090 os.rename(fromfile, tofile)
1091- return self._deferred
1092+ yield self._deferred
1093
1094+ @defer.inlineCallbacks
1095 def test_file_moved_to_ignored(self):
1096 """Test moving a file to ignored name."""
1097 fromfile = os.path.join(self.root_dir, "foo")
1098 tofile = os.path.join(self.root_dir, "bar")
1099- open(fromfile, 'w').close()
1100+ open_file(fromfile, 'w').close()
1101
1102 # patch the general processor to ignore all that ends with bar
1103 self.patch(self.eq.monitor._processor.general_processor, "is_ignored",
1104@@ -1410,20 +1465,21 @@
1105
1106 should_events = [("FS_FILE_DELETE", dict(path=fromfile))]
1107
1108- self.eq.add_watch(self.root_dir)
1109+ yield self.eq.add_watch(self.root_dir)
1110 self.eq.subscribe(DynamicHitMe(should_events, self))
1111
1112 # generate the event
1113 os.rename(fromfile, tofile)
1114- return self._deferred
1115+ yield self._deferred
1116
1117+ @defer.inlineCallbacks
1118 def test_lots_of_changes(self):
1119 """Test doing several operations on files."""
1120 helpdir = os.path.join(self.root_dir, "dir")
1121 os.mkdir(helpdir)
1122 mypath = functools.partial(os.path.join, self.root_dir)
1123
1124- self.eq.add_watch(self.root_dir)
1125+ yield self.eq.add_watch(self.root_dir)
1126
1127 should_events = [
1128 ("FS_FILE_CREATE", dict(path=mypath("foo"))),
1129@@ -1444,16 +1500,17 @@
1130 self.eq.subscribe(DynamicHitMe(should_events, self))
1131
1132 # generate the events
1133- open(mypath("foo"), "w").close()
1134+ open_file(mypath("foo"), "w").close()
1135 os.rename(mypath("foo"), mypath("dir", "foo"))
1136- open(mypath("bar"), "w").close()
1137+ open_file(mypath("bar"), "w").close()
1138 os.rename(mypath("dir", "foo"), mypath("foo"))
1139 os.rename(mypath("bar"), mypath("dir", "bar"))
1140 os.remove(mypath("foo"))
1141 os.rename(mypath("dir", "bar"), mypath("bar"))
1142 os.remove(mypath("bar"))
1143- return self._deferred
1144+ yield self._deferred
1145
1146+ @defer.inlineCallbacks
1147 def test_file_moved_inside(self):
1148 """Test the synthesis of the FILE_MOVE event."""
1149 fromfile = os.path.join(self.root_dir, "foo")
1150@@ -1462,7 +1519,7 @@
1151 tofile = os.path.join(self.root_dir, "bar")
1152 self.fs.create(tofile, "")
1153 self.fs.set_node_id(tofile, "to_node_id")
1154- open(fromfile, "w").close()
1155+ open_file(fromfile, "w").close()
1156
1157 # helper class, pylint: disable-msg=C0111
1158 class HitMe(object):
1159@@ -1476,13 +1533,14 @@
1160 os.remove(tofile)
1161 self.finished_ok()
1162
1163- self.eq.add_watch(self.root_dir)
1164+ yield self.eq.add_watch(self.root_dir)
1165 self.eq.subscribe(HitMe())
1166
1167 # generate the event
1168 os.rename(fromfile, tofile)
1169- return self._deferred
1170+ yield self._deferred
1171
1172+ @defer.inlineCallbacks
1173 def test_dir_moved_inside(self):
1174 """Test the synthesis of the DIR_MOVE event."""
1175 fromdir = os.path.join(self.root_dir, "foo")
1176@@ -1505,13 +1563,14 @@
1177 os.rmdir(todir)
1178 self.finished_ok()
1179
1180- self.eq.add_watch(self.root_dir)
1181+ yield self.eq.add_watch(self.root_dir)
1182 self.eq.subscribe(HitMe())
1183
1184 # generate the event
1185 os.rename(fromdir, todir)
1186- return self._deferred
1187+ yield self._deferred
1188
1189+ @defer.inlineCallbacks
1190 def test_file_moved_inside_mixed(self):
1191 """Test the synthesis of the FILE_MOVE event with more events."""
1192 helpdir = os.path.join(self.root_dir, "dir")
1193@@ -1523,7 +1582,7 @@
1194 self.fs.set_node_id(mypath('bar'), "bar_node_id")
1195
1196
1197- self.eq.add_watch(self.root_dir)
1198+ yield self.eq.add_watch(self.root_dir)
1199
1200 should_events = [
1201 ("FS_FILE_CREATE", dict(path=mypath("foo"))),
1202@@ -1542,15 +1601,16 @@
1203 self.eq.subscribe(DynamicHitMe(should_events, self))
1204
1205 # generate the events
1206- open(mypath("foo"), "w").close()
1207- open(mypath("bar"), "w").close()
1208+ open_file(mypath("foo"), "w").close()
1209+ open_file(mypath("bar"), "w").close()
1210 os.rename(mypath("foo"), mypath("dir", "foo"))
1211 os.rename(mypath("dir", "foo"), mypath("foo"))
1212 os.rename(mypath("bar"), mypath("baz"))
1213 os.remove(mypath("foo"))
1214 os.remove(mypath("baz"))
1215- return self._deferred
1216+ yield self._deferred
1217
1218+ @defer.inlineCallbacks
1219 def test_dir_with_contents_moved_outside(self):
1220 """ test the move of a dir outside the watched diresctory."""
1221 root = os.path.join(self.root_dir, "watched_root")
1222@@ -1565,7 +1625,7 @@
1223 testfile = os.path.join(testdir, "testfile")
1224 self.eq.fs.create(testfile, '')
1225 self.eq.fs.set_node_id(testfile, 'testfile_id')
1226- open(testfile, 'w').close()
1227+ open_file(testfile, 'w').close()
1228
1229 paths = [testdir, testfile]
1230 # helper class, pylint: disable-msg=C0111
1231@@ -1582,13 +1642,14 @@
1232 def handle_FS_FILE_DELETE(innerself, path):
1233 self.assertEqual(paths.pop(), path)
1234
1235- self.eq.add_watch(root)
1236+ yield self.eq.add_watch(root)
1237 self.eq.subscribe(HitMe())
1238
1239 # generate the event
1240 os.rename(testdir, os.path.join(trash, os.path.basename(testdir)))
1241- return self._deferred
1242+ yield self._deferred
1243
1244+ @defer.inlineCallbacks
1245 def test_creation_inside_a_moved_directory(self):
1246 """Test that renaming a directory is supported."""
1247 testdir = os.path.join(self.root_dir, "testdir")
1248@@ -1608,8 +1669,8 @@
1249 os.rmdir(newdirname)
1250 self.finished_ok()
1251
1252- self.eq.add_watch(self.root_dir)
1253- self.eq.add_watch(testdir)
1254+ yield self.eq.add_watch(self.root_dir)
1255+ yield self.eq.add_watch(testdir)
1256 self.eq.subscribe(HitMe())
1257
1258 # rename the dir
1259@@ -1617,9 +1678,10 @@
1260
1261 # generate the event
1262 newfilepath = os.path.join(newdirname, "afile")
1263- open(newfilepath, "w").close()
1264- return self._deferred
1265+ open_file(newfilepath, "w").close()
1266+ yield self._deferred
1267
1268+ @defer.inlineCallbacks
1269 def test_outside_file_moved_to(self):
1270 """Test receiving the create signal on a file when it's moved_to."""
1271 fromfile = os.path.join(self.root_dir, "foo")
1272@@ -1627,19 +1689,20 @@
1273 tofile = os.path.join(root_dir, "foo")
1274 mypath = functools.partial(os.path.join, root_dir)
1275 os.mkdir(root_dir)
1276- open(fromfile, "w").close()
1277+ open_file(fromfile, "w").close()
1278
1279 should_events = [
1280 ("FS_FILE_CREATE", dict(path=mypath("foo"))),
1281 ("FS_FILE_CLOSE_WRITE", dict(path=mypath("foo"))),
1282 ]
1283 self.eq.subscribe(DynamicHitMe(should_events, self))
1284- self.eq.add_watch(root_dir)
1285+ yield self.eq.add_watch(root_dir)
1286
1287 # generate the event
1288 os.rename(fromfile, tofile)
1289- return self._deferred
1290+ yield self._deferred
1291
1292+ @defer.inlineCallbacks
1293 def test_outside_dir_with_contents_moved_to(self):
1294 """Test receiving the create signal on a file when it's moved_to."""
1295 fromdir = os.path.join(self.root_dir, "foo_dir")
1296@@ -1649,18 +1712,19 @@
1297 todir = os.path.join(root_dir, "foo_dir")
1298 os.mkdir(root_dir)
1299 os.mkdir(fromdir)
1300- open(fromfile, "w").close()
1301+ open_file(fromfile, "w").close()
1302
1303 should_events = [
1304 ("FS_DIR_CREATE", dict(path=mypath("foo_dir"))),
1305 ]
1306 self.eq.subscribe(DynamicHitMe(should_events, self))
1307- self.eq.add_watch(root_dir)
1308+ yield self.eq.add_watch(root_dir)
1309
1310 # generate the event
1311 os.rename(fromdir, todir)
1312- return self._deferred
1313+ yield self._deferred
1314
1315+ @defer.inlineCallbacks
1316 def test_delete_inside_moving_directory(self):
1317 """Test to assure that the DELETE signal has the correct path."""
1318 # basedir
1319@@ -1674,28 +1738,29 @@
1320 fromfile = os.path.join(dir1, "test_f")
1321 tofile = os.path.join(dir2, "test_f")
1322 os.mkdir(dir1)
1323- open(fromfile, "w").close()
1324+ open_file(fromfile, "w").close()
1325
1326 should_events = [
1327 ("FS_DIR_MOVE", dict(path_from=dir1, path_to=dir2)),
1328 ("FS_FILE_DELETE", dict(path=tofile)),
1329 ]
1330 self.eq.subscribe(DynamicHitMe(should_events, self))
1331- self.eq.add_watch(self.root_dir)
1332- self.eq.add_watch(basedir)
1333- self.eq.add_watch(dir1)
1334+ yield self.eq.add_watch(self.root_dir)
1335+ yield self.eq.add_watch(basedir)
1336+ yield self.eq.add_watch(dir1)
1337
1338 # generate the event
1339 os.rename(dir1, dir2)
1340 os.remove(tofile)
1341- return self._deferred
1342+ yield self._deferred
1343
1344+ @defer.inlineCallbacks
1345 def test_move_conflict_to_new_file(self):
1346 """Test to assure the signal wents through as a new file."""
1347 testfile = os.path.join(self.root_dir, "testfile")
1348 destfile = os.path.join(self.root_dir, "destfile")
1349 mdid = self.fs.create(testfile, '')
1350- open(testfile, "w").close()
1351+ open_file(testfile, "w").close()
1352 self.fs.move_to_conflict(mdid)
1353
1354 should_events = [
1355@@ -1703,17 +1768,18 @@
1356 ("FS_FILE_CLOSE_WRITE", dict(path=destfile)),
1357 ]
1358 self.eq.subscribe(DynamicHitMe(should_events, self))
1359- self.eq.add_watch(self.root_dir)
1360+ yield self.eq.add_watch(self.root_dir)
1361
1362 # generate the event
1363 os.rename(testfile + ".u1conflict", destfile)
1364- return self._deferred
1365+ yield self._deferred
1366
1367+ @defer.inlineCallbacks
1368 def test_move_conflict_over_file(self):
1369 """Test to assure the signal wents through as a the file."""
1370 testfile = os.path.join(self.root_dir, "testfile")
1371 mdid = self.fs.create(testfile, '')
1372- open(testfile, "w").close()
1373+ open_file(testfile, "w").close()
1374 self.fs.move_to_conflict(mdid)
1375
1376 should_events = [
1377@@ -1721,12 +1787,13 @@
1378 ("FS_FILE_CLOSE_WRITE", dict(path=testfile)),
1379 ]
1380 self.eq.subscribe(DynamicHitMe(should_events, self))
1381- self.eq.add_watch(self.root_dir)
1382+ yield self.eq.add_watch(self.root_dir)
1383
1384 # generate the event
1385 os.rename(testfile + ".u1conflict", testfile)
1386- return self._deferred
1387+ yield self._deferred
1388
1389+ @defer.inlineCallbacks
1390 def test_move_conflict_over_dir(self):
1391 """Test to assure the signal wents through as a the dir."""
1392 testdir = os.path.join(self.root_dir, "testdir")
1393@@ -1738,12 +1805,13 @@
1394 ("FS_DIR_CREATE", dict(path=testdir)),
1395 ]
1396 self.eq.subscribe(DynamicHitMe(should_events, self))
1397- self.eq.add_watch(self.root_dir)
1398+ yield self.eq.add_watch(self.root_dir)
1399
1400 # generate the event
1401 os.rename(testdir + ".u1conflict", testdir)
1402- return self._deferred
1403+ yield self._deferred
1404
1405+ @defer.inlineCallbacks
1406 def test_move_conflict_to_new_dir(self):
1407 """Test to assure the signal wents through as a new dir."""
1408 testdir = os.path.join(self.root_dir, "testdir")
1409@@ -1756,12 +1824,13 @@
1410 ("FS_DIR_CREATE", dict(path=destdir)),
1411 ]
1412 self.eq.subscribe(DynamicHitMe(should_events, self))
1413- self.eq.add_watch(self.root_dir)
1414+ yield self.eq.add_watch(self.root_dir)
1415
1416 # generate the event
1417 os.rename(testdir + ".u1conflict", destdir)
1418- return self._deferred
1419+ yield self._deferred
1420
1421+ @defer.inlineCallbacks
1422 def test_no_read_perms_file(self):
1423 """Test to assure the signal wents through as a the file."""
1424 testfile = os.path.join(self.root_dir, "testfile")
1425@@ -1769,7 +1838,7 @@
1426
1427 should_events = []
1428 self.eq.subscribe(DynamicHitMe(should_events, self))
1429- self.eq.add_watch(self.root_dir)
1430+ yield self.eq.add_watch(self.root_dir)
1431
1432 d = self._deferred
1433 log = self.eq.monitor._processor.log
1434@@ -1785,16 +1854,18 @@
1435 log.addHandler(hdlr)
1436
1437 # generate the event
1438- open(testfile, "w").close()
1439+ open_file(testfile, "w").close()
1440 # and change the permissions so it's ignored
1441 os.chmod(testfile, 0000)
1442+ self.addCleanup(os.chmod, testfile, 0644)
1443
1444 def check(record):
1445 self.assertIn(testfile, record.args)
1446 self.assertEqual(1, len(record.args))
1447 self._deferred.addCallback(check)
1448- return self._deferred
1449+ yield self._deferred
1450
1451+ @defer.inlineCallbacks
1452 def test_no_read_perms_dir(self):
1453 """Test to assure the signal wents through as a the file."""
1454 testdir = os.path.join(self.root_dir, "testdir")
1455@@ -1802,7 +1873,7 @@
1456
1457 should_events = []
1458 self.eq.subscribe(DynamicHitMe(should_events, self))
1459- self.eq.add_watch(self.root_dir)
1460+ yield self.eq.add_watch(self.root_dir)
1461
1462 d = self._deferred
1463 log = self.eq.monitor._processor.log
1464@@ -1821,12 +1892,13 @@
1465 os.makedirs(testdir)
1466 # and change the permissions so it's ignored
1467 os.chmod(testdir, 0000)
1468+ self.addCleanup(os.chmod, testdir, 0755)
1469
1470 def check(record):
1471 self.assertIn(testdir, record.args)
1472 self.assertEqual(1, len(record.args))
1473 self._deferred.addCallback(check)
1474- return self._deferred
1475+ yield self._deferred
1476
1477 @defer.inlineCallbacks
1478 def _create_udf(self, vol_id, path):
1479@@ -1859,8 +1931,8 @@
1480 ("FS_DIR_CREATE", dict(path=moving2)),
1481 ]
1482 self.eq.subscribe(DynamicHitMe(should_events, self))
1483- self.eq.add_watch(base1)
1484- self.eq.add_watch(base2)
1485+ yield self.eq.add_watch(base1)
1486+ yield self.eq.add_watch(base2)
1487
1488 # generate the event
1489 os.rename(moving1, moving2)
1490@@ -1882,7 +1954,7 @@
1491 # working stuff
1492 moving1 = os.path.join(base1, "test")
1493 moving2 = os.path.join(base2, "test")
1494- open(moving1, 'w').close()
1495+ open_file(moving1, 'w').close()
1496
1497 should_events = [
1498 ("FS_FILE_DELETE", dict(path=moving1)),
1499@@ -1890,8 +1962,8 @@
1500 ("FS_FILE_CLOSE_WRITE", dict(path=moving2)),
1501 ]
1502 self.eq.subscribe(DynamicHitMe(should_events, self))
1503- self.eq.add_watch(base1)
1504- self.eq.add_watch(base2)
1505+ yield self.eq.add_watch(base1)
1506+ yield self.eq.add_watch(base2)
1507
1508 # generate the event
1509 os.rename(moving1, moving2)
1510
1511=== modified file 'tests/syncdaemon/test_fsm.py'
1512--- tests/syncdaemon/test_fsm.py 2011-07-27 20:15:10 +0000
1513+++ tests/syncdaemon/test_fsm.py 2011-08-04 19:19:24 +0000
1514@@ -37,7 +37,7 @@
1515 )
1516
1517 from ubuntuone.devtools.handlers import MementoHandler
1518-from ubuntuone.platform import make_dir, open_file, path_exists
1519+from ubuntuone.platform import make_dir, make_link, open_file, path_exists
1520 from ubuntuone.syncdaemon.filesystem_manager import (
1521 DirectoryNotRemovable,
1522 EnableShareWrite,
1523@@ -301,7 +301,7 @@
1524 old_shares_path = os.path.join(self.root_dir, 'Ubuntu One',
1525 'Shared With Me')
1526 old_path = os.path.join(old_shares_path, 'share1_name')
1527- os.symlink(self.shares_dir, old_shares_path)
1528+ make_link(self.shares_dir, old_shares_path)
1529
1530 # put the old path in the mdobj
1531 share_md = self.fsm.fs[share_mdid]
1532@@ -363,7 +363,7 @@
1533 old_shares_path = os.path.join(self.root_dir, 'Ubuntu One',
1534 'Shared With Me')
1535 old_path = os.path.join(old_shares_path, 'share1_name')
1536- os.symlink(self.shares_dir, old_shares_path)
1537+ make_link(self.shares_dir, old_shares_path)
1538
1539 # put the old path in the mdobj
1540 share_md = self.fsm.fs[share_mdid]
1541@@ -424,7 +424,7 @@
1542 old_shares_path = os.path.join(self.root_dir, 'Ubuntu One',
1543 'Shared With Me')
1544 old_path = os.path.join(old_shares_path, 'share1_name')
1545- os.symlink(self.shares_dir, old_shares_path)
1546+ make_link(self.shares_dir, old_shares_path)
1547
1548 # put the old path in the mdobj
1549 share_md = self.fsm.fs[share_mdid]
1550@@ -481,7 +481,7 @@
1551 old_shares_path = os.path.join(self.root_dir, 'Ubuntu One',
1552 'Shared With Me')
1553 old_path = os.path.join(old_shares_path, 'share1_name')
1554- os.symlink(self.shares_dir, old_shares_path)
1555+ make_link(self.shares_dir, old_shares_path)
1556 old_root_path = os.path.join(os.path.dirname(self.root_dir),
1557 'Ubuntu One', 'My Files')
1558
1559@@ -2786,7 +2786,13 @@
1560 def test_make_dir_all_ok(self):
1561 """Create the dir, add a watch, mute the event."""
1562 called = []
1563- self.eq.add_watch = lambda p: called.append(p)
1564+
1565+ def add_watch(path):
1566+ """Fake it."""
1567+ called.append(path)
1568+ return defer.succeed(True)
1569+
1570+ self.eq.add_watch = add_watch
1571 self.eq.add_to_mute_filter = lambda e, **a: called.append((e, a))
1572 local_dir = os.path.join(self.root_dir, "foo")
1573 mdid = self.fsm.create(local_dir, "", is_dir=True)
1574@@ -2825,7 +2831,13 @@
1575 def test_make_dir_ro_watch(self):
1576 """Don't add the watch nor the mute on a RO share."""
1577 called = []
1578- self.eq.add_watch = lambda p: called.append(p)
1579+
1580+ def add_watch(path):
1581+ """Fake it."""
1582+ called.append(path)
1583+ return defer.succeed(True)
1584+
1585+ self.eq.add_watch = add_watch
1586 self.eq.add_to_mute_filter = lambda *a: called.append(a)
1587 share = yield self.create_share('ro_share_id', 'ro',
1588 access_level='View')
1589@@ -3359,7 +3371,7 @@
1590 old_shares_path = os.path.join(self.root_dir, 'Ubuntu One',
1591 'Shared With Me')
1592 old_path = os.path.join(old_shares_path, 'share1_name')
1593- os.symlink(self.shares_dir, old_shares_path)
1594+ make_link(self.shares_dir, old_shares_path)
1595
1596 # put the old path in the mdobj
1597 share_md = self.fsm.fs[share_mdid]
1598@@ -3425,7 +3437,7 @@
1599 old_shares_path = os.path.join(self.root_dir, 'Ubuntu One',
1600 'Shared With Me')
1601 old_path = os.path.join(old_shares_path, 'share1_name')
1602- os.symlink(self.shares_dir, old_shares_path)
1603+ make_link(self.shares_dir, old_shares_path)
1604
1605 # put the old path in the mdobj
1606 share_md = self.fsm.fs[share_mdid]
1607@@ -3486,7 +3498,7 @@
1608 old_shares_path = os.path.join(self.root_dir, 'Ubuntu One',
1609 'Shared With Me')
1610 old_path = os.path.join(old_shares_path, 'share1_name')
1611- os.symlink(self.shares_dir, old_shares_path)
1612+ make_link(self.shares_dir, old_shares_path)
1613
1614 # put the old path in the mdobj
1615 share_md = self.fsm.fs[share_mdid]
1616@@ -3543,7 +3555,7 @@
1617 old_shares_path = os.path.join(self.root_dir, 'Ubuntu One',
1618 'Shared With Me')
1619 old_path = os.path.join(old_shares_path, 'share1_name')
1620- os.symlink(self.shares_dir, old_shares_path)
1621+ make_link(self.shares_dir, old_shares_path)
1622 old_root_path = os.path.join(os.path.dirname(self.root_dir),
1623 'Ubuntu One', 'My Files')
1624
1625
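The test_fsm.py hunks replace bare lambdas with small named fakes that both record the call and return defer.succeed(True), matching the new add_watch contract. A sketch of that fake with placeholder names (the real tests assign it straight onto self.eq.add_watch):

    # Sketch of the recording, deferred-returning fake used instead of a
    # lambda; make_recording_add_watch is a placeholder helper name.
    from twisted.internet import defer


    def make_recording_add_watch(called):
        """Build a fake add_watch that records its path and fires a deferred."""

        def add_watch(path):
            called.append(path)
            return defer.succeed(True)

        return add_watch


    # usage inside a test method:
    #     called = []
    #     self.eq.add_watch = make_recording_add_watch(called)
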
1626=== modified file 'tests/syncdaemon/test_localrescan.py'
1627--- tests/syncdaemon/test_localrescan.py 2011-07-27 19:30:37 +0000
1628+++ tests/syncdaemon/test_localrescan.py 2011-08-04 19:19:24 +0000
1629@@ -28,6 +28,7 @@
1630
1631 from contrib.testing import testcase
1632 from ubuntuone.devtools.handlers import MementoHandler
1633+from ubuntuone.platform import make_link
1634 from ubuntuone.syncdaemon.local_rescan import LocalRescan
1635 from ubuntuone.syncdaemon.marker import MDMarker
1636 from ubuntuone.syncdaemon.tritcask import Tritcask
1637@@ -651,7 +652,7 @@
1638
1639 # and a symlink to ignore!
1640 symlpath = os.path.join(self.share.path, "b")
1641- os.symlink(source, symlpath)
1642+ make_link(source, symlpath)
1643
1644 def check(_):
1645 """check"""
1646@@ -2442,18 +2443,17 @@
1647 yield super(ParentWatchForUDFTestCase, self).setUp()
1648 self._deferred = defer.Deferred()
1649 self.eq = event_queue.EventQueue(self.fsm)
1650- self.original_add = self.eq.add_watch
1651 self.watches = []
1652
1653 def fake_add(path):
1654 """Fake watch handler."""
1655 if path in self.watches:
1656- return False
1657+ return defer.succeed(False)
1658 else:
1659 self.watches.append(path)
1660- return True
1661+ return defer.succeed(True)
1662
1663- self.eq.add_watch = fake_add
1664+ self.patch(self.eq, 'add_watch', fake_add)
1665
1666 self.lr = LocalRescan(self.vm, self.fsm, self.eq, self.aq)
1667
1668@@ -2482,7 +2482,6 @@
1669 @defer.inlineCallbacks
1670 def tearDown(self):
1671 """Cleanup."""
1672- self.eq.add_watch = self.original_add
1673 self.eq.shutdown()
1674 self.eq = None
1675
1676@@ -2523,20 +2522,16 @@
1677 d.addCallback(check_log)
1678 return self._deferred
1679
1680+ @defer.inlineCallbacks
1681 def test_watch_is_not_added_if_present(self):
1682 """Watches are not added if present."""
1683 for path in self.ancestors:
1684- self.eq.add_watch(path)
1685-
1686- def no_watch_added(_):
1687- """Check."""
1688- for path in self.udf.ancestors:
1689- self.assertEquals(1, self.watches.count(path))
1690-
1691- d = self.lr.start()
1692- d.addCallback(no_watch_added)
1693- d.addCallback(lambda _: self._deferred.callback(None))
1694- return self._deferred
1695+ yield self.eq.add_watch(path)
1696+
1697+ yield self.lr.start()
1698+
1699+ for path in self.udf.ancestors:
1700+ self.assertEquals(1, self.watches.count(path))
1701
1702
1703 class BrokenNodesTests(TwistedBase):
1704
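ParentWatchForUDFTestCase now installs its fake through self.patch instead of saving and restoring the attribute by hand in setUp and tearDown; trial undoes the patch automatically when the test ends. A self-contained sketch of that idiom, where Watcher is a toy stand-in for the event queue:

    # Sketch only: trial's self.patch restores the original attribute after
    # the test, so no manual cleanup is needed. Watcher is a toy stand-in.
    from twisted.internet import defer
    from twisted.trial import unittest


    class Watcher(object):

        def add_watch(self, path):
            return defer.succeed(True)


    class PatchIdiomTestCase(unittest.TestCase):

        @defer.inlineCallbacks
        def test_patched_add_watch(self):
            watcher = Watcher()
            watches = []

            def fake_add(path):
                """Refuse duplicated watches, like the fake in the tests."""
                if path in watches:
                    return defer.succeed(False)
                watches.append(path)
                return defer.succeed(True)

            self.patch(watcher, 'add_watch', fake_add)

            first = yield watcher.add_watch('/tmp/a')
            second = yield watcher.add_watch('/tmp/a')
            self.assertTrue(first)
            self.assertFalse(second)
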
1705=== modified file 'tests/syncdaemon/test_main.py'
1706--- tests/syncdaemon/test_main.py 2011-07-27 17:31:25 +0000
1707+++ tests/syncdaemon/test_main.py 2011-08-04 19:19:24 +0000
1708@@ -21,16 +21,15 @@
1709 import os
1710
1711 from twisted.internet import defer, reactor
1712-
1713-from ubuntuone.syncdaemon import main as main_mod
1714-from ubuntuone.clientdefs import VERSION
1715+from ubuntuone.devtools.handlers import MementoHandler
1716
1717 from contrib.testing.testcase import (
1718 BaseTwistedTestCase, FAKED_CREDENTIALS,
1719 )
1720 from tests.platform import setup_main_test, get_main_params
1721-
1722-from ubuntuone.devtools.handlers import MementoHandler
1723+from ubuntuone.clientdefs import VERSION
1724+from ubuntuone.platform import make_link
1725+from ubuntuone.syncdaemon import main as main_mod
1726
1727
1728 def _get_main_common_params(testcase):
1729@@ -162,7 +161,7 @@
1730 def test_create_dirs_already_exists_symlink_too(self):
1731 """test that creating a Main instance works as expected."""
1732 link = os.path.join(self.root, 'Shared With Me')
1733- os.symlink(self.shares, link)
1734+ make_link(self.shares, link)
1735 self.assertTrue(os.path.exists(link))
1736 self.assertTrue(os.path.islink(link))
1737 self.assertTrue(os.path.exists(self.shares))
1738
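test_main.py now goes through ubuntuone.platform.make_link rather than os.symlink, so the same test body can run where symlinks are unavailable. A rough sketch of what such a wrapper could look like; this is an assumption for illustration only, not the project's actual implementation:

    # Assumed sketch of a cross-platform make_link helper; the real
    # ubuntuone.platform implementation may differ substantially.
    import os
    import sys


    def make_link(target, destination):
        """Create a link at destination pointing at target."""
        if sys.platform == 'win32':
            # Symlinks are not generally available here; a shortcut or
            # junction based approach would go in this branch (assumed).
            raise NotImplementedError('platform specific linking')
        os.symlink(target, destination)
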
1739=== modified file 'tests/syncdaemon/test_sync.py'
1740--- tests/syncdaemon/test_sync.py 2011-07-27 17:31:25 +0000
1741+++ tests/syncdaemon/test_sync.py 2011-08-04 19:19:24 +0000
1742@@ -30,6 +30,7 @@
1743
1744 from twisted.internet import defer
1745 from twisted.python.failure import Failure
1746+from ubuntuone.devtools.testcase import skipIfOS
1747
1748 from contrib.testing.testcase import (
1749 FakeMain,
1750@@ -39,6 +40,11 @@
1751 )
1752
1753 from ubuntuone.devtools.handlers import MementoHandler
1754+from ubuntuone.platform import (
1755+ make_dir,
1756+ open_file,
1757+ stat_path,
1758+)
1759 from ubuntuone.syncdaemon.filesystem_manager import FileSystemManager
1760 from ubuntuone.syncdaemon.tritcask import Tritcask
1761 from ubuntuone.syncdaemon.fsm import fsm as fsm_module
1762@@ -98,7 +104,7 @@
1763 def create_share(self, share_id, share_name, access_level='Modify'):
1764 """Create a share."""
1765 share_path = os.path.join(self.shares_dir, share_name)
1766- os.makedirs(share_path)
1767+ make_dir(share_path, recursive=True)
1768 share = Share(path=share_path, volume_id=share_id,
1769 access_level=access_level)
1770 yield self.fsm.vm.add_share(share)
1771@@ -157,6 +163,7 @@
1772 """Test for mdid reset after a deletion"""
1773 # create a node
1774 path = os.path.join(self.share.path, 'path')
1775+ open_file(path, 'w').close()
1776 self.fsm.create(path, "share", node_id='uuid1')
1777 key = FSKey(self.fsm, path=path)
1778 # fake a conflict and delete the metadata
1779@@ -240,6 +247,8 @@
1780 class TestSync(BaseSync):
1781 """Test for Sync."""
1782
1783+ timeout = 3
1784+
1785 @defer.inlineCallbacks
1786 def setUp(self):
1787 """Set up."""
1788@@ -248,30 +257,20 @@
1789 self.fsm = self.main.fs
1790 self.handler.setLevel(logging.DEBUG)
1791
1792+ @skipIfOS('win32', 'In Windows we cannot unlink opened files.')
1793 def test_deleting_open_files_is_no_cause_for_despair(self):
1794 """test_deleting_open_files_is_no_cause_for_despair."""
1795 def cb(_):
1796 d0 = self.main.wait_for('HQ_HASH_NEW')
1797- f = file(self.root + '/a_file', 'w')
1798- print >> f, 'hola'
1799- os.unlink(self.root + '/a_file')
1800+ fname = os.path.join(self.root, 'a_file')
1801+ f = open_file(fname, 'w')
1802+ f.write('hola')
1803+ os.unlink(fname)
1804 f.close()
1805- print >> file(self.root + '/b_file', 'w'), 'chau'
1806- return d0
1807- d = self.main.wait_for('SYS_LOCAL_RESCAN_DONE')
1808- self.main.start()
1809- d.addCallback(cb)
1810- return d
1811
1812- def test_stomp_deleted_file_is_no_cause_for_despair_either(self):
1813- """test_stomp_deleted_file_is_no_cause_for_despair_either."""
1814- def cb(_):
1815- d0 = self.main.wait_for('HQ_HASH_NEW')
1816- f = file(self.root + '/a_file', 'w')
1817- print >> f, 'hola'
1818- os.unlink(self.root + '/a_file')
1819- f.close()
1820- print >> file(self.root + '/b_file', 'w'), 'chau'
1821+ fname = os.path.join(self.root, 'b_file')
1822+ f = open_file(fname, 'w')
1823+ f.write('chau')
1824 return d0
1825 d = self.main.wait_for('SYS_LOCAL_RESCAN_DONE')
1826 self.main.start()
1827@@ -481,6 +480,7 @@
1828
1829 # create the node
1830 somepath = os.path.join(self.root, 'somepath')
1831+ open_file(somepath, 'w').close()
1832 mdid = self.fsm.create(somepath, '', node_id='node_id')
1833
1834 # send the event and check args after the ssmr instance
1835@@ -536,6 +536,7 @@
1836
1837 # create the node
1838 somepath = os.path.join(self.root, 'somepath')
1839+ os.mkdir(somepath)
1840 mdid = self.fsm.create(somepath, '', node_id='node_id', is_dir=True)
1841
1842 # send the event and check args after the ssmr instance
1843@@ -631,7 +632,7 @@
1844 'parent_id', 'name'])
1845 self.assertEqual(r.share_id, '')
1846 self.assertEqual(r.node_id, 'node_id')
1847- self.assertEqual(r.path, 'somepath/name')
1848+ self.assertEqual(r.path, os.path.join('somepath', 'name'))
1849
1850 def test_SV_FILE_NEW_node_no_id(self):
1851 """Handle SV_FILE_NEW having a node without node_id."""
1852@@ -722,8 +723,8 @@
1853 # create testing data
1854 somepath = os.path.join(self.root, 'somepath')
1855 open(somepath, "w").close()
1856- oldstat = os.stat(somepath)
1857- f = open(somepath, "w")
1858+ oldstat = stat_path(somepath)
1859+ f = open_file(somepath, "w")
1860 f.write("new")
1861 f.close()
1862
1863@@ -1459,8 +1460,12 @@
1864
1865
1866 class TestHandleAqDeltaOk(TestSyncDelta):
1867- """Sync.handle_AQ_DELTA_OK handles the recepcion of a new delta and applies
1868- all the changes that came from it."""
1869+ """Test case for Sync.handle_AQ_DELTA_OK.
1870+
1871+ Assert that it handles the reception of a new delta and applies all the
1872+ changes that came from it.
1873+
1874+ """
1875
1876 def test_not_full(self):
1877 """If we dont have a full delta, we need to ask for another one."""
1878
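The test_sync.py changes drop the print >> style writes and hard-coded '/' separators in favour of explicit write calls on paths built with os.path.join, so the same assertions hold with either separator. A tiny sketch of that pattern, with plain open standing in for the platform open_file wrapper:

    # Sketch only: plain open() stands in for ubuntuone.platform.open_file.
    import os
    import tempfile


    def write_file(path, data):
        """Write data with an explicit open/write/close cycle."""
        f = open(path, 'w')
        f.write(data)
        f.close()


    root = tempfile.mkdtemp()
    fname = os.path.join(root, 'a_file')  # no hard-coded '/' separator
    write_file(fname, 'hola')
    assert os.path.exists(fname)
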
1879=== modified file 'tests/syncdaemon/test_vm.py'
1880--- tests/syncdaemon/test_vm.py 2011-06-24 20:24:51 +0000
1881+++ tests/syncdaemon/test_vm.py 2011-08-04 19:19:24 +0000
1882@@ -31,9 +31,8 @@
1883 from twisted.internet import defer, reactor
1884
1885 from contrib.testing.testcase import (
1886+ BaseTwistedTestCase,
1887 FakeMain,
1888- BaseTwistedTestCase,
1889- environ,
1890 )
1891 from ubuntuone.devtools.handlers import MementoHandler
1892 from ubuntuone.storageprotocol import volumes, request
1893@@ -62,6 +61,7 @@
1894 from ubuntuone.platform import (
1895 make_link,
1896 make_dir,
1897+ path_exists,
1898 remove_link,
1899 )
1900
1901@@ -88,6 +88,21 @@
1902 self.partials_dir = self.mktemp('partials_dir')
1903 self.main = FakeMain(self.root_dir, self.shares_dir,
1904 self.data_dir, self.partials_dir)
1905+
1906+ self.watches = set() # keep track of added watches
1907+
1908+ orig_add_watch = self.main.event_q.add_watch
1909+ def fake_add_watch(path):
1910+ self.watches.add(path)
1911+ return orig_add_watch(path)
1912+
1913+ orig_rm_watch = self.main.event_q.rm_watch
1914+ def fake_rm_watch(path):
1915+ self.watches.remove(path)
1916+ return orig_rm_watch(path)
1917+
1918+ self.patch(self.main.event_q, 'add_watch', fake_add_watch)
1919+ self.patch(self.main.event_q, 'rm_watch', fake_rm_watch)
1920 self.vm = self.main.vm
1921 self.addCleanup(self.main.shutdown)
1922
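The setUp above stops poking at event_q.monitor._general_watchs and instead wraps add_watch and rm_watch so every call is mirrored into self.watches while still delegating to the originals. A standalone sketch of that wrapping pattern; StubEventQueue and track_watches are placeholder names:

    # Sketch of the watch-tracking wrappers; StubEventQueue is a stand-in
    # for main.event_q, track_watches is a placeholder helper name.
    from twisted.internet import defer


    class StubEventQueue(object):

        def add_watch(self, path):
            return defer.succeed(True)

        def rm_watch(self, path):
            return None


    def track_watches(event_q, watches):
        """Mirror added/removed watches into the given set, then delegate."""
        orig_add, orig_rm = event_q.add_watch, event_q.rm_watch

        def fake_add_watch(path):
            watches.add(path)
            return orig_add(path)

        def fake_rm_watch(path):
            watches.discard(path)
            return orig_rm(path)

        event_q.add_watch = fake_add_watch
        event_q.rm_watch = fake_rm_watch


    # usage:
    #     watches = set()
    #     track_watches(StubEventQueue(), watches)
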
1923@@ -96,6 +111,10 @@
1924 self.vm.log.addHandler(self.handler)
1925 self.addCleanup(self.vm.log.removeHandler, self.handler)
1926
1927+ old_home = os.environ['HOME']
1928+ os.environ['HOME'] = self.home_dir
1929+ self.addCleanup(os.environ.__setitem__, 'HOME', old_home)
1930+
1931 @defer.inlineCallbacks
1932 def tearDown(self):
1933 """ cleanup main and remove the temp dir """
1934@@ -126,55 +145,76 @@
1935 callback(kwargs)
1936
1937 listener = Listener()
1938- setattr(listener, 'handle_'+event, listener._handle_event)
1939+ setattr(listener, 'handle_' + event, listener._handle_event)
1940 event_q.subscribe(listener)
1941 return listener
1942
1943+ def _create_udf_volume(self, volume_id=None, node_id=None,
1944+ suggested_path=u'~/Documents',
1945+ generation=None, free_bytes=100):
1946+ """Return a new UDFVolume."""
1947+ # match protocol expected types
1948+ assert isinstance(suggested_path, unicode)
1949+
1950+ if volume_id is None:
1951+ volume_id = str(uuid.uuid4())
1952+ if node_id is None:
1953+ node_id = str(uuid.uuid4())
1954+
1955+ volume = volumes.UDFVolume(volume_id=volume_id, node_id=node_id,
1956+ generation=generation,
1957+ free_bytes=free_bytes,
1958+ suggested_path=suggested_path)
1959+ return volume
1960+
1961 def _create_udf(self, volume_id=None, node_id=None,
1962- suggested_path='~/Documents',
1963+ suggested_path=u'~/Documents',
1964 subscribed=True, generation=None, free_bytes=100):
1965 """Create an UDF and returns it and the volume"""
1966- with environ('HOME', self.home_dir):
1967- path = get_udf_path(suggested_path)
1968- # make sure suggested_path is unicode
1969- if isinstance(suggested_path, str):
1970- suggested_path = suggested_path.decode('utf-8')
1971-
1972- if volume_id is None:
1973- volume_id = str(uuid.uuid4())
1974- if node_id is None:
1975- node_id = str(uuid.uuid4())
1976-
1977- volume = volumes.UDFVolume(volume_id, node_id, generation,
1978- free_bytes, suggested_path)
1979- udf = UDF.from_udf_volume(volume, path)
1980+ # match protocol expected types
1981+ assert isinstance(suggested_path, unicode)
1982+
1983+ volume = self._create_udf_volume(volume_id=volume_id, node_id=node_id,
1984+ suggested_path=suggested_path,
1985+ generation=generation,
1986+ free_bytes=free_bytes)
1987+
1988+ udf = UDF.from_udf_volume(volume, get_udf_path(suggested_path))
1989 udf.subscribed = subscribed
1990 udf.generation = generation
1991 return udf
1992
1993 def _create_share_volume(self, volume_id=None, node_id=None,
1994- name='fake_share', generation=None, free_bytes=10,
1995+ name=u'fake_share', generation=None, free_bytes=10,
1996 access_level='View', accepted=True,
1997 other_visible_name='visible_username'):
1998 """Return a new ShareVolume."""
1999+ # match protocol expected types
2000+ assert isinstance(name, unicode)
2001+
2002 if volume_id is None:
2003 volume_id = str(uuid.uuid4())
2004 if node_id is None:
2005 node_id = str(uuid.uuid4())
2006
2007- share_volume = volumes.ShareVolume(volume_id=volume_id,
2008- node_id=node_id, generation=generation,
2009- free_bytes=free_bytes, direction='to_me',
2010- share_name=name, other_username='username',
2011- other_visible_name=other_visible_name,
2012- accepted=accepted, access_level=access_level)
2013- return share_volume
2014+ volume = volumes.ShareVolume(volume_id=volume_id,
2015+ node_id=node_id, generation=generation,
2016+ free_bytes=free_bytes, direction='to_me',
2017+ share_name=name,
2018+ other_username='username',
2019+ other_visible_name=other_visible_name,
2020+ accepted=accepted,
2021+ access_level=access_level)
2022+ return volume
2023
2024- def _create_share(self, volume_id=None, node_id=None, name='fake_share',
2025+ def _create_share(self, volume_id=None, node_id=None, name=u'fake_share',
2026 generation=None, free_bytes=1024,
2027 access_level='View', accepted=True, subscribed=False,
2028 other_visible_name='visible_username'):
2029 """Return a new Share."""
2030+ # match protocol expected types
2031+ assert isinstance(name, unicode)
2032+
2033 share_volume = self._create_share_volume(volume_id=volume_id,
2034 node_id=node_id, name=name, generation=generation,
2035 free_bytes=free_bytes, access_level=access_level,
2036@@ -265,7 +305,8 @@
2037 self.assertEqual(mdobj.share_id, share.volume_id)
2038
2039 # check that there isn't a watch in the share (subscribed is False)
2040- self.assertNotIn(share.path, self.main.event_q.monitor._general_watchs)
2041+ self.assertNotIn(share.path, self.watches,
2042+ 'watch for %r should not be present.' % share.path)
2043
2044 # remove the share
2045 self.vm.share_deleted(share.volume_id)
2046@@ -284,7 +325,8 @@
2047 self.assertEqual(mdobj.share_id, share.volume_id)
2048
2049 # check that there isn't a watch in the share (subscribed is False)
2050- self.assertNotIn(share.path, self.main.event_q.monitor._general_watchs)
2051+ self.assertNotIn(share.path, self.watches,
2052+ 'watch for %r should not be present.' % share.path)
2053
2054 @defer.inlineCallbacks
2055 def test_add_share_access_level_modify(self):
2056@@ -304,7 +346,8 @@
2057 self.assertEqual(mdobj.share_id, share.volume_id)
2058
2059 # check that there isn't a watch in the share (subscribed is False)
2060- self.assertNotIn(share.path, self.main.event_q.monitor._general_watchs)
2061+ self.assertNotIn(share.path, self.watches,
2062+ 'watch for %r should not be present.' % share.path)
2063
2064 # remove the share
2065 self.vm.share_deleted(share.volume_id)
2066@@ -323,7 +366,8 @@
2067 self.assertEqual(mdobj.share_id, share.volume_id)
2068
2069 # check that there is a watch in the share
2070- self.assertIn(share.path, self.main.event_q.monitor._general_watchs)
2071+ self.assertIn(share.path, self.watches,
2072+ 'watch for %r should be present.' % share.path)
2073
2074 @defer.inlineCallbacks
2075 def test_add_share_view_does_not_local_scan_share(self):
2076@@ -344,7 +388,8 @@
2077 self.assertEqual(mdobj.node_id, share.node_id)
2078 self.assertEqual(mdobj.share_id, share.volume_id)
2079 self.assertTrue(self.vm.shares[share.volume_id].subscribed)
2080- self.assertNotIn(share.path, self.main.event_q.monitor._general_watchs)
2081+ self.assertNotIn(share.path, self.watches,
2082+ 'watch for %r should not be present.' % share.path)
2083
2084 @defer.inlineCallbacks
2085 def test_add_share_modify_scans_share(self):
2086@@ -381,7 +426,8 @@
2087
2088 self.assertEqual(2, len(self.vm.shares))
2089 self.assertTrue(self.vm.shares[share.volume_id].subscribed)
2090- self.assertIn(share.path, self.main.event_q.monitor._general_watchs)
2091+ self.assertIn(share.path, self.watches,
2092+ 'watch for %r should be present.' % share.path)
2093
2094 @defer.inlineCallbacks
2095 def test_share_deleted(self):
2096@@ -400,7 +446,8 @@
2097 # check that the share isn't in the fsm metadata
2098 self.assertRaises(KeyError, self.main.fs.get_by_path, share.path)
2099 # check that there isn't a watch in the share
2100- self.assertNotIn(share.path, self.main.event_q.monitor._general_watchs)
2101+ self.assertNotIn(share.path, self.watches,
2102+ 'watch for %r should not be present.' % share.path)
2103
2104 @defer.inlineCallbacks
2105 def test_share_deleted_with_content(self):
2106@@ -416,39 +463,41 @@
2107 for i, dir in enumerate(dirs):
2108 path = os.path.join(share.path, dir)
2109 with allow_writes(os.path.split(share.path)[0]):
2110- if not os.path.exists(path):
2111+ if not path_exists(path):
2112 make_dir(path, recursive=True)
2113 self.main.fs.create(path, share.volume_id, is_dir=True)
2114- self.main.fs.set_node_id(path, 'dir_node_id'+str(i))
2115+ self.main.fs.set_node_id(path, 'dir_node_id' + str(i))
2116 # add a inotify watch to the dir
2117- yield self.main.event_q.add_watch(path)
2118+ yield self.vm._add_watch(path)
2119 files = ['a_file', os.path.join('dir', 'file'),
2120 os.path.join('dir','subdir','file')]
2121 for i, file in enumerate(files):
2122 path = os.path.join(share.path, file)
2123 self.main.fs.create(path, share.volume_id)
2124- self.main.fs.set_node_id(path, 'file_node_id'+str(i))
2125+ self.main.fs.set_node_id(path, 'file_node_id' + str(i))
2126
2127 paths = list(self.main.fs.get_paths_starting_with(share.path))
2128- self.assertEqual(len(paths), len(dirs+files)+1)
2129+ self.assertEqual(len(paths), len(dirs + files) + 1)
2130 for path, is_dir in paths:
2131 self.assertTrue(self.main.fs.get_by_path(path))
2132 if is_dir:
2133- self.assertIn(path, self.main.event_q.monitor._general_watchs)
2134+ self.assertIn(path, self.watches,
2135+ 'watch for %r should be present.' % path)
2136 self.assertIn(share.volume_id, self.vm.shares)
2137 self.vm.share_deleted(share.volume_id)
2138 self.assertNotIn(share.volume_id, self.vm.shares)
2139 for path, is_dir in paths:
2140 self.assertRaises(KeyError, self.main.fs.get_by_path, path)
2141 if is_dir:
2142- self.assertNotIn(path,
2143- self.main.event_q.monitor._general_watchs)
2144+ self.assertNotIn(path, self.watches,
2145+ 'watch for %r should not be present.' % path)
2146
2147 self.assertEqual(1, len(self.vm.shares))
2148 # check that the share isn't in the fsm metadata
2149 self.assertRaises(KeyError, self.main.fs.get_by_path, share.path)
2150 # check that there isn't a watch in the share
2151- self.assertNotIn(share.path, self.main.event_q.monitor._general_watchs)
2152+ self.assertNotIn(share.path, self.watches,
2153+ 'watch for %r should not be present.' % share.path)
2154 # check that there isn't any share childs around
2155 for path, _ in paths:
2156 self.assertRaises(KeyError, self.main.fs.get_by_path, path)
2157@@ -549,7 +598,7 @@
2158 'Modify')
2159 # initialize the the root
2160 self.vm._got_root('root_uuid')
2161- shared_dir = os.path.abspath(os.path.join(self.root_dir, 'shared_dir'))
2162+ shared_dir = os.path.join(self.root_dir, 'shared_dir')
2163 self.main.fs.create(path=shared_dir, share_id="", is_dir=True)
2164 self.main.fs.set_node_id(shared_dir, shared_response.subtree)
2165 response = ListShares(None)
2166@@ -601,7 +650,7 @@
2167 self.assertEquals('View', access_level)
2168 self.assertTrue(marker is not None)
2169 share = self.vm.marker_share_map[marker]
2170- self.assertEquals(os.path.abspath(path), share.path)
2171+ self.assertEquals(path, share.path)
2172 self.assertEquals('View', share.access_level)
2173 self.assertEquals(marker, share.volume_id)
2174 self.assertEquals('fake_user', share.other_username)
2175@@ -627,7 +676,7 @@
2176 @defer.inlineCallbacks
2177 def test_create_share_missing_node_id(self):
2178 """Test VolumeManager.create_share in the case of missing node_id."""
2179- path = os.path.abspath(os.path.join(self.vm.root.path, 'shared_path'))
2180+ path = os.path.join(self.vm.root.path, 'shared_path')
2181 self.main.fs.create(path, "")
2182 expected_node_id = uuid.uuid4()
2183 expected_share_id = uuid.uuid4()
2184@@ -829,10 +878,12 @@
2185 """Test for VolumeManager._remove_watch"""
2186 path = os.path.join(self.root_dir, 'dir')
2187 make_dir(path, recursive=True)
2188- yield self.main.event_q.add_watch(path)
2189- self.assertIn(path, self.main.event_q.monitor._general_watchs)
2190+ yield self.vm._add_watch(path)
2191+ self.assertIn(path, self.watches,
2192+ 'watch for %r should be present.' % path)
2193 self.vm._remove_watch(path)
2194- self.assertNotIn(path, self.main.event_q.monitor._general_watchs)
2195+ self.assertNotIn(path, self.watches,
2196+ 'watch for %r should not be present.' % path)
2197
2198 @defer.inlineCallbacks
2199 def test_remove_watches(self):
2200@@ -841,20 +892,22 @@
2201 paths = [os.path.join(self.root_dir, dir) for dir in dirs]
2202 # create metadata and add watches
2203 for i, path in enumerate(paths):
2204- if not os.path.exists(path):
2205+ if not path_exists(path):
2206 make_dir(path, recursive=True)
2207 self.main.fs.create(path, "", is_dir=True)
2208- self.main.fs.set_node_id(path, 'dir_node_id'+str(i))
2209- yield self.main.event_q.add_watch(path)
2210+ self.main.fs.set_node_id(path, 'dir_node_id' + str(i))
2211+ yield self.vm._add_watch(path)
2212 # insert the root_dir in the list
2213- yield self.main.event_q.add_watch(self.root_dir)
2214+ yield self.vm._add_watch(self.root_dir)
2215 paths.insert(0, self.root_dir)
2216 for path in paths:
2217- self.assertIn(path, self.main.event_q.monitor._general_watchs)
2218+ self.assertIn(path, self.watches,
2219+ 'watch for %r should be present.' % path)
2220 # remove the watches
2221- self.vm._remove_watches(os.path.abspath(self.root_dir))
2222+ self.vm._remove_watches(self.root_dir)
2223 for path in paths:
2224- self.assertNotIn(path, self.main.event_q.monitor._general_watchs)
2225+ self.assertNotIn(path, self.watches,
2226+ 'watch for %r should not be present.' % path)
2227
2228 @defer.inlineCallbacks
2229 def test_remove_watches_after_dir_rename(self):
2230@@ -863,29 +916,31 @@
2231 os.mkdir(path)
2232 self.main.fs.create(path, "", is_dir=True)
2233 self.main.fs.set_node_id(path, 'dir_node_id')
2234- yield self.main.event_q.add_watch(path)
2235+ yield self.vm._add_watch(path)
2236
2237- os.rename(path, path+'.old')
2238+ os.rename(path, path + '.old')
2239 # remove the watches
2240- self.vm._remove_watches(os.path.abspath(self.root_dir))
2241+ self.vm._remove_watches(self.root_dir)
2242
2243- self.assertNotIn(path, self.main.event_q.monitor._general_watchs,
2244- 'watch should not be present')
2245+ self.assertNotIn(path, self.watches,
2246+ 'watch for %r should not be present' % path)
2247
2248 @defer.inlineCallbacks
2249 def test_delete_fsm_object(self):
2250 """Test for VolumeManager._delete_fsm_object"""
2251- path = os.path.abspath(os.path.join(self.root_dir, 'dir'))
2252+ path = os.path.join(self.root_dir, 'dir')
2253 make_dir(path, recursive=True)
2254 self.main.fs.create(path, "", is_dir=True)
2255 self.main.fs.set_node_id(path, 'dir_node_id')
2256- yield self.main.event_q.add_watch(path)
2257- self.assertIn(path, self.main.event_q.monitor._general_watchs)
2258+ yield self.vm._add_watch(path)
2259+ self.assertIn(path, self.watches,
2260+ 'watch for %r should be present.' % path)
2261 self.assertTrue(self.main.fs.get_by_path(path), path)
2262 # remove the watch
2263 self.vm._delete_fsm_object(path)
2264 self.assertRaises(KeyError, self.main.fs.get_by_path, path)
2265- self.assertIn(path, self.main.event_q.monitor._general_watchs)
2266+ self.assertIn(path, self.watches,
2267+ 'watch for %r should be present.' % path)
2268
2269 def test_create_fsm_object(self):
2270 """Test for VolumeManager._create_fsm_object"""
2271@@ -927,12 +982,8 @@
2272 @defer.inlineCallbacks
2273 def test_handle_SYS_QUOTA_EXCEEDED_udf(self):
2274 """Test that it updates the free space when error is on an UDF."""
2275- # create the udf
2276- with environ('HOME', self.home_dir):
2277- path = get_udf_path(u'~/UDF')
2278 volume_id = str(uuid.uuid4())
2279- volume = volumes.UDFVolume(volume_id, 'udf_node_id', None, 0, u'~/UDF')
2280- udf = UDF.from_udf_volume(volume, path)
2281+ udf = self._create_udf(volume_id=volume_id)
2282 yield self.vm.add_udf(udf)
2283
2284 # call the handler
2285@@ -972,7 +1023,7 @@
2286 self.assertTrue(share.accepted, 'accepted != True')
2287 self.assertTrue(self.main.fs.get_by_path(share.path),
2288 'No metadata for share root node.')
2289- self.assertTrue(os.path.exists(share.path),
2290+ self.assertTrue(path_exists(share.path),
2291 'share path missing on disk!')
2292
2293 @defer.inlineCallbacks
2294@@ -1037,12 +1088,12 @@
2295 share = self._create_share(access_level=self.access_level,
2296 subscribed=False)
2297 yield self.vm.add_share(share)
2298- self.assertFalse(os.path.exists(share.path))
2299+ self.assertFalse(path_exists(share.path))
2300 self.assertFalse(self.vm.shares[share.id].subscribed)
2301 # subscribe to it
2302 yield self.vm.subscribe_share(share.id)
2303 self.assertTrue(self.vm.shares[share.id].subscribed)
2304- self.assertTrue(os.path.exists(share.path))
2305+ self.assertTrue(path_exists(share.path))
2306
2307 @defer.inlineCallbacks
2308 def test_subscribe_share_missing_volume(self):
2309@@ -1081,12 +1132,12 @@
2310 path = os.path.join(share.path, dir)
2311 with allow_writes(os.path.split(share.path)[0]):
2312 with allow_writes(share.path):
2313- if not os.path.exists(path):
2314+ if not path_exists(path):
2315 make_dir(path, recursive=True)
2316 self.main.fs.create(path, share.volume_id, is_dir=True)
2317- self.main.fs.set_node_id(path, 'dir_node_id'+str(i))
2318+ self.main.fs.set_node_id(path, 'dir_node_id' + str(i))
2319 # add a inotify watch to the dir
2320- yield self.main.event_q.add_watch(path)
2321+ yield self.vm._add_watch(path)
2322 files = ['a_file', os.path.join('dir', 'file'),
2323 os.path.join('dir','subdir','file')]
2324 for i, file in enumerate(files):
2325@@ -1095,9 +1146,9 @@
2326 with allow_writes(share.path):
2327 open(path, 'w').close()
2328 self.main.fs.create(path, share.volume_id)
2329- self.main.fs.set_node_id(path, 'file_node_id'+str(i))
2330+ self.main.fs.set_node_id(path, 'file_node_id' + str(i))
2331 paths = self.main.fs.get_paths_starting_with(share.path)
2332- self.assertEquals(len(paths), len(dirs+files)+1)
2333+ self.assertEquals(len(paths), len(dirs + files) + 1)
2334
2335 # unsubscribe from it
2336 self.vm.unsubscribe_share(share.volume_id)
2337@@ -1108,14 +1159,15 @@
2338 self.assertTrue(self.main.fs.get_by_path(share.path))
2339 # get the childs (should be an empty list)
2340 paths = list(self.main.fs.get_paths_starting_with(share.path))
2341- self.assertEquals(len(dirs+files)+1, len(paths))
2342+ self.assertEquals(len(dirs + files) + 1, len(paths))
2343 # check that there isn't a watch in the share
2344- self.assertNotIn(share.path, self.main.event_q.monitor._general_watchs)
2345+ self.assertNotIn(share.path, self.watches,
2346+ 'watch for %r should not be present.' % share.path)
2347 # check that the childs don't have a watch
2348 for path, is_dir in paths:
2349 if is_dir:
2350- self.assertNotIn(path,
2351- self.main.event_q.monitor._general_watchs)
2352+ self.assertNotIn(path, self.watches,
2353+ 'watch for %r should not be present.' % path)
2354
2355 @defer.inlineCallbacks
2356 def _test_subscribe_share_generations(self, share):
2357@@ -1165,7 +1217,7 @@
2358 share = self._create_share(access_level=self.access_level,
2359 subscribed=False)
2360 yield self.vm.add_share(share)
2361- self.assertFalse(os.path.exists(share.path))
2362+ self.assertFalse(path_exists(share.path))
2363 self.assertFalse(self.vm.shares[share.id].subscribed)
2364 yield self.vm.subscribe_share(share.id)
2365 yield self.vm.unsubscribe_share(share.id)
2366@@ -1191,28 +1243,28 @@
2367 dirs = ['dir', os.path.join('dir','subdir'),
2368 os.path.join('dir', 'empty_dir')]
2369 for i, dir in enumerate(dirs):
2370- path = os.path.abspath(os.path.join(share.path, dir))
2371+ path = os.path.join(share.path, dir)
2372 with allow_writes(os.path.split(share.path)[0]):
2373 with allow_writes(share.path):
2374- if not os.path.exists(path):
2375+ if not path_exists(path):
2376 make_dir(path, recursive=True)
2377 self.main.fs.create(path, share.volume_id, is_dir=True)
2378- self.main.fs.set_node_id(path, 'dir_node_id'+str(i))
2379+ self.main.fs.set_node_id(path, 'dir_node_id' + str(i))
2380 files = ['a_file', os.path.join('dir', 'file'),
2381 os.path.join('dir', 'subdir', 'file')]
2382 for i, path in enumerate(files):
2383- path = os.path.abspath(os.path.join(share.path, path))
2384+ path = os.path.join(share.path, path)
2385 with allow_writes(os.path.split(share.path)[0]):
2386 with allow_writes(share.path):
2387 open(path, 'w').close()
2388 self.main.fs.create(path, share.volume_id)
2389- self.main.fs.set_node_id(path, 'file_node_id'+str(i))
2390+ self.main.fs.set_node_id(path, 'file_node_id' + str(i))
2391 paths = list(self.main.fs.get_paths_starting_with(share.path))
2392 # add a inotify watch to the dirs
2393 for path, is_dir in paths:
2394 if is_dir:
2395- yield self.main.event_q.add_watch(path)
2396- self.assertEquals(len(paths), len(dirs+files)+1, paths)
2397+ yield self.vm._add_watch(path)
2398+ self.assertEquals(len(paths), len(dirs + files) + 1, paths)
2399
2400 # unsubscribe from it
2401 self.vm.unsubscribe_share(share.volume_id)
2402@@ -1222,28 +1274,30 @@
2403 # check that the share is in the fsm metadata
2404 self.assertTrue(self.main.fs.get_by_path(share.path))
2405 # check that there isn't a watch in the share
2406- self.assertNotIn(share.path, self.main.event_q.monitor._general_watchs)
2407+ self.assertNotIn(share.path, self.watches,
2408+ 'watch for %r should not be present.' % share.path)
2409 # check that the childs don't have a watch
2410 for path, is_dir in paths:
2411 if is_dir:
2412- self.assertNotIn(path,
2413- self.main.event_q.monitor._general_watchs)
2414+ self.assertNotIn(path, self.watches,
2415+ 'watch for %r should not be present.' % path)
2416 # check the childs
2417 paths = self.main.fs.get_paths_starting_with(share.path)
2418- self.assertEquals(len(dirs+files)+1, len(paths))
2419+ self.assertEquals(len(dirs + files) + 1, len(paths))
2420 # resubscribe to it
2421 yield self.vm.subscribe_share(share.volume_id)
2422 paths = list(self.main.fs.get_paths_starting_with(share.path))
2423 # we should only have the dirs, as the files metadata is
2424 # delete by local rescan (both hashes are '')
2425- self.assertEquals(len(dirs)+1, len(paths))
2426+ self.assertEquals(len(dirs) + 1, len(paths))
2427 # check that there is a watch in the share
2428- self.assertIn(share.path, self.main.event_q.monitor._general_watchs)
2429+ self.assertIn(share.path, self.watches,
2430+ 'watch for %r should be present.' % share.path)
2431 # check that the child dirs have a watch
2432 for path, is_dir in paths:
2433 if is_dir:
2434- self.assertIn(path, self.main.event_q.monitor._general_watchs,
2435- '%s should have a watch' % path)
2436+ self.assertIn(path, self.watches,
2437+ 'watch for %r should be present.' % path)
2438 self.vm._remove_watch(path)
2439
2440
2441@@ -1267,7 +1321,7 @@
2442 share = self.vm.shares['share_id']
2443 shouldbe_dir = os.path.join(self.shares_dir,
2444 get_share_dir_name(share_response))
2445- self.assertEquals(os.path.abspath(shouldbe_dir), share.path)
2446+ self.assertEquals(shouldbe_dir, share.path)
2447
2448 def test_handle_SHARES_visible_username(self):
2449 """test the handling of AQ_SHARE_LIST with non-ascii visible uname."""
2450@@ -1286,7 +1340,7 @@
2451 share = self.vm.shares['share_id']
2452 shouldbe_dir = os.path.join(self.shares_dir,
2453 get_share_dir_name(share_response))
2454- self.assertEquals(os.path.abspath(shouldbe_dir), share.path)
2455+ self.assertEquals(shouldbe_dir, share.path)
2456
2457 def test_handle_SV_SHARE_CHANGED_sharename(self):
2458 """test the handling of SV_SHARE_CHANGED for non-ascii share name."""
2459@@ -1298,8 +1352,8 @@
2460 self.vm.handle_SV_SHARE_CHANGED(info=share_holder)
2461 shouldbe_dir = os.path.join(self.shares_dir,
2462 get_share_dir_name(share_holder))
2463- self.assertEquals(os.path.abspath(shouldbe_dir),
2464- os.path.abspath(self.vm.shares['share_id'].path))
2465+ self.assertEquals(shouldbe_dir,
2466+ self.vm.shares['share_id'].path)
2467
2468 def test_handle_SV_SHARE_CHANGED_visible(self):
2469 """test the handling of SV_SHARE_CHANGED for non-ascii visible name."""
2470@@ -1311,42 +1365,34 @@
2471 self.vm.handle_SV_SHARE_CHANGED(info=share_holder)
2472 shouldbe_dir = os.path.join(self.shares_dir,
2473 get_share_dir_name(share_holder))
2474- self.assertEquals(os.path.abspath(shouldbe_dir),
2475- os.path.abspath(self.vm.shares['share_id'].path))
2476+ self.assertEquals(shouldbe_dir,
2477+ self.vm.shares['share_id'].path)
2478
2479
2480 class VolumeManagerVolumesTests(BaseVolumeManagerTests):
2481 """Test UDF/Volumes bits of the VolumeManager."""
2482
2483- @defer.inlineCallbacks
2484- def setUp(self):
2485- """Setup the test."""
2486- yield super(VolumeManagerVolumesTests, self).setUp()
2487- old_home = os.environ['HOME']
2488- os.environ['HOME'] = self.home_dir
2489- self.addCleanup(os.environ.__setitem__, 'HOME', old_home)
2490-
2491 def test_udf_ancestors(self):
2492 """UDF's ancestors are correctly returned."""
2493- suggested_path = '~/Documents/Reading/Books/PDFs'
2494- expected = [u'~', u'~/Documents', u'~/Documents/Reading',
2495- u'~/Documents/Reading/Books']
2496+ suggested_path = u'~/Documents/Reading Años/Books/PDFs'
2497+ expected = [u'~',
2498+ os.path.join(u'~', u'Documents'),
2499+ os.path.join(u'~', u'Documents', u'Reading Años'),
2500+ os.path.join(u'~', u'Documents', u'Reading Años', u'Books')]
2501+ expected = [os.path.expanduser(p).encode('utf8') for p in expected]
2502
2503- with environ('HOME', self.home_dir):
2504- udf = self._create_udf(suggested_path=suggested_path)
2505- self.assertEquals(map(os.path.abspath,
2506- map(os.path.expanduser, expected)),
2507- udf.ancestors)
2508+ udf = self._create_udf(suggested_path=suggested_path)
2509+ self.assertEquals(expected, udf.ancestors)
2510
2511 @defer.inlineCallbacks
2512 def test_add_udf(self):
2513 """Test for VolumeManager.add_udf."""
2514- suggested_path = "suggested_path"
2515- udf = self._create_udf(suggested_path='~/' + suggested_path,
2516+ suggested_path = u"~/suggested_path"
2517+ udf = self._create_udf(suggested_path=suggested_path,
2518 subscribed=False)
2519 yield self.vm.add_udf(udf)
2520- self.assertEquals(os.path.abspath(os.path.join(self.home_dir, suggested_path)),
2521- os.path.abspath(udf.path))
2522+ path = get_udf_path(suggested_path)
2523+ self.assertEquals(path, udf.path)
2524 self.assertEquals(1, len(self.vm.udfs))
2525 # check that the UDF is in the fsm metadata
2526 mdobj = self.main.fs.get_by_path(udf.path)
2527@@ -1354,21 +1400,23 @@
2528 self.assertEqual(mdobj.share_id, udf.volume_id)
2529 # check that there isn't a watch in the UDF (we aren't
2530 # subscribed to it)
2531- self.assertNotIn(udf.path, self.main.event_q.monitor._general_watchs)
2532+ self.assertNotIn(udf.path, self.watches,
2533+ 'watch for %r should not be present.' % udf.path)
2534 # remove the udf
2535 self.vm.udf_deleted(udf.volume_id)
2536 # add it again, but this time with subscribed = True
2537 udf.subscribed = True
2538 yield self.vm.add_udf(udf)
2539- self.assertEquals(os.path.abspath(os.path.join(self.home_dir, suggested_path)),
2540- os.path.abspath(udf.path))
2541+ path = get_udf_path(suggested_path)
2542+ self.assertEquals(path, udf.path)
2543 self.assertEquals(1, len(self.vm.udfs))
2544 # check that the UDF is in the fsm metadata
2545 mdobj = self.main.fs.get_by_path(udf.path)
2546 self.assertEqual(mdobj.node_id, udf.node_id)
2547 self.assertEqual(mdobj.share_id, udf.volume_id)
2548 # check that there is a watch in the UDF
2549- self.assertIn(udf.path, self.main.event_q.monitor._general_watchs)
2550+ self.assertIn(udf.path, self.watches,
2551+ 'watch for %r should be present.' % udf.path)
2552
2553 @defer.inlineCallbacks
2554 def test_add_udf_calls_AQ(self):
2555@@ -1389,7 +1437,8 @@
2556 mdobj = self.main.fs.get_by_path(udf.path)
2557 self.assertEqual(mdobj.node_id, udf.node_id)
2558 self.assertEqual(mdobj.share_id, udf.volume_id)
2559- self.assertIn(udf.path, self.main.event_q.monitor._general_watchs)
2560+ self.assertIn(udf.path, self.watches,
2561+ 'watch for %r should be present.' % udf.path)
2562
2563 @defer.inlineCallbacks
2564 def test_udf_deleted(self):
2565@@ -1402,7 +1451,8 @@
2566 # check that the UDF isn't in the fsm metadata
2567 self.assertRaises(KeyError, self.main.fs.get_by_path, udf.path)
2568 # check that there isn't a watch in the UDF
2569- self.assertNotIn(udf.path, self.main.event_q.monitor._general_watchs)
2570+ self.assertNotIn(udf.path, self.watches,
2571+ 'watch for %r should not be present.' % udf.path)
2572
2573 @defer.inlineCallbacks
2574 def test_udf_deleted_with_content(self):
2575@@ -1419,26 +1469,27 @@
2576 os.path.join('dir','empty_dir')]
2577 for i, dir in enumerate(dirs):
2578 path = os.path.join(udf.path, dir)
2579- if not os.path.exists(path):
2580+ if not path_exists(path):
2581 make_dir(path, recursive=True)
2582 self.main.fs.create(path, udf.volume_id, is_dir=True)
2583- self.main.fs.set_node_id(path, 'dir_node_id'+str(i))
2584+ self.main.fs.set_node_id(path, 'dir_node_id' + str(i))
2585 # add a inotify watch to the dir
2586- yield self.main.event_q.add_watch(path)
2587+ yield self.vm._add_watch(path)
2588 files = ['a_file', os.path.join('dir','file'),
2589 os.path.join('dir','subdir','file')]
2590 for i, file in enumerate(files):
2591 path = os.path.join(udf.path, file)
2592 self.main.fs.create(path, udf.volume_id)
2593- self.main.fs.set_node_id(path, 'file_node_id'+str(i))
2594+ self.main.fs.set_node_id(path, 'file_node_id' + str(i))
2595 paths = self.main.fs.get_paths_starting_with(udf.path)
2596- self.assertEquals(len(paths), len(dirs+files)+1)
2597+ self.assertEquals(len(paths), len(dirs + files) + 1)
2598 self.vm.udf_deleted(udf.volume_id)
2599 self.assertEquals(0, len(self.vm.udfs))
2600 # check that the UDF isn't in the fsm metadata
2601 self.assertRaises(KeyError, self.main.fs.get_by_path, udf.path)
2602 # check that there isn't a watch in the UDF
2603- self.assertNotIn(udf.path, self.main.event_q.monitor._general_watchs)
2604+ self.assertNotIn(udf.path, self.watches,
2605+ 'watch for %r should not be present.' % udf.path)
2606 # check that there isn't any udf childs around
2607 for path, _ in paths:
2608 self.assertRaises(KeyError, self.main.fs.get_by_path, path)
2609@@ -1472,9 +1523,6 @@
2610 def setUp(self):
2611 """Setup the test."""
2612 yield super(HandleListVolumesTestCase, self).setUp()
2613- old_home = os.environ['HOME']
2614- os.environ['HOME'] = self.home_dir
2615- self.addCleanup(os.environ.__setitem__, 'HOME', old_home)
2616 # the UDF part makes sense if UDF autosubscribe is True
2617 user_conf = config.get_user_config()
2618 user_conf.set_udf_autosubscribe(True)
2619@@ -1483,10 +1531,18 @@
2620 def test_handle_AQ_LIST_VOLUMES(self):
2621 """Test the handling of the AQ_LIST_VOLUMES event."""
2622 share_id = uuid.uuid4()
2623+ share_name = u'a share name'
2624+ share_node_id = 'something'
2625 share_volume = self._create_share_volume(volume_id=share_id,
2626- name='fake_share', node_id='fake_share_uuid')
2627+ name=share_name,
2628+ node_id=share_node_id)
2629 udf_id = uuid.uuid4()
2630- udf_volume = volumes.UDFVolume(udf_id, 'udf_uuid', None, 100, u'~/UDF')
2631+ udf_node_id = 'yadda-yadda'
2632+ suggested_path = u'~/UDF'
2633+ udf_path = get_udf_path(suggested_path)
2634+ udf_volume = self._create_udf_volume(volume_id=udf_id,
2635+ node_id=udf_node_id,
2636+ suggested_path=suggested_path)
2637 root_volume = volumes.RootVolume(uuid.uuid4(), 17, 10)
2638 response = [share_volume, udf_volume, root_volume]
2639 d1 = defer.Deferred()
2640@@ -1494,9 +1550,7 @@
2641 self.vm.refresh_volumes = lambda: d1.errback('refresh_volumes called!')
2642 self._listen_for('VM_UDF_CREATED', d1.callback)
2643 self._listen_for('SV_VOLUME_NEW_GENERATION', d2.callback)
2644- # use a custom home
2645- with environ('HOME', self.home_dir):
2646- self.vm.handle_AQ_LIST_VOLUMES(response)
2647+ self.vm.handle_AQ_LIST_VOLUMES(response)
2648 yield d1
2649 yield d2
2650 self.assertEqual(2, len(self.vm.shares)) # the new share and root
2651@@ -1505,20 +1559,17 @@
2652 self.assertIn(str(share_id), self.vm.shares)
2653 self.assertIn(str(udf_id), self.vm.udfs)
2654 share = self.vm.shares[str(share_id)]
2655- self.assertEqual('fake_share', share.name)
2656- self.assertEqual('fake_share_uuid', share.node_id)
2657+ self.assertEqual(share_name, share.name)
2658+ self.assertEqual(share_node_id, share.node_id)
2659 udf = self.vm.udfs[str(udf_id)]
2660- self.assertEqual('udf_uuid', udf.node_id)
2661- self.assertEqual(os.path.abspath(os.path.join(self.home_dir, 'UDF')),
2662- os.path.abspath(udf.path))
2663+ self.assertEqual(udf_node_id, udf.node_id)
2664+ self.assertEqual(udf_path, udf.path)
2665 # check that the root it's there, have right node_id and generation
2666 self.assertIn(request.ROOT, self.vm.shares)
2667 root = self.vm.shares[request.ROOT]
2668 self.assertEqual(root.node_id, str(root_volume.node_id))
2669 # now send the same list again and check
2670- # use a custom home
2671- with environ('HOME', self.home_dir):
2672- self.vm.handle_AQ_LIST_VOLUMES(response)
2673+ self.vm.handle_AQ_LIST_VOLUMES(response)
2674 self.assertEqual(2, len(self.vm.shares)) # the share and root
2675 self.assertEqual(1, len(self.vm.udfs)) # one udf
2676 # check that the udf is the same.
2677@@ -1557,15 +1608,16 @@
2678 share_volume = self._create_share_volume(volume_id=share_id, name=name,
2679 node_id='fake_share_uuid')
2680 udf_id = uuid.uuid4()
2681- udf_volume = volumes.UDFVolume(udf_id, 'udf_uuid', None, 10, u'~/ñoño')
2682+ udf_volume = self._create_udf_volume(volume_id=udf_id,
2683+ node_id='udf_uuid',
2684+ generation=None, free_bytes=10,
2685+ suggested_path=u'~/ñoño')
2686 # initialize the the root
2687 self.vm._got_root('root_uuid')
2688 response = [share_volume, udf_volume]
2689 d = defer.Deferred()
2690 self._listen_for('VM_UDF_CREATED', d.callback)
2691- # use a custom home
2692- with environ('HOME', self.home_dir):
2693- self.vm.handle_AQ_LIST_VOLUMES(response)
2694+ self.vm.handle_AQ_LIST_VOLUMES(response)
2695 yield d
2696 self.assertEqual(2, len(self.vm.shares)) # the new shares and root
2697 self.assertEqual(1, len(self.vm.udfs)) # the new shares and root
2698@@ -1577,18 +1629,15 @@
2699 self.assertEqual('fake_share_uuid', share.node_id)
2700 udf = self.vm.udfs[str(udf_id)]
2701 self.assertEqual('udf_uuid', udf.node_id)
2702- self.assertEqual(os.path.abspath(os.path.join(self.home_dir,
2703- u'ñoño'.encode("utf8"))),
2704- os.path.abspath(udf.path))
2705+ self.assertEqual(get_udf_path(udf_volume.suggested_path),
2706+ udf.path)
2707
2708 def test_handle_AQ_LIST_VOLUMES_root(self):
2709 """Test the handling of the AQ_LIST_VOLUMES event."""
2710 root_volume = volumes.RootVolume(uuid.uuid4(), None, 10)
2711 response = [root_volume]
2712 self.vm.refresh_volumes = lambda: self.fail('refresh_volumes called!')
2713- # use a custom home
2714- with environ('HOME', self.home_dir):
2715- self.vm.handle_AQ_LIST_VOLUMES(response)
2716+ self.vm.handle_AQ_LIST_VOLUMES(response)
2717 self.assertEqual(1, len(self.vm.shares)) # the new share and root
2718 # check that the root is in the shares dict
2719 self.assertIn(request.ROOT, self.vm.shares)
2720@@ -1634,9 +1683,7 @@
2721 self._listen_for('SV_VOLUME_NEW_GENERATION',
2722 vol_new_gen_d.callback)
2723
2724- # use a custom home, and trigger the operation
2725- with environ('HOME', self.home_dir):
2726- self.vm.handle_AQ_LIST_VOLUMES(response)
2727+ self.vm.handle_AQ_LIST_VOLUMES(response)
2728
2729 yield share_created_d
2730 events = yield vol_new_gen_d
2731@@ -1676,9 +1723,7 @@
2732 self.patch(self.main.action_q, 'rescan_from_scratch',
2733 lambda vol_id: from_scratch_deferreds.pop(vol_id).callback(vol_id))
2734
2735- # now send the same list again and check using a custom home
2736- with environ('HOME', self.home_dir):
2737- self.vm.handle_AQ_LIST_VOLUMES(response)
2738+ self.vm.handle_AQ_LIST_VOLUMES(response)
2739
2740 vol_id = yield root_from_scratch_d
2741 self.assertEqual(vol_id, '')
2742@@ -1735,9 +1780,7 @@
2743 self._listen_for('SV_VOLUME_NEW_GENERATION',
2744 vol_new_gen_d.callback)
2745
2746- # use a custom home, and trigger the operation
2747- with environ('HOME', self.home_dir):
2748- self.vm.handle_AQ_LIST_VOLUMES(response)
2749+ self.vm.handle_AQ_LIST_VOLUMES(response)
2750
2751 yield udf_created_d
2752 events = yield vol_new_gen_d
2753@@ -1756,8 +1799,7 @@
2754 self.assertIn(str(udf_id), self.vm.udfs)
2755 udf = self.vm.udfs[str(udf_id)]
2756 self.assertEqual('udf_uuid', udf.node_id)
2757- self.assertEqual(os.path.abspath(os.path.join(self.home_dir, 'UDF')),
2758- os.path.abspath(udf.path))
2759+ self.assertEqual(get_udf_path(udf_volume.suggested_path), udf.path)
2760 if auto_subscribe:
2761 # root and udf
2762 self.assertEqual(2, len(list(self.vm.get_volumes())))
2763@@ -1770,9 +1812,8 @@
2764
2765 # root was already checked on test_handle_AQ_LIST_VOLUMES_root
2766
2767- # now send the same list again and check using a custom home
2768- with environ('HOME', self.home_dir):
2769- self.vm.handle_AQ_LIST_VOLUMES(response)
2770+ # now send the same list again and check
2771+ self.vm.handle_AQ_LIST_VOLUMES(response)
2772
2773 check()
2774
2775@@ -1794,9 +1835,6 @@
2776 def setUp(self):
2777 """Setup the test."""
2778 yield super(VolumeManagerOperationsTests, self).setUp()
2779- old_home = os.environ['HOME']
2780- os.environ['HOME'] = self.home_dir
2781- self.addCleanup(os.environ.__setitem__, 'HOME', old_home)
2782 self.main.event_q.push('SYS_INIT_DONE')
2783
2784 def test_create_udf(self):
2785@@ -1806,7 +1844,8 @@
2786
2787 """
2788 d = defer.Deferred()
2789- path = os.path.join(self.home_dir, "MyUDF")
2790+ suggested_path = u"~/MyUDF"
2791+ path = get_udf_path(suggested_path)
2792 udf_id = uuid.uuid4()
2793 node_id = uuid.uuid4()
2794 # patch AQ.create_udf
2795@@ -1818,18 +1857,16 @@
2796 def check(info):
2797 """Check the udf attributes."""
2798 udf = info['udf']
2799- self.assertEquals(os.path.abspath(udf.path),
2800- os.path.abspath(path))
2801+ self.assertEquals(udf.path, path)
2802 self.assertEquals(udf.volume_id, str(udf_id))
2803 self.assertEquals(udf.node_id, str(node_id))
2804- self.assertEquals(udf.suggested_path, u'~/MyUDF')
2805+ self.assertEquals(udf.suggested_path, suggested_path)
2806 self.assertTrue(isinstance(udf.suggested_path, unicode),
2807 'suggested_path should be unicode')
2808 self.assertIn(udf.volume_id, self.vm.udfs)
2809
2810 self._listen_for('VM_UDF_CREATED', d.callback)
2811- with environ('HOME', self.home_dir):
2812- self.vm.create_udf(path)
2813+ self.vm.create_udf(path)
2814 d.addCallback(check)
2815 return d
2816
2817@@ -1839,24 +1876,20 @@
2818 Check that VM calls AQ.create_udf with unicode values.
2819 """
2820 d = defer.Deferred()
2821- path = os.path.join(self.home_dir, u"ñoño",
2822- u"mirá que lindo mi udf").encode('utf-8')
2823+ path = get_udf_path(u"~/ñoño/mirá que lindo mi udf")
2824 # patch AQ.create_udf
2825 def create_udf(path, name, marker):
2826 """Fake create_udf"""
2827 d.callback((path, name))
2828- self.main.action_q.create_udf = create_udf
2829- def check(info):
2830- """Check the values passed to AQ.create_udf"""
2831- path, name = info
2832- self.assertTrue(isinstance(name, unicode),
2833- 'name should be unicode but is: %s' % type(name))
2834- self.assertTrue(isinstance(path, unicode),
2835- 'path should be unicode but is: %s' % type(name))
2836- with environ('HOME', self.home_dir):
2837- self.vm.create_udf(path)
2838- d.addCallback(check)
2839- return d
2840+
2841+ self.patch(self.main.action_q, 'create_udf', create_udf)
2842+ self.vm.create_udf(path)
2843+
2844+ path, name = yield d
2845+ self.assertTrue(isinstance(name, unicode),
2846+ 'name should be unicode but is: %s' % type(name))
2847+ self.assertFalse(isinstance(path, unicode),
2848+ 'path should not be unicode but is: %s' % type(path))
2849
2850 @defer.inlineCallbacks
2851 def test_delete_volume(self):
2852@@ -1878,22 +1911,20 @@
2853 def check_udf(info):
2854 """Check the udf attributes."""
2855 deleted_udf = info['volume']
2856- self.assertEquals(os.path.abspath(deleted_udf.path),
2857- os.path.abspath(udf.path))
2858+ self.assertEquals(deleted_udf.path, udf.path)
2859 self.assertEquals(deleted_udf.volume_id, udf.volume_id)
2860 self.assertEquals(deleted_udf.node_id, udf.node_id)
2861 self.assertEquals(deleted_udf.suggested_path, udf.suggested_path)
2862 self.assertNotIn(deleted_udf.volume_id, self.vm.udfs)
2863 d = defer.Deferred()
2864 self._listen_for('VM_VOLUME_DELETED', d.callback)
2865- with environ('HOME', self.home_dir):
2866- self.vm.delete_volume(share.volume_id)
2867+ self.vm.delete_volume(share.volume_id)
2868 return d
2869
2870 def check_share(info):
2871 """Check the share attributes."""
2872 deleted_share = info['volume']
2873- self.assertEquals(os.path.abspath(deleted_share.path), share.path)
2874+ self.assertEquals(deleted_share.path, share.path)
2875 self.assertEquals(deleted_share.volume_id, share.volume_id)
2876 self.assertEquals(deleted_share.node_id, share.node_id)
2877 self.assertNotIn(deleted_share.volume_id, self.vm.shares)
2878@@ -1901,8 +1932,7 @@
2879 self._listen_for('VM_VOLUME_DELETED', d.callback)
2880 d.addCallback(check_udf)
2881 d.addCallback(check_share)
2882- with environ('HOME', self.home_dir):
2883- self.vm.delete_volume(udf.volume_id)
2884+ self.vm.delete_volume(udf.volume_id)
2885 yield d
2886
2887 @defer.inlineCallbacks
2888@@ -1919,8 +1949,7 @@
2889 else:
2890 d.errback(Exception(""))
2891 self.patch(self.main.action_q, 'delete_volume', delete_volume)
2892- with environ('HOME', self.home_dir):
2893- self.vm.delete_volume(udf.volume_id)
2894+ self.vm.delete_volume(udf.volume_id)
2895 yield d
2896
2897 @defer.inlineCallbacks
2898@@ -1940,12 +1969,12 @@
2899 # create and add a UDF
2900 udf = self._create_udf(subscribed=False)
2901 yield self.vm.add_udf(udf)
2902- self.assertFalse(os.path.exists(udf.path))
2903+ self.assertFalse(path_exists(udf.path))
2904 self.assertFalse(self.vm.udfs[udf.id].subscribed)
2905 # subscribe to it
2906 yield self.vm.subscribe_udf(udf.id)
2907 self.assertTrue(self.vm.udfs[udf.id].subscribed)
2908- self.assertTrue(os.path.exists(udf.path))
2909+ self.assertTrue(path_exists(udf.path))
2910
2911 @defer.inlineCallbacks
2912 def test_subscribe_udf_missing_fsm_md(self):
2913@@ -1953,7 +1982,7 @@
2914 # create and add a UDF
2915 udf = self._create_udf(subscribed=False)
2916 yield self.vm.add_udf(udf)
2917- self.assertFalse(os.path.exists(udf.path))
2918+ self.assertFalse(path_exists(udf.path))
2919 self.assertFalse(self.vm.udfs[udf.id].subscribed)
2920 yield self.vm.subscribe_udf(udf.id)
2921 yield self.vm.unsubscribe_udf(udf.id)
2922@@ -2037,21 +2066,21 @@
2923 os.path.join('dir','empty_dir')]
2924 for i, dir in enumerate(dirs):
2925 path = os.path.join(udf.path, dir)
2926- if not os.path.exists(path):
2927+ if not path_exists(path):
2928 make_dir(path, recursive=True)
2929 self.main.fs.create(path, udf.volume_id, is_dir=True)
2930- self.main.fs.set_node_id(path, 'dir_node_id'+str(i))
2931+ self.main.fs.set_node_id(path, 'dir_node_id' + str(i))
2932 # add a inotify watch to the dir
2933- yield self.main.event_q.add_watch(path)
2934+ yield self.vm._add_watch(path)
2935 files = ['a_file', os.path.join('dir','file'),
2936 os.path.join('dir','subdir','file')]
2937 for i, file in enumerate(files):
2938 path = os.path.join(udf.path, file)
2939 open(path, 'w').close()
2940 self.main.fs.create(path, udf.volume_id)
2941- self.main.fs.set_node_id(path, 'file_node_id'+str(i))
2942+ self.main.fs.set_node_id(path, 'file_node_id' + str(i))
2943 paths = self.main.fs.get_paths_starting_with(udf.path)
2944- self.assertEquals(len(paths), len(dirs+files)+1)
2945+ self.assertEquals(len(paths), len(dirs + files) + 1)
2946
2947 # unsubscribe from it
2948 self.vm.unsubscribe_udf(udf.volume_id)
2949@@ -2062,14 +2091,15 @@
2950 self.assertTrue(self.main.fs.get_by_path(udf.path))
2951 # get the childs (should be an empty list)
2952 paths = list(self.main.fs.get_paths_starting_with(udf.path))
2953- self.assertEquals(len(dirs+files)+1, len(paths))
2954+ self.assertEquals(len(dirs + files) + 1, len(paths))
2955 # check that there isn't a watch in the UDF
2956- self.assertNotIn(udf.path, self.main.event_q.monitor._general_watchs)
2957+ self.assertNotIn(udf.path, self.watches,
2958+ 'watch for %r should not be present.' % udf.path)
2959 # check that the childs don't have a watch
2960 for path, is_dir in paths:
2961 if is_dir:
2962- self.assertNotIn(path,
2963- self.main.event_q.monitor._general_watchs)
2964+ self.assertNotIn(path, self.watches,
2965+ 'watch for %r should not be present.' % path)
2966
2967 @defer.inlineCallbacks
2968 def test_unsubscribe_subscribe_udf_with_content(self):
2969@@ -2084,23 +2114,23 @@
2970 os.path.join('dir','empty_dir')]
2971 for i, path in enumerate(dirs):
2972 path = os.path.join(udf.path, path)
2973- if not os.path.exists(path):
2974+ if not path_exists(path):
2975 make_dir(path, recursive=True)
2976 self.main.fs.create(path, udf.volume_id, is_dir=True)
2977- self.main.fs.set_node_id(path, 'dir_node_id'+str(i))
2978+ self.main.fs.set_node_id(path, 'dir_node_id' + str(i))
2979 files = ['a_file', os.path.join('dir', 'file'),
2980 os.path.join('dir', 'subdir', 'file')]
2981 for i, path in enumerate(files):
2982 path = os.path.join(udf.path, path)
2983 open(path, 'w').close()
2984 self.main.fs.create(path, udf.volume_id)
2985- self.main.fs.set_node_id(path, 'file_node_id'+str(i))
2986+ self.main.fs.set_node_id(path, 'file_node_id' + str(i))
2987 paths = list(self.main.fs.get_paths_starting_with(udf.path))
2988         # add an inotify watch to the dirs
2989 for path, is_dir in paths:
2990 if is_dir:
2991- yield self.main.event_q.add_watch(path)
2992- self.assertEquals(len(paths), len(dirs+files)+1, paths)
2993+ yield self.vm._add_watch(path)
2994+ self.assertEquals(len(paths), len(dirs + files) + 1, paths)
2995
2996 # unsubscribe from it
2997 self.vm.unsubscribe_udf(udf.volume_id)
2998@@ -2110,27 +2140,30 @@
2999 # check that the UDF is in the fsm metadata
3000 self.assertTrue(self.main.fs.get_by_path(udf.path))
3001 # check that there isn't a watch in the UDF
3002- self.assertNotIn(udf.path, self.main.event_q.monitor._general_watchs)
3003+ self.assertNotIn(udf.path, self.watches,
3004+ 'watch for %r should not be present.' % udf.path)
3005 # check that the childs don't have a watch
3006 for path, is_dir in paths:
3007 if is_dir:
3008- self.assertNotIn(path,
3009- self.main.event_q.monitor._general_watchs)
3010+ self.assertNotIn(path, self.watches,
3011+ 'watch for %r should not be present.' % path)
3012 # check the childs
3013 paths = self.main.fs.get_paths_starting_with(udf.path)
3014- self.assertEquals(len(dirs+files)+1, len(paths))
3015+ self.assertEquals(len(dirs + files) + 1, len(paths))
3016 # resubscribe to it
3017 yield self.vm.subscribe_udf(udf.volume_id)
3018 paths = list(self.main.fs.get_paths_starting_with(udf.path))
3019 # we should only have the dirs, as the files metadata is
3020 # delete by local rescan (both hashes are '')
3021- self.assertEquals(len(dirs)+1, len(paths))
3022+ self.assertEquals(len(dirs) + 1, len(paths))
3023 # check that there is a watch in the UDF
3024- self.assertIn(udf.path, self.main.event_q.monitor._general_watchs)
3025+ self.assertIn(udf.path, self.watches,
3026+ 'watch for %r should be present.' % udf.path)
3027 # check that the child dirs have a watch
3028 for path, is_dir in paths:
3029 if is_dir:
3030- self.assertIn(path, self.main.event_q.monitor._general_watchs)
3031+ self.assertIn(path, self.watches,
3032+ 'watch for %r should be present.' % path)
3033 self.vm._remove_watch(path)
3034
3035 @defer.inlineCallbacks
3036@@ -2197,7 +2230,7 @@
3037 def test_handle_AQ_CREATE_UDF_OK(self):
3038 """Test AQ_CREATE_UDF_OK. The UDF is always subscribed."""
3039 d = defer.Deferred()
3040- path = os.path.join(self.home_dir, u'ñoño'.encode("utf8"))
3041+ path = get_udf_path(u'~/ñoño')
3042 udf_id = uuid.uuid4()
3043 node_id = uuid.uuid4()
3044 # patch AQ.create_udf
3045@@ -2208,25 +2241,23 @@
3046 self.main.action_q.create_udf = create_udf
3047 self._listen_for('VM_UDF_CREATED', d.callback)
3048 # fake VM state, call create_udf
3049- with environ('HOME', self.home_dir):
3050- self.vm.create_udf(path)
3051- def check(info):
3052- """The callback"""
3053- udf = info['udf']
3054- self.assertEquals(os.path.abspath(udf.path), os.path.abspath(path))
3055- self.assertEquals(udf.volume_id, str(udf_id))
3056- self.assertEquals(udf.node_id, str(node_id))
3057- self.assertEquals(0, len(self.vm.marker_udf_map))
3058- self.assertTrue(self.vm.udfs[str(udf_id)])
3059- self.assertTrue(self.vm.udfs[str(udf_id)].subscribed)
3060- self.assertTrue(os.path.exists(udf.path))
3061- d.addCallback(check)
3062- return d
3063+ self.vm.create_udf(path)
3064+
3065+ info = yield d
3066+
3067+ udf = info['udf']
3068+ self.assertEqual(udf.path, path)
3069+ self.assertEqual(udf.volume_id, str(udf_id))
3070+ self.assertEqual(udf.node_id, str(node_id))
3071+ self.assertEqual(0, len(self.vm.marker_udf_map))
3072+ self.assertTrue(self.vm.udfs[str(udf_id)])
3073+ self.assertTrue(self.vm.udfs[str(udf_id)].subscribed)
3074+ self.assertTrue(path_exists(udf.path))
3075
3076 def test_handle_AQ_CREATE_UDF_ERROR(self):
3077 """Test for handle_AQ_CREATE_UDF_ERROR."""
3078 d = defer.Deferred()
3079- path = os.path.join(self.home_dir, u'ñoño'.encode("utf8"))
3080+ path = get_udf_path(u'~/ñoño')
3081 # patch AQ.create_udf
3082 def create_udf(path, name, marker):
3083 """Fake create_udf"""
3084@@ -2235,8 +2266,7 @@
3085 self.main.action_q.create_udf = create_udf
3086 self._listen_for('VM_UDF_CREATE_ERROR', d.callback)
3087 # fake VM state, call create_udf
3088- with environ('HOME', self.home_dir):
3089- self.vm.create_udf(path)
3090+ self.vm.create_udf(path)
3091 def check(info):
3092 """The callback"""
3093 self.assertEquals(info['path'], path)
3094@@ -2259,16 +2289,14 @@
3095 def check(info):
3096 """Check the udf attributes."""
3097 deleted_udf = info['volume']
3098- self.assertEquals(os.path.abspath(deleted_udf.path),
3099- os.path.abspath(udf.path))
3100+ self.assertEquals(deleted_udf.path, udf.path)
3101 self.assertEquals(deleted_udf.volume_id, udf.volume_id)
3102 self.assertEquals(deleted_udf.node_id, udf.node_id)
3103 self.assertEquals(deleted_udf.suggested_path, udf.suggested_path)
3104 self.assertNotIn(deleted_udf.volume_id, self.vm.udfs)
3105 self._listen_for('VM_VOLUME_DELETED', d.callback)
3106 d.addCallback(check)
3107- with environ('HOME', self.home_dir):
3108- self.vm.delete_volume(udf.volume_id)
3109+ self.vm.delete_volume(udf.volume_id)
3110 yield d
3111
3112 @defer.inlineCallbacks
3113@@ -2291,8 +2319,7 @@
3114 self.assertEquals(error, 'ERROR!')
3115 self._listen_for('VM_VOLUME_DELETE_ERROR', d.callback)
3116 d.addCallback(check)
3117- with environ('HOME', self.home_dir):
3118- self.vm.delete_volume(udf.volume_id)
3119+ self.vm.delete_volume(udf.volume_id)
3120 yield d
3121
3122 def test_handle_AQ_DELETE_VOLUME_ERROR_missing_volume(self):
3123@@ -2310,7 +2337,8 @@
3124 user_conf.set_share_autosubscribe(auto_subscribe)
3125
3126 # start the test
3127- share_volume = self._create_share_volume(accepted=False, access_level='Modify')
3128+ share_volume = self._create_share_volume(accepted=False,
3129+ access_level='Modify')
3130         # initialize the root
3131 self.vm._got_root('root_uuid')
3132
3133@@ -2330,8 +2358,7 @@
3134 self.patch(self.main.action_q, 'rescan_from_scratch', self.fail)
3135
3136 # fire SV_VOLUME_CREATED with a share
3137- with environ('HOME', self.home_dir):
3138- self.vm.handle_SV_VOLUME_CREATED(share_volume)
3139+ self.vm.handle_SV_VOLUME_CREATED(share_volume)
3140
3141 info = yield share_created_d
3142 share_id = info['share_id']
3143@@ -2341,14 +2368,14 @@
3144
3145 if auto_subscribe:
3146 self.assertTrue(share.subscribed)
3147- self.assertTrue(os.path.exists(share.path))
3148+ self.assertTrue(path_exists(share.path))
3149 # check that scan_dir and rescan_from_scratch is called
3150 yield local_scan_d
3151 vol_id = yield server_rescan_d
3152 self.assertEqual(vol_id, share.volume_id)
3153 else:
3154 self.assertFalse(share.subscribed)
3155- self.assertFalse(os.path.exists(share.path))
3156+ self.assertFalse(path_exists(share.path))
3157
3158 def test_handle_SV_VOLUME_CREATED_share_subscribe(self):
3159 """Test SV_VOLUME_CREATED with share auto_subscribe """
3160@@ -2374,21 +2401,20 @@
3161 rescan_cb = defer.Deferred()
3162 self.patch(self.main.action_q, 'rescan_from_scratch', rescan_cb.callback)
3163
3164- with environ('HOME', self.home_dir):
3165- self.vm.handle_SV_VOLUME_CREATED(udf_volume)
3166+ self.vm.handle_SV_VOLUME_CREATED(udf_volume)
3167 info = yield d
3168 udf = info['udf']
3169 self.assertEquals(udf.volume_id, str(udf_id))
3170 self.assertIn(str(udf_id), self.vm.udfs)
3171 if auto_subscribe:
3172 self.assertTrue(self.vm.udfs[udf.id].subscribed)
3173- self.assertTrue(os.path.exists(udf.path))
3174+ self.assertTrue(path_exists(udf.path))
3175 # check that rescan_from_scratch is called
3176 vol_id = yield rescan_cb
3177 self.assertEqual(vol_id, udf.volume_id)
3178 else:
3179 self.assertFalse(self.vm.udfs[udf.id].subscribed)
3180- self.assertFalse(os.path.exists(udf.path))
3181+ self.assertFalse(path_exists(udf.path))
3182
3183 def test_handle_SV_VOLUME_CREATED_udf_subscribe(self):
3184 """Test SV_VOLUME_CREATED with udf auto_subscribe """
3185@@ -2404,8 +2430,7 @@
3186 share = self._create_share()
3187 # create a UDF
3188 suggested_path = u'~/ñoño'
3189- with environ('HOME', self.home_dir):
3190- path = get_udf_path(suggested_path)
3191+ path = get_udf_path(suggested_path)
3192 udf_id = uuid.uuid4()
3193 udf_volume = volumes.UDFVolume(udf_id, 'udf_uuid', None, 10,
3194 suggested_path)
3195@@ -2416,8 +2441,7 @@
3196 self.vm._got_root('root_uuid')
3197 d = defer.Deferred()
3198 self._listen_for('VM_VOLUME_DELETED', d.callback)
3199- with environ('HOME', self.home_dir):
3200- self.vm.handle_SV_VOLUME_DELETED(udf_volume.volume_id)
3201+ self.vm.handle_SV_VOLUME_DELETED(udf_volume.volume_id)
3202 info = yield d
3203
3204 udf = info['volume']
3205@@ -2427,8 +2451,7 @@
3206 share_deferred = defer.Deferred()
3207 self._listen_for('VM_VOLUME_DELETED', share_deferred.callback)
3208 # fire SV_VOLUME_DELETED with a share
3209- with environ('HOME', self.home_dir):
3210- self.vm.handle_SV_VOLUME_DELETED(share.volume_id)
3211+ self.vm.handle_SV_VOLUME_DELETED(share.volume_id)
3212 share_info = yield share_deferred
3213 new_share = share_info['volume']
3214 self.assertEquals(new_share.volume_id, share.volume_id)
3215@@ -2584,8 +2607,7 @@
3216 def test_udf_from_udf_volume(self):
3217 """Test for UDF.from_udf_volume."""
3218 suggested_path = u'~/foo/bar'
3219- with environ('HOME', self.home_dir):
3220- path = get_udf_path(suggested_path)
3221+ path = get_udf_path(suggested_path)
3222 volume = volumes.UDFVolume(uuid.uuid4(), uuid.uuid4(), None,
3223 10, suggested_path)
3224 udf = UDF.from_udf_volume(volume, path)
3225@@ -2730,9 +2752,7 @@
3226 d = defer.Deferred()
3227 self.vm.refresh_volumes = lambda: d.errback('refresh_volumes called!')
3228 self._listen_for('VM_UDF_CREATED', d.callback)
3229- # use a custom home
3230- with environ('HOME', self.home_dir):
3231- self.vm.handle_AQ_LIST_VOLUMES(response)
3232+ self.vm.handle_AQ_LIST_VOLUMES(response)
3233 yield d
3234 self.assertEquals(2, len(self.vm.shares)) # the new share and root
3235 self.assertEquals(1, len(self.vm.udfs)) # the new udf
3236@@ -2891,8 +2911,7 @@
3237 vol_rescan_d.callback) # autosubscribe is False
3238 server_rescan_d = defer.Deferred()
3239 self._listen_for('SYS_SERVER_RESCAN_DONE', server_rescan_d.callback)
3240- with environ('HOME', self.home_dir):
3241- yield self.vm.server_rescan()
3242+ yield self.vm.server_rescan()
3243 yield server_rescan_d
3244 events = yield vol_rescan_d
3245
3246@@ -2929,8 +2948,7 @@
3247 vol_rescan_d.callback, 1, collect=True)
3248 server_rescan_d = defer.Deferred()
3249 self._listen_for('SYS_SERVER_RESCAN_DONE', server_rescan_d.callback)
3250- with environ('HOME', self.home_dir):
3251- yield self.vm.server_rescan()
3252+ yield self.vm.server_rescan()
3253
3254 yield server_rescan_d
3255 events = yield vol_rescan_d
3256@@ -2975,9 +2993,7 @@
3257 vol_rescan_d.callback, 1, collect=True)
3258 server_rescan_d = defer.Deferred()
3259 self._listen_for('SYS_SERVER_RESCAN_DONE', server_rescan_d.callback)
3260- with environ('HOME', self.home_dir):
3261- yield self.vm.server_rescan()
3262-
3263+ yield self.vm.server_rescan()
3264
3265 events = yield vol_rescan_d
3266
3267@@ -3027,8 +3043,7 @@
3268 vol_rescan_d.callback, 1, collect=True)
3269 server_rescan_d = defer.Deferred()
3270 self._listen_for('SYS_SERVER_RESCAN_DONE', server_rescan_d.callback)
3271- with environ('HOME', self.home_dir):
3272- yield self.vm.server_rescan()
3273+ yield self.vm.server_rescan()
3274
3275 yield server_rescan_d
3276 events = yield vol_rescan_d
3277@@ -3050,8 +3065,7 @@
3278 self.main.action_q.query_volumes = lambda: defer.fail(Exception('foo bar'))
3279 d = defer.Deferred()
3280 self._listen_for('SYS_SERVER_RESCAN_ERROR', d.callback)
3281- with environ('HOME', self.home_dir):
3282- yield self.vm.server_rescan()
3283+ yield self.vm.server_rescan()
3284 yield d
3285 # now when _volumes_rescan_cb fails
3286 # patch fake action queue
3287@@ -3062,8 +3076,7 @@
3288 self.vm._volumes_rescan_cb = broken_volumes_rescan_cb
3289 d = defer.Deferred()
3290 self._listen_for('SYS_SERVER_RESCAN_ERROR', d.callback)
3291- with environ('HOME', self.home_dir):
3292- yield self.vm.server_rescan()
3293+ yield self.vm.server_rescan()
3294 yield d
3295
3296 @defer.inlineCallbacks
3297@@ -3077,8 +3090,7 @@
3298 self.main.action_q.query_volumes = lambda: defer.succeed(response)
3299 d = defer.Deferred()
3300 self.vm.refresh_shares = lambda: d.callback(True)
3301- with environ('HOME', self.home_dir):
3302- yield self.vm.server_rescan()
3303+ yield self.vm.server_rescan()
3304 called = yield d
3305 self.assertTrue(called)
3306 test_refresh_shares_called_after_server_rescan.timeout = 1
3307@@ -3103,8 +3115,7 @@
3308 self._listen_for('SYS_SERVER_RESCAN_DONE', server_rescan_d.callback)
3309 udf_created_d = defer.Deferred()
3310 self._listen_for('VM_UDF_CREATED', udf_created_d.callback)
3311- with environ('HOME', self.home_dir):
3312- yield self.vm.server_rescan()
3313+ yield self.vm.server_rescan()
3314 yield server_rescan_d
3315 yield udf_created_d
3316 self.assertIn(request.ROOT, self.vm.shares)
3317@@ -3115,8 +3126,7 @@
3318 response = [share_volume, root_volume]
3319 server_rescan_d = defer.Deferred()
3320 self._listen_for('SYS_SERVER_RESCAN_DONE', server_rescan_d.callback)
3321- with environ('HOME', self.home_dir):
3322- yield self.vm.server_rescan()
3323+ yield self.vm.server_rescan()
3324 yield server_rescan_d
3325
3326 self.assertIn(request.ROOT, self.vm.shares)
3327@@ -3147,8 +3157,7 @@
3328 # wait for the VM_UDF_CREATED event in order to properly shutdown
3329 # (local rescan/udf scan is running)
3330 self._listen_for('VM_UDF_CREATED', udf_created_d.callback)
3331- with environ('HOME', self.home_dir):
3332- yield self.vm.server_rescan()
3333+ yield self.vm.server_rescan()
3334 yield server_rescan_d
3335 yield udf_created_d
3336 self.assertIn(request.ROOT, self.vm.shares)
3337@@ -3159,8 +3168,7 @@
3338 response = [udf_volume, root_volume]
3339 server_rescan_d = defer.Deferred()
3340 self._listen_for('SYS_SERVER_RESCAN_DONE', server_rescan_d.callback)
3341- with environ('HOME', self.home_dir):
3342- yield self.vm.server_rescan()
3343+ yield self.vm.server_rescan()
3344 yield server_rescan_d
3345 self.assertIn(request.ROOT, self.vm.shares)
3346 self.assertIn(str(udf_volume.volume_id), self.vm.udfs)
3347@@ -3188,8 +3196,7 @@
3348 self._listen_for('SV_VOLUME_NEW_GENERATION', d.callback, 1, True)
3349 udf_d = defer.Deferred()
3350 self._listen_for('VM_UDF_CREATED', udf_d.callback)
3351- with environ('HOME', self.home_dir):
3352- shares, udfs = yield self.vm._volumes_rescan_cb(response)
3353+ shares, udfs = yield self.vm._volumes_rescan_cb(response)
3354 events = yield d
3355 # check the returned shares and udfs
3356 self.assertEqual(shares, [str(share_id), ''])
3357@@ -3219,8 +3226,7 @@
3358 response = [share_volume, udf_volume, root_volume]
3359 d = defer.Deferred()
3360 self._listen_for('SV_VOLUME_NEW_GENERATION', d.callback, 2, True)
3361- with environ('HOME', self.home_dir):
3362- self.vm._volumes_rescan_cb(response)
3363+ self.vm._volumes_rescan_cb(response)
3364 events = yield d
3365 events_dict = dict((event['volume_id'], event['generation'])
3366 for event in events)
3367@@ -3236,8 +3242,7 @@
3368 response = [share_volume, udf_volume, root_volume]
3369 d = defer.Deferred()
3370 self._listen_for('SV_VOLUME_NEW_GENERATION', d.callback, 1, True)
3371- with environ('HOME', self.home_dir):
3372- self.vm._volumes_rescan_cb(response)
3373+ self.vm._volumes_rescan_cb(response)
3374 events = yield d
3375 events_dict = dict((event['volume_id'], event['generation'])
3376 for event in events)
3377@@ -3255,8 +3260,7 @@
3378 self.assertEquals(None, self.vm.root.node_id)
3379 d = defer.Deferred()
3380 self._listen_for('SV_VOLUME_NEW_GENERATION', d.callback, 1, True)
3381- with environ('HOME', self.home_dir):
3382- self.vm._volumes_rescan_cb(response)
3383+ self.vm._volumes_rescan_cb(response)
3384 events = yield d
3385 events_dict = dict((event['volume_id'], event['generation'])
3386 for event in events)
3387@@ -3278,8 +3282,7 @@
3388 self.assertEquals(str(root_id), self.vm.root.node_id)
3389 d = defer.Deferred()
3390 self._listen_for('SV_VOLUME_NEW_GENERATION', d.callback, 1, True)
3391- with environ('HOME', self.home_dir):
3392- self.vm._volumes_rescan_cb(response)
3393+ self.vm._volumes_rescan_cb(response)
3394 events = yield d
3395 events_dict = dict((event['volume_id'], event['generation'])
3396 for event in events)
3397@@ -3292,8 +3295,7 @@
3398 def test_volumes_rescan_cb_inactive_volume(self):
3399 """Test _volumes_rescan_cb with inactive volume."""
3400 suggested_path = u'~/ñoño/ñandú'
3401- with environ('HOME', self.home_dir):
3402- path = get_udf_path(suggested_path)
3403+ path = get_udf_path(suggested_path)
3404 udf_id = uuid.uuid4()
3405 udf_volume = volumes.UDFVolume(udf_id, uuid.uuid4(), 10, 100,
3406 suggested_path)
3407@@ -3306,8 +3308,7 @@
3408 self.vm.unsubscribe_udf(udf.volume_id)
3409 d = defer.Deferred()
3410 self._listen_for('SV_VOLUME_NEW_GENERATION', d.callback)
3411- with environ('HOME', self.home_dir):
3412- self.vm._volumes_rescan_cb(response)
3413+ self.vm._volumes_rescan_cb(response)
3414 event = yield d
3415 self.assertEqual(len(event), 2)
3416 events_dict = {event['volume_id']: event['generation']}
3417@@ -3323,8 +3324,7 @@
3418
3419 # create a UDF
3420 suggested_path = u'~/ñoño/ñandú'
3421- with environ('HOME', self.home_dir):
3422- path = get_udf_path(suggested_path)
3423+ path = get_udf_path(suggested_path)
3424 udf_id = uuid.uuid4()
3425 udf_volume = volumes.UDFVolume(udf_id, 'udf_uuid', 10, 100,
3426 suggested_path)
3427@@ -3340,8 +3340,7 @@
3428 d = defer.Deferred()
3429 self._listen_for('SV_VOLUME_NEW_GENERATION', d.callback, 2, collect=True)
3430 self.patch(self.vm, '_scan_volume', defer.succeed)
3431- with environ('HOME', self.home_dir):
3432- self.vm._volumes_rescan_cb(response)
3433+ self.vm._volumes_rescan_cb(response)
3434 events = yield d
3435 self.assertEqual(len(events), 2)
3436 events_dict = dict((evt['volume_id'], evt['generation'])
3437@@ -3352,16 +3351,14 @@
3438 mdobj = self.main.fs.get_by_path(udf.path)
3439 self.assertEquals(udf.node_id, mdobj.node_id)
3440 self.assertEquals(udf.id, mdobj.share_id)
3441- self.assertEquals(os.path.abspath(udf.path),
3442- os.path.abspath(mdobj.path))
3443+ self.assertEquals(udf.path, mdobj.path)
3444
3445 @defer.inlineCallbacks
3446 def test_volumes_rescan_cb_active_udf(self):
3447 """Test _volumes_rescan_cb with an active UDF and no-autosubscribe."""
3448 # create a UDF
3449 suggested_path = u'~/ñoño/ñandú'
3450- with environ('HOME', self.home_dir):
3451- path = get_udf_path(suggested_path)
3452+ path = get_udf_path(suggested_path)
3453 udf_id = uuid.uuid4()
3454 udf_volume = volumes.UDFVolume(udf_id, 'udf_uuid', None, 10,
3455 suggested_path)
3456@@ -3375,9 +3372,8 @@
3457 yield self.vm.subscribe_udf(udf.volume_id)
3458 d = defer.Deferred()
3459 self._listen_for('SV_VOLUME_NEW_GENERATION', d.callback, 1)
3460- with environ('HOME', self.home_dir):
3461- shares, udfs = self.vm._volumes_rescan_cb(response)
3462- self.assertIn(udf.volume_id, udfs)
3463+ shares, udfs = self.vm._volumes_rescan_cb(response)
3464+ self.assertIn(udf.volume_id, udfs)
3465 yield d
3466
3467 @defer.inlineCallbacks
3468@@ -3658,7 +3654,6 @@
3469 @defer.inlineCallbacks
3470 def tearDown(self):
3471 """Cleanup all the cruft."""
3472- self.rmtree(self.data_dir)
3473 if self.main:
3474 self.main.shutdown()
3475 VolumeManager.METADATA_VERSION = CURRENT_METADATA_VERSION
3476@@ -3671,7 +3666,7 @@
3477
3478 def set_md_version(self, md_version):
3479 """Write md_version to the .version file."""
3480- if not os.path.exists(self.vm_data_dir):
3481+ if not path_exists(self.vm_data_dir):
3482 make_dir(self.vm_data_dir, recursive=True)
3483 with open(self.version_file, 'w') as fd:
3484 fd.write(md_version)
3485@@ -3753,6 +3748,7 @@
3486 self.shares_dir_link = os.path.join(self.u1_dir, 'Shared With Me')
3487 make_link(self.shares_dir, self.shares_dir_link)
3488 self.db = tritcask.Tritcask(self._tritcask_dir)
3489+ self.addCleanup(self.db.shutdown)
3490 self.old_get_md_version = MetadataUpgrader._get_md_version
3491 MetadataUpgrader._get_md_version = lambda _: None
3492 self.md_upgrader = MetadataUpgrader(self.vm_data_dir, self.share_md_dir,
3493@@ -3763,18 +3759,17 @@
3494 @defer.inlineCallbacks
3495 def tearDown(self):
3496         """Restore _get_md_version"""
3497- self.db.shutdown()
3498 MetadataUpgrader._get_md_version = self.old_get_md_version
3499 yield super(MetadataUpgraderTests, self).tearDown()
3500
3501 def test_guess_metadata_version_None(self):
3502 """Test _guess_metadata_version method for pre-version."""
3503 # fake a version None layout
3504- if os.path.exists(self.version_file):
3505+ if path_exists(self.version_file):
3506 os.unlink(self.version_file)
3507 for path in [self.share_md_dir, self.shared_md_dir,
3508 self.root_dir, self.shares_dir]:
3509- if os.path.exists(path):
3510+ if path_exists(path):
3511 self.rmtree(path)
3512 make_dir(os.path.join(self.root_dir, 'My Files'),recursive=True)
3513 shares_dir = os.path.join(self.root_dir, 'Shared With Me')
3514@@ -3787,7 +3782,7 @@
3515 def test_guess_metadata_version_1_or_2(self):
3516 """Test _guess_metadata_version method for version 1 or 2."""
3517 # fake a version 1 layout
3518- if os.path.exists(self.version_file):
3519+ if path_exists(self.version_file):
3520 os.unlink(self.version_file)
3521 self.rmtree(self.root_dir)
3522 make_dir(os.path.join(self.root_dir, 'My Files'), recursive=True)
3523@@ -3803,7 +3798,7 @@
3524 def test_guess_metadata_version_4(self):
3525 """Test _guess_metadata_version method for version 4."""
3526 # fake a version 4 layout
3527- if os.path.exists(self.version_file):
3528+ if path_exists(self.version_file):
3529 os.unlink(self.version_file)
3530 remove_link(self.shares_dir_link)
3531 make_link(self.shares_dir_link, self.shares_dir_link)
3532
3533=== modified file 'tests/syncdaemon/test_vm_helper.py'
3534--- tests/syncdaemon/test_vm_helper.py 2011-07-27 20:56:19 +0000
3535+++ tests/syncdaemon/test_vm_helper.py 2011-08-04 19:19:24 +0000
3536@@ -23,28 +23,19 @@
3537 import os
3538 import uuid
3539
3540-from twisted.internet import defer
3541-
3542-from contrib.testing.testcase import environ
3543 from tests.syncdaemon.test_vm import BaseVolumeManagerTests
3544
3545 from ubuntuone.syncdaemon.vm_helper import (
3546 create_shares_link,
3547 get_share_dir_name,
3548 get_udf_path,
3549- get_udf_path_name,
3550+ get_udf_suggested_path,
3551 )
3552
3553
3554 class VMHelperTest(BaseVolumeManagerTests):
3555 """Test the vm_helper methods."""
3556
3557- @defer.inlineCallbacks
3558- def setUp(self):
3559- """Setup for the tests."""
3560- yield super(VMHelperTest, self).setUp()
3561- self.home_dir = self.mktemp('ubuntuonehacker')
3562-
3563 def _test_get_udf_path(self, suggested_path):
3564 """Assert that the resulting udf path is correct."""
3565 assert isinstance(suggested_path, unicode)
3566@@ -67,25 +58,25 @@
3567 """A bytes sequence is returned."""
3568 self._test_get_udf_path(suggested_path=u'~/Documents/Nr 1: really?')
3569
3570- def test_get_udf_path_name(self):
3571- """Test for _get_udf_path_name."""
3572- home_dir = self.home_dir
3573- in_home = os.path.join(home_dir, 'foo')
3574- deep_in_home = os.path.join(home_dir, 'docs', 'foo', 'bar')
3575- outside_home = os.path.join(self.home_dir, os.path.pardir, 'bar', 'foo')
3576+ def test_get_udf_suggested_path(self):
3577+ """Test for get_udf_suggested_path."""
3578+ in_home = os.path.join(self.home_dir, 'foo')
3579+ self.assertEqual(u'~/foo', get_udf_suggested_path(in_home))
3580+
3581+ def test_get_udf_suggested_path_long_path(self):
3582+ """Test for get_udf_suggested_path."""
3583+ deep_in_home = os.path.join(self.home_dir, 'docs', 'foo', 'bar')
3584+ actual = get_udf_suggested_path(deep_in_home)
3585+ self.assertEqual(u'~/docs/foo/bar', actual)
3586+
3587+ def test_get_udf_suggested_path_value_error(self):
3588+ """Test for get_udf_suggested_path."""
3589+ outside_home = os.path.join(self.home_dir, os.path.pardir,
3590+ 'bar', 'foo')
3591 relative_home = os.path.join(os.path.pardir, os.path.pardir, 'foo')
3592- with environ('HOME', home_dir):
3593- path, name = get_udf_path_name(in_home)
3594- self.assertEquals('~', path)
3595- self.assertEquals('foo', name)
3596-
3597- path, name = get_udf_path_name(deep_in_home)
3598- self.assertEquals(os.path.join('~', 'docs', 'foo'), path)
3599- self.assertEquals('bar', name)
3600-
3601- self.assertRaises(ValueError, get_udf_path_name, outside_home)
3602- self.assertRaises(ValueError, get_udf_path_name, None)
3603- self.assertRaises(ValueError, get_udf_path_name, relative_home)
3604+ self.assertRaises(ValueError, get_udf_suggested_path, outside_home)
3605+ self.assertRaises(ValueError, get_udf_suggested_path, None)
3606+ self.assertRaises(ValueError, get_udf_suggested_path, relative_home)
3607
3608 def _test_create_shares_link(self):
3609 """Test for create_shares_link."""
3610@@ -101,26 +92,27 @@
3611
3612 class GetShareDirNameTests(BaseVolumeManagerTests):
3613
3614+ share_id = uuid.uuid4()
3615+ name = u'The little pretty share (♥)'
3616+
3617 def test_get_share_dir_name(self):
3618 """Test for get_share_dir_name."""
3619- share_id = uuid.uuid4()
3620- name = 'The little pretty share'
3621- other_name = 'Dorian Grey'
3622- share = self._create_share_volume(volume_id=share_id, name=name,
3623+ other_name = u'Dorian Grey'
3624+ share = self._create_share_volume(volume_id=self.share_id,
3625+ name=self.name,
3626 other_visible_name=other_name)
3627 result = get_share_dir_name(share)
3628
3629- expected = '%s (%s, %s)' % (name, other_name, share_id)
3630- self.assertEqual(result, expected)
3631+ expected = u'%s (%s, %s)' % (self.name, other_name, self.share_id)
3632+ self.assertEqual(result, expected.encode('utf8'))
3633
3634 def test_get_share_dir_name_visible_name_empty(self):
3635 """Test for get_share_dir_name."""
3636- share_id = uuid.uuid4()
3637- name = 'The little pretty share'
3638- other_name = ''
3639- share = self._create_share_volume(volume_id=share_id, name=name,
3640+ other_name = u''
3641+ share = self._create_share_volume(volume_id=self.share_id,
3642+ name=self.name,
3643 other_visible_name=other_name)
3644 result = get_share_dir_name(share)
3645
3646- expected = '%s (%s)' % (name, share_id)
3647- self.assertEqual(result, expected)
3648+ expected = u'%s (%s)' % (self.name, self.share_id)
3649+ self.assertEqual(result, expected.encode('utf8'))
3650
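The GetShareDirNameTests changes above now encode the expected value to utf-8 before comparing, which pins down the contract of get_share_dir_name: the directory name is formatted as unicode and returned as utf-8 bytes (the matching vm_helper.py change appears further down in this diff). A minimal sketch of that contract, using an illustrative helper name rather than the production code:

    # Sketch only: mirrors what the updated tests above assert about
    # get_share_dir_name; the real code lives in vm_helper.py (see below).
    import uuid

    def share_dir_name(name, share_id, other_visible_name=u''):
        """Format the directory name in unicode, return utf-8 bytes."""
        if other_visible_name:
            dir_name = u'%s (%s, %s)' % (name, other_visible_name, share_id)
        else:
            dir_name = u'%s (%s)' % (name, share_id)
        # unicode boundary: paths are handled as byte strings
        return dir_name.encode('utf8')

    share_id = uuid.uuid4()
    result = share_dir_name(u'pretty share', share_id, u'Dorian Grey')
    assert result == (u'pretty share (Dorian Grey, %s)' % share_id).encode('utf8')
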
3651=== modified file 'ubuntuone/platform/windows/filesystem_notifications.py'
3652--- ubuntuone/platform/windows/filesystem_notifications.py 2011-08-01 12:28:20 +0000
3653+++ ubuntuone/platform/windows/filesystem_notifications.py 2011-08-04 19:19:24 +0000
3654@@ -38,7 +38,7 @@
3655 OPEN_EXISTING)
3656 from win32file import (
3657 AllocateReadBuffer,
3658- CreateFile,
3659+ CreateFileW,
3660 GetOverlappedResult,
3661 ReadDirectoryChangesW,
3662 FILE_FLAG_OVERLAPPED,
3663@@ -227,9 +227,12 @@
3664
3665 def _watch(self):
3666 """Watch a path that is a directory."""
3667+ self.log.debug('Adding watch for %r (exists? %r is dir? %r).',
3668+ self._path,
3669+ os.path.exists(self._path), os.path.isdir(self._path))
3670         # we are going to be using the ReadDirectoryChangesW which requires
3671 # a directory handle and the mask to be used.
3672- handle = CreateFile(
3673+ handle = CreateFileW(
3674 self._path,
3675 FILE_LIST_DIRECTORY,
3676 FILE_SHARE_READ | FILE_SHARE_WRITE,
3677@@ -237,7 +240,6 @@
3678 OPEN_EXISTING,
3679 FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED,
3680 None)
3681- self.log.debug('Watching path %s.', self._path)
3682 while True:
3683 # important information to know about the parameters:
3684 # param 1: the handle to the dir
3685
3686=== modified file 'ubuntuone/platform/windows/os_helper.py'
3687--- ubuntuone/platform/windows/os_helper.py 2011-07-29 13:27:18 +0000
3688+++ ubuntuone/platform/windows/os_helper.py 2011-08-04 19:19:24 +0000
3689@@ -109,7 +109,6 @@
3690 * do not contain any invalid character (see WINDOWS_ILLEGAL_CHARS_MAP)
3691
3692 """
3693- logger.debug('assert_windows_path: %r', path)
3694 assert isinstance(path, unicode), 'Path %r should be unicode.' % path
3695 assert path.startswith(LONG_PATH_PREFIX), \
3696 'Path %r should start with the LONG_PATH_PREFIX.' % path
3697@@ -131,7 +130,6 @@
3698 * do not contain the LONG_PATH_PREFIX
3699
3700 """
3701- logger.debug('assert_syncdaemon_path: %r', path)
3702 assert isinstance(path, str), 'Path %r should be a bytes sequence.' % path
3703 try:
3704 path = path.decode('utf8')
3705@@ -448,6 +446,12 @@
3706 @windowspath()
3707 def set_dir_readonly(path):
3708 """Change path permissions to readonly in a dir."""
3709+
3710+    # XXX: THIS IS NOT WORKING PROPERLY, the share dir created by tests
3711+    # cannot be removed after using set_dir_readwrite, so either
3712+    # set_dir_readonly or set_dir_readwrite is buggy. See bug #820350.
3713+ return
3714+
3715 groups = [
3716 (ADMINISTRATORS_SID, FILE_GENERIC_READ | FILE_GENERIC_WRITE),
3717 (USER_SID, FILE_GENERIC_READ),
3718
3719=== modified file 'ubuntuone/syncdaemon/vm_helper.py'
3720--- ubuntuone/syncdaemon/vm_helper.py 2011-07-27 20:10:33 +0000
3721+++ ubuntuone/syncdaemon/vm_helper.py 2011-08-04 19:19:24 +0000
3722@@ -54,9 +54,9 @@
3723 visible_name = share.from_visible_name
3724
3725 if visible_name:
3726- dir_name = '%s (%s, %s)' % (share_name, visible_name, share_id)
3727+ dir_name = u'%s (%s, %s)' % (share_name, visible_name, share_id)
3728 else:
3729- dir_name = '%s (%s)' % (share_name, share_id)
3730+ dir_name = u'%s (%s)' % (share_name, share_id)
3731
3732 # Unicode boundary! the name is Unicode in protocol and server,
3733 # but here we use bytes for paths
3734@@ -77,33 +77,35 @@
3735 return False
3736
3737
3738-def get_udf_path_name(path):
3739- """Return (path, name) from path.
3740+def get_udf_suggested_path(path):
3741+    """Return the suggested_path for 'path'.
3742
3743- path must be a path inside the user home direactory, if it's not
3744+    'path' must be a path inside the user home directory; if it's not,
3745 a ValueError is raised.
3746 """
3747 if not path:
3748 raise ValueError("no path specified")
3749
3750- user_home = os.path.expanduser('~')
3751+ assert isinstance(path, str)
3752+
3753+ path = path.decode('utf8')
3754+
3755+ user_home = os.path.expanduser(u'~')
3756 start_list = os.path.abspath(user_home).split(os.path.sep)
3757 path_list = os.path.abspath(path).split(os.path.sep)
3758
3759 # Work out how much of the filepath is shared by user_home and path.
3760 common_prefix = os.path.commonprefix([start_list, path_list])
3761- if os.path.sep + os.path.join(*common_prefix) != user_home:
3762+ if os.path.sep.join(common_prefix) != user_home:
3763 raise ValueError("path isn't inside user home: %r" % path)
3764
3765- i = len(common_prefix)
3766- rel_list = [os.path.pardir] * (len(start_list) - i) + path_list[i:]
3767- relpath = os.path.join(*rel_list)
3768- head, tail = os.path.split(relpath)
3769- if head == '':
3770- head = '~'
3771- else:
3772- head = os.path.join('~', head)
3773- return head, tail
3774+    # suggested_path is always unicode, because the suggested path is
3775+    # server-side metadata, and we will always use the unix path separator '/'
3776+
3777+ suggested_path = path.replace(user_home, u'~')
3778+ suggested_path = suggested_path.replace(os.path.sep, u'/')
3779+ assert isinstance(suggested_path, unicode)
3780+ return suggested_path
3781
3782
3783 def get_udf_path(suggested_path):
3784@@ -113,6 +115,7 @@
3785 to and received from the server.
3786
3787 """
3788+ assert isinstance(suggested_path, unicode)
3789 # Unicode boundary! the suggested_path is Unicode in protocol and server,
3790 # but here we use bytes for paths
3791 path = suggested_path.replace(u'/', os.path.sep)
3792
3793=== modified file 'ubuntuone/syncdaemon/volume_manager.py'
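Taken together, get_udf_suggested_path and get_udf_path define the unicode boundary for UDF paths: suggested paths are unicode, relative to '~' and always '/'-separated (the server-side form), while local paths stay utf-8 byte strings. A small usage sketch, assuming a Linux home such as /home/user (get_udf_path's full body is not shown in this hunk, but the tests above state it returns a bytes sequence):

    import os

    from ubuntuone.syncdaemon.vm_helper import (
        get_udf_path,
        get_udf_suggested_path,
    )

    # a byte path inside the user's home
    local_path = os.path.join(os.path.expanduser('~'), 'docs', 'foo')

    suggested = get_udf_suggested_path(local_path)
    assert suggested == u'~/docs/foo'       # unicode, '/'-separated

    round_trip = get_udf_path(suggested)
    assert isinstance(round_trip, str)      # back to a utf-8 byte path

    # paths outside the home are rejected
    try:
        get_udf_suggested_path('/etc/passwd')
    except ValueError:
        pass
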
3794--- ubuntuone/syncdaemon/volume_manager.py 2011-07-28 22:21:07 +0000
3795+++ ubuntuone/syncdaemon/volume_manager.py 2011-08-04 19:19:24 +0000
3796@@ -46,7 +46,7 @@
3797 create_shares_link,
3798 get_share_dir_name,
3799 get_udf_path,
3800- get_udf_path_name,
3801+ get_udf_suggested_path,
3802 )
3803 from ubuntuone.platform import (
3804 allow_writes,
3805@@ -954,7 +954,7 @@
3806 # add the watch after the mkdir
3807 if share.can_write():
3808 self.log.debug('adding inotify watch to: %s', share.path)
3809- yield self.m.event_q.add_watch(share.path)
3810+ yield self._add_watch(share.path)
3811 # if it's a ro share, change the perms
3812 if not share.can_write():
3813 set_dir_readonly(share.path)
3814@@ -989,11 +989,15 @@
3815 for a_path, _ in self.m.fs.get_paths_starting_with(path):
3816 self.m.fs.delete_metadata(a_path)
3817
3818+ def _add_watch(self, path):
3819+ """Add a inotify watch to path."""
3820+        """Add an inotify watch to path."""
3821+
3822 def _remove_watch(self, path):
3823 """Remove the inotify watch from path."""
3824 try:
3825 self.m.event_q.rm_watch(path)
3826- except (ValueError, RuntimeError, TypeError), e:
3827+ except (ValueError, RuntimeError, TypeError, KeyError), e:
3828 # pyinotify has an ugly error management, if we can call
3829 # it that, :(. We handle this here because it's possible
3830 # and correct that the path is not there anymore
3831@@ -1008,7 +1012,7 @@
3832
3833 def create_share(self, path, username, name, access_level):
3834 """ create a share for the specified path, username, name """
3835- self.log.debug('create share(%r, %s, %s, %s)',
3836+ self.log.debug('create share(%r, %r, %r, %r)',
3837 path, username, name, access_level)
3838 mdobj = self.m.fs.get_by_path(path)
3839 mdid = mdobj.mdid
3840@@ -1024,6 +1028,7 @@
3841 other_username=username, other_visible_name=None,
3842 node_id=node_id)
3843 self.marker_share_map[marker] = share
3844+ # XXX: unicode boundary! username, name should be unicode
3845 self.m.action_q.create_share(node_id, username, name,
3846 access_level, marker, abspath)
3847
3848@@ -1169,7 +1174,7 @@
3849 return
3850
3851 try:
3852- udf_path, udf_name = get_udf_path_name(path)
3853+ suggested_path = get_udf_suggested_path(path)
3854 except ValueError, e:
3855 self.m.event_q.push('VM_UDF_CREATE_ERROR', path=path,
3856 error="INVALID_PATH: %s" % (e,))
3857@@ -1179,18 +1184,16 @@
3858 if marker in self.marker_udf_map:
3859 # ignore this request
3860 self.log.warning('Duplicated create_udf request for '
3861- 'path (ingoring it!): %r', udf_path)
3862+                         'path (ignoring it!): %r', path)
3863 return
3864- udf = UDF(None, None,
3865- # suggested_path is always unicode, because the
3866- # suggested path is a server side metadata we will
3867- # always use the unix path separator /
3868- (udf_path + '/' + udf_name).decode('utf-8'),
3869+ udf = UDF(volume_id=None, node_id=None,
3870+ suggested_path=suggested_path,
3871 # always subscribed since it's a local request
3872- path, subscribed=True)
3873+ path=path, subscribed=True)
3874 self.marker_udf_map[marker] = udf
3875- self.m.action_q.create_udf(udf_path.decode('utf-8'),
3876- udf_name.decode('utf-8'), marker)
3877+ # XXX: unicode boundary! parameters should be unicode
3878+ server_path, udf_name = suggested_path.rsplit(u'/', 1)
3879+ self.m.action_q.create_udf(server_path, udf_name, marker)
3880 except Exception, e:
3881 self.m.event_q.push('VM_UDF_CREATE_ERROR', path=path,
3882 error="UNKNOWN_ERROR: %s" % (e,))
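The new _add_watch hook (paired with the existing _remove_watch) is what lets the test changes earlier in this diff assert against self.watches instead of reaching into the platform monitor's _general_watchs attribute. The setUp that builds that set lives in the testcase changes not excerpted here; a minimal sketch of the pattern, under the assumption of a twisted.trial TestCase that already has self.vm:

    from twisted.internet import defer

    class WatchTrackingMixin(object):
        """Record watches added/removed through the VolumeManager hooks."""

        def track_watches(self):
            """Patch the hooks so tests can assert on self.watches."""
            self.watches = set()

            def add_watch(path):
                # behaves like event_q.add_watch: returns a fired deferred
                self.watches.add(path)
                return defer.succeed(True)

            def remove_watch(path):
                # tolerate unknown paths, as the real hook does
                self.watches.discard(path)

            self.patch(self.vm, '_add_watch', add_watch)
            self.patch(self.vm, '_remove_watch', remove_watch)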
