Merge lp:~facundo/ubuntuone-client/plt-fixpath into lp:ubuntuone-client

Proposed by Facundo Batista on 2012-09-14
Status: Merged
Approved by: Facundo Batista on 2012-09-25
Approved revision: 1311
Merged at revision: 1321
Proposed branch: lp:~facundo/ubuntuone-client/plt-fixpath
Merge into: lp:ubuntuone-client
Diff against target: 512 lines (+385/-12)
5 files modified
contrib/testing/testcase.py (+1/-0)
tests/syncdaemon/test_pathlockingtree.py (+273/-0)
tests/syncdaemon/test_sync.py (+6/-5)
ubuntuone/syncdaemon/action_queue.py (+100/-7)
ubuntuone/syncdaemon/sync.py (+5/-0)
To merge this branch: bzr merge lp:~facundo/ubuntuone-client/plt-fixpath
Reviewer                Review Type   Date Requested   Status
Guillermo Gonzalez                                     Approve on 2012-09-24
Manuel de la Peña       (community)   2012-09-14       Approve on 2012-09-17
Review via email: mp+124518@code.launchpad.net

Commit Message

Fix the PathLockTree elements on a move operation.

Description of the Change

Fix the PathLockTree elements on a move operation.

Tests included.

Manuel de la Peña (mandel) wrote :

Here is some background for the other reviewers:

<facundobatista> so... the situation is the following, the PathLockTree locks operations on other operations regarding the path
<facundobatista> if you touch a file 'foo', a MakeFile and Upload are queued, the Makefile will be executed, taking the lock on 'foo', the Upload will be locked because of the same path
<facundobatista> when Makefile finishes, it releases the lock, the Upload jumps in, all happy
<facundobatista> so, if you then do something like "ls > foo; mv foo bar"
<facundobatista> syncdaemon will get the FILE_CREATE and CLOSE_WRITE and will queue the Makefile and actually send HQ to hash the file
<facundobatista> the Makefile will be executed, taking the lock on 'foo'
<facundobatista> of course, there's a move really fast after the CREATE and CLOSE
<facundobatista> internal stuff is adjusted because of that move
<facundobatista> milliseconds later, the HQ jumps in, and tries to hash 'foo', it gets an error because the file moved! so after that error the correct path is taken (MOVE adjusted internal stuff!), and finally HQ hashes 'bar'
<facundobatista> when HQ finishes hashing bar, it queues an Upload
<facundobatista> the Upload tries to get the pathlock on 'bar' and it succeeds!!!
<facundobatista> because the Makefile was holding a lock on 'foo'
<facundobatista> so the Upload jumps in, but out of order
<facundobatista> so
<facundobatista> the fix does the following
<facundobatista> when the MOVE is processed by syncdaemon
<facundobatista> it adjusts the path of PathLockTree, changing 'foo' to 'bar', there
<facundobatista> so, the Upload will be locked
<facundobatista> and that's all
<facundobatista> it made me change the internals of PathLockTree a little, but that's another, more complex story
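
For illustration only, here is a minimal sketch of the scenario above against the PathLockingTree API this branch extends (acquire() yielding a release callable, plus the new fix_path()); the paths, the move_scenario name and the synchronous driver at the bottom are invented for the example, not taken from the branch:

from twisted.internet import defer

from ubuntuone.syncdaemon.action_queue import PathLockingTree


@defer.inlineCallbacks
def move_scenario():
    """Sketch: a pending lock follows the file across a rename."""
    plt = PathLockingTree()

    # the MakeFile command holds the lock on the original path 'foo'
    release_makefile = yield plt.acquire('home', 'user', 'foo')

    # the user renames foo -> bar; when syncdaemon processes the MOVE it
    # relocates the locked subtree, so the held lock now lives under the
    # new path
    plt.fix_path(('home', 'user', 'foo'), ('home', 'user', 'bar'))

    # the Upload queued after hashing 'bar' now has to wait for the
    # MakeFile, instead of acquiring 'bar' at once and running out of order
    d = plt.acquire('home', 'user', 'bar')

    release_makefile()        # MakeFile finishes and releases its lock
    release_upload = yield d  # only now does the Upload get the lock
    release_upload()


if __name__ == '__main__':
    move_scenario()  # everything fires synchronously, no reactor needed

Without fix_path() the second acquire() would succeed immediately, because the MakeFile lock would still be registered under 'foo'; that is exactly the out-of-order Upload described above.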

Manuel de la Peña (mandel) wrote :

Code looks good. Here is a small comment regarding the tests, but I don't think it should block the landing of the branch. I have noticed the following code repeated several times:

134 + # release
135 + releases.pop(0)()
136 + release = yield d
137 + release()
138 + self.assertEqual(len(self.plt.root['children_nodes']), 0)

It might be a good idea to factor that code out into a helper method, maybe assert_release, and use it in all the tests in the diff that need it. As I said, this should not block the branch.
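
For reference, a possible shape for such a helper; the assert_release name, its pending argument and its placement as a method of PathFixingTests are assumptions for the sketch, not code from the branch (defer is already imported by the test module):

    @defer.inlineCallbacks
    def assert_release(self, releases, d, pending=1):
        """Release the held locks, then the awaited one, and check the tree is empty."""
        for _ in range(pending):
            releases.pop(0)()
        release = yield d
        release()
        self.assertEqual(len(self.plt.root['children_nodes']), 0)

A test that holds two locks before acquiring the third, such as test_complex_leaf, would then end with yield self.assert_release(releases, d, pending=2).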

review: Approve
Ubuntu One Auto Pilot (otto-pilot) wrote :

The attempt to merge lp:~facundo/ubuntuone-client/plt-fixpath into lp:ubuntuone-client failed. Below is the output from the failed tests.

/usr/bin/gnome-autogen.sh
checking for autoconf >= 2.53...
  testing autoconf2.50... not found.
  testing autoconf... found 2.69
checking for automake >= 1.10...
  testing automake-1.12... not found.
  testing automake-1.11... found 1.11.6
checking for libtool >= 1.5...
  testing libtoolize... found 2.4.2
checking for intltool >= 0.30...
  testing intltoolize... found 0.50.2
checking for pkg-config >= 0.14.0...
  testing pkg-config... found 0.26
checking for gtk-doc >= 1.0...
  testing gtkdocize... found 1.18
Checking for required M4 macros...
Checking for forbidden M4 macros...
Processing ./configure.ac
Running libtoolize...
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
Running intltoolize...
Running gtkdocize...
Running aclocal-1.11...
Running autoconf...
Running autoheader...
Running automake-1.11...
Running ./configure --enable-gtk-doc --enable-debug ...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
checking how to create a ustar tar archive... gnutar
checking whether make supports nested variables... yes
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for library containing strerror... none required
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking dependency style of gcc... (cached) gcc3
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking how to print strings... printf
checking for a sed that does not truncate output... /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking whether the shell unde...

1311. By Facundo Batista on 2012-09-25

Merged trunk in.

Preview Diff

1=== modified file 'contrib/testing/testcase.py'
2--- contrib/testing/testcase.py 2012-08-16 17:39:19 +0000
3+++ contrib/testing/testcase.py 2012-09-25 13:53:21 +0000
4@@ -146,6 +146,7 @@
5 self.eq = self.event_queue = eq
6 self.uuid_map = action_queue.DeferredMap()
7 self.queue = action_queue.RequestQueue(self)
8+ self.pathlock = action_queue.PathLockingTree()
9
10 # throttling attributes
11 self.readLimit = None
12
13=== modified file 'tests/syncdaemon/test_pathlockingtree.py'
14--- tests/syncdaemon/test_pathlockingtree.py 2012-04-09 20:07:05 +0000
15+++ tests/syncdaemon/test_pathlockingtree.py 2012-09-25 13:53:21 +0000
16@@ -682,3 +682,276 @@
17 self.assertTrue(self.handler.check_debug("releasing",
18 "'a', 'b', 'c', 'd', 'e'",
19 "remaining: 4"))
20+
21+
22+class PathFixingTests(TwistedTestCase):
23+ """Test the path fixing."""
24+
25+ @defer.inlineCallbacks
26+ def setUp(self):
27+ """Set up."""
28+ yield super(PathFixingTests, self).setUp()
29+ self.plt = PathLockingTree()
30+
31+ self.handler = MementoHandler()
32+ self.plt.logger.setLevel(logging.DEBUG)
33+ self.plt.logger.propagate = False
34+ self.plt.logger.addHandler(self.handler)
35+ self.addCleanup(self.plt.logger.removeHandler, self.handler)
36+
37+ def test_clean_pathlocktree(self):
38+ """A fix over nothing stored."""
39+ self.plt.fix_path(tuple('abc'), tuple('abX'))
40+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
41+
42+ @defer.inlineCallbacks
43+ def test_simple_leaf(self):
44+ """Simple change for a leaf."""
45+ from_path = tuple('abc')
46+ to_path = tuple('abX')
47+ release = yield self.plt.acquire(*from_path)
48+
49+ # get leaf deferred
50+ node_a = self.plt.root['children_nodes']['a']
51+ node_b = node_a['children_nodes']['b']
52+ node_c = node_b['children_nodes']['c']
53+ original_deferreds = node_c['node_deferreds']
54+
55+ # fix path
56+ self.plt.fix_path(from_path, to_path)
57+
58+ # get deferred from new path, assert is the same
59+ node_a = self.plt.root['children_nodes']['a']
60+ node_b = node_a['children_nodes']['b']
61+ node_X = node_b['children_nodes']['X']
62+ self.assertEqual(node_X['node_deferreds'], original_deferreds)
63+
64+ # release, it should be clean now
65+ release()
66+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
67+
68+ @defer.inlineCallbacks
69+ def test_complex_leaf(self):
70+ """Change for a leaf with two items."""
71+ from_path = tuple('abc')
72+ to_path = tuple('abX')
73+
74+ releases = []
75+ d = self.plt.acquire(*from_path)
76+ d.addCallback(releases.append)
77+ d = self.plt.acquire(*from_path)
78+ d.addCallback(releases.append)
79+
80+ # get leaf deferred
81+ node_a = self.plt.root['children_nodes']['a']
82+ node_b = node_a['children_nodes']['b']
83+ node_c = node_b['children_nodes']['c']
84+ original_deferreds = node_c['node_deferreds']
85+
86+ # rename
87+ self.plt.fix_path(from_path, to_path)
88+
89+ # get deferred from new path, assert is the same
90+ node_a = self.plt.root['children_nodes']['a']
91+ node_b = node_a['children_nodes']['b']
92+ node_X = node_b['children_nodes']['X']
93+ self.assertEqual(node_X['node_deferreds'], original_deferreds)
94+
95+ # acquire with other one, assert that it's not released
96+ d = self.plt.acquire(*to_path)
97+ d.addCallback(lambda f: self.assertFalse(releases) or f)
98+
99+ # release
100+ releases.pop(0)()
101+ releases.pop(0)()
102+ release = yield d
103+ release()
104+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
105+
106+ @defer.inlineCallbacks
107+ def test_simple_not_leaf(self):
108+ """Simple change for not a leaf."""
109+ from_path = tuple('abc')
110+ to_path = tuple('aXc')
111+ releases = []
112+ d = self.plt.acquire(*from_path)
113+ d.addCallback(releases.append)
114+
115+ # get leaf deferred
116+ node_a = self.plt.root['children_nodes']['a']
117+ node_b = node_a['children_nodes']['b']
118+ node_c = node_b['children_nodes']['c']
119+ original_deferreds = node_c['node_deferreds']
120+
121+ # rename
122+ self.plt.fix_path(from_path, to_path)
123+
124+ # get deferred from new path, assert is the same
125+ node_a = self.plt.root['children_nodes']['a']
126+ node_X = node_a['children_nodes']['X']
127+ node_c = node_X['children_nodes']['c']
128+ self.assertEqual(node_c['node_deferreds'], original_deferreds)
129+
130+ # acquire with other one, assert that it's not released
131+ d = self.plt.acquire(*to_path)
132+ d.addCallback(lambda f: self.assertFalse(releases) or f)
133+
134+ # release
135+ releases.pop(0)()
136+ release = yield d
137+ release()
138+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
139+
140+ @defer.inlineCallbacks
141+ def test_same_tree(self):
142+ """Move a leaf one level up."""
143+ from_path = tuple('abcd')
144+ to_path = tuple('abd')
145+ releases = []
146+ d = self.plt.acquire(*from_path)
147+ d.addCallback(releases.append)
148+
149+ # get leaf deferred
150+ node_a = self.plt.root['children_nodes']['a']
151+ node_b = node_a['children_nodes']['b']
152+ node_c = node_b['children_nodes']['c']
153+ node_d = node_c['children_nodes']['d']
154+ original_deferreds = node_d['node_deferreds']
155+
156+ # rename
157+ self.plt.fix_path(from_path, to_path)
158+
159+ # get deferred from new path, assert is the same
160+ node_a = self.plt.root['children_nodes']['a']
161+ node_b = node_a['children_nodes']['b']
162+ node_d = node_b['children_nodes']['d']
163+ self.assertEqual(node_d['node_deferreds'], original_deferreds)
164+
165+ # check also that the 'c' node is gone
166+ self.assertNotIn('c', node_b['children_nodes'])
167+
168+ # acquire with other one, assert that it's not released
169+ d = self.plt.acquire(*to_path)
170+ d.addCallback(lambda f: self.assertFalse(releases) or f)
171+
172+ # release
173+ releases.pop(0)()
174+ release = yield d
175+ release()
176+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
177+
178+ @defer.inlineCallbacks
179+ def test_parents_move_child(self):
180+ """Complex change involving parents, renaming child."""
181+ releases = []
182+ d = self.plt.acquire('a', 'b')
183+ d.addCallback(releases.append)
184+ d = self.plt.acquire('a', 'b', 'c', on_parent=True)
185+ d.addCallback(releases.append)
186+
187+ # rename
188+ self.plt.fix_path(('a', 'b', 'c'), ('a', 'b', 'X'))
189+
190+ # acquire with other one, assert that it's not released
191+ d = self.plt.acquire('a', 'b', 'X')
192+ d.addCallback(lambda f: self.assertFalse(releases) or f)
193+
194+ # release
195+ releases.pop(0)()
196+ releases.pop(0)()
197+ release = yield d
198+ release()
199+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
200+
201+ @defer.inlineCallbacks
202+ def test_parents_move_parent(self):
203+ """Complex change involving parents, renaming parent."""
204+ releases = []
205+ d = self.plt.acquire('a', 'b')
206+ d.addCallback(releases.append)
207+ d = self.plt.acquire('a', 'b', 'c', on_parent=True)
208+ d.addCallback(releases.append)
209+
210+ # rename
211+ self.plt.fix_path(('a', 'b'), ('a', 'X'))
212+
213+ # acquire with other one, assert that there's only one
214+ # left to release ('aXC', as releasing 'aX' will trigger
215+ # this one)
216+ d = self.plt.acquire('a', 'X', 'd', on_parent=True)
217+ d.addCallback(lambda f: self.assertEqual(len(releases), 1) and f)
218+
219+ # release first the parent, then the child
220+ releases.pop(0)()
221+ releases.pop(0)()
222+ release = yield d
223+ release()
224+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
225+
226+ @defer.inlineCallbacks
227+ def test_very_different_children(self):
228+ """Aquire changing the children a lot."""
229+ releases = []
230+ d = self.plt.acquire('a', 'b', 'c', 'd')
231+ d.addCallback(releases.append)
232+ d = self.plt.acquire('a', 'b', 'c', on_children=True)
233+ d.addCallback(releases.append)
234+
235+ # rename
236+ self.plt.fix_path(('a', 'b', 'c', 'd'), ('a', 'b', 'X', 'Y'))
237+
238+ # acquire with other one, assert that there's only one
239+ # left to release ('aXC', as releasing 'aX' will trigger
240+ # this one)
241+ d = self.plt.acquire('a', 'b', 'X', on_children=True)
242+ d.addCallback(lambda f: self.assertEqual(len(releases), 1) and f)
243+
244+ # release first the parent, then the child
245+ releases.pop(0)()
246+ releases.pop(0)()
247+ release = yield d
248+ release()
249+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
250+
251+ def test_double_simple(self):
252+ """Simple but duplicate acquiring."""
253+ releases = []
254+ d = self.plt.acquire('GetDelta', '')
255+ d.addCallback(releases.append)
256+ d = self.plt.acquire('GetDelta', '')
257+ d.addCallback(releases.append)
258+ releases.pop(0)()
259+ releases.pop(0)()
260+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
261+
262+ def test_moving_over(self):
263+ """Move over something that still exists."""
264+ releases = []
265+ d = self.plt.acquire('a', 'b', 'c')
266+ d.addCallback(releases.append)
267+ d = self.plt.acquire('a', 'b', 'd')
268+ d.addCallback(releases.append)
269+
270+ # rename
271+ self.plt.fix_path(('a', 'b', 'd'), ('a', 'b', 'c'))
272+
273+ # release both
274+ releases.pop(0)()
275+ releases.pop(0)()
276+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
277+
278+ def test_irl_complicated_example(self):
279+ """Just a complicated move I found IRL."""
280+ releases = []
281+ d = self.plt.acquire('temp', 'drizzle', '.bzr', 'checkout', 'limbo',
282+ 'new-19', 'handshake.cc')
283+ d.addCallback(releases.append)
284+
285+ # rename
286+ fix_from = ('temp', 'drizzle', '.bzr', 'checkout', 'limbo', 'new-19')
287+ fix_to = ('temp', 'drizzle', 'libdrizzle')
288+ self.plt.fix_path(fix_from, fix_to)
289+
290+ # release it
291+ releases.pop(0)()
292+ self.assertEqual(len(self.plt.root['children_nodes']), 0)
293
294=== modified file 'tests/syncdaemon/test_sync.py'
295--- tests/syncdaemon/test_sync.py 2012-06-13 21:31:24 +0000
296+++ tests/syncdaemon/test_sync.py 2012-09-25 13:53:21 +0000
297@@ -1,7 +1,5 @@
298 # -*- coding: utf-8 -*-
299 #
300-# Author: Guillermo Gonzalez <guillermo.gonzalez@canonical.com>
301-#
302 # Copyright 2009-2012 Canonical Ltd.
303 #
304 # This program is free software: you can redistribute it and/or modify it
305@@ -1322,9 +1320,10 @@
306 # patch HQ to don't hash the file
307 self.main.hash_q.insert = lambda *a: None
308
309- # record the call
310+ # record the calls
311 called = []
312 self.main.fs.add_to_move_limbo = lambda *a: called.append(a)
313+ self.main.action_q.pathlock.fix_path = lambda *a: called.append(a)
314
315 # create context and call
316 key = FSKey(self.main.fs, path=somepath1)
317@@ -1332,8 +1331,10 @@
318 key=key, logger=None)
319 parent_id = FSKey(self.main.fs, path=self.root)['node_id']
320 ssmr.client_moved('some event', {}, somepath1, somepath2)
321- self.assertEqual(called, [('', 'node_id', parent_id, parent_id,
322- 'bar', somepath1, somepath2)])
323+ self.assertEqual(called[0], (tuple(somepath1.split(os.path.sep)),
324+ tuple(somepath2.split(os.path.sep))))
325+ self.assertEqual(called[1], ('', 'node_id', parent_id, parent_id,
326+ 'bar', somepath1, somepath2))
327
328 def test_clean_move_limbo(self):
329 """Clean the move limbo with what was called."""
330
331=== modified file 'ubuntuone/syncdaemon/action_queue.py'
332--- ubuntuone/syncdaemon/action_queue.py 2012-09-14 22:46:21 +0000
333+++ ubuntuone/syncdaemon/action_queue.py 2012-09-25 13:53:21 +0000
334@@ -119,6 +119,9 @@
335 self.logger = logging.getLogger("ubuntuone.SyncDaemon.PathLockingTree")
336 self.root = dict(children_nodes={})
337 self.count = 0
338+ self.stored_by_id = {}
339+ self.stored_by_elements = {}
340+ self.id_stored = 0
341
342 def acquire(self, *elements, **modifiers):
343 """Acquire the lock for the elements.
344@@ -175,7 +178,12 @@
345 " wait for: %d", elements, on_parent,
346 on_children, len(wait_for))
347
348- relfunc = partial(self._release, deferred, elements, logger)
349+ # store info for later releasing
350+ self.id_stored += 1
351+ self.stored_by_id[self.id_stored] = (deferred, elements, logger)
352+ self.stored_by_elements.setdefault(elements, []).append(self.id_stored)
353+ relfunc = partial(self._release, self.id_stored)
354+
355 if not wait_for:
356 return defer.succeed(relfunc)
357 if len(wait_for) == 1:
358@@ -188,8 +196,15 @@
359 deferred_list.addBoth(lambda _: relfunc)
360 return deferred_list
361
362- def _release(self, deferred, elements, logger):
363+ def _release(self, stored_id):
364 """Release the callback and clean the tree."""
365+ # get the elements from the stored id
366+ deferred, elements, logger = self.stored_by_id.pop(stored_id)
367+ stored_ids = self.stored_by_elements[elements]
368+ stored_ids.remove(stored_id)
369+ if not stored_ids:
370+ del self.stored_by_elements[elements]
371+
372 # clean the tree first!
373 # keep here every node and its child element, to backtrack
374 branch = []
375@@ -200,13 +215,13 @@
376 for element in elements[:-1]:
377 branch.append((desc, element))
378 node = desc['children_nodes'][element]
379- node['children_deferreds'].remove(deferred)
380+ node['children_deferreds'].discard(deferred)
381 desc = node
382
383 # for the final node, remove it from node_deferreds
384 branch.append((desc, elements[-1]))
385 node = desc['children_nodes'][elements[-1]]
386- node['node_deferreds'].remove(deferred)
387+ node['node_deferreds'].discard(deferred)
388
389 # backtrack
390 while branch:
391@@ -223,6 +238,81 @@
392 self.count)
393 deferred.callback(True)
394
395+ def fix_path(self, from_elements, to_elements):
396+ """Fix the internal path."""
397+ self.logger.debug("Fixing path from %r to %r",
398+ from_elements, to_elements)
399+
400+ # fix the stored ids and elements
401+ something_found = False
402+ for key in self.stored_by_elements.keys():
403+ if key == from_elements:
404+ new_key = to_elements
405+ elif key[:len(from_elements)] == from_elements:
406+ new_key = to_elements + key[len(from_elements):]
407+ else:
408+ continue
409+
410+ # fix the id/elements
411+ something_found = True
412+ stored_ids = self.stored_by_elements.pop(key)
413+ self.stored_by_elements.setdefault(new_key, []).extend(stored_ids)
414+ for stored_id in stored_ids:
415+ deferred, old_elements, logger = self.stored_by_id[stored_id]
416+ self.stored_by_id[stored_id] = deferred, new_key, logger
417+
418+ # nothing to fix, really
419+ if not something_found:
420+ return
421+
422+ # get the node to be moved around
423+ branch = []
424+ desc = self.root
425+ for element in from_elements[:-1]:
426+ branch.append((desc, element))
427+ node = desc['children_nodes'][element]
428+ desc = node
429+ node_to_move = desc['children_nodes'].pop(from_elements[-1])
430+
431+ # backtrack to clean the branch
432+ node = desc
433+ while branch:
434+ if node['node_deferreds'] or node['children_nodes']:
435+ # node is not empty, done cleaning the branch!
436+ break
437+
438+ # node is empty! remove it
439+ node, element = branch.pop()
440+ del node['children_nodes'][element]
441+
442+ # keep here every node and its child element, to backtrack to add
443+ # children deferreds in the branch
444+ branch = []
445+
446+ # get the final parent of the new moved node
447+ node = self.root
448+ for pos, element in enumerate(to_elements[:-1]):
449+ # get previous child or create a new one just empty, not using
450+ # setdefault to avoid creating structures if not needed
451+ children_nodes = node['children_nodes']
452+ if element in children_nodes:
453+ node = children_nodes[element]
454+ else:
455+ node = dict(node_deferreds=set(),
456+ children_nodes={},
457+ children_deferreds={})
458+ children_nodes[element] = node
459+ branch.append(node)
460+
461+ node['children_nodes'][to_elements[-1]] = node_to_move
462+
463+ # fix the children deferreds after the movement
464+ all_children_deferreds = (node_to_move['node_deferreds'] |
465+ node_to_move['children_deferreds'])
466+ for node in branch[::-1]:
467+ node['children_deferreds'] = set(all_children_deferreds)
468+ all_children_deferreds.update(node['node_deferreds'])
469+
470
471 class NamedTemporaryFile(object):
472 """Like tempfile.NamedTemporaryFile, but working in 2.5.
473@@ -1099,7 +1189,8 @@
474 if len(self.queue) >= self.memory_pool_limit:
475 # already in the limit, can't go further as we don't have
476 # more room in memory, store it in the offloaded queue
477- logger.debug('offload push: %s %s %s', command_class.__name__, args, kwargs)
478+ logger.debug('offload push: %s %s %s',
479+ command_class.__name__, args, kwargs)
480 self.disk_queue.push((command_class.__name__, args, kwargs))
481 return
482
483@@ -1107,9 +1198,11 @@
484 yield self._really_execute(command_class, *args, **kwargs)
485
486 # command just finished... check to queue more offloaded ones
487- while len(self.queue) < self.memory_pool_limit and len(self.disk_queue) > 0:
488+ while (len(self.queue) < self.memory_pool_limit and
489+ len(self.disk_queue) > 0):
490 command_class_name, args, kwargs = self.disk_queue.pop()
491- logger.debug('offload pop: %s %s %s', command_class_name, args, kwargs)
492+ logger.debug('offload pop: %s %s %s',
493+ command_class_name, args, kwargs)
494 command_class = self.commands[command_class_name]
495 yield self._really_execute(command_class, *args, **kwargs)
496
497
498=== modified file 'ubuntuone/syncdaemon/sync.py'
499--- ubuntuone/syncdaemon/sync.py 2012-08-28 14:34:26 +0000
500+++ ubuntuone/syncdaemon/sync.py 2012-09-25 13:53:21 +0000
501@@ -499,6 +499,11 @@
502 share_id = self.key['share_id']
503 node_id = self.key['node_id']
504
505+ # fix first the PathLockTree, so the move hooks on its final
506+ # path, if any
507+ self.m.action_q.pathlock.fix_path(tuple(path_from.split(os.path.sep)),
508+ tuple(path_to.split(os.path.sep)))
509+
510 self.m.action_q.move(share_id, node_id, old_parent_id,
511 new_parent_id, new_name, path_from, path_to)
512 self.m.fs.add_to_move_limbo(share_id, node_id, old_parent_id,
