Merge lp:~facundo/ubuntuone-client/fix-path-retrieval into lp:ubuntuone-client

Proposed by Facundo Batista
Status: Merged
Approved by: Facundo Batista
Approved revision: 1261
Merged at revision: 1330
Proposed branch: lp:~facundo/ubuntuone-client/fix-path-retrieval
Merge into: lp:ubuntuone-client
Diff against target: 662 lines (+126/-96)
7 files modified
contrib/testing/testcase.py (+1/-1)
tests/platform/linux/eventlog/test_zg_listener.py (+1/-1)
tests/syncdaemon/test_action_queue.py (+52/-40)
tests/syncdaemon/test_localrescan.py (+5/-5)
ubuntuone/syncdaemon/action_queue.py (+40/-25)
ubuntuone/syncdaemon/local_rescan.py (+2/-2)
ubuntuone/syncdaemon/sync.py (+25/-22)
To merge this branch: bzr merge lp:~facundo/ubuntuone-client/fix-path-retrieval
Reviewer: Roberto Alsina (community), Status: Approve
Review via email: mp+127583@code.launchpad.net

Commit message

The path is no longer sent directly to AQ ops; it is retrieved later (LP: #988534).

Description of the change

The path is no longer sent directly to AQ operations; it is retrieved later.

To achieve this, the mdid is sent to the operation instead of the path. This solves the situation where the node is renamed between the moment the path is passed to the operation and the moment the operation actually runs.

Tests adjusted.
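
As a quick illustration, this is the pattern the branch introduces (a simplified sketch based on the _get_current_path helper visible in the diff below; error handling omitted): each command stores the node's mdid, and the path is resolved against FSM only when the pathlock is about to be acquired, so any rename that happened while the command waited in the queue is taken into account.

    def _get_current_path(self, mdid):
        """Get the node's current path from FSM using the mdid."""
        fsm = self.action_queue.main.fs
        mdobj = fsm.get_by_mdid(mdid)
        return fsm.get_abspath(mdobj.share_id, mdobj.path)

    def _acquire_pathlock(self):
        """Lock the path as it is now, not as it was when queued."""
        curr_path = self._get_current_path(self.mdid)
        pathlock = self.action_queue.pathlock
        return pathlock.acquire(*curr_path.split(os.path.sep),
                                logger=self.log)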

Revision history for this message
Roberto Alsina (ralsina) wrote :

Looks good to me!

review: Approve
Revision history for this message
Ubuntu One Auto Pilot (otto-pilot) wrote :

The attempt to merge lp:~facundo/ubuntuone-client/fix-path-retrieval into lp:ubuntuone-client failed. Below is the output from the failed tests.

/usr/bin/gnome-autogen.sh
checking for autoconf >= 2.53...
  testing autoconf2.50... not found.
  testing autoconf... found 2.69
checking for automake >= 1.10...
  testing automake-1.12... not found.
  testing automake-1.11... found 1.11.6
checking for libtool >= 1.5...
  testing libtoolize... found 2.4.2
checking for intltool >= 0.30...
  testing intltoolize... found 0.50.2
checking for pkg-config >= 0.14.0...
  testing pkg-config... found 0.26
checking for gtk-doc >= 1.0...
  testing gtkdocize... found 1.18
Checking for required M4 macros...
Checking for forbidden M4 macros...
Processing ./configure.ac
Running libtoolize...
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
Running intltoolize...
Running gtkdocize...
Running aclocal-1.11...
Running autoconf...
Running autoheader...
Running automake-1.11...
Running ./configure --enable-gtk-doc --enable-debug ...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
checking how to create a ustar tar archive... gnutar
checking whether make supports nested variables... yes
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for library containing strerror... none required
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking dependency style of gcc... (cached) gcc3
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking how to print strings... printf
checking for a sed that does not truncate output... /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking whether the she...

1261. By Facundo Batista

Make lint happy

Preview Diff

=== modified file 'contrib/testing/testcase.py'
--- contrib/testing/testcase.py 2012-09-14 21:03:24 +0000
+++ contrib/testing/testcase.py 2012-10-03 19:37:22 +0000
@@ -146,7 +146,7 @@
 self.eq = self.event_queue = eq
 self.uuid_map = action_queue.DeferredMap()
 self.queue = action_queue.RequestQueue(self)
- self.pathlock = action_queue.PathLockingTree()
+ self.pathlock = FakedObject()

 # throttling attributes
 self.readLimit = None

=== modified file 'tests/platform/linux/eventlog/test_zg_listener.py'
--- tests/platform/linux/eventlog/test_zg_listener.py 2012-04-09 20:07:05 +0000
+++ tests/platform/linux/eventlog/test_zg_listener.py 2012-10-03 19:37:22 +0000
@@ -574,7 +574,7 @@
 self.share_id = ""
 self.command = MyUpload(request_queue, share_id=self.share_id,
 node_id='a_node_id', previous_hash='prev_hash',
- hash='yadda', crc32=0, size=0, path='path')
+ hash='yadda', crc32=0, size=0, mdid='mdid')
 self.command.make_logger()
 self.command.tempfile = FakeTempFile(self.mktemp('tmpdir'))
 self.fsm = self.action_queue.main.fs

=== modified file 'tests/syncdaemon/test_action_queue.py'
--- tests/syncdaemon/test_action_queue.py 2012-09-07 19:32:21 +0000
+++ tests/syncdaemon/test_action_queue.py 2012-10-03 19:37:22 +0000
@@ -2778,7 +2778,7 @@
 self.rq = request_queue = RequestQueue(action_queue=self.action_queue)
 self.command = Download(request_queue, share_id='a_share_id',
 node_id='a_node_id', server_hash='server_hash',
- path='path')
+ mdid='mdid')
 self.command.make_logger()

 def test_progress_information_setup(self):
@@ -2809,14 +2809,15 @@

 self.rq = RequestQueue(action_queue=self.action_queue)
 self.rq.transfers_semaphore = FakeSemaphore()
+ self.test_path = os.path.join(self.root, 'file')
+ self.mdid = self.main.fs.create(self.test_path, '')

 class MyDownload(Download):
 """Just to allow monkeypatching."""
 sync = lambda s: None
 self.command = MyDownload(self.rq, share_id='a_share_id',
 node_id='a_node_id',
- server_hash='server_hash',
- path= os.path.join(os.path.sep, 'foo', 'bar'))
+ server_hash='server_hash', mdid=self.mdid)
 self.command.make_logger()
 self.rq.waiting.append(self.command)

@@ -2916,23 +2917,25 @@
 self.patch(PathLockingTree, 'acquire',
 lambda s, *a, **k: t.extend((a, k)))
 self.command._acquire_pathlock()
- self.assertEqual(t, [("", "foo", "bar"), {'logger': self.command.log}])
+ should = [tuple(self.test_path.split(os.path.sep)),
+ {'logger': self.command.log}]
+ self.assertEqual(t, should)

 def test_upload_download_uniqueness(self):
 """There should be only one upload/download for a specific node."""
 # totally fake, we don't care: the messages are only validated on run
- self.action_queue.download('foo', 'bar', 0, 'path')
+ self.action_queue.download('foo', 'bar', 0, self.mdid)
 first_cmd = self.action_queue.queue.waiting[0]
- self.action_queue.upload('foo', 'bar', 0, 0, 0, 0, 'path')
+ self.action_queue.upload('foo', 'bar', 0, 0, 0, 0, self.mdid)
 self.assertTrue(first_cmd.cancelled)

 def test_uniqueness_upload(self):
 """There should be only one upload/download for a specific node."""
 # totally fake, we don't care: the messages are only validated on run
 self.patch(Upload, 'run', lambda self: defer.Deferred())
- self.action_queue.upload('foo', 'bar', 0, 0, 0, 0, 'path')
+ self.action_queue.upload('foo', 'bar', 0, 0, 0, 0, self.mdid)
 first_cmd = self.action_queue.queue.waiting[0]
- self.action_queue.download('foo', 'bar', 0, 'path')
+ self.action_queue.download('foo', 'bar', 0, self.mdid)
 self.assertTrue(first_cmd.cancelled)
 self.assertTrue(self.handler.check_debug("Previous command cancelled",
 "Upload", "foo", "bar"))
@@ -2940,32 +2943,32 @@
 def test_uniqueness_download(self):
 """There should be only one upload/download for a specific node."""
 # totally fake, we don't care: the messages are only validated on run
- self.action_queue.download('foo', 'bar', 0, 'path0')
+ self.action_queue.download('foo', 'bar', 0, self.mdid)
 first_cmd = self.action_queue.queue.waiting[0]
- self.action_queue.download('foo', 'bar', 1, 'path1')
+ self.action_queue.download('foo', 'bar', 1, self.mdid)
 self.assertTrue(first_cmd.cancelled)
 self.assertTrue(self.handler.check_debug("Previous command cancelled",
 "Download", "foo", "bar"))

 def test_uniqueness_even_with_markers(self):
 """Only one upload/download per node, even using markers."""
- mdid = self.main.fs.create(os.path.join(self.root, 'file'), '')
+ mdid = self.main.fs.create(os.path.join(self.root, 'file2'), '')
 m = MDMarker(mdid)
- self.action_queue.download('share', m, 0, 'path')
+ self.action_queue.download('share', m, 0, mdid)
 first_cmd = self.action_queue.queue.waiting[0]
 self.action_queue.uuid_map.set(mdid, 'bah')
- self.action_queue.download('share', 'bah', 0, 'path')
+ self.action_queue.download('share', 'bah', 0, self.mdid)
 self.assertTrue(first_cmd.cancelled)

 def test_uniqueness_tried_to_cancel_but_no(self):
 """Previous command didn't cancel even if we tried it."""
 # the first command will refuse to cancel (patch the class because
 # the instance is not patchable)
- self.action_queue.download('foo', 'bar', 0, 'path0')
+ self.action_queue.download('foo', 'bar', 0, self.mdid)
 self.action_queue.queue.waiting[0]
 self.patch(Download, 'cancel', lambda instance: False)

- self.action_queue.download('foo', 'bar', 1, 'path1')
+ self.action_queue.download('foo', 'bar', 1, self.mdid)
 self.assertEqual(len(self.action_queue.queue.waiting), 2)
 self.assertTrue(self.handler.check_debug("Tried to cancel", "couldn't",
 "Download", "foo", "bar"))
@@ -3093,7 +3096,7 @@
 self.rq = request_queue = RequestQueue(action_queue=self.action_queue)
 self.command = Upload(request_queue, share_id='a_share_id',
 node_id='a_node_id', previous_hash='prev_hash',
- hash='yadda', crc32=0, size=0, path='path')
+ hash='yadda', crc32=0, size=0, mdid='mdid')
 self.command.make_logger()
 self.command.magic_hash = FakeMagicHash()
 self.client = FakeClient()
@@ -3228,6 +3231,8 @@
 self.rq.transfers_semaphore = FakeSemaphore()
 self.rq.unqueue = lambda c: None
 self.rq.active = True
+ self.test_path = os.path.join(self.root, 'foo', 'bar')
+ self.mdid = self.main.fs.create(self.test_path, '')

 class MyUpload(Upload):
 """Just to allow monkeypatching."""
@@ -3235,8 +3240,7 @@
 self.share_id = str(uuid.uuid4())
 self.command = MyUpload(self.rq, share_id=self.share_id,
 node_id='a_node_id', previous_hash='prev_hash',
- hash='yadda', crc32=0, size=0,
- path=os.path.join(os.path.sep, 'foo', 'bar'))
+ hash='yadda', crc32=0, size=0, mdid=self.mdid)
 self.command.make_logger()

 @defer.inlineCallbacks
@@ -3251,11 +3255,12 @@
 # mock fsm
 mocker = Mocker()
 mdobj = mocker.mock()
- expect(mdobj.mdid).result('mdid')
+ expect(mdobj.share_id).result('share_id')
+ expect(mdobj.path).result('path')
 fsm = mocker.mock()
- expect(fsm.get_by_node_id(self.command.share_id, self.command.node_id)
- ).result(mdobj)
- expect(fsm.open_file('mdid')).result(StringIO())
+ expect(fsm.get_by_mdid(self.mdid)).result(mdobj)
+ expect(fsm.get_abspath('share_id', 'path')).result('/abs/path')
+ expect(fsm.open_file(self.mdid)).result(StringIO())
 self.patch(self.main, 'fs', fsm)

 # first fails with UploadInProgress, then finishes ok
@@ -3494,15 +3499,17 @@
 self.patch(PathLockingTree, 'acquire',
 lambda s, *a, **k: t.extend((a, k)))
 self.command._acquire_pathlock()
- self.assertEqual(t, [("", "foo", "bar"), {'logger': self.command.log}])
+ should = [tuple(self.test_path.split(os.path.sep)),
+ {'logger': self.command.log}]
+ self.assertEqual(t, should)

 def test_uniqueness_upload(self):
 """There should be only one upload/download for a specific node."""
 # totally fake, we don't care: the messages are only validated on run
 self.patch(Upload, 'run', lambda self: defer.Deferred())
- self.action_queue.upload('foo', 'bar', 0, 0, 0, 0, 'path0')
+ self.action_queue.upload('foo', 'bar', 0, 0, 0, 0, self.mdid)
 first_cmd = self.action_queue.queue.waiting[0]
- self.action_queue.upload('foo', 'bar', 1, 1, 1, 1, 'path1')
+ self.action_queue.upload('foo', 'bar', 1, 1, 1, 1, self.mdid)
 self.assertTrue(first_cmd.cancelled)
 self.assertTrue(self.handler.check_debug("Previous command cancelled",
 "Upload", "foo", "bar"))
@@ -3510,9 +3517,9 @@
 def test_uniqueness_download(self):
 """There should be only one upload/download for a specific node."""
 # totally fake, we don't care: the messages are only validated on run
- self.action_queue.download('foo', 'bar', 0, 'path')
+ self.action_queue.download('foo', 'bar', 0, self.mdid)
 first_cmd = self.action_queue.queue.waiting[0]
- self.action_queue.upload('foo', 'bar', 0, 0, 0, 0, 'path')
+ self.action_queue.upload('foo', 'bar', 0, 0, 0, 0, self.mdid)
 self.assertTrue(first_cmd.cancelled)
 self.assertTrue(self.handler.check_debug("Previous command cancelled",
 "Download", "foo", "bar"))
@@ -3521,21 +3528,21 @@
 """Only one upload/download per node, even using markers."""
 mdid = self.main.fs.create(os.path.join(self.root, 'file'), '')
 m = MDMarker(mdid)
- self.action_queue.download('share', m, 0, 'path')
+ self.action_queue.download('share', m, 0, self.mdid)
 first_cmd = self.action_queue.queue.waiting[0]
 self.action_queue.uuid_map.set(mdid, 'bah')
- self.action_queue.upload('share', 'bah', 0, 0, 0, 0, 'path')
+ self.action_queue.upload('share', 'bah', 0, 0, 0, 0, self.mdid)
 self.assertTrue(first_cmd.cancelled)

 def test_uniqueness_tried_to_cancel_but_no(self):
 """Previous command didn't cancel even if we tried it."""
 # the first command will refuse to cancel
 self.patch(Upload, 'run', lambda self: defer.Deferred())
- self.action_queue.upload('foo', 'bar', 0, 0, 0, 0, 'path0', StringIO)
+ self.action_queue.upload('foo', 'bar', 0, 0, 0, 0, self.mdid, StringIO)
 self.action_queue.queue.waiting[0]
 self.patch(Upload, 'cancel', lambda instance: False)

- self.action_queue.upload('foo', 'bar', 1, 1, 1, 1, 'path1', StringIO)
+ self.action_queue.upload('foo', 'bar', 1, 1, 1, 1, self.mdid, StringIO)
 self.assertEqual(len(self.action_queue.queue.waiting), 2)
 self.assertTrue(self.handler.check_debug("Tried to cancel", "couldn't",
 "Upload", "foo", "bar"))
@@ -4866,11 +4873,13 @@
 t = []
 self.patch(PathLockingTree, 'acquire',
 lambda s, *a, **k: t.extend((a, k)))
- cmd = MakeFile(self.rq, VOLUME, 'parent', 'name', 'marker',
- os.path.join('foo','bar'))
+ path = os.path.join(self.root, 'file')
+ mdid = self.main.fs.create(path, '')
+ cmd = MakeFile(self.rq, VOLUME, 'parent', 'name', 'marker', mdid)
 cmd._acquire_pathlock()
- self.assertEqual(t, [('foo', 'bar'), {'on_parent': True,
- 'logger': None}])
+ should = [tuple(path.split(os.path.sep)),
+ {'on_parent': True, 'logger': None}]
+ self.assertEqual(t, should)


 class MakeDirTestCase(ConnectedBaseTestCase):
@@ -4929,11 +4938,13 @@
 t = []
 self.patch(PathLockingTree, 'acquire',
 lambda s, *a, **k: t.extend((a, k)))
- cmd = MakeDir(self.rq, VOLUME, 'parent', 'name', 'marker',
- os.path.join('foo','bar'))
+ path = os.path.join(self.root, 'file')
+ mdid = self.main.fs.create(path, '')
+ cmd = MakeDir(self.rq, VOLUME, 'parent', 'name', 'marker', mdid)
 cmd._acquire_pathlock()
- self.assertEqual(t, [('foo', 'bar'), {'on_parent': True,
- 'logger': None}])
+ should = [tuple(path.split(os.path.sep)),
+ {'on_parent': True, 'logger': None}]
+ self.assertEqual(t, should)


 class TestDeltaList(unittest.TestCase):
@@ -5491,9 +5502,10 @@
 The semaphore lock must be released! Of course, this test is on
 download/upload commands.
 """
+ mdid = self.main.fs.create(os.path.join(self.root, 'file'), '')
 cmd = Upload(self.queue, share_id='a_share_id', node_id='a_node_id',
 previous_hash='prev_hash', hash='yadda', crc32=0, size=0,
- path='path')
+ mdid=mdid)
 cmd.make_logger()

 # patch the command to simulate a request to an already full

=== modified file 'tests/syncdaemon/test_localrescan.py'
--- tests/syncdaemon/test_localrescan.py 2012-04-09 20:08:42 +0000
+++ tests/syncdaemon/test_localrescan.py 2012-10-03 19:37:22 +0000
@@ -1531,7 +1531,7 @@
 mdobj = self.fsm.get_by_mdid(mdid)
 self.assertEqual(self.aq.uploaded[0][:7],
 (mdobj.share_id, mdobj.node_id, mdobj.server_hash,
- mdobj.local_hash, mdobj.crc32, mdobj.size, path))
+ mdobj.local_hash, mdobj.crc32, mdobj.size, mdid))
 self.assertEqual(self.aq.uploaded[1], {'upload_id':None})
 self.assertTrue(self.handler.check_debug("resuming upload",
 "interrupted"))
@@ -1580,7 +1580,7 @@
 mdobj = self.fsm.get_by_mdid(mdid)
 self.assertEqual(self.aq.uploaded[0][:7],
 (mdobj.share_id, mdobj.node_id, mdobj.server_hash,
- mdobj.local_hash, mdobj.crc32, mdobj.size, path))
+ mdobj.local_hash, mdobj.crc32, mdobj.size, mdid))
 self.assertEqual(self.aq.uploaded[1], {'upload_id':'hola'})
 self.assertTrue(self.handler.check_debug("resuming upload",
 "interrupted"))
@@ -1608,7 +1608,7 @@
 mdobj = self.fsm.get_by_mdid(mdid)
 self.assertEqual(self.aq.downloaded[:4],
 (mdobj.share_id, mdobj.node_id,
- mdobj.server_hash, path))
+ mdobj.server_hash, mdid))
 self.assertTrue(self.handler.check_debug("comp yield", "SERVER"))

 @defer.inlineCallbacks
@@ -1632,7 +1632,7 @@
 mdobj = self.fsm.get_by_mdid(mdid)
 self.assertEqual(self.aq.downloaded[:4],
 (mdobj.share_id, mdobj.node_id,
- mdobj.server_hash, path))
+ mdobj.server_hash, mdid))
 self.assertTrue(self.handler.check_debug("comp yield", "SERVER"))

 @defer.inlineCallbacks
@@ -1958,7 +1958,7 @@
 mdobj = self.fsm.get_by_mdid(mdid)
 self.assertEqual(self.aq.downloaded[:4],
 (mdobj.share_id, mdobj.node_id,
- mdobj.server_hash, path))
+ mdobj.server_hash, mdid))
 self.assertTrue(self.handler.check_debug("comp yield", "SERVER"))

 @defer.inlineCallbacks

=== modified file 'ubuntuone/syncdaemon/action_queue.py'
--- ubuntuone/syncdaemon/action_queue.py 2012-09-25 13:36:26 +0000
+++ ubuntuone/syncdaemon/action_queue.py 2012-10-03 19:37:22 +0000
@@ -1206,13 +1206,13 @@
 command_class = self.commands[command_class_name]
 yield self._really_execute(command_class, *args, **kwargs)

- def make_file(self, share_id, parent_id, name, marker, path):
+ def make_file(self, share_id, parent_id, name, marker, mdid):
 """See .interfaces.IMetaQueue."""
- self.execute(MakeFile, share_id, parent_id, name, marker, path)
+ self.execute(MakeFile, share_id, parent_id, name, marker, mdid)

- def make_dir(self, share_id, parent_id, name, marker, path):
+ def make_dir(self, share_id, parent_id, name, marker, mdid):
 """See .interfaces.IMetaQueue."""
- self.execute(MakeDir, share_id, parent_id, name, marker, path)
+ self.execute(MakeDir, share_id, parent_id, name, marker, mdid)

 def move(self, share_id, node_id, old_parent_id, new_parent_id,
 new_name, path_from, path_to):
@@ -1270,15 +1270,15 @@
 """See .interfaces.IMetaQueue."""
 self.execute(GetPublicFiles)

- def download(self, share_id, node_id, server_hash, path):
+ def download(self, share_id, node_id, server_hash, mdid):
 """See .interfaces.IContentQueue.download."""
- self.execute(Download, share_id, node_id, server_hash, path)
+ self.execute(Download, share_id, node_id, server_hash, mdid)

 def upload(self, share_id, node_id, previous_hash, hash, crc32,
- size, path, upload_id=None):
+ size, mdid, upload_id=None):
 """See .interfaces.IContentQueue."""
 self.execute(Upload, share_id, node_id, previous_hash, hash, crc32,
- size, path, upload_id=upload_id)
+ size, mdid, upload_id=upload_id)

 def _cancel_op(self, share_id, node_id, cmdclass):
 """Generalized form of cancel_upload and cancel_download."""
@@ -1575,6 +1575,13 @@
 self.finish()
 return True

+ def _get_current_path(self, mdid):
+ """Get current path from FSM using the mdid."""
+ fsm = self.action_queue.main.fs
+ mdobj = fsm.get_by_mdid(self.mdid)
+ path = fsm.get_abspath(mdobj.share_id, mdobj.path)
+ return path
+
 def _acquire_pathlock(self):
 """Acquire pathlock; overwrite if needed."""
 return defer.succeed(None)
@@ -1603,11 +1610,11 @@
 class MakeThing(ActionQueueCommand):
 """Base of MakeFile and MakeDir."""

- __slots__ = ('share_id', 'parent_id', 'name', 'marker', 'path')
+ __slots__ = ('share_id', 'parent_id', 'name', 'marker', 'mdid')
 logged_attrs = ActionQueueCommand.logged_attrs + __slots__
 possible_markers = 'parent_id',

- def __init__(self, request_queue, share_id, parent_id, name, marker, path):
+ def __init__(self, request_queue, share_id, parent_id, name, marker, mdid):
 super(MakeThing, self).__init__(request_queue)
 self.share_id = share_id
 self.parent_id = parent_id
@@ -1615,7 +1622,7 @@
 # here we use bytes for paths
 self.name = name.decode("utf-8")
 self.marker = marker
- self.path = path
+ self.mdid = mdid

 def _run(self):
 """Do the actual running."""
@@ -1640,8 +1647,9 @@

 def _acquire_pathlock(self):
 """Acquire pathlock."""
+ curr_path = self._get_current_path(self.mdid)
 pathlock = self.action_queue.pathlock
- return pathlock.acquire(*self.path.split(os.path.sep), on_parent=True,
+ return pathlock.acquire(*curr_path.split(os.path.sep), on_parent=True,
 logger=self.log)


@@ -1678,6 +1686,10 @@
 # Unicode boundary! the name is Unicode in protocol and server, but
 # here we use bytes for paths
 self.new_name = new_name.decode("utf-8")
+
+ # Move stores the paths and uses them to acquire the pathlock
+ # later, as it is responsible of the moves and nobody else
+ # will rename the files but it
 self.path_from = path_from
 self.path_to = path_to

@@ -1751,6 +1763,8 @@
 self.share_id = share_id
 self.node_id = node_id
 self.parent_id = parent_id
+ # Unlink stores the path here for the pathlock as it will not change
+ # in the future (nobody will rename a deleted file)
 self.path = path
 self.is_dir = is_dir

@@ -2348,20 +2362,20 @@
 """Get the contents of a file."""

 __slots__ = ('share_id', 'node_id', 'server_hash',
- 'fileobj', 'gunzip', 'path', 'download_req', 'tx_semaphore',
+ 'fileobj', 'gunzip', 'mdid', 'download_req', 'tx_semaphore',
 'deflated_size', 'n_bytes_read_last', 'n_bytes_read')
 logged_attrs = ActionQueueCommand.logged_attrs + (
- 'share_id', 'node_id', 'server_hash', 'path')
+ 'share_id', 'node_id', 'server_hash', 'mdid')
 possible_markers = 'node_id',

- def __init__(self, request_queue, share_id, node_id, server_hash, path):
+ def __init__(self, request_queue, share_id, node_id, server_hash, mdid):
 super(Download, self).__init__(request_queue)
 self.share_id = share_id
 self.node_id = node_id
 self.server_hash = server_hash
 self.fileobj = None
 self.gunzip = None
- self.path = path
+ self.mdid = mdid
 self.download_req = None
 self.n_bytes_read = 0
 self.n_bytes_read_last = 0
@@ -2390,8 +2404,9 @@

 def _acquire_pathlock(self):
 """Acquire pathlock."""
+ curr_path = self._get_current_path(self.mdid)
 pathlock = self.action_queue.pathlock
- return pathlock.acquire(*self.path.split(os.path.sep), logger=self.log)
+ return pathlock.acquire(*curr_path.split(os.path.sep), logger=self.log)

 def cancel(self):
 """Cancel the download."""
@@ -2514,18 +2529,18 @@

 __slots__ = ('share_id', 'node_id', 'previous_hash', 'hash', 'crc32',
 'size', 'magic_hash', 'deflated_size', 'tempfile',
- 'tx_semaphore', 'n_bytes_written_last', 'path', 'upload_req',
- 'n_bytes_written', 'upload_id')
+ 'tx_semaphore', 'n_bytes_written_last', 'upload_req',
+ 'n_bytes_written', 'upload_id', 'mdid')

 logged_attrs = ActionQueueCommand.logged_attrs + (
 'share_id', 'node_id', 'previous_hash', 'hash', 'crc32',
- 'size', 'upload_id', 'path')
+ 'size', 'upload_id', 'mdid')
 retryable_errors = ActionQueueCommand.retryable_errors + (
 protocol_errors.UploadInProgressError,)
 possible_markers = 'node_id',

 def __init__(self, request_queue, share_id, node_id, previous_hash, hash,
- crc32, size, path, upload_id=None):
+ crc32, size, mdid, upload_id=None):
 super(Upload, self).__init__(request_queue)
 self.share_id = share_id
 self.node_id = node_id
@@ -2535,7 +2550,7 @@
 self.size = size
 self.upload_id = upload_id
 self.tempfile = None
- self.path = path
+ self.mdid = mdid
 self.upload_req = None
 self.n_bytes_written_last = 0
 self.n_bytes_written = 0
@@ -2579,8 +2594,9 @@

 def _acquire_pathlock(self):
 """Acquire pathlock."""
+ curr_path = self._get_current_path(self.mdid)
 pathlock = self.action_queue.pathlock
- return pathlock.acquire(*self.path.split(os.path.sep), logger=self.log)
+ return pathlock.acquire(*curr_path.split(os.path.sep), logger=self.log)

 def cancel(self):
 """Cancel the upload."""
@@ -2614,8 +2630,7 @@
 self.log.debug('semaphore acquired')

 fsm = self.action_queue.main.fs
- mdobj = fsm.get_by_node_id(self.share_id, self.node_id)
- fileobj_factory = lambda: fsm.open_file(mdobj.mdid)
+ fileobj_factory = lambda: fsm.open_file(self.mdid)
 yield self.action_queue.zip_queue.zip(self, fileobj_factory)

 def finish(self):

=== modified file 'ubuntuone/syncdaemon/local_rescan.py'
--- ubuntuone/syncdaemon/local_rescan.py 2012-04-09 20:08:42 +0000
+++ ubuntuone/syncdaemon/local_rescan.py 2012-10-03 19:37:22 +0000
@@ -364,7 +364,7 @@
 """Resume an interrupted download."""
 mdobj = self.fsm.get_by_path(fullname)
 self.aq.download(mdobj.share_id, mdobj.node_id,
- mdobj.server_hash, fullname)
+ mdobj.server_hash, mdobj.mdid)

 def _resume_upload(self, fullname):
 """Resume an interrupted upload."""
@@ -372,7 +372,7 @@
 upload_id = getattr(mdobj, 'upload_id', None)
 self.aq.upload(mdobj.share_id, mdobj.node_id, mdobj.server_hash,
 mdobj.local_hash, mdobj.crc32, mdobj.size,
- fullname, upload_id=upload_id)
+ mdobj.mdid, upload_id=upload_id)

 def check_stat(self, fullname, oldstat):
 """Check stat info and return if different.

=== modified file 'ubuntuone/syncdaemon/sync.py'
--- ubuntuone/syncdaemon/sync.py 2012-09-14 21:03:24 +0000
+++ ubuntuone/syncdaemon/sync.py 2012-10-03 19:37:22 +0000
@@ -467,16 +467,15 @@
 self.key.delete_metadata()
 self.new_file(event, params, share_id, node_id, parent_id, name)

- def get_file(self, event, params, hash):
+ def get_file(self, event, params, server_hash):
 """Get the contents for the file."""
- self.key.set(server_hash=hash)
+ self.key.set(server_hash=server_hash)
 self.key.sync()
 share_id = self.key['share_id']
 node_id = self.key['node_id']
- path = self.key['path']
+ mdid = self.key['mdid']
 self.m.fs.create_partial(node_id=node_id, share_id=share_id)
- self.m.action_q.download(share_id=share_id, node_id=node_id,
- server_hash=hash, path=path)
+ self.m.action_q.download(share_id, node_id, server_hash, mdid)

 def reget_file(self, event, params, hash):
 """cancel and reget this download."""
@@ -545,8 +544,9 @@
 self.key.set(server_hash=empty_hash)
 self.key.sync()
 name = os.path.basename(path)
- marker = MDMarker(self.key.get_mdid())
- self.m.action_q.make_file(share_id, parent_id, name, marker, path)
+ mdid = self.key.get_mdid()
+ marker = MDMarker(mdid)
+ self.m.action_q.make_file(share_id, parent_id, name, marker, mdid)

 def release_marker_ok(self, event, parms, new_id, marker):
 """Release ok the received marker in AQ's DeferredMap."""
@@ -578,7 +578,7 @@
 name = os.path.basename(path)
 mdid = self.key.get_mdid()
 marker = MDMarker(mdid)
- self.m.action_q.make_dir(share_id, parent_id, name, marker, path)
+ self.m.action_q.make_dir(share_id, parent_id, name, marker, mdid)
 self.m.lr.scan_dir(mdid, path)

 def new_local_dir_created(self, event, parms, new_id, marker):
@@ -599,29 +599,30 @@
 self.m.action_q.cancel_upload(share_id=self.key['share_id'],
 node_id=self.key['node_id'])

+ mdid = self.key.get_mdid()
 local_hash = self.key['local_hash']
 previous_hash = self.key['server_hash']
 crc32 = self.key['crc32']
 size = self.key['size']
 share_id = self.key['share_id']
 node_id = self.key['node_id']
- path = self.key['path']
 upload_id = self.key.get('upload_id')

 self.m.action_q.upload(share_id, node_id, previous_hash, local_hash,
- crc32, size, path, upload_id=upload_id)
+ crc32, size, mdid, upload_id=upload_id)

- def put_file(self, event, params, hash, crc32, size, stat):
+ def put_file(self, event, params, current_hash, crc32, size, stat):
 """Upload the file to the server."""
+ mdid = self.key.get_mdid()
+ share_id = self.key['share_id']
+ node_id = self.key['node_id']
 previous_hash = self.key['server_hash']
- path = self.key['path']
 upload_id = self.key.get('upload_id')
- self.key.set(local_hash=hash, stat=stat, crc32=crc32, size=size)
+ self.key.set(local_hash=current_hash, stat=stat, crc32=crc32, size=size)
 self.key.sync()

- self.m.action_q.upload(share_id=self.key['share_id'],
- node_id=self.key['node_id'], previous_hash=previous_hash,
- hash=hash, crc32=crc32, size=size, path=path, upload_id=upload_id)
+ self.m.action_q.upload(share_id, node_id, previous_hash, current_hash,
+ crc32, size, mdid, upload_id=upload_id)

 def converges_to_server(self, event, params, hash, crc32, size, stat):
 """the local changes now match the server"""
@@ -640,19 +641,21 @@
 self.key.sync()
 self.m.hash_q.insert(self.key['path'], self.key['mdid'])

- def reput_file(self, event, param, hash, crc32, size, stat):
+ def reput_file(self, event, param, current_hash, crc32, size, stat):
 """Put the file again."""
 self.m.action_q.cancel_upload(share_id=self.key['share_id'],
 node_id=self.key['node_id'])
 previous_hash = self.key['server_hash']

- path = self.key['path']
+ share_id = self.key['share_id']
+ node_id = self.key['node_id']
 upload_id = self.key.get('upload_id')
- self.key.set(local_hash=hash, stat=stat, crc32=crc32, size=size)
+ self.key.set(local_hash=current_hash, stat=stat,
+ crc32=crc32, size=size)
 self.key.sync()
- self.m.action_q.upload(share_id=self.key['share_id'],
- node_id=self.key['node_id'], previous_hash=previous_hash,
- hash=hash, crc32=crc32, size=size, path=path, upload_id=upload_id)
+ mdid = self.key.get_mdid()
+ self.m.action_q.upload(share_id, node_id, previous_hash, current_hash,
+ crc32, size, mdid, upload_id=upload_id)

 def server_file_now_matches(self, event, params, hash):
 """We got a server hash that matches local hash"""
