Merge lp:~dobey/ubuntuone-client/update-4-0 into lp:ubuntuone-client/stable-4-0
Status: | Merged |
---|---|
Merged at revision: | 1260 |
Proposed branch: | lp:~dobey/ubuntuone-client/update-4-0 |
Merge into: | lp:ubuntuone-client/stable-4-0 |
Diff against target: |
392 lines (+177/-17), 11 files modified:
  data/syncdaemon.conf (+1/-0)
  libsyncdaemon/syncdaemon-shares-interface.c (+1/-1)
  tests/platform/credentials/test_linux.py (+8/-2)
  tests/platform/ipc/test_perspective_broker.py (+135/-0)
  tests/syncdaemon/test_action_queue.py (+2/-0)
  ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_client.py (+1/-0)
  ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_daemon.py (+1/-1)
  ubuntuone/platform/ipc/ipc_client.py (+24/-10)
  ubuntuone/platform/ipc/perspective_broker.py (+2/-2)
  ubuntuone/syncdaemon/logger.py (+1/-0)
  ubuntuone/syncdaemon/sync.py (+1/-1)
To merge this branch: | bzr merge lp:~dobey/ubuntuone-client/update-4-0 |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Roberto Alsina (community) | Approve | ||
Review via email: mp+123605@code.launchpad.net |
Commit message
[Facundo Batista]
- Decode the gettext string before compare (LP: #1011822).
[Manuel de la Peña]
- Map the attrib event to an in_modify one (LP: #1043914).
- Add the pattern to ignore the .DS_Store files coming from darwin (LP: #1034138).
- Fix logging by escaping % (LP: #1043183).
[Mike McCracken]
- Fix twisted IPC exceptions for syncdaemon status signals. (LP: #902189)
- Change socket name for daemon to match reverse DNS naming convention. (LP: #1042785)
- Add a guard so that async calls are performed one after the other and syncdaemon is not started more than once (LP: #1043287).
[Rodney Dawes]
- Skip test_zip_acquire_lock as it fails intermittently with twisted 12.x.
- Allocate one extra slot for the strv to fit all the GSList entries.
Description of the change
Ubuntu One Auto Pilot (otto-pilot) wrote:
The attempt to merge lp:~dobey/ubuntuone-client/update-4-0 into lp:ubuntuone-client/stable-4-0 failed. Below is the output from the failed tests.
/usr/bin/
checking for autoconf >= 2.53...
testing autoconf2.50... not found.
testing autoconf... found 2.69
checking for automake >= 1.10...
testing automake-1.12... not found.
testing automake-1.11... found 1.11.5
checking for libtool >= 1.5...
testing libtoolize... found 2.4.2
checking for intltool >= 0.30...
testing intltoolize... found 0.50.2
checking for pkg-config >= 0.14.0...
testing pkg-config... found 0.26
checking for gtk-doc >= 1.0...
testing gtkdocize... found 1.18
Checking for required M4 macros...
Checking for forbidden M4 macros...
Processing ./configure.ac
Running libtoolize...
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
Running intltoolize...
Running gtkdocize...
Running aclocal-1.11...
Running autoconf...
Running autoheader...
Running automake-1.11...
Running ./configure --enable-gtk-doc --enable-debug ...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
checking how to create a ustar tar archive... gnutar
checking whether make supports nested variables... yes
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for library containing strerror... none required
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking dependency style of gcc... (cached) gcc3
checking build system type... x86_64-
checking host system type... x86_64-
checking how to print strings... printf
checking for a sed that does not truncate output... /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking whether the sh...
- 1260. By dobey
[Facundo Batista]
- Decode the gettext string before compare (LP: #1011822).
[Manuel de la Peña]
- Map the attrib event to an in_modify one (LP: #1043914).
- Add the pattern to ignore the .DS_Store files coming from darwin (LP: #1034138).
- Fix logging by escaping % (LP: #1043183).
[Mike McCracken]
- Fix twisted IPC exceptions for syncdaemon status signals. (LP: #902189)
- Change socket name for daemon to match reverse DNS naming convention. (LP: #1042785)
- Add a guard so that async calls are performed one after the other and syncdaemon is not started more than once (LP: #1043287).
[Rodney Dawes]
- Skip test_zip_acquire_lock as it fails intermittently with twisted 12.x.
- Allocate one extra slot for the strv to fit all the GSList entries.
Preview Diff
1 | === modified file 'data/syncdaemon.conf' |
2 | --- data/syncdaemon.conf 2012-07-17 10:51:43 +0000 |
3 | +++ data/syncdaemon.conf 2012-09-10 17:35:22 +0000 |
4 | @@ -86,6 +86,7 @@ |
5 | \A\.~lock\..*#\Z |
6 | \A\.goutputstream-.*\Z |
7 | \A.*-Spotlight\Z |
8 | + \A\.DS_Store\Z |
9 | |
10 | use_trash.default = True |
11 | use_trash.parser = bool |
12 | |
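The ignore entries in syncdaemon.conf are Python regular expressions anchored with `\A`/`\Z`; the new line drops the `.DS_Store` metadata files macOS creates in every folder. A minimal sketch of how such patterns would be applied (the `is_ignored` helper and the pattern subset are illustrative, not syncdaemon's actual code):

```python
import re

# Subset of the ignore patterns from syncdaemon.conf, including the
# new .DS_Store entry added by this branch (LP: #1034138).
IGNORE_PATTERNS = [
    r'\A\.goutputstream-.*\Z',
    r'\A.*-Spotlight\Z',
    r'\A\.DS_Store\Z',
]

def is_ignored(name):
    """Return True if the file name matches any ignore pattern."""
    return any(re.match(pattern, name) for pattern in IGNORE_PATTERNS)

assert is_ignored('.DS_Store')
assert not is_ignored('DS_Store.txt')  # no leading dot: not matched
```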
13 | === modified file 'libsyncdaemon/syncdaemon-shares-interface.c' |
14 | --- libsyncdaemon/syncdaemon-shares-interface.c 2012-04-09 20:07:05 +0000 |
15 | +++ libsyncdaemon/syncdaemon-shares-interface.c 2012-09-10 17:35:22 +0000 |
16 | @@ -303,7 +303,7 @@ |
17 | } else { |
18 | GSList *l; |
19 | gint i; |
20 | - gchar **users_array = g_new0 (gchar *, g_slist_length (usernames)); |
21 | + gchar **users_array = g_new0 (gchar *, g_slist_length (usernames) + 1); |
22 | |
23 | for (l = usernames, i = 0; l != NULL; l = l->next, i++) |
24 | users_array[i] = g_strdup (l->data); |
25 | |
26 | === modified file 'tests/platform/credentials/test_linux.py' |
27 | --- tests/platform/credentials/test_linux.py 2012-06-14 16:54:45 +0000 |
28 | +++ tests/platform/credentials/test_linux.py 2012-09-10 17:35:22 +0000 |
29 | @@ -384,7 +384,10 @@ |
30 | yield d |
31 | |
32 | self.assertEqual(self.sso_server._app_name, APP_NAME) |
33 | - params = dict(UI_PARAMS) |
34 | + # convert to unicode for the comparison, as the message is arriving |
35 | + # here as bytes (bad gettext usage!) |
36 | + params = dict((x, y.decode("utf8") if isinstance(y, str) else y) |
37 | + for x, y in UI_PARAMS.items()) |
38 | params.update(self.args) |
39 | self.assertEqual(self.sso_server._args, params) |
40 | |
41 | @@ -397,7 +400,10 @@ |
42 | yield d |
43 | |
44 | self.assertEqual(self.sso_server._app_name, APP_NAME) |
45 | - params = dict(UI_PARAMS) |
46 | + # convert to unicode for the comparison, as the message is arriving |
47 | + # here as bytes (bad gettext usage!) |
48 | + params = dict((x, y.decode("utf8") if isinstance(y, str) else y) |
49 | + for x, y in UI_PARAMS.items()) |
50 | params.update(self.args) |
51 | self.assertEqual(self.sso_server._args, params) |
52 | |
53 | |
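The test fix above normalizes the expected params to unicode because gettext on Python 2 could hand back encoded byte strings. A hedged Python 3 analogue of the same normalization, with a made-up `UI_PARAMS` standing in for the real one:

```python
# Hypothetical stand-in for the real UI_PARAMS dict; one value arrives
# as UTF-8 bytes, mimicking the bad gettext usage the comment mentions.
UI_PARAMS = {'help_text': b'Sincronice sus archivos', 'window_id': 0}

# Decode byte values to text before comparing, leave everything else alone
# (on Python 3 the bytes/str split makes the check `isinstance(v, bytes)`).
params = dict((key, value.decode('utf8') if isinstance(value, bytes) else value)
              for key, value in UI_PARAMS.items())

assert params == {'help_text': 'Sincronice sus archivos', 'window_id': 0}
```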
54 | === modified file 'tests/platform/ipc/test_perspective_broker.py' |
55 | --- tests/platform/ipc/test_perspective_broker.py 2012-05-22 15:27:36 +0000 |
56 | +++ tests/platform/ipc/test_perspective_broker.py 2012-09-10 17:35:22 +0000 |
57 | @@ -27,6 +27,7 @@ |
58 | # version. If you delete this exception statement from all source |
59 | # files in the program, then also delete it here. |
60 | """IPC tests on perspective broker.""" |
61 | +import itertools |
62 | import os |
63 | |
64 | from mocker import MockerTestCase, ANY |
65 | @@ -62,6 +63,7 @@ |
66 | Status, |
67 | SyncDaemon, |
68 | ) |
69 | +from ubuntuone.platform.ipc import ipc_client |
70 | from ubuntuone.platform.ipc.ipc_client import ( |
71 | signal, |
72 | ConfigClient, |
73 | @@ -498,6 +500,32 @@ |
74 | for signal_name, args in self.signal_mapping: |
75 | self.assert_remote_signal(signal_name, *args) |
76 | |
77 | + def test_remote_signal_calling(self): |
78 | + """Call client functions directly, test for TypeErrors.""" |
79 | + if self.client_name is None: |
80 | + return |
81 | + |
82 | + def fake_remote_cb(*args, **kwargs): |
83 | + return (args, kwargs) |
84 | + |
85 | + client = getattr(self, self.client_name) |
86 | + remote_client = self.client_class(FakeRemoteObject()) |
87 | + for signal_name, args in self.signal_mapping: |
88 | + |
89 | + remote_client_func_name = client.signal_mapping[signal_name] |
90 | + remote_cb_name = "%s_cb" % remote_client_func_name |
91 | + setattr(remote_client, remote_cb_name, fake_remote_cb) |
92 | + self.addCleanup(delattr, remote_client, remote_cb_name) |
93 | + |
94 | + remote_client_func = getattr(remote_client, |
95 | + remote_client_func_name) |
96 | + |
97 | + result = remote_client_func(*args) |
98 | + |
99 | + # add empty kwargs because signal wrapper will add them: |
100 | + expected = (args, {}) |
101 | + self.assertEqual(expected, result) |
102 | + |
103 | |
104 | class StatusTestCase(IPCTestCase): |
105 | """Test the status client class.""" |
106 | @@ -764,3 +792,110 @@ |
107 | lambda _: defer.fail(ipc.AlreadyStartedError())) |
108 | is_running = yield ipc.is_already_running() |
109 | self.assertTrue(is_running, "Should be running by now.") |
110 | + |
111 | + |
112 | +class MultipleConnectionsTestCase(TestCase): |
113 | + """Test the execution of the client with multiple connections.""" |
114 | + |
115 | + @defer.inlineCallbacks |
116 | + def setUp(self): |
117 | + """Set the different tests.""" |
118 | + yield super(MultipleConnectionsTestCase, self).setUp() |
119 | + self.client_connect_d = defer.Deferred() # called when we connected |
120 | + self.client_root_obj_d = defer.Deferred() # called when we got root |
121 | + self.remote_obj_d = defer.Deferred() # called when we got the remotes |
122 | + self.register_to_signals_d = defer.Deferred() # called with signals |
123 | + self.called = [] |
124 | + self.num_clients = 4 |
125 | + |
126 | + @defer.inlineCallbacks |
127 | + def fake_get_root_object(): |
128 | + """Fake getting a root objects.""" |
129 | + yield self.client_root_obj_d |
130 | + self.called.append('getRootObject') |
131 | + defer.returnValue(True) |
132 | + |
133 | + @defer.inlineCallbacks |
134 | + def fake_ipc_client_connect(factory): |
135 | + """Fake ipc_client_connect.""" |
136 | + yield self.client_connect_d |
137 | + self.called.append('ipc_client_connect') |
138 | + |
139 | + # lets patch the factory getRootObjects function |
140 | + self.patch(factory, 'getRootObject', fake_get_root_object) |
141 | + defer.returnValue(True) |
142 | + |
143 | + self.patch(ipc_client, 'ipc_client_connect', fake_ipc_client_connect) |
144 | + |
145 | + @defer.inlineCallbacks |
146 | + def fake_request_remote_objects(my_self, root): |
147 | + """Fake request_remote_objects.""" |
148 | + yield self.remote_obj_d |
149 | + self.called.append('_request_remote_objects') |
150 | + defer.returnValue(True) |
151 | + |
152 | + self.patch(ipc_client.UbuntuOneClient, '_request_remote_objects', |
153 | + fake_request_remote_objects) |
154 | + |
155 | + @defer.inlineCallbacks |
156 | + def fake_register_to_signals(my_self): |
157 | + """Fake registering to signals.""" |
158 | + yield self.register_to_signals_d |
159 | + self.called.append('register_to_signals') |
160 | + defer.returnValue(True) |
161 | + |
162 | + self.patch(ipc_client.UbuntuOneClient, 'register_to_signals', |
163 | + fake_register_to_signals) |
164 | + |
165 | + def grouper(self, n, iterable, fillvalue=None): |
166 | + "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx" |
167 | + args = [iter(iterable)] * n |
168 | + return itertools.izip_longest(*args, fillvalue=fillvalue) |
169 | + |
170 | + @defer.inlineCallbacks |
171 | + def test_multiple_connections(self): |
172 | + """Test that we only perform connect once but all other are correct.""" |
173 | + deferreds = [self.client_connect_d, self.client_root_obj_d, |
174 | + self.remote_obj_d, self.register_to_signals_d] |
175 | + |
176 | + # the order in which the calls are expected |
177 | + expected_calls = ('ipc_client_connect','getRootObject', |
178 | + '_request_remote_objects', 'register_to_signals') |
179 | + |
180 | + clients = [] |
181 | + while len(clients) < self.num_clients: |
182 | + clients.append(ipc_client.UbuntuOneClient()) |
183 | + |
184 | + connected_d = [] |
185 | + |
186 | + for num_steps in range(len(deferreds)): |
187 | + # tell the first client to connect |
188 | + connected_d.append(clients[0].connect()) |
189 | + |
190 | + # perform the number of connection steps so far |
191 | + for index, step_d in enumerate(deferreds): |
192 | + if index > num_steps: |
193 | + break |
194 | + step_d.callback(True) |
195 | + |
196 | + # call connect to all the other clients |
197 | + for client in clients[1:]: |
198 | + connected_d.append(client.connect()) |
199 | + |
200 | + # perform the rest of steps |
201 | + for step_d in deferreds[num_steps + 1:]: |
202 | + step_d.callback(True) |
203 | + |
204 | + yield defer.gatherResults(connected_d) |
205 | + # reset all the deferreds for the next round of testing |
206 | + for index, d_name in enumerate(('client_connect_d', |
207 | + 'client_root_obj_d', 'remote_obj_d', |
208 | + 'register_to_signals_d')): |
209 | + new_d = defer.Deferred() |
210 | + setattr(self, d_name, new_d) |
211 | + deferreds[index] = new_d |
212 | + |
213 | + # assert that all connect calls have been done in the correct |
214 | + # order |
215 | + for calls in self.grouper(4, self.called, None): |
216 | + self.assertEqual(expected_calls, calls) |
217 | |
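The `grouper` helper in the new test is the classic itertools recipe: repeat the same iterator `n` times and zip, so consecutive items fall into fixed-size tuples. On Python 3 it reads the same except `izip_longest` becomes `zip_longest`:

```python
from itertools import zip_longest

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    # The same iterator object repeated n times: zip_longest pulls from it
    # round-robin, which chunks the sequence into n-sized groups.
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

# The test walks self.called in groups of 4 and compares each group
# against the expected call order; same mechanics here:
groups = list(grouper(3, 'ABCDEFG', 'x'))
assert groups == [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
```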
218 | === modified file 'tests/syncdaemon/test_action_queue.py' |
219 | --- tests/syncdaemon/test_action_queue.py 2012-06-14 13:47:02 +0000 |
220 | +++ tests/syncdaemon/test_action_queue.py 2012-09-10 17:35:22 +0000 |
221 | @@ -62,6 +62,7 @@ |
222 | FakeUpload, |
223 | ) |
224 | from ubuntuone.devtools import handlers |
225 | +from ubuntuone.devtools.testcases import skipTest |
226 | from ubuntuone import logger, clientdefs |
227 | from ubuntuone.platform import open_file, platform, path_exists |
228 | from ubuntuone.storageprotocol import ( |
229 | @@ -1022,6 +1023,7 @@ |
230 | yield self.zq.zip(upload, 'willbreak') |
231 | self.assertEqual(len(called), 0) |
232 | |
233 | + @skipTest('Intermittently failing on twisted 12: LP: #1031815') |
234 | @defer.inlineCallbacks |
235 | def test_zip_acquire_lock(self): |
236 | """Test that it acquires the lock.""" |
237 | |
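The `@skipTest` decorator above comes from ubuntuone.devtools; a stdlib sketch of the same idea (class and test names are illustrative) shows how a skip keeps a flaky test visible in reports without letting it fail the run:

```python
import unittest

class ZipQueueTests(unittest.TestCase):
    """Sketch: skip a flaky test instead of deleting it."""

    @unittest.skip('Intermittently failing on twisted 12: LP: #1031815')
    def test_zip_acquire_lock(self):
        self.fail('never executed while the skip is in place')

    def test_zip_queue_basics(self):
        self.assertTrue(True)

# Run the case programmatically: the skipped test is counted but not failed.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(ZipQueueTests).run(result)
assert result.testsRun == 2
assert len(result.skipped) == 1 and not result.failures
```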
238 | === modified file 'ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_client.py' |
239 | --- ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_client.py 2012-08-22 15:03:24 +0000 |
240 | +++ ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_client.py 2012-09-10 17:35:22 +0000 |
241 | @@ -46,6 +46,7 @@ |
242 | ACTIONS = { |
243 | fsevents.IN_CREATE: IN_CREATE, |
244 | fsevents.IN_DELETE: IN_DELETE, |
245 | + fsevents.IN_ATTRIB: IN_MODIFY, |
246 | fsevents.IN_MODIFY: IN_MODIFY, |
247 | fsevents.IN_MOVED_FROM: IN_MOVED_FROM, |
248 | fsevents.IN_MOVED_TO: IN_MOVED_TO, |
249 | |
250 | === modified file 'ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_daemon.py' |
251 | --- ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_daemon.py 2012-07-23 12:26:42 +0000 |
252 | +++ ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_daemon.py 2012-09-10 17:35:22 +0000 |
253 | @@ -97,7 +97,7 @@ |
254 | |
255 | # TODO: This should be in fseventsd to be imported! |
256 | # Path to the socket used by the daemon |
257 | -DAEMON_SOCKET = '/var/run/ubuntuone_fsevents_daemon' |
258 | +DAEMON_SOCKET = '/var/run/com.ubuntu.one.fsevents.sock' |
259 | |
260 | |
261 | class DescriptionFactory(object): |
262 | |
263 | === modified file 'ubuntuone/platform/ipc/ipc_client.py' |
264 | --- ubuntuone/platform/ipc/ipc_client.py 2012-08-27 15:22:13 +0000 |
265 | +++ ubuntuone/platform/ipc/ipc_client.py 2012-09-10 17:35:22 +0000 |
266 | @@ -234,11 +234,11 @@ |
267 | """Emit DownloadStarted.""" |
268 | |
269 | @signal |
270 | - def on_download_file_progress(self, download, **info): |
271 | + def on_download_file_progress(self, download, info): |
272 | """Emit DownloadFileProgress.""" |
273 | |
274 | @signal |
275 | - def on_download_finished(self, download, **info): |
276 | + def on_download_finished(self, download, info): |
277 | """Emit DownloadFinished.""" |
278 | |
279 | @signal |
280 | @@ -246,11 +246,11 @@ |
281 | """Emit UploadStarted.""" |
282 | |
283 | @signal |
284 | - def on_upload_file_progress(self, upload, **info): |
285 | + def on_upload_file_progress(self, upload, info): |
286 | """Emit UploadFileProgress.""" |
287 | |
288 | @signal |
289 | - def on_upload_finished(self, upload, *info): |
290 | + def on_upload_finished(self, upload, info): |
291 | """Emit UploadFinished.""" |
292 | |
293 | @signal |
294 | @@ -261,6 +261,14 @@ |
295 | def on_metaqueue_changed(self): |
296 | """Emit MetaQueueChanged.""" |
297 | |
298 | + @signal |
299 | + def on_request_queue_added(self, op_name, op_id, data): |
300 | + """Emit RequestQueueAdded.""" |
301 | + |
302 | + @signal |
303 | + def on_request_queue_removed(self, op_name, op_id, data): |
304 | + """Emit RequestQueueRemoved.""" |
305 | + |
306 | |
307 | class EventsClient(RemoteClient, Referenceable): |
308 | """Client use to access the status api.""" |
309 | @@ -719,6 +727,8 @@ |
310 | class UbuntuOneClient(object): |
311 | """Root object that provides access to all the remote objects.""" |
312 | |
313 | + connection_lock = defer.DeferredLock() |
314 | + |
315 | def __init__(self): |
316 | """Create a new instance.""" |
317 | self.status = None |
318 | @@ -765,17 +775,21 @@ |
319 | def connect(self): |
320 | """Connect to the syncdaemon service.""" |
321 | # pylint: disable=W0702 |
322 | + yield self.connection_lock.acquire() |
323 | try: |
324 | - # connect to the remote objects |
325 | - self.factory = PBClientFactory() |
326 | - self.client = yield ipc_client_connect(self.factory) |
327 | - root = yield self.factory.getRootObject() |
328 | - yield self._request_remote_objects(root) |
329 | - yield self.register_to_signals() |
330 | + if self.client is None: |
331 | + # connect to the remote objects |
332 | + self.factory = PBClientFactory() |
333 | + self.client = yield ipc_client_connect(self.factory) |
334 | + root = yield self.factory.getRootObject() |
335 | + yield self._request_remote_objects(root) |
336 | + yield self.register_to_signals() |
337 | defer.returnValue(self) |
338 | except Exception, e: |
339 | raise SyncDaemonClientConnectionError( |
340 | 'Could not connect to the syncdaemon ipc.', e) |
341 | + finally: |
342 | + self.connection_lock.release() |
343 | # pylint: disable=W0702 |
344 | |
345 | @defer.inlineCallbacks |
346 | |
347 | === modified file 'ubuntuone/platform/ipc/perspective_broker.py' |
348 | --- ubuntuone/platform/ipc/perspective_broker.py 2012-08-16 12:00:12 +0000 |
349 | +++ ubuntuone/platform/ipc/perspective_broker.py 2012-09-10 17:35:22 +0000 |
350 | @@ -305,7 +305,7 @@ |
351 | """Fire a signal to notify about a download progress.""" |
352 | |
353 | @signal |
354 | - def DownloadFinished(self, path, *info): |
355 | + def DownloadFinished(self, path, info): |
356 | """Fire a signal to notify that a download has finished.""" |
357 | |
358 | @signal |
359 | @@ -317,7 +317,7 @@ |
360 | """Fire a signal to notify about an upload progress.""" |
361 | |
362 | @signal |
363 | - def UploadFinished(self, path, *info): |
364 | + def UploadFinished(self, path, info): |
365 | """Fire a signal to notify that an upload has finished.""" |
366 | |
367 | @signal |
368 | |
369 | === modified file 'ubuntuone/syncdaemon/logger.py' |
370 | --- ubuntuone/syncdaemon/logger.py 2012-06-21 18:58:50 +0000 |
371 | +++ ubuntuone/syncdaemon/logger.py 2012-09-10 17:35:22 +0000 |
372 | @@ -75,6 +75,7 @@ |
373 | |
374 | desc = "%-28s share:%-40r node:%-40r %s(%s) " % (_method, _share, |
375 | _uid, _method, args) |
376 | + desc = desc.replace('%', '%%') |
377 | self.zipped_desc = zlib.compress(desc, 9) |
378 | self.logger = _logger |
379 | |
380 | |
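The logger.py and sync.py hunks both escape `%` because the strings they build are later fed through %-formatting a second time, where a raw `%` in a file path is parsed as a format directive. A minimal sketch of the failure and the fix (the path is hypothetical):

```python
path = 'Docs/progress 50% report'

# First interpolation is fine...
template = 'sync of %s failed' % path
try:
    # ...but a second pass parses '% r' as a format directive and raises.
    template % ()
except (TypeError, ValueError):
    pass

# Escaping each '%' as '%%' -- what the branch does -- survives the second pass:
escaped = path.replace('%', '%%')
template = 'sync of %s failed' % escaped
assert template % () == 'sync of Docs/progress 50% report failed'
```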
381 | === modified file 'ubuntuone/syncdaemon/sync.py' |
382 | --- ubuntuone/syncdaemon/sync.py 2012-06-13 21:31:24 +0000 |
383 | +++ ubuntuone/syncdaemon/sync.py 2012-09-10 17:35:22 +0000 |
384 | @@ -271,7 +271,7 @@ |
385 | path = self.key.safe_get('path') |
386 | extra = dict(message=message, |
387 | mdid=self.key.safe_get("mdid"), |
388 | - path=path, |
389 | + path=path.replace('%', '%%'), # escape % |
390 | share_id=self.key.safe_get("share_id") or 'root', |
391 | node_id=self.key.safe_get("node_id"), |
392 | hasmd=self.key.has_metadata(), |