Merge lp:~dobey/ubuntuone-client/update-4-0 into lp:ubuntuone-client/stable-4-0
| Status: | Merged |
|---|---|
| Merged at revision: | 1254 |
| Proposed branch: | lp:~dobey/ubuntuone-client/update-4-0 |
| Merge into: | lp:ubuntuone-client/stable-4-0 |
| Diff against target: | 3235 lines (+2054/-673), 28 files modified |

- Makefile.am (+2/-1)
- bin/ubuntuone-syncdaemon (+9/-1)
- data/syncdaemon.conf (+6/-0)
- run-mac-tests (+2/-0)
- run-tests.bat (+3/-1)
- tests/platform/check_reactor_import.py (+74/-0)
- tests/platform/filesystem_notifications/test_darwin.py (+17/-7)
- tests/platform/filesystem_notifications/test_filesystem_notifications.py (+6/-8)
- tests/platform/filesystem_notifications/test_fsevents_daemon.py (+483/-0)
- tests/platform/filesystem_notifications/test_linux.py (+16/-7)
- tests/platform/filesystem_notifications/test_windows.py (+15/-7)
- tests/syncdaemon/test_config.py (+13/-0)
- tests/syncdaemon/test_eventqueue.py (+17/-0)
- ubuntuone/platform/filesystem_notifications/__init__.py (+5/-11)
- ubuntuone/platform/filesystem_notifications/monitor/__init__.py (+95/-0)
- ubuntuone/platform/filesystem_notifications/monitor/common.py (+16/-229)
- ubuntuone/platform/filesystem_notifications/monitor/darwin/__init__.py (+5/-127)
- ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_client.py (+151/-0)
- ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_daemon.py (+439/-0)
- ubuntuone/platform/filesystem_notifications/monitor/linux.py (+10/-258)
- ubuntuone/platform/filesystem_notifications/monitor/windows.py (+0/-10)
- ubuntuone/platform/filesystem_notifications/notify_processor/__init__.py (+46/-0)
- ubuntuone/platform/filesystem_notifications/notify_processor/common.py (+289/-0)
- ubuntuone/platform/filesystem_notifications/notify_processor/linux.py (+322/-0)
- ubuntuone/syncdaemon/event_queue.py (+9/-3)
- ubuntuone/syncdaemon/filesystem_manager.py (+1/-0)
- ubuntuone/syncdaemon/filesystem_notifications.py (+0/-1)
- ubuntuone/syncdaemon/main.py (+3/-2)
| To merge this branch: | bzr merge lp:~dobey/ubuntuone-client/update-4-0 |
| Related bugs: |
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Eric Casteleijn (community) | | 2012-07-31 | Approve (2012-07-31) |
Commit Message
[Manuel de la Peña]
- Clean up the location of the file monitor imports (LP: #1026212).
- Provide an extra command-line option that allows choosing which file monitor implementation to use with the Ubuntu One sync daemon (LP: #1025689).
- Unify the file monitor implementations to account for the fact that more than one monitor implementation exists on darwin.
- Move the NotifyProcessor to a multiplatform package to simplify the code and remove circular imports.
- Add the initial filesystem monitor that will allow using the fsevents daemon on darwin.
[Mike McCracken]
- Add test script that fails if reactor is imported from the wrong place (LP: #1030961)
- Remove unnecessary symbol imports to avoid loading default reactor when not needed (LP: #1029636)
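
The monitor-selection mechanism added here resolves the `fs_monitor` option name to a monitor class via `get_filemonitor_class` (added in `ubuntuone/platform/filesystem_notifications/monitor/__init__.py` and called from `bin/ubuntuone-syncdaemon`). The sketch below illustrates the idea with a simplified, synchronous registry; the class names and fallback logic are illustrative, and the real implementation is asynchronous (the daemon script calls it with `yield`):

```python
# Hypothetical sketch of choosing a file monitor class by name, in the
# spirit of get_filemonitor_class(options.fs_monitor). The real code is
# asynchronous and lives in the monitor package; these classes are
# stand-ins, not the branch's actual implementations.

class DefaultMonitor(object):
    """Placeholder for the platform's default monitor."""


class FseventsDaemonMonitor(object):
    """Placeholder for the darwin fsevents-daemon monitor."""

    @classmethod
    def is_available_monitor(cls):
        """Report whether this monitor can run on this system."""
        return False  # e.g. the fsevents daemon is not running


MONITORS = {
    'default': DefaultMonitor,
    'fsevents_daemon': FseventsDaemonMonitor,
}


def get_filemonitor_class(monitor_id):
    """Return the monitor class for monitor_id, falling back to default."""
    if monitor_id is None:
        monitor_id = 'default'
    cls = MONITORS.get(monitor_id, DefaultMonitor)
    # fall back to the default when the requested monitor is unavailable
    checker = getattr(cls, 'is_available_monitor', None)
    if checker is not None and not checker():
        cls = MONITORS['default']
    return cls
```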
Description of the Change
| Ubuntu One Auto Pilot (otto-pilot) wrote : | # |
The attempt to merge lp:~dobey/ubuntuone-client/update-4-0 into lp:ubuntuone-client/stable-4-0 failed. Below is the output from the failed tests.
/usr/bin/
checking for autoconf >= 2.53...
testing autoconf2.50... not found.
testing autoconf... found 2.69
checking for automake >= 1.10...
testing automake-1.11... found 1.11.5
checking for libtool >= 1.5...
testing libtoolize... found 2.4.2
checking for intltool >= 0.30...
testing intltoolize... found 0.50.2
checking for pkg-config >= 0.14.0...
testing pkg-config... found 0.26
checking for gtk-doc >= 1.0...
testing gtkdocize... found 1.18
Checking for required M4 macros...
Checking for forbidden M4 macros...
Processing ./configure.ac
Running libtoolize...
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
Running intltoolize...
Running gtkdocize...
Running aclocal-1.11...
Running autoconf...
Running autoheader...
Running automake-1.11...
Running ./configure --enable-gtk-doc --enable-debug ...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for library containing strerror... none required
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking dependency style of gcc... (cached) gcc3
checking build system type... x86_64-
checking host system type... x86_64-
checking how to print strings... printf
checking for a sed that does not truncate output... /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... y...
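
The `tests/platform/check_reactor_import.py` script added by this branch guards against the default Twisted reactor being installed as a side effect of importing `ubuntuone.platform` modules, which would break the qt4reactor on windows and darwin. The core technique can be sketched as a self-contained variant that hooks `__import__` (here on Python 3 `builtins` rather than the script's Python 2 `__builtin__`, and guarding a stand-in module name so it runs without Twisted; the real script also inspects the fromlist):

```python
# Simplified sketch of the import-guard technique: swap in a wrapper
# for __import__ that aborts when a forbidden module is imported.
# FORBIDDEN is an illustrative stand-in for twisted.internet.reactor.

import builtins

FORBIDDEN = 'json'

real_import = builtins.__import__


def fake_import(name, *args, **kwargs):
    """Wrapper for __import__ that flags the forbidden import."""
    if name == FORBIDDEN:
        raise RuntimeError('forbidden import: %s' % name)
    return real_import(name, *args, **kwargs)


builtins.__import__ = fake_import
try:
    import os  # an allowed import still succeeds

    try:
        import json  # the forbidden import is intercepted
        hit_guard = False
    except RuntimeError:
        hit_guard = True
finally:
    builtins.__import__ = real_import  # always restore the real hook
```

Note that the import statement calls `__import__` even for modules already cached in `sys.modules`, which is what lets the guard catch every import site.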
Preview Diff
| 1 | === modified file 'Makefile.am' |
| 2 | --- Makefile.am 2012-05-15 13:25:37 +0000 |
| 3 | +++ Makefile.am 2012-07-31 17:27:22 +0000 |
| 4 | @@ -54,8 +54,9 @@ |
| 5 | test: logging.conf $(clientdefs_DATA) Makefile |
| 6 | echo "$(PYTHONPATH)" |
| 7 | if test "x$(builddir)" == "x$(srcdir)"; then \ |
| 8 | - PYTHONPATH="$(PYTHONPATH)" u1trial -r $(REACTOR) -p tests/platform/windows,tests/proxy -i "test_windows.py,test_darwin.py" tests || exit 1; \ |
| 9 | + PYTHONPATH="$(PYTHONPATH)" u1trial -r $(REACTOR) -p tests/platform/windows,tests/proxy -i "test_windows.py,test_darwin.py,test_fsevents_daemon.py" tests || exit 1; \ |
| 10 | PYTHONPATH="$(PYTHONPATH)" u1trial -r qt4 -p tests/platform/windows -i "test_windows.py,test_darwin.py" tests/proxy || exit 1; \ |
| 11 | + PYTHONPATH="$(PYTHONPATH)" $(PYTHON) tests/platform/check_reactor_import.py || exit 1; \ |
| 12 | fi |
| 13 | rm -rf _trial_temp |
| 14 | |
| 15 | |
| 16 | === modified file 'bin/ubuntuone-syncdaemon' |
| 17 | --- bin/ubuntuone-syncdaemon 2012-07-02 15:39:40 +0000 |
| 18 | +++ bin/ubuntuone-syncdaemon 2012-07-31 17:27:22 +0000 |
| 19 | @@ -56,6 +56,10 @@ |
| 20 | recursive_move, |
| 21 | set_application_name, |
| 22 | ) |
| 23 | +from ubuntuone.platform.filesystem_notifications.monitor import ( |
| 24 | + get_filemonitor_class, |
| 25 | +) |
| 26 | + |
| 27 | from ubuntuone.syncdaemon import logger, config |
| 28 | from ubuntuone.syncdaemon.config import ( |
| 29 | get_config_files, |
| 30 | @@ -212,6 +216,9 @@ |
| 31 | "[CONSUMER_KEY:CONSUMER_SECRET:]KEY:SECRET" |
| 32 | parser.error(msg) |
| 33 | |
| 34 | + # check which file monitor to use |
| 35 | + monitor_class = yield get_filemonitor_class(options.fs_monitor) |
| 36 | + |
| 37 | main = Main(options.root_dir, options.shares_dir, options.data_dir, |
| 38 | partials_dir, host=options.host, port=int(options.port), |
| 39 | dns_srv=options.dns_srv, ssl=True, |
| 40 | @@ -224,7 +231,8 @@ |
| 41 | write_limit=options.bandwidth_throttling_write_limit, |
| 42 | throttling_enabled=options.bandwidth_throttling_on, |
| 43 | ignore_files=options.ignore, |
| 44 | - oauth_credentials=oauth_credentials) |
| 45 | + oauth_credentials=oauth_credentials, |
| 46 | + monitor_class=monitor_class) |
| 47 | |
| 48 | # override the reactor default signal handlers in order to |
| 49 | # shutdown properly |
| 50 | |
| 51 | === modified file 'data/syncdaemon.conf' |
| 52 | --- data/syncdaemon.conf 2012-06-20 15:19:12 +0000 |
| 53 | +++ data/syncdaemon.conf 2012-07-31 17:27:22 +0000 |
| 54 | @@ -106,6 +106,11 @@ |
| 55 | memory_pool_limit.parser = int |
| 56 | memory_pool_limit.help = How many AQ Commands will be kept in memory to execute. |
| 57 | |
| 58 | +fs_monitor.default = default |
| 59 | +fs_monitor.metavar = MONITOR_TYPE |
| 60 | +fs_monitor.help = Set the file monitor to be used to get the events from the |
| 61 | + file system. |
| 62 | + |
| 63 | |
| 64 | [notifications] |
| 65 | show_all_notifications.default = True |
| 66 | @@ -130,6 +135,7 @@ |
| 67 | write_limit.metavar = UPLOAD_LIMIT |
| 68 | write_limit.help = Set the upload limit (bytes/sec). |
| 69 | |
| 70 | + |
| 71 | [debug] |
| 72 | lsprof_file.metavar = FILE |
| 73 | lsprof_file.help = Profile execution using the lsprof profiler, and write the |
| 74 | |
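
The hunk above defines the new `fs_monitor` option in the config schema (`fs_monitor.default`, `fs_monitor.metavar`, `fs_monitor.help`), so it can be overridden in a user configuration as well as on the command line. A hypothetical override might look like the following; the section name and the `fsevents_daemon` value are illustrative, and only `default` is defined by this branch:

```ini
# Hypothetical syncdaemon configuration override: choose the file
# monitor used to get events from the file system.
[__main__]
fs_monitor = default
```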
| 75 | === modified file 'run-mac-tests' |
| 76 | --- run-mac-tests 2012-06-27 19:48:24 +0000 |
| 77 | +++ run-mac-tests 2012-07-31 17:27:22 +0000 |
| 78 | @@ -45,3 +45,5 @@ |
| 79 | python $u1trial --reactor=twisted -i "test_linux.py,test_windows.py" -p tests/platform/linux "$MODULE" |
| 80 | rm -rf _trial_temp |
| 81 | rm -rf build |
| 82 | + |
| 83 | +python tests/platform/check_reactor_import.py |
| 84 | |
| 85 | === modified file 'run-tests.bat' |
| 86 | --- run-tests.bat 2012-05-16 16:56:41 +0000 |
| 87 | +++ run-tests.bat 2012-07-31 17:27:22 +0000 |
| 88 | @@ -74,7 +74,7 @@ |
| 89 | COPY windows\clientdefs.py ubuntuone\clientdefs.py |
| 90 | COPY windows\logging.conf data\logging.conf |
| 91 | :: execute the tests with a number of ignored linux and mac os only modules |
| 92 | -"%PYTHONEXEPATH%" "%TRIALPATH%" --reactor=twisted -c -p tests\platform\linux -i "test_linux.py,test_darwin.py" %PARAMS% tests |
| 93 | +"%PYTHONEXEPATH%" "%TRIALPATH%" --reactor=twisted -c -p tests\platform\linux -i "test_linux.py,test_darwin.py,test_fsevents_daemon.py" %PARAMS% tests |
| 94 | |
| 95 | IF %SKIPLINT% == 1 ( |
| 96 | ECHO Skipping style checks |
| 97 | @@ -82,6 +82,8 @@ |
| 98 | ECHO Performing style checks... |
| 99 | "%PYTHONEXEPATH%" "%LINTPATH%" |
| 100 | |
| 101 | +"%PYTHONEXEPATH%" tests\platform\check_reactor_import.py |
| 102 | + |
| 103 | :: if pep8 is not present, move to the end |
| 104 | IF EXIST "%PEP8PATH%" ( |
| 105 | "%PEP8PATH%" --repeat ubuntuone |
| 106 | |
| 107 | === added file 'tests/platform/check_reactor_import.py' |
| 108 | --- tests/platform/check_reactor_import.py 1970-01-01 00:00:00 +0000 |
| 109 | +++ tests/platform/check_reactor_import.py 2012-07-31 17:27:22 +0000 |
| 110 | @@ -0,0 +1,74 @@ |
| 111 | +#! /usr/bin/python |
| 112 | +# |
| 113 | +# Copyright (C) 2012 Canonical Ltd. |
| 114 | +# |
| 115 | +# This program is free software: you can redistribute it and/or modify it |
| 116 | +# under the terms of the GNU General Public License version 3, as published |
| 117 | +# by the Free Software Foundation. |
| 118 | +# |
| 119 | +# This program is distributed in the hope that it will be useful, but |
| 120 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
| 121 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
| 122 | +# PURPOSE. See the GNU General Public License for more details. |
| 123 | +# |
| 124 | +# You should have received a copy of the GNU General Public License along |
| 125 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
| 126 | +# |
| 127 | +# In addition, as a special exception, the copyright holders give |
| 128 | +# permission to link the code of portions of this program with the |
| 129 | +# OpenSSL library under certain conditions as described in each |
| 130 | +# individual source file, and distribute linked combinations |
| 131 | +# including the two. |
| 132 | +# You must obey the GNU General Public License in all respects |
| 133 | +# for all of the code used other than OpenSSL. If you modify |
| 134 | +# file(s) with this exception, you may extend this exception to your |
| 135 | +# version of the file(s), but you are not obligated to do so. If you |
| 136 | +# do not wish to do so, delete this exception statement from your |
| 137 | +# version. If you delete this exception statement from all source |
| 138 | +# files in the program, then also delete it here. |
| 139 | +"""A script that checks for unintended imports of twisted.internet.reactor.""" |
| 140 | + |
| 141 | +# NOTE: the goal of this script is to avoid a bug that affects |
| 142 | +# ubuntuone-control-panel on windows and darwin. Those platforms use |
| 143 | +# the qt4reactor, and will break if the default reactor is installed |
| 144 | +# first. This can happen if a module used by control-panel (such as |
| 145 | +# ubuntuone.platform.credentials), imports reactor. Only sub-modules |
| 146 | +# that are not used by ubuntuone-control-panel can safely import |
| 147 | +# reactor at module-level. |
| 148 | + |
| 149 | +from __future__ import (unicode_literals, print_function) |
| 150 | + |
| 151 | +import __builtin__ |
| 152 | + |
| 153 | +import sys |
| 154 | +import traceback |
| 155 | + |
| 156 | + |
| 157 | +def fake_import(*args, **kwargs): |
| 158 | + """A wrapper for __import__ that dies when importing reactor.""" |
| 159 | + imp_name_base = args[0] |
| 160 | + |
| 161 | + if len(args) == 4 and args[3] is not None: |
| 162 | + imp_names = ["{0}.{1}".format(imp_name_base, sm) |
| 163 | + for sm in args[3]] |
| 164 | + else: |
| 165 | + imp_names = [imp_name_base] |
| 166 | + |
| 167 | + for imp_name in imp_names: |
| 168 | + if 'twisted.internet.reactor' == imp_name: |
| 169 | + print("ERROR: should not import reactor here:") |
| 170 | + traceback.print_stack() |
| 171 | + sys.exit(1) |
| 172 | + |
| 173 | + r = real_import(*args, **kwargs) |
| 174 | + return r |
| 175 | + |
| 176 | + |
| 177 | +if __name__ == '__main__': |
| 178 | + |
| 179 | + real_import = __builtin__.__import__ |
| 180 | + __builtin__.__import__ = fake_import |
| 181 | + |
| 182 | + subs = ["", ".tools", ".logger", ".credentials"] |
| 183 | + for module in ["ubuntuone.platform" + p for p in subs]: |
| 184 | + m = __import__(module) |
| 185 | |
| 186 | === modified file 'tests/platform/filesystem_notifications/test_darwin.py' |
| 187 | --- tests/platform/filesystem_notifications/test_darwin.py 2012-07-12 13:51:33 +0000 |
| 188 | +++ tests/platform/filesystem_notifications/test_darwin.py 2012-07-31 17:27:22 +0000 |
| 189 | @@ -39,11 +39,14 @@ |
| 190 | |
| 191 | from contrib.testing.testcase import BaseTwistedTestCase |
| 192 | from ubuntuone.devtools.handlers import MementoHandler |
| 193 | -from ubuntuone.platform.filesystem_notifications import ( |
| 194 | - darwin as filesystem_notifications, |
| 195 | -) |
| 196 | -from ubuntuone.platform.filesystem_notifications.common import ( |
| 197 | - NotifyProcessor, |
| 198 | +from ubuntuone.platform.filesystem_notifications.monitor import ( |
| 199 | + common, |
| 200 | +) |
| 201 | +from ubuntuone.platform.filesystem_notifications.monitor.darwin import ( |
| 202 | + fsevents_client as filesystem_notifications, |
| 203 | +) |
| 204 | +from ubuntuone.platform.filesystem_notifications import notify_processor |
| 205 | +from ubuntuone.platform.filesystem_notifications.monitor.common import ( |
| 206 | Watch, |
| 207 | WatchManager, |
| 208 | ) |
| 209 | @@ -60,7 +63,7 @@ |
| 210 | |
| 211 | # A reverse mapping for the tests |
| 212 | REVERSE_MACOS_ACTIONS = {} |
| 213 | -for key, value in filesystem_notifications.ACTIONS.iteritems(): |
| 214 | +for key, value in common.ACTIONS.items(): |
| 215 | REVERSE_MACOS_ACTIONS[value] = key |
| 216 | |
| 217 | |
| 218 | @@ -1030,7 +1033,7 @@ |
| 219 | def setUp(self): |
| 220 | """set up the diffeent tests.""" |
| 221 | yield super(TestNotifyProcessor, self).setUp() |
| 222 | - self.processor = NotifyProcessor(None) |
| 223 | + self.processor = notify_processor.NotifyProcessor(None) |
| 224 | self.general = FakeGeneralProcessor() |
| 225 | self.processor.general_processor = self.general |
| 226 | |
| 227 | @@ -1384,3 +1387,10 @@ |
| 228 | |
| 229 | self.assertTrue(d1.result, "Should not be called yet.") |
| 230 | self.assertTrue(d2, "Should not be called yet.") |
| 231 | + |
| 232 | + @defer.inlineCallbacks |
| 233 | + def test_is_available_monitor(self): |
| 234 | + """Test test the is_available_monitor method.""" |
| 235 | + # we should always return true |
| 236 | + is_available = yield common.FilesystemMonitor.is_available_monitor() |
| 237 | + self.assertTrue(is_available, 'Should always be available.') |
| 238 | |
| 239 | === modified file 'tests/platform/filesystem_notifications/test_filesystem_notifications.py' |
| 240 | --- tests/platform/filesystem_notifications/test_filesystem_notifications.py 2012-07-10 20:11:18 +0000 |
| 241 | +++ tests/platform/filesystem_notifications/test_filesystem_notifications.py 2012-07-31 17:27:22 +0000 |
| 242 | @@ -46,9 +46,7 @@ |
| 243 | remove_dir, |
| 244 | rename, |
| 245 | ) |
| 246 | -from ubuntuone.platform.filesystem_notifications import ( |
| 247 | - _GeneralINotifyProcessor, |
| 248 | -) |
| 249 | +from ubuntuone.platform.filesystem_notifications import notify_processor |
| 250 | from ubuntuone.syncdaemon.tritcask import Tritcask |
| 251 | from ubuntuone.syncdaemon import ( |
| 252 | event_queue, |
| 253 | @@ -62,19 +60,19 @@ |
| 254 | |
| 255 | def test_filter_none(self): |
| 256 | """Still works ok even if not receiving a regex to ignore.""" |
| 257 | - p = _GeneralINotifyProcessor(None) |
| 258 | + p = notify_processor.NotifyProcessor(None) |
| 259 | self.assertFalse(p.is_ignored("froo.pyc")) |
| 260 | |
| 261 | def test_filter_one(self): |
| 262 | """Filters stuff that matches (or not) this one regex.""" |
| 263 | - p = _GeneralINotifyProcessor(None, ['\A.*\\.pyc\Z']) |
| 264 | + p = notify_processor.NotifyProcessor(None, ['\A.*\\.pyc\Z']) |
| 265 | self.assertTrue(p.is_ignored("froo.pyc")) |
| 266 | self.assertFalse(p.is_ignored("froo.pyc.real")) |
| 267 | self.assertFalse(p.is_ignored("otherstuff")) |
| 268 | |
| 269 | def test_filter_two_simple(self): |
| 270 | """Filters stuff that matches (or not) these simple regexes.""" |
| 271 | - p = _GeneralINotifyProcessor(None, ['\A.*foo\Z', '\A.*bar\Z']) |
| 272 | + p = notify_processor.NotifyProcessor(None, ['\A.*foo\Z', '\A.*bar\Z']) |
| 273 | self.assertTrue(p.is_ignored("blah_foo")) |
| 274 | self.assertTrue(p.is_ignored("blah_bar")) |
| 275 | self.assertFalse(p.is_ignored("bar_xxx")) |
| 276 | @@ -83,7 +81,7 @@ |
| 277 | |
| 278 | def test_filter_two_complex(self): |
| 279 | """Filters stuff that matches (or not) these complex regexes.""" |
| 280 | - p = _GeneralINotifyProcessor(None, |
| 281 | + p = notify_processor.NotifyProcessor(None, |
| 282 | ['\A.*foo\Z|\Afoo.*\Z', '\A.*bar\Z']) |
| 283 | self.assertTrue(p.is_ignored("blah_foo")) |
| 284 | self.assertTrue(p.is_ignored("blah_bar")) |
| 285 | @@ -98,7 +96,7 @@ |
| 286 | store_call = lambda *args: calls.append(args) |
| 287 | self.patch(filesystem_notifications, "access", store_call) |
| 288 | self.patch(filesystem_notifications, "path_exists", lambda _: True) |
| 289 | - p = _GeneralINotifyProcessor(None) |
| 290 | + p = notify_processor.NotifyProcessor(None) |
| 291 | p.is_ignored(sample_path) |
| 292 | self.assertEqual(calls, [(sample_path,)]) |
| 293 | |
| 294 | |
| 295 | === added file 'tests/platform/filesystem_notifications/test_fsevents_daemon.py' |
| 296 | --- tests/platform/filesystem_notifications/test_fsevents_daemon.py 1970-01-01 00:00:00 +0000 |
| 297 | +++ tests/platform/filesystem_notifications/test_fsevents_daemon.py 2012-07-31 17:27:22 +0000 |
| 298 | @@ -0,0 +1,483 @@ |
| 299 | +# -*- coding: utf-8 *-* |
| 300 | +# |
| 301 | +# Copyright 2012 Canonical Ltd. |
| 302 | +# |
| 303 | +# This program is free software: you can redistribute it and/or modify it |
| 304 | +# under the terms of the GNU General Public License version 3, as published |
| 305 | +# by the Free Software Foundation. |
| 306 | +# |
| 307 | +# This program is distributed in the hope that it will be useful, but |
| 308 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
| 309 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
| 310 | +# PURPOSE. See the GNU General Public License for more details. |
| 311 | +# |
| 312 | +# You should have received a copy of the GNU General Public License along |
| 313 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
| 314 | +# |
| 315 | +# In addition, as a special exception, the copyright holders give |
| 316 | +# permission to link the code of portions of this program with the |
| 317 | +# OpenSSL library under certain conditions as described in each |
| 318 | +# individual source file, and distribute linked combinations |
| 319 | +# including the two. |
| 320 | +# You must obey the GNU General Public License in all respects |
| 321 | +# for all of the code used other than OpenSSL. If you modify |
| 322 | +# file(s) with this exception, you may extend this exception to your |
| 323 | +# version of the file(s), but you are not obligated to do so. If you |
| 324 | +# do not wish to do so, delete this exception statement from your |
| 325 | +# version. If you delete this exception statement from all source |
| 326 | +# files in the program, then also delete it here. |
| 327 | +"""Tests for the fsevents daemon integration.""" |
| 328 | + |
| 329 | +import os |
| 330 | + |
| 331 | +from twisted.internet import defer, protocol |
| 332 | + |
| 333 | +from contrib.testing.testcase import BaseTwistedTestCase |
| 334 | +from ubuntuone.darwin import fsevents |
| 335 | +from ubuntuone.devtools.testcases.txsocketserver import TidyUnixServer |
| 336 | +from ubuntuone.platform.filesystem_notifications.monitor.darwin import ( |
| 337 | + fsevents_daemon, |
| 338 | +) |
| 339 | +from ubuntuone.platform.filesystem_notifications.pyinotify_agnostic import ( |
| 340 | + IN_CREATE, |
| 341 | + IN_DELETE, |
| 342 | + IN_MOVED_FROM, |
| 343 | + IN_MOVED_TO, |
| 344 | +) |
| 345 | + |
| 346 | +class FakeServerProtocol(protocol.Protocol): |
| 347 | + """A test protocol.""" |
| 348 | + |
| 349 | + def dataReceived(self, data): |
| 350 | + """Echo the data received.""" |
| 351 | + self.transport.write(data) |
| 352 | + |
| 353 | + |
| 354 | +class FakeServerFactory(protocol.ServerFactory): |
| 355 | + """A factory for the test server.""" |
| 356 | + |
| 357 | + protocol = FakeServerProtocol |
| 358 | + |
| 359 | + |
| 360 | +class FakeDaemonEvent(object): |
| 361 | + """A fake daemon event.""" |
| 362 | + |
| 363 | + def __init__(self): |
| 364 | + self.event_paths = [] |
| 365 | + self.is_directory = False |
| 366 | + self.event_type = None |
| 367 | + |
| 368 | + |
| 369 | +class FakeProcessor(object): |
| 370 | + """A fake processor.""" |
| 371 | + |
| 372 | + def __init__(self, *args): |
| 373 | + """Create a new instance.""" |
| 374 | + self.processed_events = [] |
| 375 | + |
| 376 | + def __call__(self, event): |
| 377 | + """Process and event.""" |
| 378 | + self.processed_events.append(event) |
| 379 | + |
| 380 | + |
| 381 | +class FakePyInotifyEventsFactory(object): |
| 382 | + """Fake factory.""" |
| 383 | + |
| 384 | + def __init__(self): |
| 385 | + """Create a new instance.""" |
| 386 | + self.processor = FakeProcessor() |
| 387 | + self.called = [] |
| 388 | + self.watched_paths = [] |
| 389 | + self.ignored_paths = [] |
| 390 | + |
| 391 | + |
| 392 | +class FakeTransport(object): |
| 393 | + """A fake transport for the protocol.""" |
| 394 | + |
| 395 | + def __init__(self): |
| 396 | + """Create a new instance.""" |
| 397 | + self.called = [] |
| 398 | + |
| 399 | + def loseConnection(self): |
| 400 | + """"Lost the connection.""" |
| 401 | + self.called.append('loseConnection') |
| 402 | + |
| 403 | + |
| 404 | +class FakeProtocol(object): |
| 405 | + """A fake protocol object to interact with the daemon.""" |
| 406 | + |
| 407 | + def __init__(self): |
| 408 | + """Create a new instance.""" |
| 409 | + self.called = [] |
| 410 | + self.transport = FakeTransport() |
| 411 | + |
| 412 | + def remove_user(self): |
| 413 | + """Remove the user.""" |
| 414 | + self.called.append('remove_user') |
| 415 | + return defer.succeed(None) |
| 416 | + |
| 417 | + def remove_path(self, path): |
| 418 | + """Remove a path.""" |
| 419 | + self.called.extend(['remove_path', path]) |
| 420 | + return defer.succeed(True) |
| 421 | + |
| 422 | + def add_path(self, path): |
| 423 | + """Add a path.""" |
| 424 | + self.called.extend(['add_path', path]) |
| 425 | + |
| 426 | +class PyInotifyEventsFactoryTestCase(BaseTwistedTestCase): |
| 427 | + """Test the factory used to receive events.""" |
| 428 | + |
| 429 | + @defer.inlineCallbacks |
| 430 | + def setUp(self): |
| 431 | + """Set the diff tests.""" |
| 432 | + yield super(PyInotifyEventsFactoryTestCase, self).setUp() |
| 433 | + self.processor = FakeProcessor() |
| 434 | + self.factory = fsevents_daemon.PyInotifyEventsFactory(self.processor) |
| 435 | + |
| 436 | + def test_path_interesting_not_watched_or_ignored(self): |
| 437 | + """Test that an unwatched, non-ignored path is not interesting.""" |
| 438 | + path = u'/not/watched/path' |
| 439 | + self.assertTrue(self.factory.path_is_not_interesting(path)) |
| 440 | + |
| 441 | + def test_path_interesting_watched_not_ignored(self): |
| 442 | + """Test that a watched, non-ignored path is interesting.""" |
| 443 | + path = u'/watched/path' |
| 444 | + self.factory.watched_paths.append(path) |
| 445 | + self.assertFalse(self.factory.path_is_not_interesting(path)) |
| 446 | + |
| 447 | + def test_path_interesting_watched_but_ignored(self): |
| 448 | + """Test that a watched but ignored path is not interesting.""" |
| 449 | + path = u'/ignored/path' |
| 450 | + self.factory.watched_paths.append(path) |
| 451 | + self.factory.ignored_paths.append(path) |
| 452 | + self.assertTrue(self.factory.path_is_not_interesting(path)) |
| 453 | + |
| 454 | + def test_path_interesting_not_watched_but_ignored(self): |
| 455 | + """Test that an ignored path is not interesting.""" |
| 456 | + path = u'/ignored/path' |
| 457 | + self.factory.ignored_paths.append(path) |
| 458 | + self.assertTrue(self.factory.path_is_not_interesting(path)) |
| 459 | + |
| 460 | + def test_is_create_false_rename(self): |
| 461 | + """Test if we do know when an event is a create.""" |
| 462 | + source_path = u'/other/watched/path' |
| 463 | + destination_path = u'/watched/path' |
| 464 | + source_head, _ = os.path.split(source_path) |
| 465 | + destination_head, _ = os.path.split(destination_path) |
| 466 | + self.factory.watched_paths.extend([source_head, destination_head]) |
| 467 | + event = FakeDaemonEvent() |
| 468 | + event.event_paths.extend([source_path, destination_path]) |
| 469 | + self.assertFalse(self.factory.is_create(event)) |
| 470 | + |
| 471 | + def test_is_create_false_delete(self): |
| 472 | + """Test if we do know when an event is a create.""" |
| 473 | + source_path = u'/watched/path' |
| 474 | + destination_path = u'/not/watched/path' |
| 475 | + source_head, _ = os.path.split(source_path) |
| 476 | + self.factory.watched_paths.append(source_head) |
| 477 | + event = FakeDaemonEvent() |
| 478 | + event.event_paths.extend([source_path, destination_path]) |
| 479 | + self.assertFalse(self.factory.is_create(event)) |
| 480 | + |
| 481 | + def test_is_create_true(self): |
| 482 | + """Test if we do know when an event is a create.""" |
| 483 | + source_path = u'/not/watched/path' |
| 484 | + destination_path = u'/watched/path' |
| 485 | + destination_head, _ = os.path.split(destination_path) |
| 486 | + self.factory.watched_paths.append(destination_head) |
| 487 | + event = FakeDaemonEvent() |
| 488 | + event.event_paths.extend([source_path, destination_path]) |
| 489 | + self.assertTrue(self.factory.is_create(event)) |
| 490 | + |
| 491 | + def test_is_delete_false_rename(self): |
| 492 | + """Test if we do know when an event is a delete.""" |
| 493 | + source_path = u'/other/watched/path' |
| 494 | + destination_path = u'/watched/path' |
| 495 | + source_head, _ = os.path.split(source_path) |
| 496 | + destination_head, _ = os.path.split(destination_path) |
| 497 | + self.factory.watched_paths.extend([source_head, destination_head]) |
| 498 | + event = FakeDaemonEvent() |
| 499 | + event.event_paths.extend([source_path, destination_path]) |
| 500 | + self.assertFalse(self.factory.is_delete(event)) |
| 501 | + |
| 502 | + def test_is_delete_false_create(self): |
| 503 | + """Test if we do know when an event is a delete.""" |
| 504 | + source_path = u'/not/watched/path' |
| 505 | + destination_path = u'/watched/path' |
| 506 | + destination_head, _ = os.path.split(destination_path) |
| 507 | + self.factory.watched_paths.append(destination_head) |
| 508 | + event = FakeDaemonEvent() |
| 509 | + event.event_paths.extend([source_path, destination_path]) |
| 510 | + self.assertFalse(self.factory.is_delete(event)) |
| 511 | + |
| 512 | + def test_is_delete_true(self): |
| 513 | + """Test if we do know when an event is a delete.""" |
| 514 | + source_path = u'/watched/path' |
| 515 | + destination_path = u'/not/watched/path' |
| 516 | + source_head, _ = os.path.split(source_path) |
| 517 | + self.factory.watched_paths.append(source_head) |
| 518 | + event = FakeDaemonEvent() |
| 519 | + event.event_paths.extend([source_path, destination_path]) |
| 520 | + self.assertTrue(self.factory.is_delete(event)) |
| 521 | + |
| 522 | + def test_generate_from_event(self): |
| 523 | + """Test the creation of a fake from event.""" |
| 524 | + cookie = 'cookie' |
| 525 | + source_path = u'/source/path' |
| 526 | + destination_path = u'/destination/path' |
| 527 | + event = FakeDaemonEvent() |
| 528 | + event.event_paths.extend([source_path, destination_path]) |
| 529 | + pyinotify_event = self.factory.generate_from_event(event, cookie) |
| 530 | + self.assertEqual(cookie, pyinotify_event.cookie) |
| 531 | + self.assertEqual(0, pyinotify_event.wd) |
| 532 | + self.assertEqual(event.is_directory, pyinotify_event.dir) |
| 533 | + self.assertEqual(IN_MOVED_FROM, pyinotify_event.mask) |
| 534 | + self.assertEqual(source_path, pyinotify_event.pathname) |
| 535 | + |
| 536 | + def test_generate_to_event(self): |
| 537 | + """Test the creation of a fake to event.""" |
| 538 | + cookie = 'cookie' |
| 539 | + source_path = u'/source/path' |
| 540 | + destination_path = u'/destination/path' |
| 541 | + event = FakeDaemonEvent() |
| 542 | + event.event_paths.extend([source_path, destination_path]) |
| 543 | + pyinotify_event = self.factory.generate_to_event(event, cookie) |
| 544 | + self.assertEqual(cookie, pyinotify_event.cookie) |
| 545 | + self.assertEqual(0, pyinotify_event.wd) |
| 546 | + self.assertEqual(event.is_directory, pyinotify_event.dir) |
| 547 | + self.assertEqual(IN_MOVED_TO, pyinotify_event.mask) |
| 548 | + self.assertEqual(destination_path, pyinotify_event.pathname) |
| 549 | + |
| 550 | + def test_convert_in_pyinotify_event_no_rename(self): |
| 551 | + """Test the creation of a no rename event.""" |
| 552 | + event_path = u'/path/of/the/event' |
| 553 | + for action in fsevents_daemon.DARWIN_ACTIONS: |
| 554 | + event = FakeDaemonEvent() |
| 555 | + event.event_paths.append(event_path) |
| 556 | + event.event_type = action |
| 557 | + converted_events = self.factory.convert_in_pyinotify_event(event) |
| 558 | + self.assertEqual(1, len(converted_events)) |
| 559 | + pyinotify_event = converted_events[0] |
| 560 | + self.assertEqual(0, pyinotify_event.wd) |
| 561 | + self.assertEqual(event.is_directory, pyinotify_event.dir) |
| 562 | + self.assertEqual(fsevents_daemon.DARWIN_ACTIONS[action], |
| 563 | + pyinotify_event.mask) |
| 564 | + self.assertEqual(event_path, pyinotify_event.pathname) |
| 565 | + |
| 566 | + def test_convert_in_pyinotify_event_rename_create(self): |
| 567 | + """Test the creation of a rename which is a create.""" |
| 568 | + source_path = u'/not/watched/path' |
| 569 | + destination_path = u'/watched/path' |
| 570 | + head, _ = os.path.split(destination_path) |
| 571 | + self.factory.watched_paths.append(head) |
| 572 | + event = FakeDaemonEvent() |
| 573 | + event.event_type = fsevents.FSE_RENAME |
| 574 | + event.event_paths.extend([source_path, destination_path]) |
| 575 | + converted_events = self.factory.convert_in_pyinotify_event(event) |
| 576 | + self.assertEqual(1, len(converted_events)) |
| 577 | + pyinotify_event = converted_events[0] |
| 578 | + self.assertEqual(0, pyinotify_event.wd) |
| 579 | + self.assertEqual(event.is_directory, pyinotify_event.dir) |
| 580 | + self.assertEqual(IN_CREATE, pyinotify_event.mask) |
| 581 | + self.assertEqual(destination_path, pyinotify_event.pathname) |
| 582 | + |
| 583 | + def test_convert_in_pyinotify_event_rename_delete(self): |
| 584 | + """Test the creation of a rename which is a delete.""" |
| 585 | + source_path = u'/watched/path' |
| 586 | + destination_path = u'/not/watched/path' |
| 587 | + head, _ = os.path.split(source_path) |
| 588 | + self.factory.watched_paths.append(head) |
| 589 | + event = FakeDaemonEvent() |
| 590 | + event.event_type = fsevents.FSE_RENAME |
| 591 | + event.event_paths.extend([source_path, destination_path]) |
| 592 | + converted_events = self.factory.convert_in_pyinotify_event(event) |
| 593 | + self.assertEqual(1, len(converted_events)) |
| 594 | + pyinotify_event = converted_events[0] |
| 595 | + self.assertEqual(0, pyinotify_event.wd) |
| 596 | + self.assertEqual(event.is_directory, pyinotify_event.dir) |
| 597 | + self.assertEqual(IN_DELETE, pyinotify_event.mask) |
| 598 | + self.assertEqual(source_path, pyinotify_event.pathname) |
| 599 | + |
| 600 | + def test_convert_in_pyinotify_event_rename(self): |
| 601 | + """Test the creation of a rename event.""" |
| 602 | + source_path = u'/watched/path1' |
| 603 | + destination_path = u'/watched/path2' |
| 604 | + head, _ = os.path.split(source_path) |
| 605 | + self.factory.watched_paths.append(head) |
| 606 | + event = FakeDaemonEvent() |
| 607 | + event.event_type = fsevents.FSE_RENAME |
| 608 | + event.event_paths.extend([source_path, destination_path]) |
| 609 | + converted_events = self.factory.convert_in_pyinotify_event(event) |
| 610 | + self.assertEqual(2, len(converted_events)) |
| 611 | + from_event = converted_events[0] |
| 612 | + to_event = converted_events[1] |
| 613 | + # assert from event |
| 614 | + self.assertEqual(0, from_event.wd) |
| 615 | + self.assertEqual(event.is_directory, from_event.dir) |
| 616 | + self.assertEqual(IN_MOVED_FROM, from_event.mask) |
| 617 | + self.assertEqual(source_path, from_event.pathname) |
| 618 | + # assert to event |
| 619 | + self.assertEqual(0, to_event.wd) |
| 620 | + self.assertEqual(event.is_directory, to_event.dir) |
| 621 | + self.assertEqual(IN_MOVED_TO, to_event.mask) |
| 622 | + self.assertEqual(destination_path, to_event.pathname) |
| 623 | + # assert cookie |
| 624 | + self.assertEqual(from_event.cookie, to_event.cookie) |
| 625 | + |
| 626 | + def test_process_event_ignored_type(self): |
| 627 | + """Test processing the event of an ignored type.""" |
| 628 | + for action in fsevents_daemon.DARWIN_IGNORED_ACTIONS: |
| 629 | + event = FakeDaemonEvent() |
| 630 | + event.event_type = action |
| 631 | + self.factory.process_event(event) |
| 632 | + self.assertEqual(0, len(self.processor.processed_events)) |
| 633 | + |
| 634 | + def test_process_event_dropped(self): |
| 635 | + """Test processing the drop of the events.""" |
| 636 | + func_called = [] |
| 637 | + event = FakeDaemonEvent() |
| 638 | + event.event_type = fsevents.FSE_EVENTS_DROPPED |
| 639 | + |
| 640 | + def fake_events_dropped(): |
| 641 | + """A fake events dropped implementation.""" |
| 642 | + func_called.append('fake_events_dropped') |
| 643 | + |
| 644 | + self.patch(self.factory, 'events_dropper', fake_events_dropped) |
| 645 | + self.factory.process_event(event) |
| 646 | + self.assertIn('fake_events_dropped', func_called) |
| 647 | + |
| 648 | + def test_process_ignored_path(self): |
| 649 | + """Test processing events from an ignored path.""" |
| 650 | + event_path = u'/path/of/the/event' |
| 651 | + head, _ = os.path.split(event_path) |
| 652 | + self.factory.ignored_paths.append(head) |
| 653 | + event = FakeDaemonEvent() |
| 654 | + event.event_paths.append(event_path) |
| 655 | + event.event_type = fsevents.FSE_CREATE_FILE |
| 656 | + self.factory.process_event(event) |
| 657 | + self.assertEqual(0, len(self.processor.processed_events)) |
| 658 | + |
| 659 | + def test_process_not_ignored_path(self): |
| 660 | + """Test processing events that are not ignored.""" |
| 661 | + event_path = u'/path/of/the/event' |
| 662 | + head, _ = os.path.split(event_path) |
| 663 | + self.factory.watched_paths.append(head) |
| 664 | + event = FakeDaemonEvent() |
| 665 | + event.event_paths.append(event_path) |
| 666 | + event.event_type = fsevents.FSE_CREATE_FILE |
| 667 | + self.factory.process_event(event) |
| 668 | + self.assertEqual(1, len(self.processor.processed_events)) |
| 669 | + self.assertEqual(event_path, |
| 670 | + self.processor.processed_events[0].pathname) |
| 671 | + |
| 672 | + |
| 673 | +class FilesystemMonitorTestCase(BaseTwistedTestCase): |
| 674 | + """Test the filesystem monitor.""" |
| 675 | + |
| 676 | + def fake_connect_to_daemon(self): |
| 677 | + """A fake connection to daemon call.""" |
| 678 | + self.monitor._protocol = self.protocol |
| 679 | + return defer.succeed(self.protocol) |
| 680 | + |
| 681 | + @defer.inlineCallbacks |
| 682 | + def setUp(self): |
| 683 | + """Set up the tests.""" |
| 684 | + yield super(FilesystemMonitorTestCase, self).setUp() |
| 685 | + self.patch(fsevents_daemon, 'NotifyProcessor', FakeProcessor) |
| 686 | + self.factory = FakePyInotifyEventsFactory() |
| 687 | + self.protocol = FakeProtocol() |
| 688 | + self.monitor = fsevents_daemon.FilesystemMonitor(None, None) |
| 689 | + self.processor = self.monitor._processor |
| 690 | + |
| 691 | + # override default objects |
| 692 | + self.monitor._factory = self.factory |
| 693 | + |
| 694 | + # patch the connect |
| 695 | + self.patch(fsevents_daemon.FilesystemMonitor, '_connect_to_daemon', |
| 696 | + self.fake_connect_to_daemon) |
| 697 | + |
| 698 | + @defer.inlineCallbacks |
| 699 | + def test_shutdown_protocol(self): |
| 700 | + """Test shutdown with a protocol.""" |
| 701 | + self.monitor._protocol = self.protocol |
| 702 | + yield self.monitor.shutdown() |
| 703 | + self.assertIn('remove_user', self.protocol.called) |
| 704 | + |
| 705 | + @defer.inlineCallbacks |
| 706 | + def test_shutdown_no_protocol(self): |
| 707 | + """Test shutdown without a protocol.""" |
| 708 | + stopped = yield self.monitor.shutdown() |
| 709 | + self.assertTrue(stopped) |
| 710 | + |
| 711 | + @defer.inlineCallbacks |
| 712 | + def test_rm_path_not_root(self): |
| 713 | + """Test removing a path.""" |
| 714 | + dirpath = '/path/to/remove/' |
| 715 | + self.factory.watched_paths.append('/path') |
| 716 | + yield self.monitor.rm_watch(dirpath) |
| 717 | + self.assertIn(dirpath, self.factory.ignored_paths) |
| 718 | + |
| 719 | + @defer.inlineCallbacks |
| 720 | + def test_rm_path_root(self): |
| 721 | + """Test removing a path that is a root path.""" |
| 722 | + dirpath = '/path/to/remove/' |
| 723 | + self.factory.watched_paths.append(dirpath) |
| 724 | + yield self.monitor.rm_watch(dirpath) |
| 725 | + self.assertIn('remove_path', self.protocol.called) |
| 726 | + self.assertIn(dirpath, self.protocol.called) |
| 727 | + self.assertNotIn(dirpath, self.factory.watched_paths) |
| 728 | + |
| 729 | + @defer.inlineCallbacks |
| 730 | + def test_add_watch_not_root(self): |
| 731 | + """Test adding a watch.""" |
| 732 | + dirpath = '/path/to/remove/' |
| 733 | + self.factory.watched_paths.append('/path') |
| 734 | + yield self.monitor.add_watch(dirpath) |
| 735 | + self.assertNotIn('add_path', self.protocol.called) |
| 736 | + |
| 737 | + @defer.inlineCallbacks |
| 738 | + def test_add_watch_root(self): |
| 739 | + """Test adding a watch that is a root.""" |
| 740 | + dirpath = '/path/to/remove/' |
| 741 | + self.factory.watched_paths.append('/other/path') |
| 742 | + yield self.monitor.add_watch(dirpath) |
| 743 | + self.assertIn('add_path', self.protocol.called) |
| 744 | + self.assertIn(dirpath, self.protocol.called) |
| 745 | + |
| 746 | + @defer.inlineCallbacks |
| 747 | + def test_add_watch_ignored(self): |
| 748 | + """Test adding a watch that was ignored.""" |
| 749 | + dirpath = '/path/to/remove/' |
| 750 | + self.factory.ignored_paths.append(dirpath) |
| 751 | + yield self.monitor.add_watch(dirpath) |
| 752 | + self.assertNotIn('add_path', self.protocol.called) |
| 753 | + |
| 754 | + @defer.inlineCallbacks |
| 755 | + def test_is_available_monitor_running(self): |
| 756 | + """Test the method when it is indeed running.""" |
| 757 | + monitor_cls = fsevents_daemon.FilesystemMonitor |
| 758 | + |
| 759 | + # start a fake server for the test |
| 760 | + server = TidyUnixServer() |
| 761 | + yield server.listen_server(FakeServerFactory) |
| 762 | + self.addCleanup(server.clean_up) |
| 763 | + |
| 764 | + # set the path |
| 765 | + old_socket = fsevents_daemon.DAEMON_SOCKET |
| 766 | + fsevents_daemon.DAEMON_SOCKET = server.path |
| 767 | + self.addCleanup(setattr, fsevents_daemon, 'DAEMON_SOCKET', old_socket) |
| 768 | + |
| 769 | + result = yield monitor_cls.is_available_monitor() |
| 770 | + self.assertTrue(result) |
| 771 | + |
| 772 | + @defer.inlineCallbacks |
| 773 | + def test_is_available_monitor_fail(self): |
| 774 | + """Test the method when the daemon is not running.""" |
| 775 | + monitor_cls = fsevents_daemon.FilesystemMonitor |
| 776 | + old_socket = fsevents_daemon.DAEMON_SOCKET |
| 777 | + fsevents_daemon.DAEMON_SOCKET += 'test' |
| 778 | + self.addCleanup(setattr, fsevents_daemon, 'DAEMON_SOCKET', old_socket) |
| 779 | + |
| 780 | + result = yield monitor_cls.is_available_monitor() |
| 781 | + self.assertFalse(result) |
| 782 | |
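The `is_create`/`is_delete`/`convert_in_pyinotify_event` tests above all exercise one classification rule for `FSE_RENAME` events: a rename whose source directory is unwatched is reported as a create, one whose destination directory is unwatched as a delete, and a rename between two watched directories as a MOVED_FROM/MOVED_TO pair. A minimal synchronous sketch of that rule (not the actual `fsevents_daemon` code; the mask values are the standard inotify constants, and `classify_rename` is an illustrative name):

```python
import os

# standard inotify mask values, stood in for the pyinotify constants
IN_MOVED_FROM = 0x40
IN_MOVED_TO = 0x80
IN_CREATE = 0x100
IN_DELETE = 0x200


def classify_rename(event_paths, watched_paths):
    """Return (mask, path) tuples for a two-path FSE_RENAME event."""
    source, destination = event_paths
    src_watched = os.path.split(source)[0] in watched_paths
    dst_watched = os.path.split(destination)[0] in watched_paths
    if not src_watched and dst_watched:
        # moved into a watched dir from outside: looks like a create
        return [(IN_CREATE, destination)]
    if src_watched and not dst_watched:
        # moved out of a watched dir: looks like a delete
        return [(IN_DELETE, source)]
    # a real move between watched dirs: emit a FROM/TO pair
    # (the real code also shares a cookie between the two events)
    return [(IN_MOVED_FROM, source), (IN_MOVED_TO, destination)]
```

This mirrors why `test_convert_in_pyinotify_event_rename` expects two events sharing a cookie while the create/delete variants expect exactly one.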
| 783 | === modified file 'tests/platform/filesystem_notifications/test_linux.py' |
| 784 | --- tests/platform/filesystem_notifications/test_linux.py 2012-06-14 18:22:16 +0000 |
| 785 | +++ tests/platform/filesystem_notifications/test_linux.py 2012-07-31 17:27:22 +0000 |
| 786 | @@ -37,8 +37,9 @@ |
| 787 | |
| 788 | from contrib.testing import testcase |
| 789 | from ubuntuone.syncdaemon import volume_manager |
| 790 | -from ubuntuone.platform.filesystem_notifications import ( |
| 791 | - linux as filesystem_notifications, |
| 792 | +from ubuntuone.platform.filesystem_notifications import notify_processor |
| 793 | +from ubuntuone.platform.filesystem_notifications.monitor import ( |
| 794 | + linux as filesystem_notifications |
| 795 | ) |
| 796 | from tests.platform.filesystem_notifications import BaseFSMonitorTestCase |
| 797 | |
| 798 | @@ -71,12 +72,12 @@ |
| 799 | |
| 800 | def test_processor_shutdown_no_timer(self): |
| 801 | """Shutdown the processor, no timer.""" |
| 802 | - processor = filesystem_notifications._GeneralINotifyProcessor('mntr') |
| 803 | + processor = notify_processor.NotifyProcessor('mntr') |
| 804 | processor.shutdown() |
| 805 | |
| 806 | def test_processor_shutdown_timer_inactive(self): |
| 807 | """Shutdown the processor, timer inactive.""" |
| 808 | - processor = filesystem_notifications._GeneralINotifyProcessor('mntr') |
| 809 | + processor = notify_processor.NotifyProcessor('mntr') |
| 810 | d = defer.Deferred() |
| 811 | |
| 812 | def shutdown(): |
| 813 | @@ -89,7 +90,7 @@ |
| 814 | |
| 815 | def test_processor_shutdown_timer_active(self): |
| 816 | """Shutdown the processor, timer going on.""" |
| 817 | - processor = filesystem_notifications._GeneralINotifyProcessor('mntr') |
| 818 | + processor = notify_processor.NotifyProcessor('mntr') |
| 819 | processor.timer = reactor.callLater(10, lambda: None) |
| 820 | processor.shutdown() |
| 821 | self.assertFalse(processor.timer.active()) |
| 822 | @@ -390,6 +391,14 @@ |
| 823 | self.assertEqual(self.monitor._general_watchs, {}) |
| 824 | self.assertEqual(self.monitor._ancestors_watchs, {}) |
| 825 | |
| 826 | + @defer.inlineCallbacks |
| 827 | + def test_is_available_monitor(self): |
| 828 | + """Test the is_available_monitor method.""" |
| 829 | + # we should always return True |
| 830 | + monitor_cls = filesystem_notifications.FilesystemMonitor |
| 831 | + is_available = yield monitor_cls.is_available_monitor() |
| 832 | + self.assertTrue(is_available, 'Should always be available.') |
| 833 | + |
| 834 | |
| 835 | class FakeEvent(object): |
| 836 | """A fake event.""" |
| 837 | @@ -405,7 +414,7 @@ |
| 838 | """When eCryptFS sends CLOSE_WRITE on folders, ignore it""" |
| 839 | result = [] |
| 840 | monitor = None |
| 841 | - processor = filesystem_notifications._GeneralINotifyProcessor(monitor) |
| 842 | + processor = notify_processor.NotifyProcessor(monitor) |
| 843 | self.patch(processor.general_processor, "push_event", result.append) |
| 844 | |
| 845 | fake_event = FakeEvent() |
| 846 | @@ -420,7 +429,7 @@ |
| 847 | """When anything sends CLOSE_WRITE on files, handle it.""" |
| 848 | result = [] |
| 849 | monitor = None |
| 850 | - processor = filesystem_notifications._GeneralINotifyProcessor(monitor) |
| 851 | + processor = notify_processor.NotifyProcessor(monitor) |
| 852 | self.patch(processor.general_processor, "push_event", result.append) |
| 853 | |
| 854 | fake_event = FakeEvent() |
| 855 | |
| 856 | === modified file 'tests/platform/filesystem_notifications/test_windows.py' |
| 857 | --- tests/platform/filesystem_notifications/test_windows.py 2012-07-10 20:11:18 +0000 |
| 858 | +++ tests/platform/filesystem_notifications/test_windows.py 2012-07-31 17:27:22 +0000 |
| 859 | @@ -50,16 +50,16 @@ |
| 860 | IN_DELETE, |
| 861 | IN_OPEN, |
| 862 | ) |
| 863 | -from ubuntuone.platform.filesystem_notifications import ( |
| 864 | +from ubuntuone.platform.filesystem_notifications.monitor import ( |
| 865 | windows as filesystem_notifications, |
| 866 | ) |
| 867 | -from ubuntuone.platform.filesystem_notifications.common import ( |
| 868 | +from ubuntuone.platform.filesystem_notifications import notify_processor |
| 869 | +from ubuntuone.platform.filesystem_notifications.monitor.common import ( |
| 870 | FilesystemMonitor, |
| 871 | - NotifyProcessor, |
| 872 | Watch, |
| 873 | WatchManager, |
| 874 | ) |
| 875 | -from ubuntuone.platform.filesystem_notifications.windows import ( |
| 876 | +from ubuntuone.platform.filesystem_notifications.monitor.windows import ( |
| 877 | ACTIONS, |
| 878 | FILE_NOTIFY_CHANGE_FILE_NAME, |
| 879 | FILE_NOTIFY_CHANGE_DIR_NAME, |
| 880 | @@ -1057,7 +1057,7 @@ |
| 881 | def setUp(self): |
| 882 | """set up the diffeent tests.""" |
| 883 | yield super(TestNotifyProcessor, self).setUp() |
| 884 | - self.processor = NotifyProcessor(None) |
| 885 | + self.processor = notify_processor.NotifyProcessor(None) |
| 886 | self.general = FakeGeneralProcessor() |
| 887 | self.processor.general_processor = self.general |
| 888 | |
| 889 | @@ -1087,9 +1087,10 @@ |
| 890 | """Test that we do indeed ignore the correct paths.""" |
| 891 | not_ignored = 'test' |
| 892 | ignored = not_ignored + '.lnk' |
| 893 | - self.assertFalse(filesystem_notifications.path_is_ignored(not_ignored), |
| 894 | + path_is_ignored = notify_processor.common.path_is_ignored |
| 895 | + self.assertFalse(path_is_ignored(not_ignored), |
| 896 | 'Only links should be ignored.') |
| 897 | - self.assertTrue(filesystem_notifications.path_is_ignored(ignored), |
| 898 | + self.assertTrue(path_is_ignored(ignored), |
| 899 | 'Links should be ignored.') |
| 900 | |
| 901 | def test_is_ignored(self): |
| 902 | @@ -1445,3 +1446,10 @@ |
| 903 | # lets ensure that we never added the watches |
| 904 | self.assertEqual(0, len(monitor._watch_manager._wdm.values()), |
| 905 | 'No watches should have been added.') |
| 906 | + |
| 907 | + @defer.inlineCallbacks |
| 908 | + def test_is_available_monitor(self): |
| 909 | + """Test the is_available_monitor method.""" |
| 910 | + # we should always return True |
| 911 | + is_available = yield FilesystemMonitor.is_available_monitor() |
| 912 | + self.assertTrue(is_available, 'Should always be available.') |
| 913 | |
| 914 | === modified file 'tests/syncdaemon/test_config.py' |
| 915 | --- tests/syncdaemon/test_config.py 2012-07-17 18:36:13 +0000 |
| 916 | +++ tests/syncdaemon/test_config.py 2012-07-31 17:27:22 +0000 |
| 917 | @@ -700,6 +700,19 @@ |
| 918 | self.assertEquals(self.cp.get('__main__', 'ignore').value, |
| 919 | ['.*\\.pyc', '.*\\.sw[opnx]']) |
| 920 | |
| 921 | + def test_fs_monitor_not_default(self): |
| 922 | + """Test get monitor.""" |
| 923 | + monitor_id = 'my_monitor' |
| 924 | + conf_file = os.path.join(self.test_root, 'test_new_config.conf') |
| 925 | + with open_file(conf_file, 'w') as fd: |
| 926 | + fd.write('[__main__]\n') |
| 927 | + fd.write('fs_monitor = %s\n' % monitor_id) |
| 928 | + self.assertTrue(path_exists(conf_file)) |
| 929 | + self.cp.read([conf_file]) |
| 930 | + self.cp.parse_all() |
| 931 | + self.assertEquals(self.cp.get('__main__', 'fs_monitor').value, |
| 932 | + monitor_id) |
| 933 | + |
| 934 | def test_use_trash_default(self): |
| 935 | """Test default configuration for use_trash.""" |
| 936 | self.cp.parse_all() |
| 937 | |
| 938 | === modified file 'tests/syncdaemon/test_eventqueue.py' |
| 939 | --- tests/syncdaemon/test_eventqueue.py 2012-04-09 20:07:05 +0000 |
| 940 | +++ tests/syncdaemon/test_eventqueue.py 2012-07-31 17:27:22 +0000 |
| 941 | @@ -37,6 +37,7 @@ |
| 942 | from twisted.trial.unittest import TestCase |
| 943 | |
| 944 | from contrib.testing.testcase import BaseTwistedTestCase, FakeVolumeManager |
| 945 | +from ubuntuone.platform.filesystem_notifications.monitor import FilesystemMonitor |
| 946 | from ubuntuone.syncdaemon import ( |
| 947 | event_queue, |
| 948 | filesystem_manager, |
| 949 | @@ -406,6 +407,22 @@ |
| 950 | return self.shutdown_d |
| 951 | |
| 952 | |
| 953 | +class EventQueueInitTestCase(TestCase): |
| 954 | + """Test the init of the EQ.""" |
| 955 | + |
| 956 | + def test_default_monitor(self): |
| 957 | + """Test the init with the default monitor.""" |
| 958 | + eq = event_queue.EventQueue(None) |
| 959 | + self.assertIsInstance(eq.monitor, FilesystemMonitor) |
| 960 | + return eq.shutdown() |
| 961 | + |
| 962 | + def test_passed_monitor(self): |
| 963 | + """Test the init with a custom monitor.""" |
| 964 | + eq = event_queue.EventQueue(None, monitor_class=FakeMonitor) |
| 965 | + self.assertIsInstance(eq.monitor, FakeMonitor) |
| 966 | + |
| 967 | + |
| 968 | + |
| 969 | class EventQueueShutdownTestCase(TestCase): |
| 970 | """Test the shutdown method in EQ.""" |
| 971 | |
| 972 | |
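The new `EventQueueInitTestCase` relies on `EventQueue` accepting an optional `monitor_class` and instantiating the platform default when none is given. A sketch of that injection pattern, assuming rather than copying the real signature in `event_queue.py` (the `EventQueue`, `DefaultMonitor`, and `FakeMonitor` classes here are illustrative stand-ins):

```python
class DefaultMonitor(object):
    """Stand-in for the platform's default FilesystemMonitor."""

    def __init__(self, event_queue):
        self.event_queue = event_queue


class FakeMonitor(DefaultMonitor):
    """Stand-in for a test double passed by the test suite."""


class EventQueue(object):
    """Sketch of an event queue with an injectable monitor class."""

    def __init__(self, fs, monitor_class=None):
        cls = monitor_class if monitor_class is not None else DefaultMonitor
        # injecting the class (not an instance) lets tests avoid
        # touching the real platform notification machinery
        self.monitor = cls(self)
```

The tests above then only need `assertIsInstance(eq.monitor, ...)` to verify both branches.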
| 973 | === modified file 'ubuntuone/platform/filesystem_notifications/__init__.py' |
| 974 | --- ubuntuone/platform/filesystem_notifications/__init__.py 2012-07-10 20:11:18 +0000 |
| 975 | +++ ubuntuone/platform/filesystem_notifications/__init__.py 2012-07-31 17:27:22 +0000 |
| 976 | @@ -28,14 +28,8 @@ |
| 977 | # files in the program, then also delete it here. |
| 978 | """File System Notification module.""" |
| 979 | |
| 980 | -import sys |
| 981 | - |
| 982 | - |
| 983 | -if sys.platform in ('darwin', 'win32'): |
| 984 | - from ubuntuone.platform.filesystem_notifications import common |
| 985 | - FilesystemMonitor = common.FilesystemMonitor |
| 986 | - _GeneralINotifyProcessor = common.NotifyProcessor |
| 987 | -else: |
| 988 | - from ubuntuone.platform.filesystem_notifications import linux |
| 989 | - FilesystemMonitor = linux.FilesystemMonitor |
| 990 | - _GeneralINotifyProcessor = linux._GeneralINotifyProcessor |
| 991 | +from ubuntuone.platform.filesystem_notifications import ( |
| 992 | + notify_processor, |
| 993 | +) |
| 994 | + |
| 995 | +_GeneralINotifyProcessor = notify_processor.NotifyProcessor |
| 996 | |
| 997 | === added directory 'ubuntuone/platform/filesystem_notifications/filesystem_monitor' |
| 998 | === added file 'ubuntuone/platform/filesystem_notifications/filesystem_monitor/__init__.py' |
| 999 | === added directory 'ubuntuone/platform/filesystem_notifications/monitor' |
| 1000 | === added file 'ubuntuone/platform/filesystem_notifications/monitor/__init__.py' |
| 1001 | --- ubuntuone/platform/filesystem_notifications/monitor/__init__.py 1970-01-01 00:00:00 +0000 |
| 1002 | +++ ubuntuone/platform/filesystem_notifications/monitor/__init__.py 2012-07-31 17:27:22 +0000 |
| 1003 | @@ -0,0 +1,95 @@ |
| 1004 | +# -*- coding: utf-8 *-* |
| 1005 | +# |
| 1006 | +# Copyright 2011-2012 Canonical Ltd. |
| 1007 | +# |
| 1008 | +# This program is free software: you can redistribute it and/or modify it |
| 1009 | +# under the terms of the GNU General Public License version 3, as published |
| 1010 | +# by the Free Software Foundation. |
| 1011 | +# |
| 1012 | +# This program is distributed in the hope that it will be useful, but |
| 1013 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
| 1014 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
| 1015 | +# PURPOSE. See the GNU General Public License for more details. |
| 1016 | +# |
| 1017 | +# You should have received a copy of the GNU General Public License along |
| 1018 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
| 1019 | +# |
| 1020 | +# In addition, as a special exception, the copyright holders give |
| 1021 | +# permission to link the code of portions of this program with the |
| 1022 | +# OpenSSL library under certain conditions as described in each |
| 1023 | +# individual source file, and distribute linked combinations |
| 1024 | +# including the two. |
| 1025 | +# You must obey the GNU General Public License in all respects |
| 1026 | +# for all of the code used other than OpenSSL. If you modify |
| 1027 | +# file(s) with this exception, you may extend this exception to your |
| 1028 | +# version of the file(s), but you are not obligated to do so. If you |
| 1029 | +# do not wish to do so, delete this exception statement from your |
| 1030 | +# version. If you delete this exception statement from all source |
| 1031 | +# files in the program, then also delete it here. |
| 1032 | +"""Filesystem monitors per platform.""" |
| 1033 | + |
| 1034 | +import sys |
| 1035 | + |
| 1036 | +from twisted.internet import defer |
| 1037 | + |
| 1038 | +DEFAULT_MONITOR = 'default' |
| 1039 | + |
| 1040 | + |
| 1041 | +class NoAvailableMonitorError(Exception): |
| 1042 | + """Raised if there are no available monitors in the system.""" |
| 1043 | + |
| 1044 | + |
| 1045 | +if sys.platform == 'win32': |
| 1046 | + from ubuntuone.platform.filesystem_notifications.monitor import ( |
| 1047 | + common, |
| 1048 | + ) |
| 1049 | + |
| 1050 | + FILEMONITOR_IDS = { |
| 1051 | + DEFAULT_MONITOR: common.FilesystemMonitor, |
| 1052 | + } |
| 1053 | + |
| 1054 | +elif sys.platform == 'darwin': |
| 1055 | + from ubuntuone.platform.filesystem_notifications.monitor import darwin |
| 1056 | + from ubuntuone.platform.filesystem_notifications.monitor import ( |
| 1057 | + common, |
| 1058 | + ) |
| 1059 | + |
| 1060 | + FILEMONITOR_IDS = { |
| 1061 | + DEFAULT_MONITOR: common.FilesystemMonitor, |
| 1062 | + 'daemon': darwin.fsevents_daemon.FilesystemMonitor, |
| 1063 | + } |
| 1064 | +else: |
| 1065 | + from ubuntuone.platform.filesystem_notifications.monitor import ( |
| 1066 | + linux, |
| 1067 | + ) |
| 1068 | + |
| 1069 | + FILEMONITOR_IDS = { |
| 1070 | + DEFAULT_MONITOR: linux.FilesystemMonitor, |
| 1071 | + } |
| 1072 | + |
| 1073 | +# maintain old API |
| 1074 | +FilesystemMonitor = FILEMONITOR_IDS[DEFAULT_MONITOR] |
| 1075 | + |
| 1076 | + |
| 1077 | +@defer.inlineCallbacks |
| 1078 | +def get_filemonitor_class(monitor_id=None): |
| 1079 | + """Return the class to be used.""" |
| 1080 | + if monitor_id is None: |
| 1081 | + # use default |
| 1082 | + monitor_id = 'default' |
| 1083 | + |
| 1084 | + if monitor_id not in FILEMONITOR_IDS: |
| 1085 | + raise NoAvailableMonitorError( |
| 1086 | + 'No available monitor with id "%s" could be found.' % monitor_id) |
| 1087 | + |
| 1088 | + # retrieve the correct class and assert it can be used |
| 1089 | + cls = FILEMONITOR_IDS[monitor_id] |
| 1090 | + is_available = yield cls.is_available_monitor() |
| 1091 | + |
| 1092 | + if is_available: |
| 1093 | + defer.returnValue(cls) |
| 1094 | + elif not is_available and monitor_id != DEFAULT_MONITOR: |
| 1095 | + cls = yield get_filemonitor_class(DEFAULT_MONITOR) |
| 1096 | + defer.returnValue(cls) |
| 1097 | + else: |
| 1098 | + raise NoAvailableMonitorError('No available monitor could be found.') |
| 1099 | |
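The fallback behaviour of `get_filemonitor_class` above can be summarized in a plain-Python sketch (the Deferred machinery is replaced by direct calls for brevity, and the monitor classes are illustrative fakes, not the real implementations): an unknown id raises, an unavailable non-default monitor falls back to the default, and an unavailable default raises.

```python
DEFAULT_MONITOR = 'default'


class NoAvailableMonitorError(Exception):
    """Raised if there are no available monitors in the system."""


class DefaultMonitor(object):
    @classmethod
    def is_available_monitor(cls):
        return True  # the platform default is always usable


class DaemonMonitor(object):
    @classmethod
    def is_available_monitor(cls):
        return False  # e.g. the fsevents daemon socket is not present


FILEMONITOR_IDS = {DEFAULT_MONITOR: DefaultMonitor, 'daemon': DaemonMonitor}


def get_filemonitor_class(monitor_id=None):
    """Return the monitor class, falling back to the default."""
    monitor_id = monitor_id or DEFAULT_MONITOR
    if monitor_id not in FILEMONITOR_IDS:
        raise NoAvailableMonitorError(
            'No available monitor with id "%s" could be found.' % monitor_id)
    cls = FILEMONITOR_IDS[monitor_id]
    if cls.is_available_monitor():
        return cls
    if monitor_id != DEFAULT_MONITOR:
        # the requested monitor is unavailable: retry with the default
        return get_filemonitor_class(DEFAULT_MONITOR)
    raise NoAvailableMonitorError('No available monitor could be found.')
```

On darwin this is what lets a configured `fs_monitor = daemon` degrade gracefully to the in-process monitor when the daemon is not running, matching `test_is_available_monitor_fail` above.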
| 1100 | === renamed file 'ubuntuone/platform/filesystem_notifications/common.py' => 'ubuntuone/platform/filesystem_notifications/monitor/common.py' |
| 1101 | --- ubuntuone/platform/filesystem_notifications/common.py 2012-07-13 12:39:33 +0000 |
| 1102 | +++ ubuntuone/platform/filesystem_notifications/monitor/common.py 2012-07-31 17:27:22 +0000 |
| 1103 | @@ -34,15 +34,11 @@ |
| 1104 | |
| 1105 | from twisted.internet import defer |
| 1106 | |
| 1107 | +from ubuntuone.platform.filesystem_notifications import notify_processor |
| 1108 | from ubuntuone.platform.filesystem_notifications.pyinotify_agnostic import ( |
| 1109 | Event, |
| 1110 | WatchManagerError, |
| 1111 | - ProcessEvent, |
| 1112 | - IN_OPEN, |
| 1113 | - IN_CLOSE_NOWRITE, |
| 1114 | - IN_CLOSE_WRITE, |
| 1115 | IN_CREATE, |
| 1116 | - IN_IGNORED, |
| 1117 | IN_ISDIR, |
| 1118 | IN_DELETE, |
| 1119 | IN_MOVED_FROM, |
| 1120 | @@ -57,11 +53,14 @@ |
| 1121 | os_path, |
| 1122 | ) |
| 1123 | |
| 1124 | + |
| 1125 | if sys.platform == 'darwin': |
| 1126 | - from ubuntuone.platform.filesystem_notifications import darwin |
| 1127 | - source = darwin |
| 1128 | + from ubuntuone.platform.filesystem_notifications.monitor.darwin import ( |
| 1129 | + fsevents_client, |
| 1130 | + ) |
| 1131 | + source = fsevents_client |
| 1132 | elif sys.platform == 'win32': |
| 1133 | - from ubuntuone.platform.filesystem_notifications import windows |
| 1134 | + from ubuntuone.platform.filesystem_notifications.monitor import windows |
| 1135 | source = windows |
| 1136 | else: |
| 1137 | raise ImportError('Not supported platform') |
| 1138 | @@ -73,27 +72,10 @@ |
| 1139 | # a map of the actions to names so that we have better logs. |
| 1140 | ACTIONS_NAMES = source.ACTIONS_NAMES |
| 1141 | |
| 1142 | -# ignore paths in the platform, mainly links atm |
| 1143 | -path_is_ignored = source.path_is_ignored |
| 1144 | - |
| 1145 | # the base class to be use for a platform |
| 1146 | PlatformWatch = source.Watch |
| 1147 | PlatformWatchManager = source.WatchManager |
| 1148 | |
| 1149 | -# translates quickly the event and it's is_dir state to our standard events |
| 1150 | -NAME_TRANSLATIONS = { |
| 1151 | - IN_OPEN: 'FS_FILE_OPEN', |
| 1152 | - IN_CLOSE_NOWRITE: 'FS_FILE_CLOSE_NOWRITE', |
| 1153 | - IN_CLOSE_WRITE: 'FS_FILE_CLOSE_WRITE', |
| 1154 | - IN_CREATE: 'FS_FILE_CREATE', |
| 1155 | - IN_CREATE | IN_ISDIR: 'FS_DIR_CREATE', |
| 1156 | - IN_DELETE: 'FS_FILE_DELETE', |
| 1157 | - IN_DELETE | IN_ISDIR: 'FS_DIR_DELETE', |
| 1158 | - IN_MOVED_FROM: 'FS_FILE_DELETE', |
| 1159 | - IN_MOVED_FROM | IN_ISDIR: 'FS_DIR_DELETE', |
| 1160 | - IN_MOVED_TO: 'FS_FILE_CREATE', |
| 1161 | - IN_MOVED_TO | IN_ISDIR: 'FS_DIR_CREATE'} |
| 1162 | - |
| 1163 | # our logging level |
| 1164 | TRACE = logger.TRACE |
| 1165 | |
| 1166 | @@ -330,8 +312,8 @@ |
| 1167 | @is_valid_os_path(path_indexes=[1]) |
| 1168 | def get_wd(self, path): |
| 1169 | """Return the watcher that is used to watch the given path.""" |
| 1170 | - if not path.endswith(os.path.sep): |
| 1171 | - path = path + os.path.sep |
| 1172 | + if not path[-1] == os.path.sep: |
| 1173 | + path += os.path.sep |
| 1174 | for current_wd in self._wdm: |
| 1175 | watch_path = self._wdm[current_wd].path |
| 1176 | if ((watch_path == path or watch_path in path) |
| 1177 | @@ -378,207 +360,6 @@ |
| 1178 | yield self.manager.stop() |
| 1179 | |
| 1180 | |
| 1181 | -class NotifyProcessor(ProcessEvent): |
| 1182 | - """Processor that takes care of dealing with the events. |
| 1183 | - |
| 1184 | - This interface will be exposed to syncdaemon, ergo all passed |
| 1185 | - and returned paths must be a sequence of BYTES encoded with utf8. |
| 1186 | - """ |
| 1187 | - |
| 1188 | - def __init__(self, monitor, ignore_config=None): |
| 1189 | - # XXX: avoid circular imports. |
| 1190 | - from ubuntuone.syncdaemon.filesystem_notifications import ( |
| 1191 | - GeneralINotifyProcessor) |
| 1192 | - self.general_processor = GeneralINotifyProcessor(monitor, |
| 1193 | - self.handle_dir_delete, NAME_TRANSLATIONS, |
| 1194 | - path_is_ignored, IN_IGNORED, ignore_config=ignore_config) |
| 1195 | - self.held_event = None |
| 1196 | - |
| 1197 | - def rm_from_mute_filter(self, event, paths): |
| 1198 | - """Remove event from the mute filter.""" |
| 1199 | - self.general_processor.rm_from_mute_filter(event, paths) |
| 1200 | - |
| 1201 | - def add_to_mute_filter(self, event, paths): |
| 1202 | - """Add an event and path(s) to the mute filter.""" |
| 1203 | - self.general_processor.add_to_mute_filter(event, paths) |
| 1204 | - |
| 1205 | - @is_valid_syncdaemon_path(path_indexes=[1]) |
| 1206 | - def is_ignored(self, path): |
| 1207 | - """Should we ignore this path?""" |
| 1208 | - return self.general_processor.is_ignored(path) |
| 1209 | - |
| 1210 | - def release_held_event(self, timed_out=False): |
| 1211 | - """Release the event on hold to fulfill its destiny.""" |
| 1212 | - self.general_processor.push_event(self.held_event) |
| 1213 | - self.held_event = None |
| 1214 | - |
| 1215 | - def process_IN_MODIFY(self, event): |
| 1216 | - """Capture a modify event and fake an open ^ close write events.""" |
| 1217 | - # lets ignore dir changes |
| 1218 | - if event.dir: |
| 1219 | - return |
| 1220 | - # on someplatforms we just get IN_MODIFY, lets always fake |
| 1221 | - # an OPEN & CLOSE_WRITE couple |
| 1222 | - raw_open = raw_close = { |
| 1223 | - 'wd': event.wd, |
| 1224 | - 'dir': event.dir, |
| 1225 | - 'name': event.name, |
| 1226 | - 'path': event.path} |
| 1227 | - # caculate the open mask |
| 1228 | - raw_open['mask'] = IN_OPEN |
| 1229 | - # create the event using the raw data, then fix the pathname param |
| 1230 | - open_event = Event(raw_open) |
| 1231 | - open_event.pathname = event.pathname |
| 1232 | - # push the open |
| 1233 | - self.general_processor.push_event(open_event) |
| 1234 | - raw_close['mask'] = IN_CLOSE_WRITE |
| 1235 | - close_event = Event(raw_close) |
| 1236 | - close_event.pathname = event.pathname |
| 1237 | - # push the close event |
| 1238 | - self.general_processor.push_event(close_event) |
| 1239 | - |
| 1240 | - def process_IN_MOVED_FROM(self, event): |
| 1241 | - """Capture the MOVED_FROM to maybe syntethize FILE_MOVED.""" |
| 1242 | - if self.held_event is not None: |
| 1243 | - self.general_processor.log.warn('Lost pair event of %s', |
| 1244 | - self.held_event) |
| 1245 | - self.held_event = event |
| 1246 | - |
| 1247 | - def _fake_create_event(self, event): |
| 1248 | - """Fake the creation of an event.""" |
| 1249 | - # this is the case of a MOVE from an ignored path (links for example) |
| 1250 | - # to a valid path |
| 1251 | - if event.dir: |
| 1252 | - evtname = "FS_DIR_" |
| 1253 | - else: |
| 1254 | - evtname = "FS_FILE_" |
| 1255 | - self.general_processor.eq_push(evtname + "CREATE", path=event.pathname) |
| 1256 | - if not event.dir: |
| 1257 | - self.general_processor.eq_push('FS_FILE_CLOSE_WRITE', |
| 1258 | - path=event.pathname) |
| 1259 | - |
| 1260 | - def _fake_delete_create_event(self, event): |
| 1261 | - """Fake the deletion and the creation.""" |
| 1262 | - # this is the case of a MOVE from a watch UDF to a diff UDF which |
| 1263 | - # means that we have to copy the way linux works. |
| 1264 | - if event.dir: |
| 1265 | - evtname = "FS_DIR_" |
| 1266 | - else: |
| 1267 | - evtname = "FS_FILE_" |
| 1268 | - m = "Delete because of different shares: %r" |
| 1269 | - self.log.info(m, self.held_event.pathname) |
| 1270 | - self.general_processor.eq_push(evtname + "DELETE", |
| 1271 | - path=self.held_event.pathname) |
| 1272 | - self.general_processor.eq_push(evtname + "CREATE", path=event.pathname) |
| 1273 | - if not event.dir: |
| 1274 | - self.general_processor.eq_push('FS_FILE_CLOSE_WRITE', |
| 1275 | - path=event.pathname) |
| 1276 | - |
| 1277 | - def process_IN_MOVED_TO(self, event): |
| 1278 | - """Capture the MOVED_TO to maybe syntethize FILE_MOVED.""" |
| 1279 | - if self.held_event is not None: |
| 1280 | - if event.cookie == self.held_event.cookie: |
| 1281 | - f_path_dir = os.path.split(self.held_event.pathname)[0] |
| 1282 | - t_path_dir = os.path.split(event.pathname)[0] |
| 1283 | - |
| 1284 | - is_from_forreal = not self.is_ignored(self.held_event.pathname) |
| 1285 | - is_to_forreal = not self.is_ignored(event.pathname) |
| 1286 | - if is_from_forreal and is_to_forreal: |
| 1287 | - f_share_id = self.general_processor.get_path_share_id( |
| 1288 | - f_path_dir) |
| 1289 | - t_share_id = self.general_processor.get_path_share_id( |
| 1290 | - t_path_dir) |
| 1291 | - if f_share_id != t_share_id: |
| 1292 | - # if the share_id are != push a delete/create |
| 1293 | - self._fake_delete_create_event(event) |
| 1294 | - else: |
| 1295 | - if event.dir: |
| 1296 | - evtname = "FS_DIR_" |
| 1297 | - else: |
| 1298 | - evtname = "FS_FILE_" |
| 1299 | - self.general_processor.eq_push(evtname + "MOVE", |
| 1300 | - path_from=self.held_event.pathname, |
| 1301 | - path_to=event.pathname) |
| 1302 | - elif is_to_forreal: |
| 1303 | - # this is the case of a MOVE from something ignored |
| 1304 | - # to a valid filename |
| 1305 | - self._fake_create_event(event) |
| 1306 | - |
| 1307 | - self.held_event = None |
| 1308 | - return |
| 1309 | - else: |
| 1310 | - self.release_held_event() |
| 1311 | - self.general_processor.push_event(event) |
| 1312 | - else: |
| 1313 | - # We should never get here, I really do not know how we |
| 1314 | - # got here |
| 1315 | - self.general_processor.log.warn( |
| 1316 | - 'Cookie does not match the previoues held event!') |
| 1317 | - self.general_processor.log.warn('Ignoring %s', event) |
| 1318 | - |
| 1319 | - def process_default(self, event): |
| 1320 | - """Push the event into the EventQueue.""" |
| 1321 | - if self.held_event is not None: |
| 1322 | - self.release_held_event() |
| 1323 | - self.general_processor.push_event(event) |
| 1324 | - |
| 1325 | - @is_valid_syncdaemon_path(path_indexes=[1]) |
| 1326 | - def handle_dir_delete(self, fullpath): |
| 1327 | - """Some special work when a directory is deleted.""" |
| 1328 | - # remove the watch on that dir from our structures, this mainly tells |
| 1329 | - # the monitor to remove the watch which is fowaded to a watch manager. |
| 1330 | - self.general_processor.rm_watch(fullpath) |
| 1331 | - |
| 1332 | - # handle the case of move a dir to a non-watched directory |
| 1333 | - paths = self.general_processor.get_paths_starting_with(fullpath, |
| 1334 | - include_base=False) |
| 1335 | - |
| 1336 | - paths.sort(reverse=True) |
| 1337 | - for path, is_dir in paths: |
| 1338 | - m = "Pushing deletion because of parent dir move: (is_dir=%s) %r" |
| 1339 | - self.general_processor.log.info(m, is_dir, path) |
| 1340 | - if is_dir: |
| 1341 | - # same as the above remove |
| 1342 | - self.general_processor.rm_watch(path) |
| 1343 | - self.general_processor.eq_push('FS_DIR_DELETE', path=path) |
| 1344 | - else: |
| 1345 | - self.general_processor.eq_push('FS_FILE_DELETE', path=path) |
| 1346 | - |
| 1347 | - @is_valid_syncdaemon_path(path_indexes=[1]) |
| 1348 | - def freeze_begin(self, path): |
| 1349 | - """Puts in hold all the events for this path.""" |
| 1350 | - self.general_processor.freeze_begin(path) |
| 1351 | - |
| 1352 | - def freeze_rollback(self): |
| 1353 | - """Unfreezes the frozen path, reseting to idle state.""" |
| 1354 | - self.general_processor.freeze_rollback() |
| 1355 | - |
| 1356 | - def freeze_commit(self, events): |
| 1357 | - """Unfreezes the frozen path, sending received events if not dirty. |
| 1358 | - |
| 1359 | - If events for that path happened: |
| 1360 | - - return True |
| 1361 | - else: |
| 1362 | - - push the here received events, return False |
| 1363 | - """ |
| 1364 | - return self.general_processor.freeze_commit(events) |
| 1365 | - |
| 1366 | - @property |
| 1367 | - def mute_filter(self): |
| 1368 | - """Return the mute filter used by the processor.""" |
| 1369 | - return self.general_processor.filter |
| 1370 | - |
| 1371 | - @property |
| 1372 | - def frozen_path(self): |
| 1373 | - """Return the frozen path.""" |
| 1374 | - return self.general_processor.frozen_path |
| 1375 | - |
| 1376 | - @property |
| 1377 | - def log(self): |
| 1378 | - """Return the logger of the instance.""" |
| 1379 | - return self.general_processor.log |
| 1380 | - |
| 1381 | - |
| 1382 | class FilesystemMonitor(object): |
| 1383 | """Manages the signals from filesystem.""" |
| 1384 | |
| 1385 | @@ -588,9 +369,15 @@ |
| 1386 | self.log.setLevel(TRACE) |
| 1387 | self.fs = fs |
| 1388 | self.eq = eq |
| 1389 | - self._processor = NotifyProcessor(self, ignore_config) |
| 1390 | + self._processor = notify_processor.NotifyProcessor(self, ignore_config) |
| 1391 | self._watch_manager = WatchManager(self._processor) |
| 1392 | |
| 1393 | + @classmethod |
| 1394 | + def is_available_monitor(cls): |
| 1395 | + """Return if the monitor can be used in the platform.""" |
| 1396 | + # we can always use this monitor |
| 1397 | + return defer.succeed(True) |
| 1398 | + |
| 1399 | def add_to_mute_filter(self, event, **info): |
| 1400 | """Add info to mute filter in the processor.""" |
| 1401 | self._processor.add_to_mute_filter(event, info) |
| 1402 | |
| 1403 | === added directory 'ubuntuone/platform/filesystem_notifications/monitor/darwin' |
| 1404 | === renamed file 'ubuntuone/platform/filesystem_notifications/darwin.py' => 'ubuntuone/platform/filesystem_notifications/monitor/darwin/__init__.py' |
| 1405 | --- ubuntuone/platform/filesystem_notifications/darwin.py 2012-07-13 12:39:33 +0000 |
| 1406 | +++ ubuntuone/platform/filesystem_notifications/monitor/darwin/__init__.py 2012-07-31 17:27:22 +0000 |
| 1407 | @@ -28,132 +28,10 @@ |
| 1408 | # files in the program, then also delete it here. |
| 1409 | """Filesystem Notifications module for MAC OS.""" |
| 1410 | |
| 1411 | -import os |
| 1412 | - |
| 1413 | -import fsevents |
| 1414 | -from twisted.internet import defer, reactor |
| 1415 | - |
| 1416 | -from ubuntuone.platform.filesystem_notifications.pyinotify_agnostic import ( |
| 1417 | - IN_DELETE, |
| 1418 | - IN_CREATE, |
| 1419 | - IN_MODIFY, |
| 1420 | - IN_MOVED_FROM, |
| 1421 | - IN_MOVED_TO, |
| 1422 | +from ubuntuone.platform.filesystem_notifications.monitor.darwin import ( |
| 1423 | + fsevents_daemon, |
| 1424 | ) |
| 1425 | |
| 1426 | -# a map between the few events that we have on common platforms and those |
| 1427 | -# found in pyinotify |
| 1428 | -ACTIONS = { |
| 1429 | - fsevents.IN_CREATE: IN_CREATE, |
| 1430 | - fsevents.IN_DELETE: IN_DELETE, |
| 1431 | - fsevents.IN_MODIFY: IN_MODIFY, |
| 1432 | - fsevents.IN_MOVED_FROM: IN_MOVED_FROM, |
| 1433 | - fsevents.IN_MOVED_TO: IN_MOVED_TO, |
| 1434 | -} |
| 1435 | - |
| 1436 | -# a map of the actions to names so that we have better logs. |
| 1437 | -ACTIONS_NAMES = { |
| 1438 | - fsevents.IN_CREATE: 'IN_CREATE', |
| 1439 | - fsevents.IN_DELETE: 'IN_DELETE', |
| 1440 | - fsevents.IN_MODIFY: 'IN_MODIFY', |
| 1441 | - fsevents.IN_MOVED_FROM: 'IN_MOVED_FROM', |
| 1442 | - fsevents.IN_MOVED_TO: 'IN_MOVED_TO', |
| 1443 | -} |
| 1444 | - |
| 1445 | - |
| 1446 | -def path_is_ignored(path): |
| 1447 | - """Should we ignore this path in the current platform.?""" |
| 1448 | - # don't support links yet |
| 1449 | - if os.path.islink(path): |
| 1450 | - return True |
| 1451 | - return False |
| 1452 | - |
| 1453 | - |
| 1454 | -# The implementation of the code that is provided as the pyinotify substitute |
| 1455 | -class Watch(object): |
| 1456 | - """Implement the same functions as pyinotify.Watch.""" |
| 1457 | - |
| 1458 | - def __init__(self, path, process_events): |
| 1459 | - """Create a new instance for the given path. |
| 1460 | - |
| 1461 | - The process_events parameter is a callback to be executed in the main |
| 1462 | - reactor thread to convert events in pyinotify events and add them to |
| 1463 | - the state machine. |
| 1464 | - """ |
| 1465 | - self.path = os.path.abspath(path) |
| 1466 | - self.process_events = process_events |
| 1467 | - self.watching = False |
| 1468 | - self.ignore_paths = [] |
| 1469 | - # Create stream with folder to watch |
| 1470 | - self.stream = fsevents.Stream(self._process_events, |
| 1471 | - path, file_events=True) |
| 1472 | - |
| 1473 | - def _process_events(self, event): |
| 1474 | - """Receive the filesystem event and move it to the main thread.""" |
| 1475 | - reactor.callFromThread(self._process_events_in_main_thread, event) |
| 1476 | - |
| 1477 | - def _process_events_in_main_thread(self, event): |
| 1478 | - """Process the events from the queue.""" |
| 1479 | - action, cookie, file_name = (event.mask, event.cookie, event.name) |
| 1480 | - |
| 1481 | - syncdaemon_path = os.path.join(self.path, file_name) |
| 1482 | - self.process_events(action, file_name, cookie, |
| 1483 | - syncdaemon_path) |
| 1484 | - |
| 1485 | - def start_watching(self): |
| 1486 | - """Start watching.""" |
| 1487 | - self.watching = True |
| 1488 | - return defer.succeed(self.watching) |
| 1489 | - |
| 1490 | - def stop_watching(self): |
| 1491 | - """Stop watching.""" |
| 1492 | - self.watching = False |
| 1493 | - return defer.succeed(self.watching) |
| 1494 | - |
| 1495 | - # For API compatibility |
| 1496 | - @property |
| 1497 | - def started(self): |
| 1498 | - """A deferred that will be called when the watch is running.""" |
| 1499 | - return defer.succeed(self.watching) |
| 1500 | - |
| 1501 | - @property |
| 1502 | - def stopped(self): |
| 1503 | - """A deferred fired when the watch thread has finished.""" |
| 1504 | - return defer.succeed(self.watching) |
| 1505 | - |
| 1506 | - |
| 1507 | -class WatchManager(object): |
| 1508 | - """Implement the same functions as pyinotify.WatchManager. |
| 1509 | - |
| 1510 | - All paths passed to methods in this class should be darwin paths. |
| 1511 | - |
| 1512 | - """ |
| 1513 | - |
| 1514 | - def __init__(self, log): |
| 1515 | - """Init the manager to keep track of the different watches.""" |
| 1516 | - self.log = log |
| 1517 | - self.observer = fsevents.Observer() |
| 1518 | - self.observer.start() |
| 1519 | - |
| 1520 | - def stop_watch(self, watch): |
| 1521 | - """Stop a given watch.""" |
| 1522 | - watch.stop_watching() |
| 1523 | - self.observer.unschedule(watch.platform_watch.stream) |
| 1524 | - return defer.succeed(True) |
| 1525 | - |
| 1526 | - def stop(self): |
| 1527 | - """Stop the manager.""" |
| 1528 | - self.observer.stop() |
| 1529 | - self.observer.join() |
| 1530 | - |
| 1531 | - def del_watch(self, watch): |
| 1532 | - """Delete the watch and clean resources.""" |
| 1533 | - self.observer.unschedule(watch.platform_watch.stream) |
| 1534 | - |
| 1535 | - def add_watch(self, watch): |
| 1536 | - """This method perform actually the action of registering the watch.""" |
| 1537 | - self.observer.schedule(watch.platform_watch.stream) |
| 1538 | - return True |
| 1539 | - |
| 1540 | - def rm_watch(self, watch): |
| 1541 | - """Remove the the watch with the given wd.""" |
| 1542 | + |
| 1543 | +FilesystemMonitor = fsevents_daemon.FilesystemMonitor |
| 1544 | +NotifyProcessor = fsevents_daemon.NotifyProcessor |
| 1545 | |
| 1546 | === added file 'ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_client.py' |
| 1547 | --- ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_client.py 1970-01-01 00:00:00 +0000 |
| 1548 | +++ ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_client.py 2012-07-31 17:27:22 +0000 |
| 1549 | @@ -0,0 +1,151 @@ |
| 1550 | +# -*- coding: utf-8 *-* |
| 1551 | +# |
| 1552 | +# Copyright 2012 Canonical Ltd. |
| 1553 | +# |
| 1554 | +# This program is free software: you can redistribute it and/or modify it |
| 1555 | +# under the terms of the GNU General Public License version 3, as published |
| 1556 | +# by the Free Software Foundation. |
| 1557 | +# |
| 1558 | +# This program is distributed in the hope that it will be useful, but |
| 1559 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
| 1560 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
| 1561 | +# PURPOSE. See the GNU General Public License for more details. |
| 1562 | +# |
| 1563 | +# You should have received a copy of the GNU General Public License along |
| 1564 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
| 1565 | +# |
| 1566 | +# In addition, as a special exception, the copyright holders give |
| 1567 | +# permission to link the code of portions of this program with the |
| 1568 | +# OpenSSL library under certain conditions as described in each |
| 1569 | +# individual source file, and distribute linked combinations |
| 1570 | +# including the two. |
| 1571 | +# You must obey the GNU General Public License in all respects |
| 1572 | +# for all of the code used other than OpenSSL. If you modify |
| 1573 | +# file(s) with this exception, you may extend this exception to your |
| 1574 | +# version of the file(s), but you are not obligated to do so. If you |
| 1575 | +# do not wish to do so, delete this exception statement from your |
| 1576 | +# version. If you delete this exception statement from all source |
| 1577 | +# files in the program, then also delete it here. |
| 1578 | +"""Filesystem Notifications module for MAC OS.""" |
| 1579 | + |
| 1580 | +import os |
| 1581 | + |
| 1582 | +import fsevents |
| 1583 | +from twisted.internet import defer, reactor |
| 1584 | + |
| 1585 | +from ubuntuone.platform.filesystem_notifications.pyinotify_agnostic import ( |
| 1586 | + IN_DELETE, |
| 1587 | + IN_CREATE, |
| 1588 | + IN_MODIFY, |
| 1589 | + IN_MOVED_FROM, |
| 1590 | + IN_MOVED_TO, |
| 1591 | +) |
| 1592 | + |
| 1593 | +# a map between the few events that we have on common platforms and those |
| 1594 | +# found in pyinotify |
| 1595 | +ACTIONS = { |
| 1596 | + fsevents.IN_CREATE: IN_CREATE, |
| 1597 | + fsevents.IN_DELETE: IN_DELETE, |
| 1598 | + fsevents.IN_MODIFY: IN_MODIFY, |
| 1599 | + fsevents.IN_MOVED_FROM: IN_MOVED_FROM, |
| 1600 | + fsevents.IN_MOVED_TO: IN_MOVED_TO, |
| 1601 | +} |
| 1602 | + |
| 1603 | +# a map of the actions to names so that we have better logs. |
| 1604 | +ACTIONS_NAMES = { |
| 1605 | + fsevents.IN_CREATE: 'IN_CREATE', |
| 1606 | + fsevents.IN_DELETE: 'IN_DELETE', |
| 1607 | + fsevents.IN_MODIFY: 'IN_MODIFY', |
| 1608 | + fsevents.IN_MOVED_FROM: 'IN_MOVED_FROM', |
| 1609 | + fsevents.IN_MOVED_TO: 'IN_MOVED_TO', |
| 1610 | +} |
| 1611 | + |
| 1612 | + |
| 1613 | +# The implementation of the code that is provided as the pyinotify substitute |
| 1614 | +class Watch(object): |
| 1615 | + """Implement the same functions as pyinotify.Watch.""" |
| 1616 | + |
| 1617 | + def __init__(self, path, process_events): |
| 1618 | + """Create a new instance for the given path. |
| 1619 | + |
| 1620 | + The process_events parameter is a callback to be executed in the main |
| 1621 | + reactor thread to convert events in pyinotify events and add them to |
| 1622 | + the state machine. |
| 1623 | + """ |
| 1624 | + self.path = os.path.abspath(path) |
| 1625 | + self.process_events = process_events |
| 1626 | + self.watching = False |
| 1627 | + self.ignore_paths = [] |
| 1628 | + # Create stream with folder to watch |
| 1629 | + self.stream = fsevents.Stream(self._process_events, |
| 1630 | + path, file_events=True) |
| 1631 | + |
| 1632 | + def _process_events(self, event): |
| 1633 | + """Receive the filesystem event and move it to the main thread.""" |
| 1634 | + reactor.callFromThread(self._process_events_in_main_thread, event) |
| 1635 | + |
| 1636 | + def _process_events_in_main_thread(self, event): |
| 1637 | + """Process the events from the queue.""" |
| 1638 | + action, cookie, file_name = (event.mask, event.cookie, event.name) |
| 1639 | + |
| 1640 | + syncdaemon_path = os.path.join(self.path, file_name) |
| 1641 | + self.process_events(action, file_name, cookie, |
| 1642 | + syncdaemon_path) |
| 1643 | + |
| 1644 | + def start_watching(self): |
| 1645 | + """Start watching.""" |
| 1646 | + self.watching = True |
| 1647 | + return defer.succeed(self.watching) |
| 1648 | + |
| 1649 | + def stop_watching(self): |
| 1650 | + """Stop watching.""" |
| 1651 | + self.watching = False |
| 1652 | + return defer.succeed(self.watching) |
| 1653 | + |
| 1654 | + # For API compatibility |
| 1655 | + @property |
| 1656 | + def started(self): |
| 1657 | + """A deferred that will be called when the watch is running.""" |
| 1658 | + return defer.succeed(self.watching) |
| 1659 | + |
| 1660 | + @property |
| 1661 | + def stopped(self): |
| 1662 | + """A deferred fired when the watch thread has finished.""" |
| 1663 | + return defer.succeed(self.watching) |
| 1664 | + |
| 1665 | + |
| 1666 | +class WatchManager(object): |
| 1667 | + """Implement the same functions as pyinotify.WatchManager. |
| 1668 | + |
| 1669 | + All paths passed to methods in this class should be darwin paths. |
| 1670 | + |
| 1671 | + """ |
| 1672 | + |
| 1673 | + def __init__(self, log): |
| 1674 | + """Init the manager to keep track of the different watches.""" |
| 1675 | + self.log = log |
| 1676 | + self.observer = fsevents.Observer() |
| 1677 | + self.observer.start() |
| 1678 | + |
| 1679 | + def stop_watch(self, watch): |
| 1680 | + """Stop a given watch.""" |
| 1681 | + watch.stop_watching() |
| 1682 | + self.observer.unschedule(watch.platform_watch.stream) |
| 1683 | + return defer.succeed(True) |
| 1684 | + |
| 1685 | + def stop(self): |
| 1686 | + """Stop the manager.""" |
| 1687 | + self.observer.stop() |
| 1688 | + self.observer.join() |
| 1689 | + |
| 1690 | + def del_watch(self, watch): |
| 1691 | + """Delete the watch and clean resources.""" |
| 1692 | + self.observer.unschedule(watch.platform_watch.stream) |
| 1693 | + |
| 1694 | + def add_watch(self, watch): |
| 1695 | + """This method perform actually the action of registering the watch.""" |
| 1696 | + self.observer.schedule(watch.platform_watch.stream) |
| 1697 | + return True |
| 1698 | + |
| 1699 | + def rm_watch(self, watch): |
| 1700 | + """Remove the the watch with the given wd.""" |
| 1701 | |
| 1702 | === added file 'ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_daemon.py' |
| 1703 | --- ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_daemon.py 1970-01-01 00:00:00 +0000 |
| 1704 | +++ ubuntuone/platform/filesystem_notifications/monitor/darwin/fsevents_daemon.py 2012-07-31 17:27:22 +0000 |
| 1705 | @@ -0,0 +1,439 @@ |
| 1706 | +# -*- coding: utf-8 *-* |
| 1707 | +# |
| 1708 | +# Copyright 2012 Canonical Ltd. |
| 1709 | +# |
| 1710 | +# This program is free software: you can redistribute it and/or modify it |
| 1711 | +# under the terms of the GNU General Public License version 3, as published |
| 1712 | +# by the Free Software Foundation. |
| 1713 | +# |
| 1714 | +# This program is distributed in the hope that it will be useful, but |
| 1715 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
| 1716 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
| 1717 | +# PURPOSE. See the GNU General Public License for more details. |
| 1718 | +# |
| 1719 | +# You should have received a copy of the GNU General Public License along |
| 1720 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
| 1721 | +# |
| 1722 | +# permission to link the code of portions of this program with the |
| 1723 | +# OpenSSL library under certain conditions as described in each |
| 1724 | +# individual source file, and distribute linked combinations |
| 1725 | +# including the two. |
| 1726 | +# You must obey the GNU General Public License in all respects |
| 1727 | +# for all of the code used other than OpenSSL. If you modify |
| 1728 | +# file(s) with this exception, you may extend this exception to your |
| 1729 | +# version of the file(s), but you are not obligated to do so. If you |
| 1730 | +# do not wish to do so, delete this exception statement from your |
| 1731 | +# version. If you delete this exception statement from all source |
| 1732 | +# files in the program, then also delete it here. |
| 1733 | +"""Filesystem notifications based on the fsevents daemon..""" |
| 1734 | + |
| 1735 | +import logging |
| 1736 | +import os |
| 1737 | +import unicodedata |
| 1738 | + |
| 1739 | +from uuid import uuid4 |
| 1740 | + |
| 1741 | +from twisted.internet import defer, endpoints, reactor |
| 1742 | + |
| 1743 | +from ubuntu_sso.utils.tcpactivation import ( |
| 1744 | + ActivationConfig, |
| 1745 | + ActivationInstance, |
| 1746 | + AlreadyStartedError, |
| 1747 | +) |
| 1748 | + |
| 1749 | +from ubuntuone import logger |
| 1750 | +from ubuntuone.darwin import fsevents |
| 1751 | +from ubuntuone.platform.filesystem_notifications.notify_processor import ( |
| 1752 | + NotifyProcessor, |
| 1753 | +) |
| 1754 | +from ubuntuone.platform.filesystem_notifications.pyinotify_agnostic import ( |
| 1755 | + Event, |
| 1756 | + IN_OPEN, |
| 1757 | + IN_CLOSE_NOWRITE, |
| 1758 | + IN_CLOSE_WRITE, |
| 1759 | + IN_CREATE, |
| 1760 | + IN_ISDIR, |
| 1761 | + IN_DELETE, |
| 1762 | + IN_MOVED_FROM, |
| 1763 | + IN_MOVED_TO, |
| 1764 | + IN_MODIFY, |
| 1765 | +) |
| 1766 | + |
| 1767 | +TRACE = logger.TRACE |
| 1768 | + |
| 1769 | +# map the fsevents actions to those from pyinotify |
| 1770 | +DARWIN_ACTIONS = { |
| 1771 | + fsevents.FSE_CREATE_FILE: IN_CREATE, |
| 1772 | + fsevents.FSE_DELETE: IN_DELETE, |
| 1773 | + fsevents.FSE_STAT_CHANGED: IN_MODIFY, |
| 1774 | + fsevents.FSE_CONTENT_MODIFIED: IN_MODIFY, |
| 1775 | + fsevents.FSE_CREATE_DIR: IN_CREATE, |
| 1776 | +} |
| 1777 | + |
| 1778 | +# list of those events that we do not care about |
| 1779 | +DARWIN_IGNORED_ACTIONS = ( |
| 1780 | + fsevents.FSE_UNKNOWN, |
| 1781 | + fsevents.FSE_INVALID, |
| 1782 | + fsevents.FSE_EXCHANGE, |
| 1783 | + fsevents.FSE_FINDER_INFO_CHANGED, |
| 1784 | + fsevents.FSE_CHOWN, |
| 1785 | + fsevents.FSE_XATTR_MODIFIED, |
| 1786 | + fsevents.FSE_XATTR_REMOVED, |
| 1787 | +) |
| 1788 | + |
| 1789 | +# quickly translates the event and its is_dir state to our standard events |
| 1790 | +NAME_TRANSLATIONS = { |
| 1791 | + IN_OPEN: 'FS_FILE_OPEN', |
| 1792 | + IN_CLOSE_NOWRITE: 'FS_FILE_CLOSE_NOWRITE', |
| 1793 | + IN_CLOSE_WRITE: 'FS_FILE_CLOSE_WRITE', |
| 1794 | + IN_CREATE: 'FS_FILE_CREATE', |
| 1795 | + IN_CREATE | IN_ISDIR: 'FS_DIR_CREATE', |
| 1796 | + IN_DELETE: 'FS_FILE_DELETE', |
| 1797 | + IN_DELETE | IN_ISDIR: 'FS_DIR_DELETE', |
| 1798 | + IN_MOVED_FROM: 'FS_FILE_DELETE', |
| 1799 | + IN_MOVED_FROM | IN_ISDIR: 'FS_DIR_DELETE', |
| 1800 | + IN_MOVED_TO: 'FS_FILE_CREATE', |
| 1801 | + IN_MOVED_TO | IN_ISDIR: 'FS_DIR_CREATE'} |
| 1802 | + |
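The composed keys in `NAME_TRANSLATIONS` work because `IN_ISDIR` is OR-ed into the raw mask before lookup, so one table covers both the file and directory variants of each event. A small sketch — the flag values match inotify's constants, the table is a trimmed copy, and `translate` is a hypothetical helper added just for illustration:

```python
# Flag values match the standard inotify constants used by
# pyinotify_agnostic; the dict is a trimmed copy of NAME_TRANSLATIONS.
IN_MOVED_FROM = 0x00000040
IN_MOVED_TO = 0x00000080
IN_CREATE = 0x00000100
IN_DELETE = 0x00000200
IN_ISDIR = 0x40000000

NAME_TRANSLATIONS = {
    IN_CREATE: 'FS_FILE_CREATE',
    IN_CREATE | IN_ISDIR: 'FS_DIR_CREATE',
    IN_DELETE: 'FS_FILE_DELETE',
    IN_DELETE | IN_ISDIR: 'FS_DIR_DELETE',
    IN_MOVED_FROM: 'FS_FILE_DELETE',
    IN_MOVED_FROM | IN_ISDIR: 'FS_DIR_DELETE',
    IN_MOVED_TO: 'FS_FILE_CREATE',
    IN_MOVED_TO | IN_ISDIR: 'FS_DIR_CREATE',
}


def translate(mask, is_dir):
    """OR IN_ISDIR into the mask when needed, then look up the name."""
    if is_dir:
        mask |= IN_ISDIR
    return NAME_TRANSLATIONS[mask]
```

Note how a directory move-in and a directory creation both map to `FS_DIR_CREATE`, which is why syncdaemon needs no separate move vocabulary at this layer.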
| 1803 | +# TODO: This should be in fsevents to be imported! |
| 1804 | +# Path to the socket used by the daemon |
| 1805 | +DAEMON_SOCKET = '/var/run/ubuntuone_fsevents_daemon' |
| 1806 | + |
| 1807 | + |
| 1808 | +class DescriptionFactory(object): |
| 1809 | + """Factory that provides the server and client descriptions.""" |
| 1810 | + |
| 1811 | + client_description_pattern = 'unix:path=%s' |
| 1812 | + server_description_pattern = 'unix:%s' |
| 1813 | + |
| 1814 | + def __init__(self): |
| 1815 | + """Create a new instance.""" |
| 1816 | + self.server = self.server_description_pattern % DAEMON_SOCKET |
| 1817 | + self.client = self.client_description_pattern % DAEMON_SOCKET |
| 1818 | + |
| 1819 | + |
| 1820 | +def get_activation_config(): |
| 1821 | + """Get the configuration to activate the sso service.""" |
| 1822 | + description = DescriptionFactory() |
| 1823 | + return ActivationConfig(None, None, description) |
| 1824 | + |
| 1825 | + |
| 1826 | +@defer.inlineCallbacks |
| 1827 | +def is_daemon_running(): |
| 1828 | + """Return if the sd is running by trying to get the port.""" |
| 1829 | + ai = ActivationInstance(get_activation_config()) |
| 1830 | + try: |
| 1831 | + yield ai.get_server_description() |
| 1832 | + defer.returnValue(False) |
| 1833 | + except AlreadyStartedError: |
| 1834 | + defer.returnValue(True) |
| 1835 | + |
| 1836 | + |
| 1837 | +def get_syncdaemon_valid_path(path): |
| 1838 | + """Return a valid encoded path.""" |
| 1839 | + return unicodedata.normalize('NFC', path).encode('utf-8') |
| 1840 | + |
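`get_syncdaemon_valid_path` matters because HFS+ reports file names in decomposed (NFD-style) form, while syncdaemon compares NFC-composed utf-8 bytes; normalizing first makes both spellings of the same name compare equal. A quick illustration, with example strings of my own:

```python
import unicodedata


def get_syncdaemon_valid_path(path):
    """Return the path NFC-normalized and utf-8 encoded (as in the diff)."""
    return unicodedata.normalize('NFC', path).encode('utf-8')


# 'e' + combining acute accent: roughly how HFS+ reports the name
decomposed = u'cafe\u0301'
# precomposed 'e with acute': the form syncdaemon works with
composed = u'caf\xe9'
```

Both spellings normalize to the same `b'caf\xc3\xa9'` byte string, so the daemon's paths and syncdaemon's paths line up.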
| 1841 | + |
| 1842 | +class PyInotifyEventsFactory(fsevents.FsEventsFactory): |
| 1843 | + """Factory that process events and converts them in pyinotify ones.""" |
| 1844 | + |
| 1845 | + def __init__(self, processor, |
| 1846 | + ignored_events=DARWIN_IGNORED_ACTIONS): |
| 1847 | + """Create a new instance.""" |
| 1848 | + # old style class |
| 1849 | + fsevents.FsEventsFactory.__init__(self) |
| 1850 | + self._processor = processor |
| 1851 | + self._ignored_events = ignored_events |
| 1852 | + self.watched_paths = [] |
| 1853 | + self.ignored_paths = [] |
| 1854 | + |
| 1855 | + def events_dropper(self): |
| 1856 | + """Deal with the fact that the daemon dropped events.""" |
| 1857 | + |
| 1858 | + def path_is_not_interesting(self, path): |
| 1859 | + """Return if the factory is interested in the path.""" |
| 1860 | + is_watched = any(path.startswith(watched_path) |
| 1861 | + for watched_path in self.watched_paths) |
| 1862 | + is_ignored = any(path.startswith(ignored_path) |
| 1863 | + for ignored_path in self.ignored_paths) |
| 1864 | + return not is_watched or (is_watched and is_ignored) |
| 1865 | + |
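The watched/ignored bookkeeping above is a pure prefix test, which can be exercised in isolation (the paths below are illustrative):

```python
watched_paths = ['/Users/alice/Ubuntu One/']
ignored_paths = ['/Users/alice/Ubuntu One/Temp/']


def path_is_not_interesting(path):
    """Mirror of the method above, minus the factory state."""
    is_watched = any(path.startswith(w) for w in watched_paths)
    is_ignored = any(path.startswith(i) for i in ignored_paths)
    return not is_watched or (is_watched and is_ignored)
```

Note the trailing separators on the stored paths: they make the `startswith` checks behave like directory-containment tests instead of bare string-prefix matches.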
| 1866 | + def is_create(self, event): |
| 1867 | + """Decide if a rename event should be considered a create.""" |
| 1868 | + # it is a create if the source path (first path) is either not |
| 1869 | + # watched or in the ignored paths |
| 1870 | + source_path = get_syncdaemon_valid_path(event.event_paths[0]) |
| 1871 | + return self.path_is_not_interesting(source_path) |
| 1872 | + |
| 1873 | + def is_delete(self, event): |
| 1874 | + """Decide if a rename event should be considered a delete.""" |
| 1875 | + # it is a delete if the destination path (second path) is either |
| 1876 | + # not watched or in the ignored paths |
| 1877 | + dest_path = get_syncdaemon_valid_path(event.event_paths[1]) |
| 1878 | + return self.path_is_not_interesting(dest_path) |
| 1879 | + |
| 1880 | + def generate_from_event(self, event, cookie): |
| 1881 | + """Return a fake from event from a rename one.""" |
| 1882 | + source_path = get_syncdaemon_valid_path(event.event_paths[0]) |
| 1883 | + mask = IN_MOVED_FROM |
| 1884 | + if event.is_directory: |
| 1885 | + mask |= IN_ISDIR |
| 1886 | + head, tail = os.path.split(source_path) |
| 1887 | + event_raw_data = { |
| 1888 | + 'wd': 0, # we only have one factory |
| 1889 | + 'dir': event.is_directory, |
| 1890 | + 'mask': mask, |
| 1891 | + 'name': tail, |
| 1892 | + 'cookie': cookie, |
| 1893 | + 'path': '.'} |
| 1894 | + move_from_event = Event(event_raw_data) |
| 1895 | + move_from_event.pathname = source_path |
| 1896 | + return move_from_event |
| 1897 | + |
| 1898 | + def generate_to_event(self, event, cookie): |
| 1899 | + """Return a fake to event from a rename one.""" |
| 1900 | + source_path = get_syncdaemon_valid_path(event.event_paths[0]) |
| 1901 | + destination_path = get_syncdaemon_valid_path(event.event_paths[1]) |
| 1902 | + mask = IN_MOVED_TO |
| 1903 | + if event.is_directory: |
| 1904 | + mask |= IN_ISDIR |
| 1905 | + source_head, source_tail = os.path.split(source_path) |
| 1906 | + head, tail = os.path.split(destination_path) |
| 1907 | + event_raw_data = { |
| 1908 | + 'wd': 0, # we only have one factory |
| 1909 | + 'dir': event.is_directory, |
| 1910 | + 'mask': mask, |
| 1911 | + 'name': tail, |
| 1912 | + 'cookie': cookie, |
| 1913 | + 'src_pathname': source_tail, |
| 1914 | + 'path': '.'} |
| 1915 | + move_to_event = Event(event_raw_data) |
| 1916 | + move_to_event.pathname = destination_path |
| 1917 | + return move_to_event |
| 1918 | + |
| 1919 | + def convert_in_pyinotify_event(self, event): |
| 1920 | + """Get an event from the daemon and convert it in a pyinotify one.""" |
| 1921 | + # the rename is a special type of event because it has to be either |
| 1922 | + # converted is a pair of events or in a single one (CREATE or DELETE) |
| 1923 | + if event.event_type == fsevents.FSE_RENAME: |
| 1924 | + is_create = self.is_create(event) |
| 1925 | + if is_create or self.is_delete(event): |
| 1926 | + mask = IN_CREATE if is_create else IN_DELETE |
| 1927 | + if event.is_directory: |
| 1928 | + mask |= IN_ISDIR |
| 1929 | + # a create means that we moved from a not watched path to a |
| 1930 | + # watched one and therefore we are interested in the SECOND |
| 1931 | + # path of the event. A delete means that we moved from a |
| 1932 | + # watched path to a not watched one and we care about the |
| 1933 | + # FIRST path of the event |
| 1934 | + path = event.event_paths[1] if is_create\ |
| 1935 | + else event.event_paths[0] |
| 1936 | + path = get_syncdaemon_valid_path(path) |
| 1937 | + head, tail = os.path.split(path) |
| 1938 | + event_raw_data = { |
| 1939 | + 'wd': 0, # we only have one factory |
| 1940 | + 'dir': event.is_directory, |
| 1941 | + 'mask': mask, |
| 1942 | + 'name': tail, |
| 1943 | + 'path': '.'} |
| 1944 | + pyinotify_event = Event(event_raw_data) |
| 1945 | + pyinotify_event.pathname = path |
| 1946 | + return [pyinotify_event] |
| 1947 | + else: |
| 1948 | + # we have a rename within watched paths; let's generate two |
| 1949 | + # fake events |
| 1950 | + cookie = str(uuid4()) |
| 1951 | + return [self.generate_from_event(event, cookie), |
| 1952 | + self.generate_to_event(event, cookie)] |
| 1953 | + else: |
| 1954 | + mask = DARWIN_ACTIONS[event.event_type] |
| 1955 | + if event.is_directory: |
| 1956 | + mask |= IN_ISDIR |
| 1957 | + # we know we are not dealing with a move, which is the only |
| 1958 | + # event type that has more than one path |
| 1959 | + path = get_syncdaemon_valid_path(event.event_paths[0]) |
| 1960 | + head, tail = os.path.split(path) |
| 1961 | + event_raw_data = { |
| 1962 | + 'wd': 0, # we only have one factory |
| 1963 | + 'dir': event.is_directory, |
| 1964 | + 'mask': mask, |
| 1965 | + 'name': tail, |
| 1966 | + 'path': '.'} |
| 1967 | + pyinotify_event = Event(event_raw_data) |
| 1968 | + # FIXME: Event deduces the pathname incorrectly, so we need to |
| 1969 | + # set it manually |
| 1970 | + pyinotify_event.pathname = path |
| 1971 | + return [pyinotify_event] |
| 1972 | + |
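The rename handling above reduces to a decision table: a rename whose source lies outside the watched tree is surfaced as a CREATE, one whose destination lies outside as a DELETE, and a rename fully inside the tree becomes a MOVED_FROM/MOVED_TO pair tied together by a cookie. A standalone sketch using plain dicts instead of pyinotify Event objects (all names and paths illustrative):

```python
from uuid import uuid4

WATCHED_ROOT = '/watched/'


def convert_rename(src, dst):
    """Map a rename (src -> dst) to pyinotify-style event dicts."""
    src_in = src.startswith(WATCHED_ROOT)
    dst_in = dst.startswith(WATCHED_ROOT)
    if dst_in and not src_in:
        # moved in from an unwatched location: looks like a creation
        return [{'mask': 'IN_CREATE', 'pathname': dst}]
    if src_in and not dst_in:
        # moved out to an unwatched location: looks like a deletion
        return [{'mask': 'IN_DELETE', 'pathname': src}]
    # rename within the watched tree: emit a correlated pair
    cookie = str(uuid4())
    return [{'mask': 'IN_MOVED_FROM', 'pathname': src, 'cookie': cookie},
            {'mask': 'IN_MOVED_TO', 'pathname': dst, 'cookie': cookie}]
```

The cookie plays the same role as inotify's native move cookie on Linux: it lets the downstream processor reassemble the two half-events into a single FS_(DIR|FILE)_MOVE.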
| 1973 | + def _is_ignored_path(self, path): |
| 1974 | + """Returns if the path is ignored.""" |
| 1975 | + if not path[-1] == os.path.sep: |
| 1976 | + path += os.path.sep |
| 1977 | + |
| 1978 | + is_ignored_child = any(ignored in path for ignored in self.ignored_paths) |
| 1979 | + return path in self.ignored_paths or is_ignored_child |
| 1980 | + |
| 1981 | + def process_event(self, event): |
| 1982 | + """Process an event from the fsevent daemon.""" |
| 1983 | + if event.event_type in self._ignored_events: |
| 1984 | + # Do nothing because sd does not care about such info |
| 1985 | + return |
| 1986 | + if event.event_type == fsevents.FSE_EVENTS_DROPPED: |
| 1987 | + # this should not be very common but we have to deal with it |
| 1988 | + return self.events_dropper() |
| 1989 | + events = self.convert_in_pyinotify_event(event) |
| 1990 | + for pyinotify_event in events: |
| 1991 | + # ensure the path is not under an ignored prefix |
| 1992 | + if not any([pyinotify_event.pathname.startswith(path) |
| 1993 | + for path in self.ignored_paths]): |
| 1994 | + # we are invoked via callFromThread, so we know that the |
| 1995 | + # events are processed in the right order \o/ |
| 1996 | + if not self._is_ignored_path(pyinotify_event.pathname): |
| 1997 | + self._processor(pyinotify_event) |
| 1998 | + |
| 1999 | + |
| 2000 | +class FilesystemMonitor(object): |
| 2001 | + """Implementation that allows to receive events from the system.""" |
| 2002 | + |
| 2003 | + def __init__(self, eq, fs, ignore_config=None, timeout=1): |
| 2004 | + self.log = logging.getLogger('ubuntuone.SyncDaemon.FSMonitor') |
| 2005 | + self.log.setLevel(TRACE) |
| 2006 | + self._processor = NotifyProcessor(self, ignore_config) |
| 2007 | + self.fs = fs |
| 2008 | + self.eq = eq |
| 2009 | + self._factory = PyInotifyEventsFactory(self._processor) |
| 2010 | + self._protocol = None |
| 2011 | + |
| 2012 | + @classmethod |
| 2013 | + def is_available_monitor(cls): |
| 2014 | + """Return if the monitor can be used in the platform.""" |
| 2015 | + # can only be used if the daemon is running |
| 2016 | + return is_daemon_running() |
| 2017 | + |
| 2018 | + @defer.inlineCallbacks |
| 2019 | + def _connect_to_daemon(self): |
| 2020 | + """Connect to the daemon so that we can receive events.""" |
| 2021 | + description = 'unix:path=%s' % DAEMON_SOCKET |
| 2022 | + client = endpoints.clientFromString(reactor, description) |
| 2023 | + self._protocol = yield client.connect(self._factory) |
| 2024 | + # add the user with no paths |
| 2025 | + yield self._protocol.add_user([]) |
| 2026 | + |
| 2027 | + def add_to_mute_filter(self, event, **info): |
| 2028 | + """Add info to mute filter in the processor.""" |
| 2029 | + self._processor.add_to_mute_filter(event, info) |
| 2030 | + |
| 2031 | + def rm_from_mute_filter(self, event, **info): |
| 2032 | + """Remove info to mute filter in the processor.""" |
| 2033 | + self._processor.rm_from_mute_filter(event, info) |
| 2034 | + |
| 2035 | + def shutdown(self): |
| 2036 | + """Prepares the EQ to be closed.""" |
| 2037 | + if self._protocol is not None: |
| 2038 | + |
| 2039 | + def on_user_removed(data): |
| 2040 | + """We managed to remove the user.""" |
| 2041 | + self._protocol.transport.loseConnection() |
| 2042 | + self._protocol = None |
| 2043 | + return True |
| 2044 | + |
| 2045 | + def on_user_not_removed(reason): |
| 2046 | + """We did not manage to remove the user.""" |
| 2047 | + return True |
| 2048 | + |
| 2049 | + d = self._protocol.remove_user() |
| 2050 | + d.addCallback(on_user_removed) |
| 2051 | + d.addErrback(on_user_not_removed) |
| 2052 | + return d |
| 2053 | + return defer.succeed(True) |
| 2054 | + |
| 2055 | + @defer.inlineCallbacks |
| 2056 | + def rm_watch(self, dirpath): |
| 2057 | + """Remove watch from a dir.""" |
| 2058 | + # on Mac OS X we only watch the parent (root) paths; this is an |
| 2059 | + # important detail because we will only send a real remove_path |
| 2060 | + # to the daemon if the path is a root path, otherwise we filter |
| 2061 | + # it at the factory level |
| 2062 | + |
| 2063 | + if not dirpath[-1] == os.path.sep: |
| 2064 | + dirpath += os.path.sep |
| 2065 | + |
| 2066 | + if dirpath not in self._factory.watched_paths: |
| 2067 | + # the path is under a watched root but is not itself a root, |
| 2068 | + # so we add it as an ignored path and |
| 2069 | + # return |
| 2070 | + self._factory.ignored_paths.append(dirpath) |
| 2071 | + defer.returnValue(None) |
| 2072 | + |
| 2073 | + # at this point we are removing a root dir, an important |
| 2074 | + # detail to take care of: connect if needed and tell the |
| 2075 | + # daemon to remove the path |
| 2076 | + if self._protocol is None: |
| 2077 | + # we have not yet connected, let's do it! |
| 2078 | + yield self._connect_to_daemon() |
| 2079 | + was_removed = yield self._protocol.remove_path(dirpath) |
| 2080 | + # only remove it if we really removed it |
| 2081 | + if was_removed: |
| 2082 | + self._factory.watched_paths.remove(dirpath) |
| 2083 | + |
| 2084 | + @defer.inlineCallbacks |
| 2085 | + def add_watch(self, dirpath): |
| 2086 | + """Add watch to a dir.""" |
| 2087 | + if not dirpath[-1] == os.path.sep: |
| 2088 | + dirpath = dirpath + os.path.sep |
| 2089 | + |
| 2090 | + # if we are watching a parent dir we can just ensure that it is not ignored |
| 2091 | + if any(dirpath.startswith(watched_path) for watched_path in |
| 2092 | + self._factory.watched_paths): |
| 2093 | + if dirpath in self._factory.ignored_paths: |
| 2094 | + self._factory.ignored_paths.remove(dirpath) |
| 2095 | + defer.returnValue(True) |
| 2096 | + |
| 2097 | + if dirpath in self._factory.ignored_paths: |
| 2098 | + self._factory.ignored_paths.remove(dirpath) |
| 2099 | + defer.returnValue(True) |
| 2100 | + |
| 2101 | + if self._protocol is None: |
| 2102 | + # we have not yet connected, let's do it! |
| 2103 | + yield self._connect_to_daemon() |
| 2104 | + |
| 2105 | + was_added = yield self._protocol.add_path(dirpath) |
| 2106 | + if was_added: |
| 2107 | + self._factory.watched_paths.append(dirpath) |
| 2108 | + defer.returnValue(True) |
| 2109 | + |
| 2110 | + def add_watches_to_udf_ancestors(self, volume): |
| 2111 | + """Add a inotify watch to volume's ancestors if it's an UDF.""" |
| 2112 | + # On Mac OS X we do no need to add watches to the ancestors because we |
| 2113 | + # will get the events from them with no problem. |
| 2114 | + return defer.succeed(True) |
| 2115 | + |
| 2116 | + def is_frozen(self): |
| 2117 | + """Checks if there's something frozen.""" |
| 2118 | + return self._processor.frozen_path is not None |
| 2119 | + |
| 2120 | + def freeze_begin(self, path): |
| 2121 | + """Puts in hold all the events for this path.""" |
| 2122 | + if self._processor.frozen_path is not None: |
| 2123 | + raise ValueError("There's something already frozen!") |
| 2124 | + self._processor.freeze_begin(path) |
| 2125 | + |
| 2126 | + def freeze_rollback(self): |
| 2127 | + """Unfreezes the frozen path, reseting to idle state.""" |
| 2128 | + if self._processor.frozen_path is None: |
| 2129 | + raise ValueError("Rolling back with nothing frozen!") |
| 2130 | + self._processor.freeze_rollback() |
| 2131 | + |
| 2132 | + def freeze_commit(self, events): |
| 2133 | + """Unfreezes the frozen path, sending received events if not dirty. |
| 2134 | + |
| 2135 | + If events for that path happened: |
| 2136 | + - return True |
| 2137 | + else: |
| 2138 | + - push the events received here, return False |
| 2139 | + """ |
| 2140 | + if self._processor.frozen_path is None: |
| 2141 | + raise ValueError("Committing with nothing frozen!") |
| 2142 | + |
| 2143 | + d = defer.execute(self._processor.freeze_commit, events) |
| 2144 | + return d |
| 2145 | |
| 2146 | === renamed file 'ubuntuone/platform/filesystem_notifications/linux.py' => 'ubuntuone/platform/filesystem_notifications/monitor/linux.py' |
| 2147 | --- ubuntuone/platform/filesystem_notifications/linux.py 2012-05-23 13:06:42 +0000 |
| 2148 | +++ ubuntuone/platform/filesystem_notifications/monitor/linux.py 2012-07-31 17:27:22 +0000 |
| 2149 | @@ -33,8 +33,10 @@ |
| 2150 | import os |
| 2151 | |
| 2152 | import pyinotify |
| 2153 | -from twisted.internet import abstract, reactor, error, defer |
| 2154 | +from twisted.internet import abstract, reactor, defer |
| 2155 | + |
| 2156 | from ubuntuone.platform.os_helper import access |
| 2157 | +from ubuntuone.platform.filesystem_notifications import notify_processor |
| 2158 | |
| 2159 | |
| 2160 | # translates quickly the event and it's is_dir state to our standard events |
| 2161 | @@ -69,24 +71,6 @@ |
| 2162 | pyinotify.IN_MOVE_SELF) |
| 2163 | |
| 2164 | |
| 2165 | -def validate_filename(real_func): |
| 2166 | - """Decorator that validates the filename.""" |
| 2167 | - def func(self, event): |
| 2168 | - """If valid, executes original function.""" |
| 2169 | - try: |
| 2170 | - # validate UTF-8 |
| 2171 | - event.name.decode("utf8") |
| 2172 | - except UnicodeDecodeError: |
| 2173 | - dirname = event.path.decode("utf8") |
| 2174 | - self.general_processor.invnames_log.info("%s in %r: path %r", |
| 2175 | - event.maskname, dirname, event.name) |
| 2176 | - self.general_processor.monitor.eq.push('FS_INVALID_NAME', |
| 2177 | - dirname=dirname, filename=event.name) |
| 2178 | - else: |
| 2179 | - real_func(self, event) |
| 2180 | - return func |
| 2181 | - |
| 2182 | - |
| 2183 | class _AncestorsINotifyProcessor(pyinotify.ProcessEvent): |
| 2184 | """inotify's processor when an event happens on an UDFs ancestor.""" |
| 2185 | def __init__(self, monitor): |
| 2186 | @@ -153,244 +137,6 @@ |
| 2187 | self.monitor.rm_watch(ancestor) |
| 2188 | |
| 2189 | |
| 2190 | -class _GeneralINotifyProcessor(pyinotify.ProcessEvent): |
| 2191 | - """inotify's processor when a general event happens. |
| 2192 | - |
| 2193 | - This class also catchs the MOVEs events, and synthetises a new |
| 2194 | - FS_(DIR|FILE)_MOVE event when possible. |
| 2195 | - """ |
| 2196 | - def __init__(self, monitor, ignore_config=None): |
| 2197 | - # XXX: avoid circular imports |
| 2198 | - from ubuntuone.syncdaemon.filesystem_notifications import ( |
| 2199 | - GeneralINotifyProcessor, |
| 2200 | - ) |
| 2201 | - self.general_processor = GeneralINotifyProcessor(monitor, |
| 2202 | - self.handle_dir_delete, NAME_TRANSLATIONS, |
| 2203 | - self.platform_is_ignored, pyinotify.IN_IGNORED, |
| 2204 | - ignore_config=ignore_config) |
| 2205 | - self.held_event = None |
| 2206 | - self.timer = None |
| 2207 | - |
| 2208 | - def shutdown(self): |
| 2209 | - """Shut down the processor.""" |
| 2210 | - if self.timer is not None and self.timer.active(): |
| 2211 | - self.timer.cancel() |
| 2212 | - |
| 2213 | - def rm_from_mute_filter(self, event, paths): |
| 2214 | - """Remove an event and path(s) from the mute filter.""" |
| 2215 | - self.general_processor.rm_from_mute_filter(event, paths) |
| 2216 | - |
| 2217 | - def add_to_mute_filter(self, event, paths): |
| 2218 | - """Add an event and path(s) to the mute filter.""" |
| 2219 | - self.general_processor.add_to_mute_filter(event, paths) |
| 2220 | - |
| 2221 | - def on_timeout(self): |
| 2222 | - """Called on timeout.""" |
| 2223 | - if self.held_event is not None: |
| 2224 | - self.release_held_event(True) |
| 2225 | - |
| 2226 | - def release_held_event(self, timed_out=False): |
| 2227 | - """Release the event on hold to fulfill its destiny.""" |
| 2228 | - if not timed_out: |
| 2229 | - try: |
| 2230 | - self.timer.cancel() |
| 2231 | - except error.AlreadyCalled: |
| 2232 | - # self.timeout() was *just* called, do nothing here |
| 2233 | - return |
| 2234 | - self.general_processor.push_event(self.held_event) |
| 2235 | - self.held_event = None |
| 2236 | - |
| 2237 | - @validate_filename |
| 2238 | - def process_IN_OPEN(self, event): |
| 2239 | - """Filter IN_OPEN to make it happen only in files.""" |
| 2240 | - if not (event.mask & pyinotify.IN_ISDIR): |
| 2241 | - self.general_processor.push_event(event) |
| 2242 | - |
| 2243 | - @validate_filename |
| 2244 | - def process_IN_CLOSE_NOWRITE(self, event): |
| 2245 | - """Filter IN_CLOSE_NOWRITE to make it happen only in files.""" |
| 2246 | - if not (event.mask & pyinotify.IN_ISDIR): |
| 2247 | - self.general_processor.push_event(event) |
| 2248 | - |
| 2249 | - @validate_filename |
| 2250 | - def process_IN_CLOSE_WRITE(self, event): |
| 2251 | - """Filter IN_CLOSE_WRITE to make it happen only in files. |
| 2252 | - |
| 2253 | - eCryptFS sends IN_CLOSE_WRITE event for lower directories. |
| 2254 | - |
| 2255 | - """ |
| 2256 | - if not (event.mask & pyinotify.IN_ISDIR): |
| 2257 | - self.general_processor.push_event(event) |
| 2258 | - |
| 2259 | - def process_IN_MOVE_SELF(self, event): |
| 2260 | - """Don't do anything here. |
| 2261 | - |
| 2262 | - We just turned this event on because pyinotify does some |
| 2263 | - path-fixing in its internal processing when this happens. |
| 2264 | - |
| 2265 | - """ |
| 2266 | - |
| 2267 | - @validate_filename |
| 2268 | - def process_IN_MOVED_FROM(self, event): |
| 2269 | - """Capture the MOVED_FROM to maybe syntethize FILE_MOVED.""" |
| 2270 | - if self.held_event is not None: |
| 2271 | - self.release_held_event() |
| 2272 | - |
| 2273 | - self.held_event = event |
| 2274 | - self.timer = reactor.callLater(1, self.on_timeout) |
| 2275 | - |
| 2276 | - def platform_is_ignored(self, path): |
| 2277 | - """Should we ignore this path in the current platform.?""" |
| 2278 | - # don't support links yet |
| 2279 | - if os.path.islink(path): |
| 2280 | - return True |
| 2281 | - return False |
| 2282 | - |
| 2283 | - def is_ignored(self, path): |
| 2284 | - """Should we ignore this path?""" |
| 2285 | - return self.general_processor.is_ignored(path) |
| 2286 | - |
| 2287 | - @validate_filename |
| 2288 | - def process_IN_MOVED_TO(self, event): |
| 2289 | - """Capture the MOVED_TO to maybe syntethize FILE_MOVED.""" |
| 2290 | - if self.held_event is not None: |
| 2291 | - if event.cookie == self.held_event.cookie: |
| 2292 | - try: |
| 2293 | - self.timer.cancel() |
| 2294 | - except error.AlreadyCalled: |
| 2295 | - # self.timeout() was *just* called, do nothing here |
| 2296 | - pass |
| 2297 | - else: |
| 2298 | - f_path_dir = self.held_event.path |
| 2299 | - f_path = os.path.join(f_path_dir, self.held_event.name) |
| 2300 | - t_path_dir = event.path |
| 2301 | - t_path = os.path.join(t_path_dir, event.name) |
| 2302 | - |
| 2303 | - is_from_forreal = not self.is_ignored(f_path) |
| 2304 | - is_to_forreal = not self.is_ignored(t_path) |
| 2305 | - if is_from_forreal and is_to_forreal: |
| 2306 | - f_share_id = self.general_processor.get_path_share_id( |
| 2307 | - f_path_dir) |
| 2308 | - t_share_id = self.general_processor.get_path_share_id( |
| 2309 | - t_path_dir) |
| 2310 | - if event.dir: |
| 2311 | - evtname = "FS_DIR_" |
| 2312 | - else: |
| 2313 | - evtname = "FS_FILE_" |
| 2314 | - if f_share_id != t_share_id: |
| 2315 | - # if the share_id are != push a delete/create |
| 2316 | - m = "Delete because of different shares: %r" |
| 2317 | - self.general_processor.log.info(m, f_path) |
| 2318 | - self.general_processor.eq_push(evtname + "DELETE", |
| 2319 | - path=f_path) |
| 2320 | - self.general_processor.eq_push(evtname + "CREATE", |
| 2321 | - path=t_path) |
| 2322 | - if not event.dir: |
| 2323 | - self.general_processor.eq_push( |
| 2324 | - 'FS_FILE_CLOSE_WRITE', path=t_path) |
| 2325 | - else: |
| 2326 | - self.general_processor.monitor.inotify_watch_fix( |
| 2327 | - f_path, t_path) |
| 2328 | - self.general_processor.eq_push(evtname + "MOVE", |
| 2329 | - path_from=f_path, path_to=t_path) |
| 2330 | - elif is_to_forreal: |
| 2331 | - # this is the case of a MOVE from something ignored |
| 2332 | - # to a valid filename |
| 2333 | - if event.dir: |
| 2334 | - evtname = "FS_DIR_" |
| 2335 | - else: |
| 2336 | - evtname = "FS_FILE_" |
| 2337 | - self.general_processor.eq_push(evtname + "CREATE", |
| 2338 | - path=t_path) |
| 2339 | - if not event.dir: |
| 2340 | - self.general_processor.eq_push( |
| 2341 | - 'FS_FILE_CLOSE_WRITE', path=t_path) |
| 2342 | - |
| 2343 | - else: |
| 2344 | - # this is the case of a MOVE from something valid |
| 2345 | - # to an ignored filename |
| 2346 | - if event.dir: |
| 2347 | - evtname = "FS_DIR_" |
| 2348 | - else: |
| 2349 | - evtname = "FS_FILE_" |
| 2350 | - self.general_processor.eq_push(evtname + "DELETE", |
| 2351 | - path=f_path) |
| 2352 | - |
| 2353 | - self.held_event = None |
| 2354 | - return |
| 2355 | - else: |
| 2356 | - self.release_held_event() |
| 2357 | - self.general_processor.push_event(event) |
| 2358 | - else: |
| 2359 | - # we don't have a held_event so this is a move from outside. |
| 2360 | - # if it's a file move it's atomic on POSIX, so we aren't going to |
| 2361 | - # receive a IN_CLOSE_WRITE, so let's fake it for files |
| 2362 | - self.general_processor.push_event(event) |
| 2363 | - if not event.dir: |
| 2364 | - t_path = os.path.join(event.path, event.name) |
| 2365 | - self.general_processor.eq_push('FS_FILE_CLOSE_WRITE', |
| 2366 | - path=t_path) |
| 2367 | - |
| 2368 | - @validate_filename |
| 2369 | - def process_default(self, event): |
| 2370 | - """Push the event into the EventQueue.""" |
| 2371 | - if self.held_event is not None: |
| 2372 | - self.release_held_event() |
| 2373 | - self.general_processor.push_event(event) |
| 2374 | - |
| 2375 | - def freeze_begin(self, path): |
| 2376 | - """Puts in hold all the events for this path.""" |
| 2377 | - self.general_processor.freeze_begin(path) |
| 2378 | - |
| 2379 | - def freeze_rollback(self): |
| 2380 | - """Unfreezes the frozen path, reseting to idle state.""" |
| 2381 | - self.general_processor.freeze_rollback() |
| 2382 | - |
| 2383 | - def freeze_commit(self, events): |
| 2384 | - """Unfreezes the frozen path, sending received events if not dirty. |
| 2385 | - |
| 2386 | - If events for that path happened: |
| 2387 | - - return True |
| 2388 | - else: |
| 2389 | - - push the here received events, return False |
| 2390 | - """ |
| 2391 | - return self.general_processor.freeze_commit(events) |
| 2392 | - |
| 2393 | - def handle_dir_delete(self, fullpath): |
| 2394 | - """Some special work when a directory is deleted.""" |
| 2395 | - # remove the watch on that dir from our structures |
| 2396 | - self.general_processor.rm_watch(fullpath) |
| 2397 | - |
| 2398 | - # handle the case of move a dir to a non-watched directory |
| 2399 | - paths = self.general_processor.get_paths_starting_with(fullpath, |
| 2400 | - include_base=False) |
| 2401 | - |
| 2402 | - paths.sort(reverse=True) |
| 2403 | - for path, is_dir in paths: |
| 2404 | - m = "Pushing deletion because of parent dir move: (is_dir=%s) %r" |
| 2405 | - self.general_processor.log.info(m, is_dir, path) |
| 2406 | - if is_dir: |
| 2407 | - self.general_processor.rm_watch(path) |
| 2408 | - self.general_processor.eq_push('FS_DIR_DELETE', path=path) |
| 2409 | - else: |
| 2410 | - self.general_processor.eq_push('FS_FILE_DELETE', path=path) |
| 2411 | - |
| 2412 | - @property |
| 2413 | - def mute_filter(self): |
| 2414 | - """Return the mute filter used by the processor.""" |
| 2415 | - return self.general_processor.filter |
| 2416 | - |
| 2417 | - @property |
| 2418 | - def frozen_path(self): |
| 2419 | - """Return the frozen path.""" |
| 2420 | - return self.general_processor.frozen_path |
| 2421 | - |
| 2422 | - @property |
| 2423 | - def log(self): |
| 2424 | - """Return the logger of the instance.""" |
| 2425 | - return self.general_processor.log |
| 2426 | - |
| 2427 | - |
| 2428 | class FilesystemMonitor(object): |
| 2429 | """Manages the signals from filesystem.""" |
| 2430 | |
| 2431 | @@ -401,7 +147,7 @@ |
| 2432 | |
| 2433 | # general inotify |
| 2434 | self._inotify_general_wm = wm = pyinotify.WatchManager() |
| 2435 | - self._processor = _GeneralINotifyProcessor(self, ignore_config) |
| 2436 | + self._processor = notify_processor.NotifyProcessor(self, ignore_config) |
| 2437 | self._inotify_notifier_gral = pyinotify.Notifier(wm, self._processor) |
| 2438 | self._inotify_reader_gral = self._hook_inotify_to_twisted( |
| 2439 | wm, self._inotify_notifier_gral) |
| 2440 | @@ -415,6 +161,12 @@ |
| 2441 | wm, self._inotify_notifier_antr) |
| 2442 | self._ancestors_watchs = {} |
| 2443 | |
| 2444 | + @classmethod |
| 2445 | + def is_available_monitor(cls): |
| 2446 | + """Return if the monitor can be used in the platform.""" |
| 2447 | + # we can always use this monitor |
| 2448 | + return defer.succeed(True) |
| 2449 | + |
| 2450 | def add_to_mute_filter(self, event, **info): |
| 2451 | """Add info to mute filter in the processor.""" |
| 2452 | self._processor.add_to_mute_filter(event, info) |
| 2453 | |
| 2454 | === renamed file 'ubuntuone/platform/filesystem_notifications/windows.py' => 'ubuntuone/platform/filesystem_notifications/monitor/windows.py' |
| 2455 | --- ubuntuone/platform/filesystem_notifications/windows.py 2012-07-13 12:39:33 +0000 |
| 2456 | +++ ubuntuone/platform/filesystem_notifications/monitor/windows.py 2012-07-31 17:27:22 +0000 |
| 2457 | @@ -64,7 +64,6 @@ |
| 2458 | WAIT_OBJECT_0) |
| 2459 | |
| 2460 | from ubuntuone.platform.os_helper.windows import ( |
| 2461 | - is_valid_syncdaemon_path, |
| 2462 | get_syncdaemon_valid_path, |
| 2463 | ) |
| 2464 | |
| 2465 | @@ -113,15 +112,6 @@ |
| 2466 | FILE_NOTIFY_CHANGE_LAST_ACCESS |
| 2467 | |
| 2468 | |
| 2469 | -@is_valid_syncdaemon_path() |
| 2470 | -def path_is_ignored(path): |
| 2471 | - """Should we ignore this path in the current platform.?""" |
| 2472 | - # don't support links yet |
| 2473 | - if path.endswith('.lnk'): |
| 2474 | - return True |
| 2475 | - return False |
| 2476 | - |
| 2477 | - |
| 2478 | class Watch(object): |
| 2479 | """Implement the same functions as pyinotify.Watch.""" |
| 2480 | |
| 2481 | |
| 2482 | === added directory 'ubuntuone/platform/filesystem_notifications/notify_processor' |
| 2483 | === added file 'ubuntuone/platform/filesystem_notifications/notify_processor/__init__.py' |
| 2484 | --- ubuntuone/platform/filesystem_notifications/notify_processor/__init__.py 1970-01-01 00:00:00 +0000 |
| 2485 | +++ ubuntuone/platform/filesystem_notifications/notify_processor/__init__.py 2012-07-31 17:27:22 +0000 |
| 2486 | @@ -0,0 +1,46 @@ |
| 2487 | +# -*- coding: utf-8 *-* |
| 2488 | +# |
| 2489 | +# Copyright 2012 Canonical Ltd. |
| 2490 | +# |
| 2491 | +# This program is free software: you can redistribute it and/or modify it |
| 2492 | +# under the terms of the GNU General Public License version 3, as published |
| 2493 | +# by the Free Software Foundation. |
| 2494 | +# |
| 2495 | +# This program is distributed in the hope that it will be useful, but |
| 2496 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
| 2497 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
| 2498 | +# PURPOSE. See the GNU General Public License for more details. |
| 2499 | +# |
| 2500 | +# You should have received a copy of the GNU General Public License along |
| 2501 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
| 2502 | +# |
| 2503 | +# In addition, as a special exception, the copyright holders give |
| 2504 | +# permission to link the code of portions of this program with the |
| 2505 | +# OpenSSL library under certain conditions as described in each |
| 2506 | +# individual source file, and distribute linked combinations |
| 2507 | +# including the two. |
| 2508 | +# You must obey the GNU General Public License in all respects |
| 2509 | +# for all of the code used other than OpenSSL. If you modify |
| 2510 | +# file(s) with this exception, you may extend this exception to your |
| 2511 | +# version of the file(s), but you are not obligated to do so. If you |
| 2512 | +# do not wish to do so, delete this exception statement from your |
| 2513 | +# version. If you delete this exception statement from all source |
| 2514 | +# files in the program, then also delete it here |
| 2515 | +"""Notify processor: per-platform implementations.""" |
| 2516 | + |
| 2517 | +import sys |
| 2518 | + |
| 2519 | +if sys.platform in ('win32', 'darwin'): |
| 2520 | + from ubuntuone.platform.filesystem_notifications.notify_processor import ( |
| 2521 | + common, |
| 2522 | + ) |
| 2523 | + # workaround due to pyflakes :( |
| 2524 | + source = common |
| 2525 | +else: |
| 2526 | + from ubuntuone.platform.filesystem_notifications.notify_processor import ( |
| 2527 | + linux, |
| 2528 | + ) |
| 2529 | + # workaround due to pyflakes :( |
| 2530 | + source = linux |
| 2531 | + |
| 2532 | +NotifyProcessor = source.NotifyProcessor |
| 2533 | |
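The `__init__.py` above uses a dispatch-at-import pattern: pick the implementation module once based on `sys.platform`, then re-export a single `NotifyProcessor` name so callers never branch on the platform themselves. The selection rule can be captured on its own (submodule names as in the branch):

```python
def notify_processor_module(platform):
    """Return which notify_processor submodule serves a platform."""
    return 'common' if platform in ('win32', 'darwin') else 'linux'
```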
| 2534 | === added file 'ubuntuone/platform/filesystem_notifications/notify_processor/common.py' |
| 2535 | --- ubuntuone/platform/filesystem_notifications/notify_processor/common.py 1970-01-01 00:00:00 +0000 |
| 2536 | +++ ubuntuone/platform/filesystem_notifications/notify_processor/common.py 2012-07-31 17:27:22 +0000 |
| 2537 | @@ -0,0 +1,289 @@ |
| 2538 | +# -*- coding: utf-8 *-* |
| 2539 | +# |
| 2540 | +# Copyright 2011-2012 Canonical Ltd. |
| 2541 | +# |
| 2542 | +# This program is free software: you can redistribute it and/or modify it |
| 2543 | +# under the terms of the GNU General Public License version 3, as published |
| 2544 | +# by the Free Software Foundation. |
| 2545 | +# |
| 2546 | +# This program is distributed in the hope that it will be useful, but |
| 2547 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
| 2548 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
| 2549 | +# PURPOSE. See the GNU General Public License for more details. |
| 2550 | +# |
| 2551 | +# You should have received a copy of the GNU General Public License along |
| 2552 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
| 2553 | +# |
| 2554 | +# In addition, as a special exception, the copyright holders give |
| 2555 | +# permission to link the code of portions of this program with the |
| 2556 | +# OpenSSL library under certain conditions as described in each |
| 2557 | +# individual source file, and distribute linked combinations |
| 2558 | +# including the two. |
| 2559 | +# You must obey the GNU General Public License in all respects |
| 2560 | +# for all of the code used other than OpenSSL. If you modify |
| 2561 | +# file(s) with this exception, you may extend this exception to your |
| 2562 | +# version of the file(s), but you are not obligated to do so. If you |
| 2563 | +# do not wish to do so, delete this exception statement from your |
| 2564 | +# version. If you delete this exception statement from all source |
| 2565 | +# files in the program, then also delete it here. |
| 2566 | +"""Win and darwin implementation.""" |
| 2567 | + |
| 2568 | +import os |
| 2569 | +import sys |
| 2570 | + |
| 2571 | +from ubuntuone.syncdaemon.filesystem_notifications import ( |
| 2572 | + GeneralINotifyProcessor, |
| 2573 | +) |
| 2574 | +from ubuntuone.platform.filesystem_notifications.pyinotify_agnostic import ( |
| 2575 | + Event, |
| 2576 | + ProcessEvent, |
| 2577 | + IN_OPEN, |
| 2578 | + IN_CLOSE_NOWRITE, |
| 2579 | + IN_CLOSE_WRITE, |
| 2580 | + IN_CREATE, |
| 2581 | + IN_IGNORED, |
| 2582 | + IN_ISDIR, |
| 2583 | + IN_DELETE, |
| 2584 | + IN_MOVED_FROM, |
| 2585 | + IN_MOVED_TO, |
| 2586 | +) |
| 2587 | + |
| 2588 | +from ubuntuone.platform.os_helper import ( |
| 2589 | + is_valid_syncdaemon_path, |
| 2590 | +) |
| 2591 | + |
| 2592 | +if sys.platform == 'win32': |
| 2593 | + |
| 2594 | + @is_valid_syncdaemon_path() |
| 2595 | + def win_is_ignored(path): |
| 2596 | +        """Should we ignore this path on the current platform?"""
| 2597 | + # don't support links yet |
| 2598 | + if path.endswith('.lnk'): |
| 2599 | + return True |
| 2600 | + return False |
| 2601 | + |
| 2602 | +    # workaround for pyflakes :(
| 2603 | + path_is_ignored = win_is_ignored |
| 2604 | +else: |
| 2605 | + |
| 2606 | + def unix_is_ignored(path): |
| 2607 | +        """Should we ignore this path on the current platform?"""
| 2608 | + # don't support links yet |
| 2609 | + if os.path.islink(path): |
| 2610 | + return True |
| 2611 | + return False |
| 2612 | + |
| 2613 | +    # workaround for pyflakes :(
| 2614 | + path_is_ignored = unix_is_ignored |
| 2615 | + |
| 2616 | +# quickly translates the event and its is_dir state to our standard events
| 2617 | +NAME_TRANSLATIONS = { |
| 2618 | + IN_OPEN: 'FS_FILE_OPEN', |
| 2619 | + IN_CLOSE_NOWRITE: 'FS_FILE_CLOSE_NOWRITE', |
| 2620 | + IN_CLOSE_WRITE: 'FS_FILE_CLOSE_WRITE', |
| 2621 | + IN_CREATE: 'FS_FILE_CREATE', |
| 2622 | + IN_CREATE | IN_ISDIR: 'FS_DIR_CREATE', |
| 2623 | + IN_DELETE: 'FS_FILE_DELETE', |
| 2624 | + IN_DELETE | IN_ISDIR: 'FS_DIR_DELETE', |
| 2625 | + IN_MOVED_FROM: 'FS_FILE_DELETE', |
| 2626 | + IN_MOVED_FROM | IN_ISDIR: 'FS_DIR_DELETE', |
| 2627 | + IN_MOVED_TO: 'FS_FILE_CREATE', |
| 2628 | + IN_MOVED_TO | IN_ISDIR: 'FS_DIR_CREATE'} |
| 2629 | + |
| 2630 | + |
| 2631 | +class NotifyProcessor(ProcessEvent): |
| 2632 | + """Processor that takes care of dealing with the events. |
| 2633 | + |
| 2634 | + This interface will be exposed to syncdaemon, ergo all passed |
| 2635 | + and returned paths must be a sequence of BYTES encoded with utf8. |
| 2636 | + """ |
| 2637 | + |
| 2638 | + def __init__(self, monitor, ignore_config=None): |
| 2639 | + self.general_processor = GeneralINotifyProcessor(monitor, |
| 2640 | + self.handle_dir_delete, NAME_TRANSLATIONS, |
| 2641 | + path_is_ignored, IN_IGNORED, ignore_config=ignore_config) |
| 2642 | + self.held_event = None |
| 2643 | + |
| 2644 | + def rm_from_mute_filter(self, event, paths): |
| 2645 | + """Remove event from the mute filter.""" |
| 2646 | + self.general_processor.rm_from_mute_filter(event, paths) |
| 2647 | + |
| 2648 | + def add_to_mute_filter(self, event, paths): |
| 2649 | + """Add an event and path(s) to the mute filter.""" |
| 2650 | + self.general_processor.add_to_mute_filter(event, paths) |
| 2651 | + |
| 2652 | + @is_valid_syncdaemon_path(path_indexes=[1]) |
| 2653 | + def is_ignored(self, path): |
| 2654 | + """Should we ignore this path?""" |
| 2655 | + return self.general_processor.is_ignored(path) |
| 2656 | + |
| 2657 | + def release_held_event(self, timed_out=False): |
| 2658 | + """Release the event on hold to fulfill its destiny.""" |
| 2659 | + self.general_processor.push_event(self.held_event) |
| 2660 | + self.held_event = None |
| 2661 | + |
| 2662 | + def process_IN_MODIFY(self, event): |
| 2663 | +        """Capture a modify event and fake open and close-write events."""
| 2664 | +        # let's ignore dir changes
| 2665 | +        if event.dir:
| 2666 | +            return
| 2667 | +        # on some platforms we just get IN_MODIFY, so let's always fake
| 2668 | +        # an OPEN & CLOSE_WRITE couple
| 2669 | + raw_open = raw_close = { |
| 2670 | + 'wd': event.wd, |
| 2671 | + 'dir': event.dir, |
| 2672 | + 'name': event.name, |
| 2673 | + 'path': event.path} |
| 2674 | +        # calculate the open mask
| 2675 | + raw_open['mask'] = IN_OPEN |
| 2676 | + # create the event using the raw data, then fix the pathname param |
| 2677 | + open_event = Event(raw_open) |
| 2678 | + open_event.pathname = event.pathname |
| 2679 | + # push the open |
| 2680 | + self.general_processor.push_event(open_event) |
| 2681 | + raw_close['mask'] = IN_CLOSE_WRITE |
| 2682 | + close_event = Event(raw_close) |
| 2683 | + close_event.pathname = event.pathname |
| 2684 | + # push the close event |
| 2685 | + self.general_processor.push_event(close_event) |
| 2686 | + |
| 2687 | + def process_IN_MOVED_FROM(self, event): |
| 2688 | +        """Capture the MOVED_FROM to maybe synthesize FILE_MOVED."""
| 2689 | + if self.held_event is not None: |
| 2690 | + self.general_processor.log.warn('Lost pair event of %s', |
| 2691 | + self.held_event) |
| 2692 | + self.held_event = event |
| 2693 | + |
| 2694 | + def _fake_create_event(self, event): |
| 2695 | + """Fake the creation of an event.""" |
| 2696 | + # this is the case of a MOVE from an ignored path (links for example) |
| 2697 | + # to a valid path |
| 2698 | + if event.dir: |
| 2699 | + evtname = "FS_DIR_" |
| 2700 | + else: |
| 2701 | + evtname = "FS_FILE_" |
| 2702 | + self.general_processor.eq_push(evtname + "CREATE", path=event.pathname) |
| 2703 | + if not event.dir: |
| 2704 | + self.general_processor.eq_push('FS_FILE_CLOSE_WRITE', |
| 2705 | + path=event.pathname) |
| 2706 | + |
| 2707 | + def _fake_delete_create_event(self, event): |
| 2708 | + """Fake the deletion and the creation.""" |
| 2709 | +        # this is the case of a MOVE from a watched UDF to a different UDF,
| 2710 | +        # which means that we have to copy the way linux works.
| 2711 | + if event.dir: |
| 2712 | + evtname = "FS_DIR_" |
| 2713 | + else: |
| 2714 | + evtname = "FS_FILE_" |
| 2715 | + m = "Delete because of different shares: %r" |
| 2716 | + self.log.info(m, self.held_event.pathname) |
| 2717 | + self.general_processor.eq_push(evtname + "DELETE", |
| 2718 | + path=self.held_event.pathname) |
| 2719 | + self.general_processor.eq_push(evtname + "CREATE", path=event.pathname) |
| 2720 | + if not event.dir: |
| 2721 | + self.general_processor.eq_push('FS_FILE_CLOSE_WRITE', |
| 2722 | + path=event.pathname) |
| 2723 | + |
| 2724 | + def process_IN_MOVED_TO(self, event): |
| 2725 | +        """Capture the MOVED_TO to maybe synthesize FILE_MOVED."""
| 2726 | + if self.held_event is not None: |
| 2727 | + if event.cookie == self.held_event.cookie: |
| 2728 | + f_path_dir = os.path.split(self.held_event.pathname)[0] |
| 2729 | + t_path_dir = os.path.split(event.pathname)[0] |
| 2730 | + |
| 2731 | + is_from_forreal = not self.is_ignored(self.held_event.pathname) |
| 2732 | + is_to_forreal = not self.is_ignored(event.pathname) |
| 2733 | + if is_from_forreal and is_to_forreal: |
| 2734 | + f_share_id = self.general_processor.get_path_share_id( |
| 2735 | + f_path_dir) |
| 2736 | + t_share_id = self.general_processor.get_path_share_id( |
| 2737 | + t_path_dir) |
| 2738 | + if f_share_id != t_share_id: |
| 2739 | + # if the share_id are != push a delete/create |
| 2740 | + self._fake_delete_create_event(event) |
| 2741 | + else: |
| 2742 | + if event.dir: |
| 2743 | + evtname = "FS_DIR_" |
| 2744 | + else: |
| 2745 | + evtname = "FS_FILE_" |
| 2746 | + self.general_processor.eq_push(evtname + "MOVE", |
| 2747 | + path_from=self.held_event.pathname, |
| 2748 | + path_to=event.pathname) |
| 2749 | + elif is_to_forreal: |
| 2750 | + # this is the case of a MOVE from something ignored |
| 2751 | + # to a valid filename |
| 2752 | + self._fake_create_event(event) |
| 2753 | + |
| 2754 | + self.held_event = None |
| 2755 | + return |
| 2756 | + else: |
| 2757 | + self.release_held_event() |
| 2758 | + self.general_processor.push_event(event) |
| 2759 | + else: |
| 2760 | +            # we should never get here; it is unclear how this
| 2761 | +            # could happen
| 2762 | +            self.general_processor.log.warn(
| 2763 | +                'Cookie does not match the previous held event!')
| 2764 | + self.general_processor.log.warn('Ignoring %s', event) |
| 2765 | + |
| 2766 | + def process_default(self, event): |
| 2767 | + """Push the event into the EventQueue.""" |
| 2768 | + if self.held_event is not None: |
| 2769 | + self.release_held_event() |
| 2770 | + self.general_processor.push_event(event) |
| 2771 | + |
| 2772 | + @is_valid_syncdaemon_path(path_indexes=[1]) |
| 2773 | + def handle_dir_delete(self, fullpath): |
| 2774 | + """Some special work when a directory is deleted.""" |
| 2775 | +        # remove the watch on that dir from our structures; this mainly
| 2776 | +        # tells the monitor to remove the watch (forwarded to a watch manager)
| 2777 | + self.general_processor.rm_watch(fullpath) |
| 2778 | + |
| 2779 | +        # handle the case of moving a dir to a non-watched directory
| 2780 | + paths = self.general_processor.get_paths_starting_with(fullpath, |
| 2781 | + include_base=False) |
| 2782 | + |
| 2783 | + paths.sort(reverse=True) |
| 2784 | + for path, is_dir in paths: |
| 2785 | + m = "Pushing deletion because of parent dir move: (is_dir=%s) %r" |
| 2786 | + self.general_processor.log.info(m, is_dir, path) |
| 2787 | + if is_dir: |
| 2788 | + # same as the above remove |
| 2789 | + self.general_processor.rm_watch(path) |
| 2790 | + self.general_processor.eq_push('FS_DIR_DELETE', path=path) |
| 2791 | + else: |
| 2792 | + self.general_processor.eq_push('FS_FILE_DELETE', path=path) |
| 2793 | + |
| 2794 | + @is_valid_syncdaemon_path(path_indexes=[1]) |
| 2795 | + def freeze_begin(self, path): |
| 2796 | +        """Put on hold all the events for this path."""
| 2797 | + self.general_processor.freeze_begin(path) |
| 2798 | + |
| 2799 | + def freeze_rollback(self): |
| 2800 | +        """Unfreeze the frozen path, resetting to idle state."""
| 2801 | + self.general_processor.freeze_rollback() |
| 2802 | + |
| 2803 | + def freeze_commit(self, events): |
| 2804 | + """Unfreezes the frozen path, sending received events if not dirty. |
| 2805 | + |
| 2806 | + If events for that path happened: |
| 2807 | + - return True |
| 2808 | + else: |
| 2809 | + - push the here received events, return False |
| 2810 | + """ |
| 2811 | + return self.general_processor.freeze_commit(events) |
| 2812 | + |
| 2813 | + @property |
| 2814 | + def mute_filter(self): |
| 2815 | + """Return the mute filter used by the processor.""" |
| 2816 | + return self.general_processor.filter |
| 2817 | + |
| 2818 | + @property |
| 2819 | + def frozen_path(self): |
| 2820 | + """Return the frozen path.""" |
| 2821 | + return self.general_processor.frozen_path |
| 2822 | + |
| 2823 | + @property |
| 2824 | + def log(self): |
| 2825 | + """Return the logger of the instance.""" |
| 2826 | + return self.general_processor.log |
| 2827 | |
| 2828 | === added file 'ubuntuone/platform/filesystem_notifications/notify_processor/linux.py' |
| 2829 | --- ubuntuone/platform/filesystem_notifications/notify_processor/linux.py 1970-01-01 00:00:00 +0000 |
| 2830 | +++ ubuntuone/platform/filesystem_notifications/notify_processor/linux.py 2012-07-31 17:27:22 +0000 |
| 2831 | @@ -0,0 +1,322 @@ |
| 2832 | +# -*- coding: utf-8 *-* |
| 2833 | +# |
| 2834 | +# Copyright 2012 Canonical Ltd. |
| 2835 | +# |
| 2836 | +# This program is free software: you can redistribute it and/or modify it |
| 2837 | +# under the terms of the GNU General Public License version 3, as published |
| 2838 | +# by the Free Software Foundation. |
| 2839 | +# |
| 2840 | +# This program is distributed in the hope that it will be useful, but |
| 2841 | +# WITHOUT ANY WARRANTY; without even the implied warranties of |
| 2842 | +# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
| 2843 | +# PURPOSE. See the GNU General Public License for more details. |
| 2844 | +# |
| 2845 | +# You should have received a copy of the GNU General Public License along |
| 2846 | +# with this program. If not, see <http://www.gnu.org/licenses/>. |
| 2847 | +# |
| 2848 | +# In addition, as a special exception, the copyright holders give |
| 2849 | +# permission to link the code of portions of this program with the |
| 2850 | +# OpenSSL library under certain conditions as described in each |
| 2851 | +# individual source file, and distribute linked combinations |
| 2852 | +# including the two. |
| 2853 | +# You must obey the GNU General Public License in all respects |
| 2854 | +# for all of the code used other than OpenSSL. If you modify |
| 2855 | +# file(s) with this exception, you may extend this exception to your |
| 2856 | +# version of the file(s), but you are not obligated to do so. If you |
| 2857 | +# do not wish to do so, delete this exception statement from your |
| 2858 | +# version. If you delete this exception statement from all source |
| 2859 | +# files in the program, then also delete it here. |
| 2860 | +"""Linux implementation.""" |
| 2861 | + |
| 2862 | +import os |
| 2863 | + |
| 2864 | +import pyinotify |
| 2865 | +from twisted.internet import reactor, error |
| 2866 | + |
| 2867 | + |
| 2868 | +# quickly translates the event and its is_dir state to our standard events
| 2869 | +NAME_TRANSLATIONS = { |
| 2870 | + pyinotify.IN_OPEN: 'FS_FILE_OPEN', |
| 2871 | + pyinotify.IN_CLOSE_NOWRITE: 'FS_FILE_CLOSE_NOWRITE', |
| 2872 | + pyinotify.IN_CLOSE_WRITE: 'FS_FILE_CLOSE_WRITE', |
| 2873 | + pyinotify.IN_CREATE: 'FS_FILE_CREATE', |
| 2874 | + pyinotify.IN_CREATE | pyinotify.IN_ISDIR: 'FS_DIR_CREATE', |
| 2875 | + pyinotify.IN_DELETE: 'FS_FILE_DELETE', |
| 2876 | + pyinotify.IN_DELETE | pyinotify.IN_ISDIR: 'FS_DIR_DELETE', |
| 2877 | + pyinotify.IN_MOVED_FROM: 'FS_FILE_DELETE', |
| 2878 | + pyinotify.IN_MOVED_FROM | pyinotify.IN_ISDIR: 'FS_DIR_DELETE', |
| 2879 | + pyinotify.IN_MOVED_TO: 'FS_FILE_CREATE', |
| 2880 | + pyinotify.IN_MOVED_TO | pyinotify.IN_ISDIR: 'FS_DIR_CREATE', |
| 2881 | +} |
| 2882 | + |
| 2883 | +# these are the events that we will listen for from inotify
| 2884 | +INOTIFY_EVENTS_GENERAL = ( |
| 2885 | + pyinotify.IN_OPEN | |
| 2886 | + pyinotify.IN_CLOSE_NOWRITE | |
| 2887 | + pyinotify.IN_CLOSE_WRITE | |
| 2888 | + pyinotify.IN_CREATE | |
| 2889 | + pyinotify.IN_DELETE | |
| 2890 | + pyinotify.IN_MOVED_FROM | |
| 2891 | + pyinotify.IN_MOVED_TO | |
| 2892 | + pyinotify.IN_MOVE_SELF) |
| 2893 | +INOTIFY_EVENTS_ANCESTORS = ( |
| 2894 | + pyinotify.IN_DELETE | |
| 2895 | + pyinotify.IN_MOVED_FROM | |
| 2896 | + pyinotify.IN_MOVED_TO | |
| 2897 | + pyinotify.IN_MOVE_SELF) |
| 2898 | + |
| 2899 | +from ubuntuone.syncdaemon.filesystem_notifications import ( |
| 2900 | + GeneralINotifyProcessor, |
| 2901 | +) |
| 2902 | + |
| 2903 | + |
| 2904 | +def validate_filename(real_func): |
| 2905 | + """Decorator that validates the filename.""" |
| 2906 | + def func(self, event): |
| 2907 | + """If valid, executes original function.""" |
| 2908 | + try: |
| 2909 | + # validate UTF-8 |
| 2910 | + event.name.decode("utf8") |
| 2911 | + except UnicodeDecodeError: |
| 2912 | + dirname = event.path.decode("utf8") |
| 2913 | + self.general_processor.invnames_log.info("%s in %r: path %r", |
| 2914 | + event.maskname, dirname, event.name) |
| 2915 | + self.general_processor.monitor.eq.push('FS_INVALID_NAME', |
| 2916 | + dirname=dirname, filename=event.name) |
| 2917 | + else: |
| 2918 | + real_func(self, event) |
| 2919 | + return func |
| 2920 | + |
| 2921 | + |
| 2922 | +class NotifyProcessor(pyinotify.ProcessEvent): |
| 2923 | + """inotify's processor when a general event happens. |
| 2924 | + |
| 2925 | +    This class also catches the MOVE events, and synthesizes a new
| 2926 | +    FS_(DIR|FILE)_MOVE event when possible.
| 2927 | + """ |
| 2928 | + def __init__(self, monitor, ignore_config=None): |
| 2929 | + self.general_processor = GeneralINotifyProcessor(monitor, |
| 2930 | + self.handle_dir_delete, NAME_TRANSLATIONS, |
| 2931 | + self.platform_is_ignored, pyinotify.IN_IGNORED, |
| 2932 | + ignore_config=ignore_config) |
| 2933 | + self.held_event = None |
| 2934 | + self.timer = None |
| 2935 | + |
| 2936 | + def shutdown(self): |
| 2937 | + """Shut down the processor.""" |
| 2938 | + if self.timer is not None and self.timer.active(): |
| 2939 | + self.timer.cancel() |
| 2940 | + |
| 2941 | + def rm_from_mute_filter(self, event, paths): |
| 2942 | + """Remove an event and path(s) from the mute filter.""" |
| 2943 | + self.general_processor.rm_from_mute_filter(event, paths) |
| 2944 | + |
| 2945 | + def add_to_mute_filter(self, event, paths): |
| 2946 | + """Add an event and path(s) to the mute filter.""" |
| 2947 | + self.general_processor.add_to_mute_filter(event, paths) |
| 2948 | + |
| 2949 | + def on_timeout(self): |
| 2950 | + """Called on timeout.""" |
| 2951 | + if self.held_event is not None: |
| 2952 | + self.release_held_event(True) |
| 2953 | + |
| 2954 | + def release_held_event(self, timed_out=False): |
| 2955 | + """Release the event on hold to fulfill its destiny.""" |
| 2956 | + if not timed_out: |
| 2957 | + try: |
| 2958 | + self.timer.cancel() |
| 2959 | + except error.AlreadyCalled: |
| 2960 | +                # self.on_timeout() was *just* called, do nothing here
| 2961 | + return |
| 2962 | + self.general_processor.push_event(self.held_event) |
| 2963 | + self.held_event = None |
| 2964 | + |
| 2965 | + @validate_filename |
| 2966 | + def process_IN_OPEN(self, event): |
| 2967 | + """Filter IN_OPEN to make it happen only in files.""" |
| 2968 | + if not (event.mask & pyinotify.IN_ISDIR): |
| 2969 | + self.general_processor.push_event(event) |
| 2970 | + |
| 2971 | + @validate_filename |
| 2972 | + def process_IN_CLOSE_NOWRITE(self, event): |
| 2973 | + """Filter IN_CLOSE_NOWRITE to make it happen only in files.""" |
| 2974 | + if not (event.mask & pyinotify.IN_ISDIR): |
| 2975 | + self.general_processor.push_event(event) |
| 2976 | + |
| 2977 | + @validate_filename |
| 2978 | + def process_IN_CLOSE_WRITE(self, event): |
| 2979 | + """Filter IN_CLOSE_WRITE to make it happen only in files. |
| 2980 | + |
| 2981 | + eCryptFS sends IN_CLOSE_WRITE event for lower directories. |
| 2982 | + |
| 2983 | + """ |
| 2984 | + if not (event.mask & pyinotify.IN_ISDIR): |
| 2985 | + self.general_processor.push_event(event) |
| 2986 | + |
| 2987 | + def process_IN_MOVE_SELF(self, event): |
| 2988 | + """Don't do anything here. |
| 2989 | + |
| 2990 | + We just turned this event on because pyinotify does some |
| 2991 | + path-fixing in its internal processing when this happens. |
| 2992 | + |
| 2993 | + """ |
| 2994 | + |
| 2995 | + @validate_filename |
| 2996 | + def process_IN_MOVED_FROM(self, event): |
| 2997 | +        """Capture the MOVED_FROM to maybe synthesize FILE_MOVED."""
| 2998 | + if self.held_event is not None: |
| 2999 | + self.release_held_event() |
| 3000 | + |
| 3001 | + self.held_event = event |
| 3002 | + self.timer = reactor.callLater(1, self.on_timeout) |
| 3003 | + |
| 3004 | + def platform_is_ignored(self, path): |
| 3005 | +        """Should we ignore this path on the current platform?"""
| 3006 | + # don't support links yet |
| 3007 | + if os.path.islink(path): |
| 3008 | + return True |
| 3009 | + return False |
| 3010 | + |
| 3011 | + def is_ignored(self, path): |
| 3012 | + """Should we ignore this path?""" |
| 3013 | + return self.general_processor.is_ignored(path) |
| 3014 | + |
| 3015 | + @validate_filename |
| 3016 | + def process_IN_MOVED_TO(self, event): |
| 3017 | +        """Capture the MOVED_TO to maybe synthesize FILE_MOVED."""
| 3018 | + if self.held_event is not None: |
| 3019 | + if event.cookie == self.held_event.cookie: |
| 3020 | + try: |
| 3021 | + self.timer.cancel() |
| 3022 | + except error.AlreadyCalled: |
| 3023 | +                    # self.on_timeout() was *just* called, do nothing here
| 3024 | + pass |
| 3025 | + else: |
| 3026 | + f_path_dir = self.held_event.path |
| 3027 | + f_path = os.path.join(f_path_dir, self.held_event.name) |
| 3028 | + t_path_dir = event.path |
| 3029 | + t_path = os.path.join(t_path_dir, event.name) |
| 3030 | + |
| 3031 | + is_from_forreal = not self.is_ignored(f_path) |
| 3032 | + is_to_forreal = not self.is_ignored(t_path) |
| 3033 | + if is_from_forreal and is_to_forreal: |
| 3034 | + f_share_id = self.general_processor.get_path_share_id( |
| 3035 | + f_path_dir) |
| 3036 | + t_share_id = self.general_processor.get_path_share_id( |
| 3037 | + t_path_dir) |
| 3038 | + if event.dir: |
| 3039 | + evtname = "FS_DIR_" |
| 3040 | + else: |
| 3041 | + evtname = "FS_FILE_" |
| 3042 | + if f_share_id != t_share_id: |
| 3043 | + # if the share_id are != push a delete/create |
| 3044 | + m = "Delete because of different shares: %r" |
| 3045 | + self.general_processor.log.info(m, f_path) |
| 3046 | + self.general_processor.eq_push(evtname + "DELETE", |
| 3047 | + path=f_path) |
| 3048 | + self.general_processor.eq_push(evtname + "CREATE", |
| 3049 | + path=t_path) |
| 3050 | + if not event.dir: |
| 3051 | + self.general_processor.eq_push( |
| 3052 | + 'FS_FILE_CLOSE_WRITE', path=t_path) |
| 3053 | + else: |
| 3054 | + self.general_processor.monitor.inotify_watch_fix( |
| 3055 | + f_path, t_path) |
| 3056 | + self.general_processor.eq_push(evtname + "MOVE", |
| 3057 | + path_from=f_path, path_to=t_path) |
| 3058 | + elif is_to_forreal: |
| 3059 | + # this is the case of a MOVE from something ignored |
| 3060 | + # to a valid filename |
| 3061 | + if event.dir: |
| 3062 | + evtname = "FS_DIR_" |
| 3063 | + else: |
| 3064 | + evtname = "FS_FILE_" |
| 3065 | + self.general_processor.eq_push(evtname + "CREATE", |
| 3066 | + path=t_path) |
| 3067 | + if not event.dir: |
| 3068 | + self.general_processor.eq_push( |
| 3069 | + 'FS_FILE_CLOSE_WRITE', path=t_path) |
| 3070 | + |
| 3071 | + else: |
| 3072 | + # this is the case of a MOVE from something valid |
| 3073 | + # to an ignored filename |
| 3074 | + if event.dir: |
| 3075 | + evtname = "FS_DIR_" |
| 3076 | + else: |
| 3077 | + evtname = "FS_FILE_" |
| 3078 | + self.general_processor.eq_push(evtname + "DELETE", |
| 3079 | + path=f_path) |
| 3080 | + |
| 3081 | + self.held_event = None |
| 3082 | + return |
| 3083 | + else: |
| 3084 | + self.release_held_event() |
| 3085 | + self.general_processor.push_event(event) |
| 3086 | + else: |
| 3087 | +            # we don't have a held_event, so this is a move from outside.
| 3088 | +            # if it's a file move it's atomic on POSIX, so we aren't going to
| 3089 | +            # receive an IN_CLOSE_WRITE; let's fake it for files
| 3090 | + self.general_processor.push_event(event) |
| 3091 | + if not event.dir: |
| 3092 | + t_path = os.path.join(event.path, event.name) |
| 3093 | + self.general_processor.eq_push('FS_FILE_CLOSE_WRITE', |
| 3094 | + path=t_path) |
| 3095 | + |
| 3096 | + @validate_filename |
| 3097 | + def process_default(self, event): |
| 3098 | + """Push the event into the EventQueue.""" |
| 3099 | + if self.held_event is not None: |
| 3100 | + self.release_held_event() |
| 3101 | + self.general_processor.push_event(event) |
| 3102 | + |
| 3103 | + def freeze_begin(self, path): |
| 3104 | +        """Put on hold all the events for this path."""
| 3105 | + self.general_processor.freeze_begin(path) |
| 3106 | + |
| 3107 | + def freeze_rollback(self): |
| 3108 | + """Unfreezes the frozen path, reseting to idle state.""" |
| 3109 | +        """Unfreeze the frozen path, resetting to idle state."""
| 3110 | + |
| 3111 | + def freeze_commit(self, events): |
| 3112 | + """Unfreezes the frozen path, sending received events if not dirty. |
| 3113 | + |
| 3114 | + If events for that path happened: |
| 3115 | + - return True |
| 3116 | + else: |
| 3117 | + - push the here received events, return False |
| 3118 | + """ |
| 3119 | + return self.general_processor.freeze_commit(events) |
| 3120 | + |
| 3121 | + def handle_dir_delete(self, fullpath): |
| 3122 | + """Some special work when a directory is deleted.""" |
| 3123 | + # remove the watch on that dir from our structures |
| 3124 | + self.general_processor.rm_watch(fullpath) |
| 3125 | + |
| 3126 | +        # handle the case of moving a dir to a non-watched directory
| 3127 | + paths = self.general_processor.get_paths_starting_with(fullpath, |
| 3128 | + include_base=False) |
| 3129 | + |
| 3130 | + paths.sort(reverse=True) |
| 3131 | + for path, is_dir in paths: |
| 3132 | + m = "Pushing deletion because of parent dir move: (is_dir=%s) %r" |
| 3133 | + self.general_processor.log.info(m, is_dir, path) |
| 3134 | + if is_dir: |
| 3135 | + self.general_processor.rm_watch(path) |
| 3136 | + self.general_processor.eq_push('FS_DIR_DELETE', path=path) |
| 3137 | + else: |
| 3138 | + self.general_processor.eq_push('FS_FILE_DELETE', path=path) |
| 3139 | + |
| 3140 | + @property |
| 3141 | + def mute_filter(self): |
| 3142 | + """Return the mute filter used by the processor.""" |
| 3143 | + return self.general_processor.filter |
| 3144 | + |
| 3145 | + @property |
| 3146 | + def frozen_path(self): |
| 3147 | + """Return the frozen path.""" |
| 3148 | + return self.general_processor.frozen_path |
| 3149 | + |
| 3150 | + @property |
| 3151 | + def log(self): |
| 3152 | + """Return the logger of the instance.""" |
| 3153 | + return self.general_processor.log |
| 3154 | |
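Both processors pair a held MOVED_FROM with the next MOVED_TO by inotify cookie and synthesize a single MOVE event, falling back to DELETE/CREATE when no pair can be formed. A minimal sketch of that pairing logic, using simplified tuples rather than pyinotify's event objects (`MovePairer` and its event shapes are illustrative only, not part of the real code):

```python
# Sketch of the cookie-based MOVED_FROM/MOVED_TO pairing done by
# NotifyProcessor; event and queue types are simplified stand-ins.
class MovePairer:
    def __init__(self):
        self.held = None  # pending (cookie, path) from a MOVED_FROM
        self.out = []     # synthesized high-level events

    def feed(self, kind, cookie, path):
        if kind == 'MOVED_FROM':
            if self.held is not None:
                # lost pair: flush the old FROM as a plain DELETE
                self.out.append(('FS_FILE_DELETE', self.held[1]))
            self.held = (cookie, path)
        elif kind == 'MOVED_TO':
            if self.held is not None and self.held[0] == cookie:
                # matched pair: emit one synthetic MOVE event
                self.out.append(('FS_FILE_MOVE', self.held[1], path))
            else:
                if self.held is not None:
                    # cookie mismatch: release the held FROM as a DELETE
                    self.out.append(('FS_FILE_DELETE', self.held[1]))
                # no matching FROM: treat the TO as a plain CREATE
                self.out.append(('FS_FILE_CREATE', path))
            self.held = None
```

The real linux processor additionally arms a one-second `reactor.callLater` timer so an unmatched MOVED_FROM is eventually flushed instead of being held forever.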
| 3155 | === modified file 'ubuntuone/syncdaemon/event_queue.py' |
| 3156 | --- ubuntuone/syncdaemon/event_queue.py 2012-06-28 10:26:50 +0000 |
| 3157 | +++ ubuntuone/syncdaemon/event_queue.py 2012-07-31 17:27:22 +0000 |
| 3158 | @@ -38,7 +38,9 @@ |
| 3159 | from twisted.internet import defer |
| 3160 | |
| 3161 | from ubuntuone.platform.os_helper import access |
| 3162 | -from ubuntuone.platform.filesystem_notifications import FilesystemMonitor |
| 3163 | +from ubuntuone.platform.filesystem_notifications.monitor import ( |
| 3164 | + FilesystemMonitor, |
| 3165 | +) |
| 3166 | |
| 3167 | # these are our internal events, what is inserted into the whole system |
| 3168 | EVENTS = { |
| 3169 | @@ -183,13 +185,17 @@ |
| 3170 | class EventQueue(object): |
| 3171 | """Manages the events from different sources and distributes them.""" |
| 3172 | |
| 3173 | - def __init__(self, fs, ignore_config=None): |
| 3174 | + def __init__(self, fs, ignore_config=None, monitor_class=None): |
| 3175 | self.listener_map = {} |
| 3176 | |
| 3177 | self.log = logging.getLogger('ubuntuone.SyncDaemon.EQ') |
| 3178 | self.fs = fs |
| 3179 | |
| 3180 | - self.monitor = FilesystemMonitor(self, fs, ignore_config) |
| 3181 | + if monitor_class is None: |
| 3182 | + # use the default class returned by platform |
| 3183 | + self.monitor = FilesystemMonitor(self, fs, ignore_config) |
| 3184 | + else: |
| 3185 | + self.monitor = monitor_class(self, fs, ignore_config) |
| 3186 | |
| 3187 | self.dispatching = False |
| 3188 | self.dispatch_queue = collections.deque() |
| 3189 | |
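The `monitor_class` keyword introduced in this hunk turns the filesystem monitor into an injectable dependency, so tests can supply a fake instead of the platform default. A minimal sketch of that shape (the monitor classes here are illustrative stand-ins, not the real FilesystemMonitor):

```python
class DefaultMonitor:
    """Stand-in for the platform's default FilesystemMonitor."""
    def __init__(self, eq, fs, ignore_config=None):
        self.name = 'default'

class FakeMonitor:
    """Stand-in monitor that a test might inject."""
    def __init__(self, eq, fs, ignore_config=None):
        self.name = 'fake'

class EventQueue:
    """Only the monitor-selection part of __init__ is sketched."""
    def __init__(self, fs, ignore_config=None, monitor_class=None):
        if monitor_class is None:
            # use the default monitor returned by the platform
            self.monitor = DefaultMonitor(self, fs, ignore_config)
        else:
            self.monitor = monitor_class(self, fs, ignore_config)

eq = EventQueue(fs=None)                                  # production path
test_eq = EventQueue(fs=None, monitor_class=FakeMonitor)  # test path
```

Passing the class (not an instance) lets EventQueue construct the monitor with its own `eq`/`fs` references, which is why `main.py` only threads `monitor_class` through.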
| 3190 | === modified file 'ubuntuone/syncdaemon/filesystem_manager.py' |
| 3191 | --- ubuntuone/syncdaemon/filesystem_manager.py 2012-04-09 20:07:05 +0000 |
| 3192 | +++ ubuntuone/syncdaemon/filesystem_manager.py 2012-07-31 17:27:22 +0000 |
| 3193 | @@ -744,6 +744,7 @@ |
| 3194 | mdobj = self.fs[mdid] |
| 3195 | if mdobj["share_id"] == share_id: |
| 3196 | all_mdobjs.append(_MDObject(**mdobj)) |
| 3197 | + |
| 3198 | return all_mdobjs |
| 3199 | |
| 3200 | def get_mdobjs_in_dir(self, base_path): |
| 3201 | |
| 3202 | === modified file 'ubuntuone/syncdaemon/filesystem_notifications.py' |
| 3203 | --- ubuntuone/syncdaemon/filesystem_notifications.py 2012-04-09 20:07:05 +0000 |
| 3204 | +++ ubuntuone/syncdaemon/filesystem_notifications.py 2012-07-31 17:27:22 +0000 |
| 3205 | @@ -44,7 +44,6 @@ |
| 3206 | |
| 3207 | def __init__(self, monitor, handle_dir_delete, name_translations, |
| 3208 | platform_is_ignored, ignore_mask, ignore_config=None): |
| 3209 | - super(GeneralINotifyProcessor, self).__init__() |
| 3210 | self.log = logging.getLogger('ubuntuone.SyncDaemon.' |
| 3211 | + 'filesystem_notifications.GeneralProcessor') |
| 3212 | self.log.setLevel(TRACE) |
| 3213 | |
| 3214 | === modified file 'ubuntuone/syncdaemon/main.py' |
| 3215 | --- ubuntuone/syncdaemon/main.py 2012-06-21 18:58:50 +0000 |
| 3216 | +++ ubuntuone/syncdaemon/main.py 2012-07-31 17:27:22 +0000 |
| 3217 | @@ -90,7 +90,7 @@ |
| 3218 | handshake_timeout=30, |
| 3219 | shares_symlink_name='Shared With Me', |
| 3220 | read_limit=None, write_limit=None, throttling_enabled=False, |
| 3221 | - ignore_files=None, oauth_credentials=None): |
| 3222 | + ignore_files=None, oauth_credentials=None, monitor_class=None): |
| 3223 | self.root_dir = root_dir |
| 3224 | self.shares_dir = shares_dir |
| 3225 | self.shares_dir_link = os.path.join(self.root_dir, shares_symlink_name) |
| 3226 | @@ -115,7 +115,8 @@ |
| 3227 | self.vm = volume_manager.VolumeManager(self) |
| 3228 | self.fs = filesystem_manager.FileSystemManager( |
| 3229 | data_dir, partials_dir, self.vm, self.db) |
| 3230 | - self.event_q = event_queue.EventQueue(self.fs, ignore_files) |
| 3231 | + self.event_q = event_queue.EventQueue(self.fs, ignore_files, |
| 3232 | + monitor_class=monitor_class) |
| 3233 | self.fs.register_eq(self.event_q) |
| 3234 | |
| 3235 | # subscribe VM to EQ, to be unsubscribed in shutdown |


The attempt to merge lp:~dobey/ubuntuone-client/update-4-0 into lp:ubuntuone-client/stable-4-0 failed. Below is the output from the failed tests.
/usr/bin/gnome-autogen.sh MACRO_DIR, `m4'. unknown-linux-gnu unknown-linux-gnu
checking for autoconf >= 2.53...
testing autoconf2.50... not found.
testing autoconf... found 2.69
checking for automake >= 1.10...
testing automake-1.11... found 1.11.5
checking for libtool >= 1.5...
testing libtoolize... found 2.4.2
checking for intltool >= 0.30...
testing intltoolize... found 0.50.2
checking for pkg-config >= 0.14.0...
testing pkg-config... found 0.26
checking for gtk-doc >= 1.0...
testing gtkdocize... found 1.18
Checking for required M4 macros...
Checking for forbidden M4 macros...
Processing ./configure.ac
Running libtoolize...
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: putting macros in AC_CONFIG_
libtoolize: copying file `m4/libtool.m4'
libtoolize: copying file `m4/ltoptions.m4'
libtoolize: copying file `m4/ltsugar.m4'
libtoolize: copying file `m4/ltversion.m4'
libtoolize: copying file `m4/lt~obsolete.m4'
Running intltoolize...
Running gtkdocize...
Running aclocal-1.11...
Running autoconf...
Running autoheader...
Running automake-1.11...
Running ./configure --enable-gtk-doc --enable-debug ...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for library containing strerror... none required
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking dependency style of gcc... (cached) gcc3
checking build system type... x86_64-
checking host system type... x86_64-
checking how to print strings... printf
checking for a sed that does not truncate output... /bin/sed
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... y...