Merge lp:~diegosarmentero/ubuntuone-client/darwin-fsevents-1 into lp:ubuntuone-client
Status: Merged
Approved by: Manuel de la Peña
Approved revision: 1260
Merged at revision: 1264
Proposed branch: lp:~diegosarmentero/ubuntuone-client/darwin-fsevents-1
Merge into: lp:ubuntuone-client
Diff against target: 1298 lines (+589/-602), 6 files modified:
- contrib/testing/testcase.py (+7/-3)
- tests/platform/filesystem_notifications/__init__.py (+55/-0)
- tests/platform/filesystem_notifications/test_linux.py (+5/-53)
- tests/platform/filesystem_notifications/test_pyinotify_agnostic.py (+4/-1)
- tests/platform/filesystem_notifier/__init__.py (+0/-27)
- ubuntuone/platform/filesystem_notifications/pyinotify_agnostic.py (+518/-518)

To merge this branch: bzr merge lp:~diegosarmentero/ubuntuone-client/darwin-fsevents-1
Reviewers:
- Manuel de la Peña (community): Approve
- Alejandro J. Cura (community): Approve

Review via email: mp+110382@code.launchpad.net
Commit message
- Some refactoring to support Mac OS filesystem notifications in the future (LP: #1013323).
Description of the change
- 1259. By Diego Sarmentero: Fixing encoding problem on pyinotify_agnostic
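The encoding problem mentioned above comes from pyinotify_agnostic hard-coding the Windows-only `mbcs` codec; the branch replaces it with `sys.getfilesystemencoding()` so the same code path works on darwin and linux. A minimal Python 3 sketch of the idea (the branch itself is Python 2 and checks `isinstance(s, unicode)`; `safe_encode` and `mbcs_available` are made-up names for illustration):

```python
import codecs
import sys

def safe_encode(value):
    # Use the platform's filesystem encoding instead of the
    # Windows-only 'mbcs' codec, mirroring the fix in this branch.
    if isinstance(value, str):  # `unicode` in the Python 2 original
        return value.encode(sys.getfilesystemencoding(), 'replace')
    return str(value).encode(sys.getfilesystemencoding(), 'replace')

def mbcs_available():
    # 'mbcs' only exists on Windows; on linux/darwin looking it up
    # raises LookupError, which is exactly why the hard-coded codec
    # name had to go.
    try:
        codecs.lookup('mbcs')
        return True
    except LookupError:
        return False
```

With this change the same formatter code runs unmodified on all three platforms, degrading unencodable characters via `'replace'` rather than crashing.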
Diego Sarmentero (diegosarmentero) wrote:
Alejandro J. Cura (alecu) wrote:
Nice branch so far, here's a small comment:
----
The decorators like: "skip_if_
Instead of having a pair of decorators for both darwin and windows, I think they should be merged so they are called something like "skip_if_
Diego Sarmentero (diegosarmentero) wrote:
> Nice branch so far, here's a small comment:
> ----
> The decorators like: "skip_if_
> Instead of having a pair of decorators for both darwin and windows, I think
> they should be merged so they are called something like
> "skip_if_
Fixed
Diego Sarmentero (diegosarmentero) wrote:
Branch reverted, because we really need the two different skips for different situations.
Alejandro J. Cura (alecu) wrote:
Here are some more comments that I didn't think of on my first review:
----
This first decorator makes sense; I'm not sure about the other two.
Anyway, the string in the first one is wrong:
+skip_if_
+ skipIfOS('darwin',
+ 'In windows there is no need to migrate metadata older than v5.')
Do we know of any Read/Only issue on darwin to justify this decorator?
+skip_if_
+ skipIfOS('darwin', 'Can not test RO shares until bug #820350 is resolved.')
We may have some missing or out of order events in darwin, but we surely need to apply this decorator to the specific tests that fail with darwin, and we have to make sure we don't do a blanket decoration of the same test cases as in windows. In any case, we need a new bug with the darwin specifics.
+skip_if_
+ skipIfOS('darwin', 'Fails due to missing/out of order FS events, '
+ 'see bug #820598.')
- 1260. By Diego Sarmentero: Branch fixed according to comments in the MP
Diego Sarmentero (diegosarmentero) wrote:
> Here are some more comments that I didn't think of on my first review:
>
> ----
>
> This first decorator makes sense; I'm not sure about the other two.
> Anyway, the string in the first one is wrong:
>
> +skip_if_
> + skipIfOS('darwin',
> + 'In windows there is no need to migrate metadata older than
> v5.')
>
>
> Do we know of any Read/Only issue on darwin to justify this decorator?
>
> +skip_if_
> + skipIfOS('darwin', 'Can not test RO shares until bug #820350 is
> resolved.')
>
>
> We may have some missing or out of order events in darwin, but we surely need
> to apply this decorator to the specific tests that fail with darwin, and we
> have to make sure we don't do a blanket decoration of the same test cases as
> in windows. In any case, we need a new bug with the darwin specifics.
>
> +skip_if_
> + skipIfOS('darwin', 'Fails due to missing/out of order FS events, '
> + 'see bug #820598.')
I removed the decorators that weren't being used, and left "skip_if_
Alejandro J. Cura (alecu) wrote:
Looks good so far. +1
Manuel de la Peña (mandel) wrote:
Looks good, this will be nice to merge with my work so far.
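The structural core of the branch, visible in the diff below, is hoisting the shared `BaseFSMonitorTestCase` setup out of test_linux.py and into the tests/platform/filesystem_notifications package, so a future darwin test module can reuse it. A simplified plain-unittest sketch of that shape (the real classes use twisted.trial, Tritcask, FileSystemManager, and EventQueue; the stand-in attributes here are made up):

```python
import unittest

class BaseFSMonitorTestCase(unittest.TestCase):
    """Shared fixture, playing the role of the class moved into
    tests/platform/filesystem_notifications/__init__.py."""
    timeout = 3

    def setUp(self):
        # Stand-ins for the event-queue / monitor wiring done in the
        # real setUp (Tritcask db, FileSystemManager, EventQueue, ...).
        self.events = []
        self.monitor = object()

class WatchManagerTests(BaseFSMonitorTestCase):
    """A platform module (e.g. test_linux.py) now only adds its own
    tests on top of the inherited fixture."""
    def test_inherits_shared_fixture(self):
        self.assertEqual(self.timeout, 3)
        self.assertEqual(self.events, [])
        self.assertIsNotNone(self.monitor)
```

Each platform module then imports the base class instead of duplicating ~40 lines of setup, which is exactly what the test_linux.py hunk below removes.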
Preview Diff
1 | === modified file 'contrib/testing/testcase.py' | |||
2 | --- contrib/testing/testcase.py 2012-05-22 14:07:55 +0000 | |||
3 | +++ contrib/testing/testcase.py 2012-06-21 17:37:46 +0000 | |||
4 | @@ -197,7 +197,7 @@ | |||
5 | 197 | 197 | ||
6 | 198 | cancel_download = cancel_upload = download = upload = make_dir = disconnect | 198 | cancel_download = cancel_upload = download = upload = make_dir = disconnect |
7 | 199 | make_file = move = unlink = list_shares = disconnect | 199 | make_file = move = unlink = list_shares = disconnect |
9 | 200 | list_volumes = create_share = create_udf = inquire_free_space = disconnect | 200 | list_volumes = create_share = create_udf = inquire_free_space = disconnect |
10 | 201 | inquire_account_info = delete_volume = change_public_access = disconnect | 201 | inquire_account_info = delete_volume = change_public_access = disconnect |
11 | 202 | query_volumes = get_delta = rescan_from_scratch = delete_share = disconnect | 202 | query_volumes = get_delta = rescan_from_scratch = delete_share = disconnect |
12 | 203 | node_is_with_queued_move = cleanup = get_public_files = disconnect | 203 | node_is_with_queued_move = cleanup = get_public_files = disconnect |
13 | @@ -302,7 +302,7 @@ | |||
14 | 302 | mmtree(path): support read-only shares | 302 | mmtree(path): support read-only shares |
15 | 303 | makedirs(path): support read-only shares | 303 | makedirs(path): support read-only shares |
16 | 304 | """ | 304 | """ |
18 | 305 | MAX_FILENAME = 32 # some platforms limit lengths of filenames | 305 | MAX_FILENAME = 32 # some platforms limit lengths of filenames |
19 | 306 | tunnel_runner_class = FakeTunnelRunner | 306 | tunnel_runner_class = FakeTunnelRunner |
20 | 307 | 307 | ||
21 | 308 | def mktemp(self, name='temp'): | 308 | def mktemp(self, name='temp'): |
22 | @@ -448,7 +448,7 @@ | |||
23 | 448 | def __init__(self, root_path): | 448 | def __init__(self, root_path): |
24 | 449 | """ Creates the instance""" | 449 | """ Creates the instance""" |
25 | 450 | self.root = volume_manager.Root(node_id="root_node_id", path=root_path) | 450 | self.root = volume_manager.Root(node_id="root_node_id", path=root_path) |
27 | 451 | self.shares = {'':self.root} | 451 | self.shares = {'': self.root} |
28 | 452 | self.udfs = {} | 452 | self.udfs = {} |
29 | 453 | self.log = logging.getLogger('ubuntuone.SyncDaemon.VM-test') | 453 | self.log = logging.getLogger('ubuntuone.SyncDaemon.VM-test') |
30 | 454 | 454 | ||
31 | @@ -686,3 +686,7 @@ | |||
32 | 686 | skip_if_win32_missing_fs_event = \ | 686 | skip_if_win32_missing_fs_event = \ |
33 | 687 | skipIfOS('win32', 'Fails due to missing/out of order FS events, ' | 687 | skipIfOS('win32', 'Fails due to missing/out of order FS events, ' |
34 | 688 | 'see bug #820598.') | 688 | 'see bug #820598.') |
35 | 689 | |||
36 | 690 | skip_if_darwin_missing_fs_event = \ | ||
37 | 691 | skipIfOS('darwin', 'Fails due to missing/out of order FS events, ' | ||
38 | 692 | 'see bug #820598.') | ||
39 | 689 | 693 | ||
40 | === modified file 'tests/platform/filesystem_notifications/__init__.py' | |||
41 | --- tests/platform/filesystem_notifications/__init__.py 2012-04-30 15:19:03 +0000 | |||
42 | +++ tests/platform/filesystem_notifications/__init__.py 2012-06-21 17:37:46 +0000 | |||
43 | @@ -25,3 +25,58 @@ | |||
44 | 25 | # version. If you delete this exception statement from all source | 25 | # version. If you delete this exception statement from all source |
45 | 26 | # files in the program, then also delete it here. | 26 | # files in the program, then also delete it here. |
46 | 27 | """Platform/File System Notifications test code.""" | 27 | """Platform/File System Notifications test code.""" |
47 | 28 | |||
48 | 29 | import logging | ||
49 | 30 | |||
50 | 31 | from twisted.internet import defer | ||
51 | 32 | |||
52 | 33 | from ubuntuone.syncdaemon import ( | ||
53 | 34 | event_queue, | ||
54 | 35 | filesystem_manager, | ||
55 | 36 | ) | ||
56 | 37 | from contrib.testing import testcase | ||
57 | 38 | from ubuntuone.devtools.handlers import MementoHandler | ||
58 | 39 | from ubuntuone.syncdaemon.tritcask import Tritcask | ||
59 | 40 | |||
60 | 41 | |||
61 | 42 | class BaseFSMonitorTestCase(testcase.BaseTwistedTestCase): | ||
62 | 43 | """Test the structures where we have the path/watch.""" | ||
63 | 44 | |||
64 | 45 | timeout = 3 | ||
65 | 46 | |||
66 | 47 | @defer.inlineCallbacks | ||
67 | 48 | def setUp(self): | ||
68 | 49 | """Set up.""" | ||
69 | 50 | yield super(BaseFSMonitorTestCase, self).setUp() | ||
70 | 51 | fsmdir = self.mktemp('fsmdir') | ||
71 | 52 | partials_dir = self.mktemp('partials_dir') | ||
72 | 53 | self.root_dir = self.mktemp('root_dir') | ||
73 | 54 | self.vm = testcase.FakeVolumeManager(self.root_dir) | ||
74 | 55 | self.tritcask_dir = self.mktemp("tritcask_dir") | ||
75 | 56 | self.db = Tritcask(self.tritcask_dir) | ||
76 | 57 | self.fs = filesystem_manager.FileSystemManager(fsmdir, partials_dir, | ||
77 | 58 | self.vm, self.db) | ||
78 | 59 | self.fs.create(path=self.root_dir, share_id='', is_dir=True) | ||
79 | 60 | self.fs.set_by_path(path=self.root_dir, | ||
80 | 61 | local_hash=None, server_hash=None) | ||
81 | 62 | eq = event_queue.EventQueue(self.fs) | ||
82 | 63 | |||
83 | 64 | self.deferred = deferred = defer.Deferred() | ||
84 | 65 | |||
85 | 66 | class HitMe(object): | ||
86 | 67 | # class-closure, cannot use self, pylint: disable-msg=E0213 | ||
87 | 68 | def handle_default(innerself, event, **args): | ||
88 | 69 | deferred.callback(True) | ||
89 | 70 | |||
90 | 71 | eq.subscribe(HitMe()) | ||
91 | 72 | self.monitor = eq.monitor | ||
92 | 73 | self.log_handler = MementoHandler() | ||
93 | 74 | self.log_handler.setLevel(logging.DEBUG) | ||
94 | 75 | self.monitor.log.addHandler(self.log_handler) | ||
95 | 76 | |||
96 | 77 | @defer.inlineCallbacks | ||
97 | 78 | def tearDown(self): | ||
98 | 79 | """Clean up the tests.""" | ||
99 | 80 | self.monitor.shutdown() | ||
100 | 81 | self.monitor.log.removeHandler(self.log_handler) | ||
101 | 82 | yield super(BaseFSMonitorTestCase, self).tearDown() | ||
102 | 28 | 83 | ||
103 | === modified file 'tests/platform/filesystem_notifications/test_linux.py' | |||
104 | --- tests/platform/filesystem_notifications/test_linux.py 2012-05-23 13:06:42 +0000 | |||
105 | +++ tests/platform/filesystem_notifications/test_linux.py 2012-06-21 17:37:46 +0000 | |||
106 | @@ -30,23 +30,17 @@ | |||
107 | 30 | # files in the program, then also delete it here. | 30 | # files in the program, then also delete it here. |
108 | 31 | """Tests for the Event Queue.""" | 31 | """Tests for the Event Queue.""" |
109 | 32 | 32 | ||
110 | 33 | import logging | ||
111 | 34 | import os | 33 | import os |
112 | 35 | 34 | ||
113 | 36 | from twisted.internet import defer, reactor | 35 | from twisted.internet import defer, reactor |
114 | 37 | from twisted.trial.unittest import TestCase as PlainTestCase | 36 | from twisted.trial.unittest import TestCase as PlainTestCase |
115 | 38 | 37 | ||
116 | 39 | from ubuntuone.syncdaemon import ( | ||
117 | 40 | event_queue, | ||
118 | 41 | filesystem_manager, | ||
119 | 42 | ) | ||
120 | 43 | from contrib.testing import testcase | 38 | from contrib.testing import testcase |
121 | 44 | from ubuntuone.devtools.handlers import MementoHandler | ||
122 | 45 | from ubuntuone.syncdaemon import volume_manager | 39 | from ubuntuone.syncdaemon import volume_manager |
123 | 46 | from ubuntuone.syncdaemon.tritcask import Tritcask | ||
124 | 47 | from ubuntuone.platform.filesystem_notifications import ( | 40 | from ubuntuone.platform.filesystem_notifications import ( |
125 | 48 | linux as filesystem_notifications, | 41 | linux as filesystem_notifications, |
126 | 49 | ) | 42 | ) |
127 | 43 | from tests.platform.filesystem_notifications import BaseFSMonitorTestCase | ||
128 | 50 | 44 | ||
129 | 51 | # We normally access to private attribs in tests | 45 | # We normally access to private attribs in tests |
130 | 52 | # pylint: disable=W0212 | 46 | # pylint: disable=W0212 |
131 | @@ -101,50 +95,6 @@ | |||
132 | 101 | self.assertFalse(processor.timer.active()) | 95 | self.assertFalse(processor.timer.active()) |
133 | 102 | 96 | ||
134 | 103 | 97 | ||
135 | 104 | |||
136 | 105 | class BaseFSMonitorTestCase(testcase.BaseTwistedTestCase): | ||
137 | 106 | """Test the structures where we have the path/watch.""" | ||
138 | 107 | |||
139 | 108 | timeout = 3 | ||
140 | 109 | |||
141 | 110 | @defer.inlineCallbacks | ||
142 | 111 | def setUp(self): | ||
143 | 112 | """Set up.""" | ||
144 | 113 | yield super(BaseFSMonitorTestCase, self).setUp() | ||
145 | 114 | fsmdir = self.mktemp('fsmdir') | ||
146 | 115 | partials_dir = self.mktemp('partials_dir') | ||
147 | 116 | self.root_dir = self.mktemp('root_dir') | ||
148 | 117 | self.vm = testcase.FakeVolumeManager(self.root_dir) | ||
149 | 118 | self.tritcask_dir = self.mktemp("tritcask_dir") | ||
150 | 119 | self.db = Tritcask(self.tritcask_dir) | ||
151 | 120 | self.fs = filesystem_manager.FileSystemManager(fsmdir, partials_dir, | ||
152 | 121 | self.vm, self.db) | ||
153 | 122 | self.fs.create(path=self.root_dir, share_id='', is_dir=True) | ||
154 | 123 | self.fs.set_by_path(path=self.root_dir, | ||
155 | 124 | local_hash=None, server_hash=None) | ||
156 | 125 | eq = event_queue.EventQueue(self.fs) | ||
157 | 126 | |||
158 | 127 | self.deferred = deferred = defer.Deferred() | ||
159 | 128 | |||
160 | 129 | class HitMe(object): | ||
161 | 130 | # class-closure, cannot use self, pylint: disable-msg=E0213 | ||
162 | 131 | def handle_default(innerself, event, **args): | ||
163 | 132 | deferred.callback(True) | ||
164 | 133 | |||
165 | 134 | eq.subscribe(HitMe()) | ||
166 | 135 | self.monitor = eq.monitor | ||
167 | 136 | self.log_handler = MementoHandler() | ||
168 | 137 | self.log_handler.setLevel(logging.DEBUG) | ||
169 | 138 | self.monitor.log.addHandler(self.log_handler) | ||
170 | 139 | |||
171 | 140 | @defer.inlineCallbacks | ||
172 | 141 | def tearDown(self): | ||
173 | 142 | """Clean up the tests.""" | ||
174 | 143 | self.monitor.shutdown() | ||
175 | 144 | self.monitor.log.removeHandler(self.log_handler) | ||
176 | 145 | yield super(BaseFSMonitorTestCase, self).tearDown() | ||
177 | 146 | |||
178 | 147 | |||
179 | 148 | class WatchManagerTests(BaseFSMonitorTestCase): | 98 | class WatchManagerTests(BaseFSMonitorTestCase): |
180 | 149 | """Test the structures where we have the path/watch.""" | 99 | """Test the structures where we have the path/watch.""" |
181 | 150 | 100 | ||
182 | @@ -222,7 +172,8 @@ | |||
183 | 222 | self.monitor._general_watchs = {'/path1/foo': 1, '/other': 2} | 172 | self.monitor._general_watchs = {'/path1/foo': 1, '/other': 2} |
184 | 223 | self.monitor._ancestors_watchs = {'/foo': 3} | 173 | self.monitor._ancestors_watchs = {'/foo': 3} |
185 | 224 | self.monitor.inotify_watch_fix('/path1/foo', '/path1/new') | 174 | self.monitor.inotify_watch_fix('/path1/foo', '/path1/new') |
187 | 225 | self.assertEqual(self.monitor._general_watchs, {'/path1/new': 1, '/other': 2}) | 175 | self.assertEqual(self.monitor._general_watchs, |
188 | 176 | {'/path1/new': 1, '/other': 2}) | ||
189 | 226 | self.assertEqual(self.monitor._ancestors_watchs, {'/foo': 3}) | 177 | self.assertEqual(self.monitor._ancestors_watchs, {'/foo': 3}) |
190 | 227 | 178 | ||
191 | 228 | def test_fix_path_ancestors(self): | 179 | def test_fix_path_ancestors(self): |
192 | @@ -231,7 +182,8 @@ | |||
193 | 231 | self.monitor._ancestors_watchs = {'/oth': 1, '/other': 2} | 182 | self.monitor._ancestors_watchs = {'/oth': 1, '/other': 2} |
194 | 232 | self.monitor.inotify_watch_fix('/oth', '/baz') | 183 | self.monitor.inotify_watch_fix('/oth', '/baz') |
195 | 233 | self.assertEqual(self.monitor._general_watchs, {'/bar': 3}) | 184 | self.assertEqual(self.monitor._general_watchs, {'/bar': 3}) |
197 | 234 | self.assertEqual(self.monitor._ancestors_watchs, {'/baz': 1, '/other': 2}) | 185 | self.assertEqual(self.monitor._ancestors_watchs, |
198 | 186 | {'/baz': 1, '/other': 2}) | ||
199 | 235 | 187 | ||
200 | 236 | 188 | ||
201 | 237 | class DynamicHitMe(object): | 189 | class DynamicHitMe(object): |
202 | 238 | 190 | ||
203 | === renamed file 'tests/platform/filesystem_notifier/test_windows.py' => 'tests/platform/filesystem_notifications/test_pyinotify_agnostic.py' | |||
204 | --- tests/platform/filesystem_notifier/test_windows.py 2012-05-14 21:24:24 +0000 | |||
205 | +++ tests/platform/filesystem_notifications/test_pyinotify_agnostic.py 2012-06-21 17:37:46 +0000 | |||
206 | @@ -30,6 +30,8 @@ | |||
207 | 30 | # files in the program, then also delete it here. | 30 | # files in the program, then also delete it here. |
208 | 31 | """Test for the pyinotify implementation on windows.""" | 31 | """Test for the pyinotify implementation on windows.""" |
209 | 32 | 32 | ||
210 | 33 | import sys | ||
211 | 34 | |||
212 | 33 | from twisted.internet import defer | 35 | from twisted.internet import defer |
213 | 34 | from twisted.trial.unittest import TestCase | 36 | from twisted.trial.unittest import TestCase |
214 | 35 | 37 | ||
215 | @@ -51,7 +53,8 @@ | |||
216 | 51 | attr = 'attribute' | 53 | attr = 'attribute' |
217 | 52 | self.format[attr] = attr | 54 | self.format[attr] = attr |
218 | 53 | value = u'ñoño' | 55 | value = u'ñoño' |
220 | 54 | expected_result = (attr + value.encode('mbcs', 'replace') + | 56 | expected_result = (attr + value.encode( |
221 | 57 | sys.getfilesystemencoding(), 'replace') + | ||
222 | 55 | self.format['normal']) | 58 | self.format['normal']) |
223 | 56 | self.assertEqual(expected_result, self.formatter.simple(value, attr)) | 59 | self.assertEqual(expected_result, self.formatter.simple(value, attr)) |
224 | 57 | 60 | ||
225 | 58 | 61 | ||
226 | === removed directory 'tests/platform/filesystem_notifier' | |||
227 | === removed file 'tests/platform/filesystem_notifier/__init__.py' | |||
228 | --- tests/platform/filesystem_notifier/__init__.py 2012-05-14 19:04:43 +0000 | |||
229 | +++ tests/platform/filesystem_notifier/__init__.py 1970-01-01 00:00:00 +0000 | |||
230 | @@ -1,27 +0,0 @@ | |||
231 | 1 | # Copyright 2012 Canonical Ltd. | ||
232 | 2 | # | ||
233 | 3 | # This program is free software: you can redistribute it and/or modify it | ||
234 | 4 | # under the terms of the GNU General Public License version 3, as published | ||
235 | 5 | # by the Free Software Foundation. | ||
236 | 6 | # | ||
237 | 7 | # This program is distributed in the hope that it will be useful, but | ||
238 | 8 | # WITHOUT ANY WARRANTY; without even the implied warranties of | ||
239 | 9 | # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR | ||
240 | 10 | # PURPOSE. See the GNU General Public License for more details. | ||
241 | 11 | # | ||
242 | 12 | # You should have received a copy of the GNU General Public License along | ||
243 | 13 | # with this program. If not, see <http://www.gnu.org/licenses/>. | ||
244 | 14 | # | ||
245 | 15 | # In addition, as a special exception, the copyright holders give | ||
246 | 16 | # permission to link the code of portions of this program with the | ||
247 | 17 | # OpenSSL library under certain conditions as described in each | ||
248 | 18 | # individual source file, and distribute linked combinations | ||
249 | 19 | # including the two. | ||
250 | 20 | # You must obey the GNU General Public License in all respects | ||
251 | 21 | # for all of the code used other than OpenSSL. If you modify | ||
252 | 22 | # file(s) with this exception, you may extend this exception to your | ||
253 | 23 | # version of the file(s), but you are not obligated to do so. If you | ||
254 | 24 | # do not wish to do so, delete this exception statement from your | ||
255 | 25 | # version. If you delete this exception statement from all source | ||
256 | 26 | # files in the program, then also delete it here. | ||
257 | 27 | """Platform/File System Notifier (Pyinotify agnostic) test code.""" | ||
258 | 28 | 0 | ||
259 | === modified file 'ubuntuone/platform/filesystem_notifications/pyinotify_agnostic.py' | |||
260 | --- ubuntuone/platform/filesystem_notifications/pyinotify_agnostic.py 2012-05-14 20:38:23 +0000 | |||
261 | +++ ubuntuone/platform/filesystem_notifications/pyinotify_agnostic.py 2012-06-21 17:37:46 +0000 | |||
262 | @@ -1,518 +1,518 @@ | |||
781 | 1 | #!/usr/bin/env python | 1 | #!/usr/bin/env python |
782 | 2 | 2 | ||
783 | 3 | # pyinotify.py - python interface to inotify | 3 | # pyinotify.py - python interface to inotify |
784 | 4 | # Copyright (c) 2010 Sebastien Martini <seb@dbzteam.org> | 4 | # Copyright (c) 2010 Sebastien Martini <seb@dbzteam.org> |
785 | 5 | # | 5 | # |
786 | 6 | # Permission is hereby granted, free of charge, to any person obtaining a copy | 6 | # Permission is hereby granted, free of charge, to any person obtaining a copy |
787 | 7 | # of this software and associated documentation files (the "Software"), to deal | 7 | # of this software and associated documentation files (the "Software"), to deal |
788 | 8 | # in the Software without restriction, including without limitation the rights | 8 | # in the Software without restriction, including without limitation the rights |
789 | 9 | # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell | 9 | # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell |
790 | 10 | # copies of the Software, and to permit persons to whom the Software is | 10 | # copies of the Software, and to permit persons to whom the Software is |
791 | 11 | # furnished to do so, subject to the following conditions: | 11 | # furnished to do so, subject to the following conditions: |
792 | 12 | # | 12 | # |
793 | 13 | # The above copyright notice and this permission notice shall be included in | 13 | # The above copyright notice and this permission notice shall be included in |
794 | 14 | # all copies or substantial portions of the Software. | 14 | # all copies or substantial portions of the Software. |
795 | 15 | # | 15 | # |
796 | 16 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR | 16 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR |
797 | 17 | # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, | 17 | # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, |
798 | 18 | # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE | 18 | # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE |
799 | 19 | # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER | 19 | # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER |
800 | 20 | # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, | 20 | # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, |
801 | 21 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN | 21 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN |
802 | 22 | # THE SOFTWARE. | 22 | # THE SOFTWARE. |
803 | 23 | """Platform agnostic code grabed from pyinotify.""" | 23 | """Platform agnostic code grabed from pyinotify.""" |
804 | 24 | import logging | 24 | import logging |
805 | 25 | import os | 25 | import os |
806 | 26 | import sys | 26 | import sys |
807 | 27 | 27 | ||
808 | 28 | COMPATIBILITY_MODE = False | 28 | COMPATIBILITY_MODE = False |
809 | 29 | 29 | ||
810 | 30 | class PyinotifyError(Exception): | 30 | class PyinotifyError(Exception): |
811 | 31 | """Indicates exceptions raised by a Pyinotify class.""" | 31 | """Indicates exceptions raised by a Pyinotify class.""" |
812 | 32 | pass | 32 | pass |
813 | 33 | 33 | ||
814 | 34 | 34 | ||
815 | 35 | class RawOutputFormat: | 35 | class RawOutputFormat: |
816 | 36 | """ | 36 | """ |
817 | 37 | Format string representations. | 37 | Format string representations. |
818 | 38 | """ | 38 | """ |
819 | 39 | def __init__(self, format=None): | 39 | def __init__(self, format=None): |
820 | 40 | self.format = format or {} | 40 | self.format = format or {} |
821 | 41 | 41 | ||
822 | 42 | def simple(self, s, attribute): | 42 | def simple(self, s, attribute): |
823 | 43 | if isinstance(s, unicode): | 43 | if isinstance(s, unicode): |
824 | 44 | s = s.encode('mbcs', 'replace') | 44 | s = s.encode(sys.getfilesystemencoding(), 'replace') |
825 | 45 | else: | 45 | else: |
826 | 46 | s = str(s) | 46 | s = str(s) |
827 | 47 | return (self.format.get(attribute, '') + s + | 47 | return (self.format.get(attribute, '') + s + |
828 | 48 | self.format.get('normal', '')) | 48 | self.format.get('normal', '')) |
829 | 49 | 49 | ||
830 | 50 | def punctuation(self, s): | 50 | def punctuation(self, s): |
831 | 51 | """Punctuation color.""" | 51 | """Punctuation color.""" |
832 | 52 | return self.simple(s, 'normal') | 52 | return self.simple(s, 'normal') |
833 | 53 | 53 | ||
834 | 54 | def field_value(self, s): | 54 | def field_value(self, s): |
835 | 55 | """Field value color.""" | 55 | """Field value color.""" |
836 | 56 | return self.simple(s, 'purple') | 56 | return self.simple(s, 'purple') |
837 | 57 | 57 | ||
838 | 58 | def field_name(self, s): | 58 | def field_name(self, s): |
839 | 59 | """Field name color.""" | 59 | """Field name color.""" |
840 | 60 | return self.simple(s, 'blue') | 60 | return self.simple(s, 'blue') |
841 | 61 | 61 | ||
842 | 62 | def class_name(self, s): | 62 | def class_name(self, s): |
843 | 63 | """Class name color.""" | 63 | """Class name color.""" |
844 | 64 | return self.format.get('red', '') + self.simple(s, 'bold') | 64 | return self.format.get('red', '') + self.simple(s, 'bold') |
845 | 65 | 65 | ||
846 | 66 | output_format = RawOutputFormat() | 66 | output_format = RawOutputFormat() |
847 | 67 | 67 | ||
848 | 68 | 68 | ||
849 | 69 | class EventsCodes: | 69 | class EventsCodes: |
850 | 70 | """ | 70 | """ |
851 | 71 | Set of codes corresponding to each kind of events. | 71 | Set of codes corresponding to each kind of events. |
852 | 72 | Some of these flags are used to communicate with inotify, whereas | 72 | Some of these flags are used to communicate with inotify, whereas |
853 | 73 | the others are sent to userspace by inotify notifying some events. | 73 | the others are sent to userspace by inotify notifying some events. |
854 | 74 | 74 | ||
855 | 75 | @cvar IN_ACCESS: File was accessed. | 75 | @cvar IN_ACCESS: File was accessed. |
856 | 76 | @type IN_ACCESS: int | 76 | @type IN_ACCESS: int |
857 | 77 | @cvar IN_MODIFY: File was modified. | 77 | @cvar IN_MODIFY: File was modified. |
858 | 78 | @type IN_MODIFY: int | 78 | @type IN_MODIFY: int |
859 | 79 | @cvar IN_ATTRIB: Metadata changed. | 79 | @cvar IN_ATTRIB: Metadata changed. |
860 | 80 | @type IN_ATTRIB: int | 80 | @type IN_ATTRIB: int |
861 | 81 | @cvar IN_CLOSE_WRITE: Writtable file was closed. | 81 | @cvar IN_CLOSE_WRITE: Writtable file was closed. |
862 | 82 | @type IN_CLOSE_WRITE: int | 82 | @type IN_CLOSE_WRITE: int |
863 | 83 | @cvar IN_CLOSE_NOWRITE: Unwrittable file closed. | 83 | @cvar IN_CLOSE_NOWRITE: Unwrittable file closed. |
864 | 84 | @type IN_CLOSE_NOWRITE: int | 84 | @type IN_CLOSE_NOWRITE: int |
865 | 85 | @cvar IN_OPEN: File was opened. | 85 | @cvar IN_OPEN: File was opened. |
866 | 86 | @type IN_OPEN: int | 86 | @type IN_OPEN: int |
867 | 87 | @cvar IN_MOVED_FROM: File was moved from X. | 87 | @cvar IN_MOVED_FROM: File was moved from X. |
868 | 88 | @type IN_MOVED_FROM: int | 88 | @type IN_MOVED_FROM: int |
869 | 89 | @cvar IN_MOVED_TO: File was moved to Y. | 89 | @cvar IN_MOVED_TO: File was moved to Y. |
870 | 90 | @type IN_MOVED_TO: int | 90 | @type IN_MOVED_TO: int |
871 | 91 | @cvar IN_CREATE: Subfile was created. | 91 | @cvar IN_CREATE: Subfile was created. |
872 | 92 | @type IN_CREATE: int | 92 | @type IN_CREATE: int |
873 | 93 | @cvar IN_DELETE: Subfile was deleted. | 93 | @cvar IN_DELETE: Subfile was deleted. |
874 | 94 | @type IN_DELETE: int | 94 | @type IN_DELETE: int |
875 | 95 | @cvar IN_DELETE_SELF: Self (watched item itself) was deleted. | 95 | @cvar IN_DELETE_SELF: Self (watched item itself) was deleted. |
876 | 96 | @type IN_DELETE_SELF: int | 96 | @type IN_DELETE_SELF: int |
877 | 97 | @cvar IN_MOVE_SELF: Self (watched item itself) was moved. | 97 | @cvar IN_MOVE_SELF: Self (watched item itself) was moved. |
878 | 98 | @type IN_MOVE_SELF: int | 98 | @type IN_MOVE_SELF: int |
879 | 99 | @cvar IN_UNMOUNT: Backing fs was unmounted. | 99 | @cvar IN_UNMOUNT: Backing fs was unmounted. |
880 | 100 | @type IN_UNMOUNT: int | 100 | @type IN_UNMOUNT: int |
881 | 101 | @cvar IN_Q_OVERFLOW: Event queued overflowed. | 101 | @cvar IN_Q_OVERFLOW: Event queued overflowed. |
882 | 102 | @type IN_Q_OVERFLOW: int | 102 | @type IN_Q_OVERFLOW: int |
883 | 103 | @cvar IN_IGNORED: File was ignored. | 103 | @cvar IN_IGNORED: File was ignored. |
884 | 104 | @type IN_IGNORED: int | 104 | @type IN_IGNORED: int |
885 | 105 | @cvar IN_ONLYDIR: only watch the path if it is a directory (new | 105 | @cvar IN_ONLYDIR: only watch the path if it is a directory (new |
886 | 106 | in kernel 2.6.15). | 106 | in kernel 2.6.15). |
887 | 107 | @type IN_ONLYDIR: int | 107 | @type IN_ONLYDIR: int |
888 | 108 | @cvar IN_DONT_FOLLOW: don't follow a symlink (new in kernel 2.6.15). | 108 | @cvar IN_DONT_FOLLOW: don't follow a symlink (new in kernel 2.6.15). |
889 | 109 | IN_ONLYDIR we can make sure that we don't watch | 109 | IN_ONLYDIR we can make sure that we don't watch |
890 | 110 | the target of symlinks. | 110 | the target of symlinks. |
891 | 111 | @type IN_DONT_FOLLOW: int | 111 | @type IN_DONT_FOLLOW: int |
    @cvar IN_MASK_ADD: add to the mask of an already existing watch (new
                       in kernel 2.6.14).
    @type IN_MASK_ADD: int
    @cvar IN_ISDIR: Event occurred against dir.
    @type IN_ISDIR: int
    @cvar IN_ONESHOT: Only send event once.
    @type IN_ONESHOT: int
    @cvar ALL_EVENTS: Alias for considering all of the events.
    @type ALL_EVENTS: int
    """

    # The idea here is 'configuration-as-code' - this way, we get
    # our nice class constants, but we also get nice human-friendly text
    # mappings to do lookups against as well, for free:
    FLAG_COLLECTIONS = {'OP_FLAGS': {
        'IN_ACCESS'        : 0x00000001,  # File was accessed
        'IN_MODIFY'        : 0x00000002,  # File was modified
        'IN_ATTRIB'        : 0x00000004,  # Metadata changed
        'IN_CLOSE_WRITE'   : 0x00000008,  # Writable file was closed
        'IN_CLOSE_NOWRITE' : 0x00000010,  # Unwritable file closed
        'IN_OPEN'          : 0x00000020,  # File was opened
        'IN_MOVED_FROM'    : 0x00000040,  # File was moved from X
        'IN_MOVED_TO'      : 0x00000080,  # File was moved to Y
        'IN_CREATE'        : 0x00000100,  # Subfile was created
        'IN_DELETE'        : 0x00000200,  # Subfile was deleted
        'IN_DELETE_SELF'   : 0x00000400,  # Self (watched item itself)
                                          # was deleted
        'IN_MOVE_SELF'     : 0x00000800,  # Self(watched item itself) was moved
        },
        'EVENT_FLAGS': {
        'IN_UNMOUNT'       : 0x00002000,  # Backing fs was unmounted
        'IN_Q_OVERFLOW'    : 0x00004000,  # Event queued overflowed
        'IN_IGNORED'       : 0x00008000,  # File was ignored
        },
        'SPECIAL_FLAGS': {
        'IN_ONLYDIR'       : 0x01000000,  # only watch the path if it is a
                                          # directory
        'IN_DONT_FOLLOW'   : 0x02000000,  # don't follow a symlink
        'IN_MASK_ADD'      : 0x20000000,  # add to the mask of an already
                                          # existing watch
        'IN_ISDIR'         : 0x40000000,  # event occurred against dir
        'IN_ONESHOT'       : 0x80000000,  # only send event once
        },
        }

    def maskname(mask):
        """
        Returns the event name associated to mask. IN_ISDIR is appended to
        the result when appropriate. Note: only one event is returned, because
        only one event can be raised at a given time.

        @param mask: mask.
        @type mask: int
        @return: event name.
        @rtype: str
        """
        ms = mask
        name = '%s'
        if mask & IN_ISDIR:
            ms = mask - IN_ISDIR
            name = '%s|IN_ISDIR'
        return name % EventsCodes.ALL_VALUES[ms]

    maskname = staticmethod(maskname)


# So let's now turn the configuration into code
EventsCodes.ALL_FLAGS = {}
EventsCodes.ALL_VALUES = {}
for flagc, valc in EventsCodes.FLAG_COLLECTIONS.items():
    # Make the collections' members directly accessible through the
    # class dictionary
    setattr(EventsCodes, flagc, valc)

    # Collect all the flags under a common umbrella
    EventsCodes.ALL_FLAGS.update(valc)

    # Make the individual masks accessible as 'constants' at globals() scope
    # and masknames accessible by values.
    for name, val in valc.items():
        globals()[name] = val
        EventsCodes.ALL_VALUES[val] = name


# all 'normal' events
ALL_EVENTS = reduce(lambda x, y: x | y, EventsCodes.OP_FLAGS.values())
EventsCodes.ALL_FLAGS['ALL_EVENTS'] = ALL_EVENTS
EventsCodes.ALL_VALUES[ALL_EVENTS] = 'ALL_EVENTS'

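The flag tables and maskname() lookup above can be exercised without the module itself. The following is a minimal stand-alone sketch (the constant values are copied from the OP_FLAGS table, not imported from pyinotify_agnostic):

```python
import functools

# Flag values copied from the OP_FLAGS table above (a small subset).
OP_FLAGS = {
    'IN_MOVED_TO': 0x00000080,
    'IN_CREATE':   0x00000100,
    'IN_DELETE':   0x00000200,
}
IN_ISDIR = 0x40000000  # special flag: event occurred against a directory

# ALL_EVENTS is the OR of every operation flag, as in the module.
ALL_EVENTS = functools.reduce(lambda x, y: x | y, OP_FLAGS.values())

# Reverse lookup table, mirroring EventsCodes.ALL_VALUES.
ALL_VALUES = dict((v, k) for k, v in OP_FLAGS.items())


def maskname(mask):
    """Return the readable name for a mask, appending IN_ISDIR if set."""
    ms = mask
    name = '%s'
    if mask & IN_ISDIR:
        ms = mask - IN_ISDIR
        name = '%s|IN_ISDIR'
    return name % ALL_VALUES[ms]


print(maskname(OP_FLAGS['IN_CREATE'] | IN_ISDIR))  # prints: IN_CREATE|IN_ISDIR
```

Only one operation flag is set per event, so the reverse lookup on the stripped mask is enough to name it.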
class _Event:
    """
    Event structure, represent events raised by the system. This
    is the base class and should be subclassed.

    """
    def __init__(self, dict_):
        """
        Attach attributes (contained in dict_) to self.

        @param dict_: Set of attributes.
        @type dict_: dictionary
        """
        for tpl in dict_.items():
            setattr(self, *tpl)

    def __repr__(self):
        """
        @return: Generic event string representation.
        @rtype: str
        """
        s = ''
        for attr, value in sorted(self.__dict__.items(), key=lambda x: x[0]):
            if attr.startswith('_'):
                continue
            if attr == 'mask':
                value = hex(getattr(self, attr))
            elif isinstance(value, basestring) and not value:
                value = "''"
            s += ' %s%s%s' % (output_format.field_name(attr),
                              output_format.punctuation('='),
                              output_format.field_value(value))

        s = '%s%s%s %s' % (output_format.punctuation('<'),
                           output_format.class_name(self.__class__.__name__),
                           s,
                           output_format.punctuation('>'))
        return s

    def __str__(self):
        return repr(self)


class _RawEvent(_Event):
    """
    Raw event, it contains only the informations provided by the system.
    It doesn't infer anything.
    """
    def __init__(self, wd, mask, cookie, name):
        """
        @param wd: Watch Descriptor.
        @type wd: int
        @param mask: Bitmask of events.
        @type mask: int
        @param cookie: Cookie.
        @type cookie: int
        @param name: Basename of the file or directory against which the
                     event was raised in case where the watched directory
                     is the parent directory. None if the event was raised
                     on the watched item itself.
        @type name: string or None
        """
        # Use this variable to cache the result of str(self), this object
        # is immutable.
        self._str = None
        # name: remove trailing '\0'
        d = {'wd': wd,
             'mask': mask,
             'cookie': cookie,
             'name': name.rstrip('\0')}
        _Event.__init__(self, d)
        logging.debug(str(self))

    def __str__(self):
        if self._str is None:
            self._str = _Event.__str__(self)
        return self._str


class Event(_Event):
    """
    This class contains all the useful informations about the observed
    event. However, the presence of each field is not guaranteed and
    depends on the type of event. In effect, some fields are irrelevant
    for some kind of event (for example 'cookie' is meaningless for
    IN_CREATE whereas it is mandatory for IN_MOVE_TO).

    The possible fields are:
      - wd (int): Watch Descriptor.
      - mask (int): Mask.
      - maskname (str): Readable event name.
      - path (str): path of the file or directory being watched.
      - name (str): Basename of the file or directory against which the
            event was raised in case where the watched directory
            is the parent directory. None if the event was raised
            on the watched item itself. This field is always provided
            even if the string is ''.
      - pathname (str): Concatenation of 'path' and 'name'.
      - src_pathname (str): Only present for IN_MOVED_TO events and only in
            the case where IN_MOVED_FROM events are watched too. Holds the
            source pathname from where pathname was moved from.
      - cookie (int): Cookie.
      - dir (bool): True if the event was raised against a directory.

    """
    def __init__(self, raw):
        """
        Concretely, this is the raw event plus inferred infos.
        """
        _Event.__init__(self, raw)
        self.maskname = EventsCodes.maskname(self.mask)
        if COMPATIBILITY_MODE:
            self.event_name = self.maskname
        try:
            if self.name:
                self.pathname = os.path.abspath(os.path.join(self.path,
                                                             self.name))
            else:
                self.pathname = os.path.abspath(self.path)
        except AttributeError, err:
            # Usually it is not an error some events are perfectly valids
            # despite the lack of these attributes.
            logging.debug(err)


class ProcessEventError(PyinotifyError):
    """
    ProcessEventError Exception. Raised on ProcessEvent error.
    """
    def __init__(self, err):
        """
        @param err: Exception error description.
        @type err: string
        """
        PyinotifyError.__init__(self, err)


class _ProcessEvent:
    """
    Abstract processing event class.
    """
    def __call__(self, event):
        """
        To behave like a functor the object must be callable.
        This method is a dispatch method. Its lookup order is:
        1. process_MASKNAME method
        2. process_FAMILY_NAME method
        3. otherwise calls process_default

        @param event: Event to be processed.
        @type event: Event object
        @return: By convention when used from the ProcessEvent class:
                 - Returning False or None (default value) means keep on
                   executing next chained functors (see chain.py example).
                 - Returning True instead means do not execute next
                   processing functions.
        @rtype: bool
        @raise ProcessEventError: Event object undispatchable,
                                  unknown event.
        """
        stripped_mask = event.mask - (event.mask & IN_ISDIR)
        maskname = EventsCodes.ALL_VALUES.get(stripped_mask)
        if maskname is None:
            raise ProcessEventError("Unknown mask 0x%08x" % stripped_mask)

        # 1- look for process_MASKNAME
        meth = getattr(self, 'process_' + maskname, None)
        if meth is not None:
            return meth(event)
        # 2- look for process_FAMILY_NAME
        meth = getattr(self, 'process_IN_' + maskname.split('_')[1], None)
        if meth is not None:
            return meth(event)
        # 3- default call method process_default
        return self.process_default(event)

    def __repr__(self):
        return '<%s>' % self.__class__.__name__

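The three-step lookup in _ProcessEvent.__call__ above can be illustrated with a minimal stand-alone sketch (it copies the dispatch logic rather than importing the module, and uses a hand-rolled ALL_VALUES table):

```python
IN_ISDIR = 0x40000000
# Hand-rolled reverse table for three masks from the flag tables above.
ALL_VALUES = {
    0x00000008: 'IN_CLOSE_WRITE',
    0x00000020: 'IN_OPEN',
    0x00000200: 'IN_DELETE',
}


class FakeEvent(object):
    """Stand-in for Event: only the mask attribute matters here."""
    def __init__(self, mask):
        self.mask = mask


class Handler(object):
    def __call__(self, event):
        # Same lookup order as _ProcessEvent.__call__:
        stripped_mask = event.mask - (event.mask & IN_ISDIR)
        maskname = ALL_VALUES[stripped_mask]
        # 1- exact method, e.g. process_IN_DELETE
        meth = getattr(self, 'process_' + maskname, None)
        if meth is not None:
            return meth(event)
        # 2- family method, e.g. process_IN_CLOSE for IN_CLOSE_WRITE
        meth = getattr(self, 'process_IN_' + maskname.split('_')[1], None)
        if meth is not None:
            return meth(event)
        # 3- fallback
        return self.process_default(event)

    def process_IN_DELETE(self, event):
        return 'exact'

    def process_IN_CLOSE(self, event):
        return 'family'

    def process_default(self, event):
        return 'default'


h = Handler()
print(h(FakeEvent(0x00000200)))  # prints: exact
print(h(FakeEvent(0x00000008)))  # prints: family
print(h(FakeEvent(0x00000020)))  # prints: default
```

IN_CLOSE_WRITE falls through to the family method because no process_IN_CLOSE_WRITE is defined, while IN_OPEN has neither an exact nor a distinct family method and lands in process_default.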
class ProcessEvent(_ProcessEvent):
    """
    Process events objects, can be specialized via subclassing, thus its
    behavior can be overriden:

    Note: you should not override __init__ in your subclass instead define
    a my_init() method, this method will be called automatically from the
    constructor of this class with its optionals parameters.

      1. Provide specialized individual methods, e.g. process_IN_DELETE for
         processing a precise type of event (e.g. IN_DELETE in this case).
      2. Or/and provide methods for processing events by 'family', e.g.
         process_IN_CLOSE method will process both IN_CLOSE_WRITE and
         IN_CLOSE_NOWRITE events (if process_IN_CLOSE_WRITE and
         process_IN_CLOSE_NOWRITE aren't defined though).
      3. Or/and override process_default for catching and processing all
         the remaining types of events.
    """
    pevent = None

    def __init__(self, pevent=None, **kargs):
        """
        Enable chaining of ProcessEvent instances.

        @param pevent: Optional callable object, will be called on event
                       processing (before self).
        @type pevent: callable
        @param kargs: This constructor is implemented as a template method
                      delegating its optionals keyworded arguments to the
                      method my_init().
        @type kargs: dict
        """
        self.pevent = pevent
        self.my_init(**kargs)

    def my_init(self, **kargs):
        """
        This method is called from ProcessEvent.__init__(). This method is
        empty here and must be redefined to be useful. In effect, if you
        need to specifically initialize your subclass' instance then you
        just have to override this method in your subclass. Then all the
        keyworded arguments passed to ProcessEvent.__init__() will be
        transmitted as parameters to this method. Beware you MUST pass
        keyword arguments though.

        @param kargs: optional delegated arguments from __init__().
        @type kargs: dict
        """
        pass

    def __call__(self, event):
        stop_chaining = False
        if self.pevent is not None:
            # By default methods return None so we set as guideline
            # that methods asking for stop chaining must explicitely
            # return non None or non False values, otherwise the default
            # behavior will be to accept chain call to the corresponding
            # local method.
            stop_chaining = self.pevent(event)
        if not stop_chaining:
            return _ProcessEvent.__call__(self, event)

    def nested_pevent(self):
        return self.pevent

    def process_IN_Q_OVERFLOW(self, event):
        """
        By default this method only reports warning messages, you can
        overredide it by subclassing ProcessEvent and implement your own
        process_IN_Q_OVERFLOW method. The actions you can take on receiving
        this event is either to update the variable max_queued_events in order
        to handle more simultaneous events or to modify your code in order to
        accomplish a better filtering diminishing the number of raised events.
        Because this method is defined, IN_Q_OVERFLOW will never get
        transmitted as arguments to process_default calls.

        @param event: IN_Q_OVERFLOW event.
        @type event: dict
        """
        logging.warning('Event queue overflowed.')

    def process_default(self, event):
        """
        Default processing event method. By default does nothing. Subclass
        ProcessEvent and redefine this method in order to modify its behavior.

        @param event: Event to be processed. Can be of any type of events but
                      IN_Q_OVERFLOW events (see method process_IN_Q_OVERFLOW).
        @type event: Event instance
        """
        pass

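The chaining contract in ProcessEvent.__call__ above (pevent runs first; a truthy return value stops the chain) and the my_init() delegation can be sketched stand-alone as follows (this mirrors the code above rather than importing it; class names here are illustrative):

```python
class ChainLink(object):
    """Stand-alone sketch of ProcessEvent's chaining behavior."""

    def __init__(self, pevent=None, **kargs):
        # As in ProcessEvent: keyword arguments are delegated to my_init().
        self.pevent = pevent
        self.my_init(**kargs)

    def my_init(self, **kargs):
        pass

    def __call__(self, event):
        stop_chaining = False
        if self.pevent is not None:
            # A truthy return value from the chained functor stops the chain.
            stop_chaining = self.pevent(event)
        if not stop_chaining:
            return self.handle(event)

    def handle(self, event):
        return None


class Recorder(ChainLink):
    def my_init(self, stop=False):
        # Reaches here via __init__'s **kargs, as documented above.
        self.log = []
        self.stop = stop

    def handle(self, event):
        self.log.append(event)
        return self.stop


first = Recorder(stop=True)
second = Recorder(pevent=first)  # first runs before second
second('event-1')
print(first.log, second.log)     # prints: ['event-1'] []
```

Because first returns True, second's own handle() is never invoked, which is exactly the "do not execute next processing functions" convention documented in _ProcessEvent.__call__.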
class PrintAllEvents(ProcessEvent):
    """
    Dummy class used to print events strings representations. For instance this
    class is used from command line to print all received events to stdout.
    """
    def my_init(self, out=None):
        """
        @param out: Where events will be written.
        @type out: Object providing a valid file object interface.
        """
        if out is None:
            out = sys.stdout
        self._out = out

    def process_default(self, event):
        """
        Writes event string representation to file object provided to
        my_init().

        @param event: Event to be processed. Can be of any type of events but
                      IN_Q_OVERFLOW events (see method process_IN_Q_OVERFLOW).
        @type event: Event instance
        """
        self._out.write(str(event))
        self._out.write('\n')
        self._out.flush()


class WatchManagerError(Exception):
    """
    WatchManager Exception. Raised on error encountered on watches
    operations.

    """
    def __init__(self, msg, wmd):
        """
        @param msg: Exception string's description.
        @type msg: string
        @param wmd: This dictionary contains the wd assigned to paths of the
                    same call for which watches were successfully added.
        @type wmd: dict
        """
        self.wmd = wmd
        Exception.__init__(self, msg)
The only real changes for pyinotify_agnostic are adding `import sys` and:

805          def simple(self, s, attribute):
806     +        if isinstance(s, unicode):
807     +            s = s.encode(sys.getfilesystemencoding(), 'replace')
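The fix above guards string formatting against non-ASCII filenames by encoding unicode text with the filesystem encoding and the 'replace' error handler, which substitutes '?' for anything the codec cannot represent. A small Python 3 analogue of the same idea (Python 3 spells unicode as str; the target codec is hard-coded to 'ascii' here, purely so the replacement is visible regardless of the local filesystem encoding):

```python
def to_display_bytes(s):
    """Encode text for output, replacing unencodable characters.

    Illustrative helper (not from the branch): mirrors the
    encode(..., 'replace') strategy in the fix above.
    """
    if isinstance(s, str):
        s = s.encode('ascii', 'replace')
    return s


print(to_display_bytes('a\u00f1o'))  # prints: b'a?o'
```

In the real code the codec comes from sys.getfilesystemencoding(), so on a UTF-8 system most filenames encode losslessly and 'replace' only kicks in as a last resort.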