Merge lp:~seif/zeitgeist/fix-738555 into lp:zeitgeist/0.1

Proposed by Seif Lotfy
Status: Merged
Merged at revision: 1722
Proposed branch: lp:~seif/zeitgeist/fix-738555
Merge into: lp:zeitgeist/0.1
Diff against target: 60 lines (+19/-7)
1 file modified
_zeitgeist/engine/extensions/datasource_registry.py (+19/-7)
To merge this branch: bzr merge lp:~seif/zeitgeist/fix-738555
Reviewer Review Type Date Requested Status
Siegfried Gevatter Approve
Review via email: mp+58818@code.launchpad.net

Description of the change

Add a try except block in the _write_to_disk method
Add a counter to write to disk upon every 20 events
Fixed some indents
----
OK, so a primary solution for this was putting the content of _write_to_disk
in datasource_registry.py inside a plain try/except block.
My primary fear was that the next extension, or Zeitgeist itself, would crash
while trying to write to the disk. This was not the case; the error was caught.
The reason the other components did not crash is:
1) This was the only extension writing to disk upon unloading.
2) The engine does not write to disk upon exit, since all events are written
to disk upon entrance.
So my preferred solution would be to put _write_to_disk inside a plain
try/except block for now...

Revision history for this message
Seif Lotfy (seif) wrote :

Add a try except block in the _write_to_disk method
Add a counter to write to disk upon every 20 events
Fixed some indents

lp:~seif/zeitgeist/fix-738555 updated
1716. By Siegfried Gevatter

Enable query filtering by StorageState.

Revision history for this message
Siegfried Gevatter (rainct) wrote :

> Add a try except block in the _write_to_disk method
> Fixed some indents
Looks good. Feel free to commit this part already.

> Add a counter to write to disk upon every 20 events
Why every 20 events? I don't like this; it's too arbitrary (and it ends up writing to disk a lot again).

Also, it shouldn't really matter much, but I don't like the modulo approach, since it does a useless division and the variable grows without bound. Something like this would look nicer to me:
if self._counter == 0:
    self._write_to_disk()
    self._counter = 19
else:
    self._counter -= 1
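
The suggested countdown can be made runnable as a small helper class. The class name and the write callback are illustrative (the real code would live inside the registry extension), but the counter logic is exactly the snippet above: write on the first call, then on every n-th call, decrementing instead of using modulo.

```python
class CountdownFlusher:
    """Invoke `write` on the first call and then on every n-th call,
    using a decrementing counter instead of modulo arithmetic."""

    def __init__(self, n, write):
        self._n = n
        self._counter = 0  # 0 means: write on the next call
        self._write = write

    def event_inserted(self):
        if self._counter == 0:
            self._write()
            self._counter = self._n - 1  # e.g. 19 for n=20
        else:
            self._counter -= 1
```

For n=20, forty insertions trigger exactly two disk writes (on the 1st and 21st call), and the counter never exceeds n-1.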

lp:~seif/zeitgeist/fix-738555 updated
1717. By Siegfried Gevatter

Fix typo in singleton.py comment.

1718. By Siegfried Gevatter

Update NEWS.

1719. By Siegfried Gevatter

test/engine-test.py:
 - Fix typo in method names (MimetType -> MimeType).
 - Rename *Origin methods to *SubjectOrigin.

1720. By Seif Lotfy

replace the counter in datasource_registry with a timeout to write to disk

1721. By Seif Lotfy

added a dirty flag to minimize writing to disk

1722. By Seif Lotfy

merge with trunk
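
The final design (revisions 1720 and 1721) replaces the per-event counter with a periodic timeout plus a dirty flag. A minimal sketch of that pattern, assuming a simplified registry class: the gobject scheduling call is shown only as a comment, and a write counter stands in for the actual pickle.dump.

```python
DISK_WRITE_TIMEOUT = 5 * 60 * 1000  # milliseconds, as in the diff

class RegistrySketch:
    """Illustrative stand-in for the DataSourceRegistry flush logic."""

    def __init__(self):
        self._dirty = True  # force an initial flush
        self.writes = 0
        # In the real extension the flush is scheduled with:
        #   gobject.timeout_add(DISK_WRITE_TIMEOUT, self._write_to_disk)

    def pre_insert_event(self):
        self._dirty = True  # registry changed; next timeout will flush

    def _write_to_disk(self):
        if self._dirty:
            self._dirty = False
            self.writes += 1  # stand-in for pickling the registry
        return True  # keep the gobject timeout source alive
```

The dirty flag means a timeout tick with no intervening changes costs nothing, and returning True is what keeps a gobject/GLib timeout callback rescheduled.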

Revision history for this message
Siegfried Gevatter (rainct) wrote :

Haven't tested it, but looks good :).

review: Approve

Preview Diff

=== modified file '_zeitgeist/engine/extensions/datasource_registry.py'
--- _zeitgeist/engine/extensions/datasource_registry.py 2011-01-17 15:54:47 +0000
+++ _zeitgeist/engine/extensions/datasource_registry.py 2011-04-22 21:58:00 +0000
@@ -23,6 +23,7 @@
 import cPickle as pickle
 import dbus
 import dbus.service
+import gobject
 import logging
 
 from zeitgeist.datamodel import get_timestamp_for_now
@@ -36,6 +37,7 @@
 REGISTRY_DBUS_OBJECT_PATH = "/org/gnome/zeitgeist/data_source_registry"
 REGISTRY_DBUS_INTERFACE = "org.gnome.zeitgeist.DataSourceRegistry"
 SIG_FULL_DATASOURCE = "(sssa("+constants.SIG_EVENT+")bxb)"
+DISK_WRITE_TIMEOUT = 5*60*1000 # 5 Minutes
 
 class DataSource(OrigDataSource):
 @classmethod
@@ -88,16 +90,25 @@
 
 # Connect to client disconnection signals
 dbus.SessionBus().add_signal_receiver(self._name_owner_changed,
- signal_name="NameOwnerChanged",
- dbus_interface=dbus.BUS_DAEMON_IFACE,
- arg2="", # only match services with no new owner
- )
-
+ signal_name="NameOwnerChanged",
+ dbus_interface=dbus.BUS_DAEMON_IFACE,
+ arg2="", # only match services with no new owner
+ )
+
+ self._dirty = True
+ gobject.timeout_add(DISK_WRITE_TIMEOUT, self._write_to_disk)
+
 def _write_to_disk(self):
 data = [DataSource.get_plain(datasource) for datasource in
 self._registry.itervalues()]
- with open(DATA_FILE, "w") as data_file:
- pickle.dump(data, data_file)
+ try:
+ if self._dirty:
+ self._dirty = False
+ with open(DATA_FILE, "w") as data_file:
+ pickle.dump(data, data_file)
+ except Exception, e:
+ log.warn("Failed to write to data file %s: %s" % (DATA_FILE, e))
+ return True
 #log.debug("Data-source registry update written to disk.")
 
 def pre_insert_event(self, event, sender):
@@ -106,6 +117,7 @@
 datasource = self._registry[unique_id]
 # Update LastSeen time
 datasource.last_seen = get_timestamp_for_now()
+ self._dirty = True
 # Check whether the data-source is allowed to insert events
 if not datasource.enabled:
 return None
