Merge lp:~zeitgeist/zeitgeist/fix-fts-block into lp:~zeitgeist/zeitgeist/bluebird
- fix-fts-block
- Merge into bluebird
| Status | Merged |
|---|---|
| Merged at revision | 369 |
| Proposed branch | lp:~zeitgeist/zeitgeist/fix-fts-block |
| Merge into | lp:~zeitgeist/zeitgeist/bluebird |
| Diff against target | 444 lines (+13/-398), 1 file modified: extensions/fts-python/sql.py (+13/-398) |
| To merge this branch | bzr merge lp:~zeitgeist/zeitgeist/fix-fts-block |
| Related bugs | |

| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Siegfried Gevatter | Approve | | |
| Michal Hruby (community) | Approve | | |

Review via email: mp+88660@code.launchpad.net
Commit message
Description of the change
Removed all methods that CREATE and INSERT into the Zeitgeist DB, so the FTS extension can no longer block it.
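Since the change makes the FTS extension a pure reader of Zeitgeist's database, one way to enforce that invariant (a sketch only, not part of this merge; the helper name is hypothetical) is to open the database read-only through SQLite's URI syntax, so the process can never take a write lock:

```python
import sqlite3

def connect_readonly(db_path):
    """Open `db_path` read-only so this process can never take a write lock.

    Hypothetical helper, not part of the merge. Uses SQLite's URI filename
    syntax via sqlite3.connect(..., uri=True), available since Python 3.4.
    Any write attempt on the returned connection raises OperationalError.
    """
    return sqlite3.connect("file:%s?mode=ro" % db_path, uri=True)
```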
Siegfried Gevatter (rainct) wrote:
We probably still need to check the schema version (and figure out what to do if it's wrong, like do we want it to sleep for a while in case Zeitgeist is performing an upgrade?).
Seif Lotfy (seif) wrote:
If its wrong dont run :P
On Mon, Jan 16, 2012 at 12:26 PM, Siegfried Gevatter <email address hidden>wrote:
> We probably still need to check the schema version (and figure out what to
> do if it's wrong, like do we want it to sleep for a while in case Zeitgeist
> is performing an upgrade?).
> --
> https:/
> You proposed lp:~zeitgeist/zeitgeist/fix-fts-block for merging.
>
Siegfried Gevatter (rainct) wrote:
2012/1/16 Seif Lotfy <email address hidden>:
> If its wrong dont run :P
OK, yeah (it'll start again with the next D-Bus request, so just
quitting is fine).
Seif Lotfy (seif) wrote:
Seems like I found a new bug...
Zeitgeist does not start again after killing it and then trying to do
something with synapse. I have to start it by hand
On Mon, Jan 16, 2012 at 1:10 PM, Siegfried Gevatter <email address hidden>wrote:
> 2012/1/16 Seif Lotfy <email address hidden>:
> > If its wrong dont run :P
>
> OK, yeah (it'll start again with the next D-Bus request, so just
> quitting is fine).
>
> --
> https:/
> Your team Zeitgeist Framework Team is requested to review the proposed
> merge of lp:~zeitgeist/zeitgeist/fix-fts-block into lp:zeitgeist.
>
>
Michal Hruby (mhr3) wrote:
> We probably still need to check the schema version (and figure out what to do
> if it's wrong, like do we want it to sleep for a while in case Zeitgeist is
> performing an upgrade?).
If opening the DB fails, just quit, extensions are run after the DB is set up, so it should be safe.
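The quit-on-failure approach discussed here (open the DB or exit, and let the next D-Bus request restart the extension) can be sketched as follows; the helper name is hypothetical:

```python
import sqlite3
import sys

def open_or_quit(db_path):
    """Hypothetical helper: open the DB, or exit and let D-Bus activation
    restart the extension on the next request (as discussed above)."""
    try:
        conn = sqlite3.connect(db_path)
        conn.execute("SELECT 1")  # force a real open/read, not a lazy one
        return conn
    except sqlite3.OperationalError:
        # Zeitgeist may be mid-setup or mid-upgrade; quit instead of blocking.
        sys.exit(1)
```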
Michal Hruby (mhr3) wrote:
> Seems like I found a new bug...
> Zeitgeist does not start again after killing it and then trying to do
> something with synapse. I have to start it by hand
Although not nice, it doesn't affect this merge...
Seif Lotfy (seif) wrote:
Can we merge this first? Please?
Michal Hruby (mhr3) wrote:
I'm fine with it, waiting for final ACK from RainCT.
Siegfried Gevatter (rainct) wrote:
It's still not checking the schema version and quitting if it's wrong.
- 369. By Seif Lotfy: Added core schema check and exit if invalid
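The check added in r369 (shown in full in the preview diff below) boils down to reading the stored core schema version and exiting with code 27 when it is older than expected. A minimal standalone sketch, with `CORE_SCHEMA_VERSION` assumed for illustration (the real values live in the `constants` module):

```python
import sqlite3
import sys

CORE_SCHEMA = "core"
CORE_SCHEMA_VERSION = 4  # assumed for illustration

def check_schema_or_exit(cursor):
    """Exit with code 27 (as in the diff) when the stored core schema
    version is older than the one this extension expects."""
    try:
        cursor.execute("SELECT version FROM schema_version WHERE schema=?",
                       (CORE_SCHEMA,))
        row = cursor.fetchone()
        version = row[0] if row else 0
    except sqlite3.OperationalError:
        version = 0  # schema_version table missing: treat as version 0
    if version < CORE_SCHEMA_VERSION:
        raise SystemExit(27)
```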
Seif Lotfy (seif) wrote:
Updated, please check.
Siegfried Gevatter (rainct) wrote:
Haven't tested, but I guess it should be fine.
Preview Diff
1 | === modified file 'extensions/fts-python/sql.py' |
2 | --- extensions/fts-python/sql.py 2011-10-17 07:46:27 +0000 |
3 | +++ extensions/fts-python/sql.py 2012-01-20 14:05:26 +0000 |
4 | @@ -99,398 +99,28 @@ |
5 | log.debug ("Schema '%s' not found: %s" % (schema_name, e)) |
6 | return 0 |
7 | |
8 | -def _set_schema_version (cursor, schema_name, version): |
9 | - """ |
10 | - Sets the version of `schema_name` to `version` |
11 | - """ |
12 | - cursor.execute(""" |
13 | - CREATE TABLE IF NOT EXISTS schema_version |
14 | - (schema VARCHAR PRIMARY KEY ON CONFLICT REPLACE, version INT) |
15 | - """) |
16 | - |
17 | - # The 'ON CONFLICT REPLACE' on the PK converts INSERT to UPDATE |
18 | - # when appriopriate |
19 | - cursor.execute(""" |
20 | - INSERT INTO schema_version VALUES (?, ?) |
21 | - """, (schema_name, version)) |
22 | - cursor.connection.commit() |
23 | - |
24 | -def _do_schema_upgrade (cursor, schema_name, old_version, new_version): |
25 | - """ |
26 | - Try and upgrade schema `schema_name` from version `old_version` to |
27 | - `new_version`. This is done by executing a series of upgrade modules |
28 | - named '_zeitgeist.engine.upgrades.$schema_name_$(i)_$(i+1)' and executing |
29 | - the run(cursor) method of those modules until new_version is reached |
30 | - """ |
31 | - _do_schema_backup() |
32 | - _set_schema_version(cursor, schema_name, -1) |
33 | - for i in xrange(old_version, new_version): |
34 | - # Fire off the right upgrade module |
35 | - log.info("Upgrading database '%s' from version %s to %s. " |
36 | - "This may take a while" % (schema_name, i, i+1)) |
37 | - upgrader_name = "%s_%s_%s" % (schema_name, i, i+1) |
38 | - module = __import__ ("_zeitgeist.engine.upgrades.%s" % upgrader_name) |
39 | - eval("module.engine.upgrades.%s.run(cursor)" % upgrader_name) |
40 | - |
41 | - # Update the schema version |
42 | - _set_schema_version(cursor, schema_name, new_version) |
43 | - |
44 | - log.info("Upgrade succesful") |
45 | - |
46 | -def _check_core_schema_upgrade (cursor): |
47 | - """ |
48 | - Checks whether the schema is good or, if it is outdated, triggers any |
49 | - necessary upgrade scripts. This method will also attempt to restore a |
50 | - database backup in case a previous upgrade was cancelled midway. |
51 | - |
52 | - It returns a boolean indicating whether the schema was good and the |
53 | - database cursor (which will have changed if the database was restored). |
54 | - """ |
55 | - # See if we have the right schema version, and try an upgrade if needed |
56 | - core_schema_version = _get_schema_version(cursor, constants.CORE_SCHEMA) |
57 | - if core_schema_version >= constants.CORE_SCHEMA_VERSION: |
58 | - return True, cursor |
59 | - else: |
60 | - try: |
61 | - if core_schema_version <= -1: |
62 | - cursor.connection.commit() |
63 | - cursor.connection.close() |
64 | - _do_schema_restore() |
65 | - cursor = _connect_to_db(constants.DATABASE_FILE) |
66 | - core_schema_version = _get_schema_version(cursor, |
67 | - constants.CORE_SCHEMA) |
68 | - log.exception("Database corrupted at upgrade -- " |
69 | - "upgrading from version %s" % core_schema_version) |
70 | - |
71 | - _do_schema_upgrade (cursor, |
72 | - constants.CORE_SCHEMA, |
73 | - core_schema_version, |
74 | - constants.CORE_SCHEMA_VERSION) |
75 | - |
76 | - # Don't return here. The upgrade process might depend on the |
77 | - # tables, indexes, and views being set up (to avoid code dup) |
78 | - log.info("Running post upgrade setup") |
79 | - return False, cursor |
80 | - except sqlite3.OperationalError: |
81 | - # Something went wrong while applying the upgrade -- this is |
82 | - # probably due to a non existing table (this occurs when |
83 | - # applying core_3_4, for example). We just need to fall through |
84 | - # the rest of create_db to fix this... |
85 | - log.exception("Database corrupted -- proceeding") |
86 | - return False, cursor |
87 | - except Exception, e: |
88 | - log.exception( |
89 | - "Failed to upgrade database '%s' from version %s to %s: %s" % \ |
90 | - (constants.CORE_SCHEMA, core_schema_version, |
91 | - constants.CORE_SCHEMA_VERSION, e)) |
92 | - raise SystemExit(27) |
93 | - |
94 | -def _do_schema_backup (): |
95 | - shutil.copyfile(constants.DATABASE_FILE, constants.DATABASE_FILE_BACKUP) |
96 | - |
97 | -def _do_schema_restore (): |
98 | - shutil.move(constants.DATABASE_FILE_BACKUP, constants.DATABASE_FILE) |
99 | - |
100 | def _connect_to_db(file_path): |
101 | conn = sqlite3.connect(file_path) |
102 | conn.row_factory = sqlite3.Row |
103 | cursor = conn.cursor(UnicodeCursor) |
104 | return cursor |
105 | |
106 | -def create_db(file_path): |
107 | - """Create the database and return a default cursor for it""" |
108 | - start = time.time() |
109 | - log.info("Using database: %s" % file_path) |
110 | - new_database = not os.path.exists(file_path) |
111 | - cursor = _connect_to_db(file_path) |
112 | - |
113 | - # Seif: as result of the optimization story (LP: #639737) we are setting |
114 | - # journal_mode to WAL if possible, this change is irreversible but |
115 | - # gains us a big speedup, for more information see http://www.sqlite.org/wal.html |
116 | - # FIXME: Set journal_mode to WAL when teamdecision has been take. |
117 | - # cursor.execute("PRAGMA journal_mode = WAL") |
118 | - # cursor.execute("PRAGMA journal_mode = DELETE") |
119 | - # Seif: another result of the performance tweaks discussed in (LP: #639737) |
120 | - # we decided to set locking_mode to EXCLUSIVE, from now on only |
121 | - # one connection to the database is allowed to revert this setting set locking_mode to NORMAL. |
122 | - |
123 | - # thekorn: as part of the workaround for (LP: #598666) we need to |
124 | - # create the '_fix_cache' TEMP table on every start, |
125 | - # this table gets purged once the engine gets closed. |
126 | - # When a cached value gets deleted we automatically store the name |
127 | - # of the cache and the value's id to this table. It's then up to |
128 | - # the python code to delete items from the cache based on the content |
129 | - # of this table. |
130 | - cursor.execute("CREATE TEMP TABLE _fix_cache (table_name VARCHAR, id INTEGER)") |
131 | - |
132 | - # Always assume that temporary memory backed DBs have good schemas |
133 | - if constants.DATABASE_FILE != ":memory:" and not new_database: |
134 | - do_upgrade, cursor = _check_core_schema_upgrade(cursor) |
135 | - if do_upgrade: |
136 | - _time = (time.time() - start)*1000 |
137 | - log.debug("Core schema is good. DB loaded in %sms" % _time) |
138 | - return cursor |
139 | - |
140 | - # the following sql statements are only executed if a new database |
141 | - # is created or an update of the core schema was done |
142 | - log.debug("Updating sql schema") |
143 | - # uri |
144 | - cursor.execute(""" |
145 | - CREATE TABLE IF NOT EXISTS uri |
146 | - (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE) |
147 | - """) |
148 | - cursor.execute(""" |
149 | - CREATE UNIQUE INDEX IF NOT EXISTS uri_value ON uri(value) |
150 | - """) |
151 | - |
152 | - # interpretation |
153 | - cursor.execute(""" |
154 | - CREATE TABLE IF NOT EXISTS interpretation |
155 | - (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE) |
156 | - """) |
157 | - cursor.execute(""" |
158 | - CREATE UNIQUE INDEX IF NOT EXISTS interpretation_value |
159 | - ON interpretation(value) |
160 | - """) |
161 | - |
162 | - # manifestation |
163 | - cursor.execute(""" |
164 | - CREATE TABLE IF NOT EXISTS manifestation |
165 | - (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE) |
166 | - """) |
167 | - cursor.execute(""" |
168 | - CREATE UNIQUE INDEX IF NOT EXISTS manifestation_value |
169 | - ON manifestation(value)""") |
170 | - |
171 | - # mimetype |
172 | - cursor.execute(""" |
173 | - CREATE TABLE IF NOT EXISTS mimetype |
174 | - (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE) |
175 | - """) |
176 | - cursor.execute(""" |
177 | - CREATE UNIQUE INDEX IF NOT EXISTS mimetype_value |
178 | - ON mimetype(value)""") |
179 | - |
180 | - # actor |
181 | - cursor.execute(""" |
182 | - CREATE TABLE IF NOT EXISTS actor |
183 | - (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE) |
184 | - """) |
185 | - cursor.execute(""" |
186 | - CREATE UNIQUE INDEX IF NOT EXISTS actor_value |
187 | - ON actor(value)""") |
188 | - |
189 | - # text |
190 | - cursor.execute(""" |
191 | - CREATE TABLE IF NOT EXISTS text |
192 | - (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE) |
193 | - """) |
194 | - cursor.execute(""" |
195 | - CREATE UNIQUE INDEX IF NOT EXISTS text_value |
196 | - ON text(value)""") |
197 | - |
198 | - # payload, there's no value index for payload, |
199 | - # they can only be fetched by id |
200 | - cursor.execute(""" |
201 | - CREATE TABLE IF NOT EXISTS payload |
202 | - (id INTEGER PRIMARY KEY, value BLOB) |
203 | - """) |
204 | - |
205 | - # storage, represented by a StatefulEntityTable |
206 | - cursor.execute(""" |
207 | - CREATE TABLE IF NOT EXISTS storage |
208 | - (id INTEGER PRIMARY KEY, |
209 | - value VARCHAR UNIQUE, |
210 | - state INTEGER, |
211 | - icon VARCHAR, |
212 | - display_name VARCHAR) |
213 | - """) |
214 | - cursor.execute(""" |
215 | - CREATE UNIQUE INDEX IF NOT EXISTS storage_value |
216 | - ON storage(value)""") |
217 | - |
218 | - # event - the primary table for log statements |
219 | - # - Note that event.id is NOT unique, we can have multiple subjects per ID |
220 | - # - Timestamps are integers. |
221 | - # - (event-)origin and subj_id_current are added to the end of the table |
222 | - cursor.execute(""" |
223 | - CREATE TABLE IF NOT EXISTS event ( |
224 | - id INTEGER, |
225 | - timestamp INTEGER, |
226 | - interpretation INTEGER, |
227 | - manifestation INTEGER, |
228 | - actor INTEGER, |
229 | - payload INTEGER, |
230 | - subj_id INTEGER, |
231 | - subj_interpretation INTEGER, |
232 | - subj_manifestation INTEGER, |
233 | - subj_origin INTEGER, |
234 | - subj_mimetype INTEGER, |
235 | - subj_text INTEGER, |
236 | - subj_storage INTEGER, |
237 | - origin INTEGER, |
238 | - subj_id_current INTEGER, |
239 | - CONSTRAINT interpretation_fk FOREIGN KEY(interpretation) |
240 | - REFERENCES interpretation(id) ON DELETE CASCADE, |
241 | - CONSTRAINT manifestation_fk FOREIGN KEY(manifestation) |
242 | - REFERENCES manifestation(id) ON DELETE CASCADE, |
243 | - CONSTRAINT actor_fk FOREIGN KEY(actor) |
244 | - REFERENCES actor(id) ON DELETE CASCADE, |
245 | - CONSTRAINT origin_fk FOREIGN KEY(origin) |
246 | - REFERENCES uri(id) ON DELETE CASCADE, |
247 | - CONSTRAINT payload_fk FOREIGN KEY(payload) |
248 | - REFERENCES payload(id) ON DELETE CASCADE, |
249 | - CONSTRAINT subj_id_fk FOREIGN KEY(subj_id) |
250 | - REFERENCES uri(id) ON DELETE CASCADE, |
251 | - CONSTRAINT subj_id_current_fk FOREIGN KEY(subj_id_current) |
252 | - REFERENCES uri(id) ON DELETE CASCADE, |
253 | - CONSTRAINT subj_interpretation_fk FOREIGN KEY(subj_interpretation) |
254 | - REFERENCES interpretation(id) ON DELETE CASCADE, |
255 | - CONSTRAINT subj_manifestation_fk FOREIGN KEY(subj_manifestation) |
256 | - REFERENCES manifestation(id) ON DELETE CASCADE, |
257 | - CONSTRAINT subj_origin_fk FOREIGN KEY(subj_origin) |
258 | - REFERENCES uri(id) ON DELETE CASCADE, |
259 | - CONSTRAINT subj_mimetype_fk FOREIGN KEY(subj_mimetype) |
260 | - REFERENCES mimetype(id) ON DELETE CASCADE, |
261 | - CONSTRAINT subj_text_fk FOREIGN KEY(subj_text) |
262 | - REFERENCES text(id) ON DELETE CASCADE, |
263 | - CONSTRAINT subj_storage_fk FOREIGN KEY(subj_storage) |
264 | - REFERENCES storage(id) ON DELETE CASCADE, |
265 | - CONSTRAINT unique_event UNIQUE (timestamp, interpretation, manifestation, actor, subj_id) |
266 | - ) |
267 | - """) |
268 | - cursor.execute(""" |
269 | - CREATE INDEX IF NOT EXISTS event_id |
270 | - ON event(id)""") |
271 | - cursor.execute(""" |
272 | - CREATE INDEX IF NOT EXISTS event_timestamp |
273 | - ON event(timestamp)""") |
274 | - cursor.execute(""" |
275 | - CREATE INDEX IF NOT EXISTS event_interpretation |
276 | - ON event(interpretation)""") |
277 | - cursor.execute(""" |
278 | - CREATE INDEX IF NOT EXISTS event_manifestation |
279 | - ON event(manifestation)""") |
280 | - cursor.execute(""" |
281 | - CREATE INDEX IF NOT EXISTS event_actor |
282 | - ON event(actor)""") |
283 | - cursor.execute(""" |
284 | - CREATE INDEX IF NOT EXISTS event_origin |
285 | - ON event(origin)""") |
286 | - cursor.execute(""" |
287 | - CREATE INDEX IF NOT EXISTS event_subj_id |
288 | - ON event(subj_id)""") |
289 | - cursor.execute(""" |
290 | - CREATE INDEX IF NOT EXISTS event_subj_id_current |
291 | - ON event(subj_id_current)""") |
292 | - cursor.execute(""" |
293 | - CREATE INDEX IF NOT EXISTS event_subj_interpretation |
294 | - ON event(subj_interpretation)""") |
295 | - cursor.execute(""" |
296 | - CREATE INDEX IF NOT EXISTS event_subj_manifestation |
297 | - ON event(subj_manifestation)""") |
298 | - cursor.execute(""" |
299 | - CREATE INDEX IF NOT EXISTS event_subj_origin |
300 | - ON event(subj_origin)""") |
301 | - cursor.execute(""" |
302 | - CREATE INDEX IF NOT EXISTS event_subj_mimetype |
303 | - ON event(subj_mimetype)""") |
304 | - cursor.execute(""" |
305 | - CREATE INDEX IF NOT EXISTS event_subj_text |
306 | - ON event(subj_text)""") |
307 | - cursor.execute(""" |
308 | - CREATE INDEX IF NOT EXISTS event_subj_storage |
309 | - ON event(subj_storage)""") |
310 | - |
311 | - # Foreign key constraints don't work in SQLite. Yay! |
312 | - for table, columns in ( |
313 | - ('interpretation', ('interpretation', 'subj_interpretation')), |
314 | - ('manifestation', ('manifestation', 'subj_manifestation')), |
315 | - ('actor', ('actor',)), |
316 | - ('payload', ('payload',)), |
317 | - ('mimetype', ('subj_mimetype',)), |
318 | - ('text', ('subj_text',)), |
319 | - ('storage', ('subj_storage',)), |
320 | - ): |
321 | - for column in columns: |
322 | - cursor.execute(""" |
323 | - CREATE TRIGGER IF NOT EXISTS fkdc_event_%(column)s |
324 | - BEFORE DELETE ON event |
325 | - WHEN ((SELECT COUNT(*) FROM event WHERE %(column)s=OLD.%(column)s) < 2) |
326 | - BEGIN |
327 | - DELETE FROM %(table)s WHERE id=OLD.%(column)s; |
328 | - END; |
329 | - """ % {'column': column, 'table': table}) |
330 | - |
331 | - # ... special cases |
332 | - for num, column in enumerate(('subj_id', 'subj_origin', |
333 | - 'subj_id_current', 'origin')): |
334 | - cursor.execute(""" |
335 | - CREATE TRIGGER IF NOT EXISTS fkdc_event_uri_%(num)d |
336 | - BEFORE DELETE ON event |
337 | - WHEN (( |
338 | - SELECT COUNT(*) |
339 | - FROM event |
340 | - WHERE |
341 | - origin=OLD.%(column)s |
342 | - OR subj_id=OLD.%(column)s |
343 | - OR subj_id_current=OLD.%(column)s |
344 | - OR subj_origin=OLD.%(column)s |
345 | - ) < 2) |
346 | - BEGIN |
347 | - DELETE FROM uri WHERE id=OLD.%(column)s; |
348 | - END; |
349 | - """ % {'num': num+1, 'column': column}) |
350 | - |
351 | - cursor.execute("DROP VIEW IF EXISTS event_view") |
352 | - cursor.execute(""" |
353 | - CREATE VIEW IF NOT EXISTS event_view AS |
354 | - SELECT event.id, |
355 | - event.timestamp, |
356 | - event.interpretation, |
357 | - event.manifestation, |
358 | - event.actor, |
359 | - (SELECT value FROM payload WHERE payload.id=event.payload) |
360 | - AS payload, |
361 | - (SELECT value FROM uri WHERE uri.id=event.subj_id) |
362 | - AS subj_uri, |
363 | - event.subj_id, -- #this directly points to an id in the uri table |
364 | - event.subj_interpretation, |
365 | - event.subj_manifestation, |
366 | - event.subj_origin, |
367 | - (SELECT value FROM uri WHERE uri.id=event.subj_origin) |
368 | - AS subj_origin_uri, |
369 | - event.subj_mimetype, |
370 | - (SELECT value FROM text WHERE text.id = event.subj_text) |
371 | - AS subj_text, |
372 | - (SELECT value FROM storage |
373 | - WHERE storage.id=event.subj_storage) AS subj_storage, |
374 | - (SELECT state FROM storage |
375 | - WHERE storage.id=event.subj_storage) AS subj_storage_state, |
376 | - event.origin, |
377 | - (SELECT value FROM uri WHERE uri.id=event.origin) |
378 | - AS event_origin_uri, |
379 | - (SELECT value FROM uri WHERE uri.id=event.subj_id_current) |
380 | - AS subj_current_uri, |
381 | - event.subj_id_current |
382 | - FROM event |
383 | - """) |
384 | - |
385 | - # All good. Set the schema version, so we don't have to do all this |
386 | - # sql the next time around |
387 | - _set_schema_version (cursor, constants.CORE_SCHEMA, constants.CORE_SCHEMA_VERSION) |
388 | - _time = (time.time() - start)*1000 |
389 | - log.info("DB set up in %sms" % _time) |
390 | - cursor.connection.commit() |
391 | - |
392 | - return cursor |
393 | - |
394 | _cursor = None |
395 | def get_default_cursor(): |
396 | global _cursor |
397 | if not _cursor: |
398 | dbfile = constants.DATABASE_FILE |
399 | - _cursor = create_db(dbfile) |
400 | + start = time.time() |
401 | + log.info("Using database: %s" % dbfile) |
402 | + new_database = not os.path.exists(dbfile) |
403 | + _cursor = _connect_to_db(dbfile) |
404 | + core_schema_version = _get_schema_version(_cursor, constants.CORE_SCHEMA) |
405 | + if core_schema_version < constants.CORE_SCHEMA_VERSION: |
406 | + log.exception( |
407 | + "Database '%s' is on version %s, but %s is required" % \ |
408 | + (constants.CORE_SCHEMA, core_schema_version, |
409 | + constants.CORE_SCHEMA_VERSION)) |
410 | + raise SystemExit(27) |
411 | return _cursor |
412 | def unset_cursor(): |
413 | global _cursor |
414 | @@ -511,28 +141,13 @@ |
415 | self[row["value"]] = row["id"] |
416 | |
417 | self._inv_dict = dict((value, key) for key, value in self.iteritems()) |
418 | - |
419 | - cursor.execute(""" |
420 | - CREATE TEMP TRIGGER update_cache_%(table)s |
421 | - BEFORE DELETE ON %(table)s |
422 | - BEGIN |
423 | - INSERT INTO _fix_cache VALUES ("%(table)s", OLD.id); |
424 | - END; |
425 | - """ % {"table": table}) |
426 | |
427 | def __getitem__(self, name): |
428 | # Use this for inserting new properties into the database |
429 | if name in self: |
430 | return super(TableLookup, self).__getitem__(name) |
431 | - try: |
432 | - self._cursor.execute( |
433 | - "INSERT INTO %s (value) VALUES (?)" % self._table, (name,)) |
434 | - id = self._cursor.lastrowid |
435 | - except sqlite3.IntegrityError: |
436 | - # This shouldn't happen, but just in case |
437 | - # FIXME: Maybe we should remove it? |
438 | - id = self._cursor.execute("SELECT id FROM %s WHERE value=?" |
439 | - % self._table, (name,)).fetchone()[0] |
440 | + id = self._cursor.execute("SELECT id FROM %s WHERE value=?" |
441 | + % self._table, (name,)).fetchone()[0] |
442 | # If we are here it's a newly inserted value, insert it into cache |
443 | self[name] = id |
444 | self._inv_dict[id] = name |