Merge lp:~pedronis/u1db/whats_changed_sorted_last_edits into lp:u1db
Proposed by: | Samuele Pedroni |
Status: | Merged |
---|---|
Approved by: | Samuele Pedroni |
Approved revision: | 148 |
Merged at revision: | 146 |
Proposed branch: | lp:~pedronis/u1db/whats_changed_sorted_last_edits |
Merge into: | lp:u1db |
Diff against target: |
155 lines (+48/-21) 5 files modified
u1db/__init__.py (+6/-5) u1db/backends/inmemory.py (+12/-2) u1db/backends/sqlite_backend.py (+12/-7) u1db/sync.py (+4/-2) u1db/tests/test_backends.py (+14/-5) |
To merge this branch: | bzr merge lp:~pedronis/u1db/whats_changed_sorted_last_edits |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
John A Meinel (community) | Approve | ||
Review via email: mp+85386@code.launchpad.net |
Description of the change
Have whats_changed return information about changes sorted by generation, and return only the last change for a given doc_id. (The idea is that this should make it possible to make sync incremental, by sending documents across in the implied order.)
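A minimal sketch of the proposed semantics (this is not the actual u1db implementation; the function name and the `(doc_id, generation)` tuple shape are illustrative assumptions). Given a transaction log — a list of doc_ids where list index + 1 is the generation — it returns the current generation plus, for each document changed since `old_generation`, only its last change, sorted by generation:

```python
def whats_changed(transaction_log, old_generation=0):
    # Hypothetical helper illustrating the merge's behavior, not u1db API.
    cur_generation = len(transaction_log)
    seen = set()
    changes = []
    # Walk the tail newest-first so the first occurrence of a doc_id
    # is its most recent change; record its generation.
    for offset, doc_id in enumerate(reversed(transaction_log[old_generation:])):
        if doc_id not in seen:
            seen.add(doc_id)
            changes.append((doc_id, cur_generation - offset))
    changes.reverse()  # restore ascending generation order
    return cur_generation, changes

# Doc "a" changed at generations 1 and 3; only generation 3 is reported.
log = ["a", "b", "a"]
print(whats_changed(log))  # -> (3, [('b', 2), ('a', 3)])
```

Reporting only the last change per doc_id, in generation order, is what lets a sync implementation stream documents incrementally without re-sending superseded versions.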
A few small comments; everything would be fine to land the way it is. Just some thoughts:
+        cur_generation = len(self._transaction_log)
+        changes = []
+        relevant_tail = self._transaction_log[old_generation:]
+        relevant_tail.reverse()
If we want to be in-memory race safe, then doing:
relevant_tail = self._transaction_log[old_generation:]
cur_generation = old_generation + len(relevant_tail)
In the SQL implementation you use changes[0][1], which likewise derives the generation from the queried data rather than assuming nothing changes in between.
Also, instead of relevant_tail.reverse() and then iterating, just do:

for doc_id in reversed(relevant_tail):
That creates a reverse iterator, rather than paying the O(n) cost of actually reversing the list in place.
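The two suggestions above can be sketched together (class and method names here are illustrative, not the actual u1db backend API). Taking the slice first gives a snapshot of the log's tail, and deriving cur_generation from that snapshot means a concurrent append to self._transaction_log between the two statements cannot make the reported generation disagree with the returned changes:

```python
class InMemorySketch:
    # Hypothetical stand-in for the in-memory backend's transaction log.
    def __init__(self):
        self._transaction_log = []

    def tail_since(self, old_generation):
        # Snapshot the tail first; the slice is an independent copy.
        relevant_tail = self._transaction_log[old_generation:]
        # Compute the generation from the snapshot, not from the
        # (possibly already longer) live log.
        cur_generation = old_generation + len(relevant_tail)
        # reversed() yields a lazy reverse iterator; list.reverse()
        # would mutate the whole list in place first.
        return cur_generation, list(reversed(relevant_tail))
```

For example, with a log of ["a", "b", "c"], tail_since(1) returns (3, ["c", "b"]): generation 3 and the changes since generation 1, newest first.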