Merge lp:~mkanat/loggerhead/synchronize-lru_cache into lp:loggerhead
| Status: | Merged |
|---|---|
| Merged at revision: | 418 |
| Proposed branch: | lp:~mkanat/loggerhead/synchronize-lru_cache |
| Merge into: | lp:loggerhead |
| Diff against target: | 51 lines (+7/-7), 1 file modified: loggerhead/history.py (+7/-7) |
| To merge this branch: | bzr merge lp:~mkanat/loggerhead/synchronize-lru_cache |
| Related bugs: | |

| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Matt Nordhoff | | 2010-05-04 | Approve on 2010-05-06 |
| Loggerhead Team | | 2010-05-06 | Pending |
| Max Kanat-Alexander | | | Pending |
This proposal supersedes a proposal from 2010-04-23.
Description of the Change
Fixes Bug #420738 by synchronizing all our access to lru_cache, which is not thread-safe.
In testing, I don't see any significant performance degradation under access from many different users accessing several different branches.
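The diff itself is not shown on this page, but the approach described can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual patch: the class shape and method names are modelled on loggerhead's `RevInfoMemoryCache`, and a plain dict stands in for bzrlib's `LRUCache`. Note that, as the review below works out, where the lock lives matters a great deal.

```python
import threading

class RevInfoMemoryCache(object):
    """Illustrative sketch (not the merged diff): guard every access to
    the underlying cache with an RLock, since LRUCache is not thread-safe."""

    def __init__(self, cache):
        self._cache = cache
        self._lock = threading.RLock()  # the patch adds a lock along these lines

    def get(self, key, revid):
        self._lock.acquire()
        try:
            cached = self._cache.get(key)
        finally:
            self._lock.release()
        if cached is None:
            return None
        stored_revid, data = cached
        # Only return data cached for the same revision of the branch.
        return data if stored_revid == revid else None

    def set(self, key, revid, data):
        self._lock.acquire()
        try:
            self._cache[key] = (revid, data)
        finally:
            self._lock.release()

cache = RevInfoMemoryCache({})  # a plain dict stands in for LRUCache here
cache.set('graph-key', 'rev-1', ['ancestry data'])
```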
| Ian Clatworthy (ian-clatworthy) wrote : | # |
| Max Kanat-Alexander (mkanat) wrote : | # |
> Well done on tracking down the problem and thanks for the patch. Aren't
> threading problems fun? :-)
Fun indeed. :-)
> What do you think
> about having a decorator class that provides the synchronized access? E.g.
> changing the cache creation line to be something like ...
>
> graph_cache = ThreadSafeCache(bzrlib.lru_cache.LRUCache(10))
>
> and introducing ThreadSafeCache() as the layer with the RLock?
I thought about that, and the problem is that it would depend to some degree on implementation details of lru_cache--if the client started using it in different ways, or if the lru_cache implementation changed in some ways, we'd have to start adding locks in different places.
John has already said that he's not going to make the LRUCache itself thread-safe (see the bug), and the only accesses we make into it are via this interface, so I think the simplest and most forward-compatible solution is to put the locking in our client code.
| Matt Nordhoff (mnordhoff) wrote : | # |
Sounds fine to me, and I've been running it for a few days without problems.
| John A Meinel (jameinel) wrote : | # |
I should comment here a little bit. I'm a bit surprised that this actually handles what we want it to.
Specifically, RevInfoMemoryCache is created for each History object created, and a new one is created for each HTTP request (if I sorted out my info correctly.)
As such, *each* HTTP thread ends up with its own lock around the cache. And you don't end up doing much of any cross-thread protection, which was the whole point.
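John's objection can be demonstrated in a few lines. In this sketch (illustrative names, not loggerhead code), each per-request wrapper creates its own lock around the one shared cache, so the locks never exclude each other:

```python
import threading

shared_cache = {}  # stands in for the one graph_cache shared across requests

class PerRequestWrapper(object):
    """Each HTTP request builds its own wrapper -- and its own lock."""
    def __init__(self, cache):
        self._cache = cache
        self._lock = threading.RLock()  # a brand-new lock per instance!

wrapper_a = PerRequestWrapper(shared_cache)  # created by request/thread A
wrapper_b = PerRequestWrapper(shared_cache)  # created by request/thread B

# Both wrappers mutate the same cache, but their locks are unrelated
# objects, so thread A holding wrapper_a._lock excludes thread B not at all.
assert wrapper_a._cache is wrapper_b._cache
assert wrapper_a._lock is not wrapper_b._lock
```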
I could probably be convinced to add a potential locking primitive to LRUCache. I wanted to avoid the overhead, but we can just default to None, and allow callers to set a Lock object that we'll respect. Something that says: if self._lock is not None: self._lock.acquire().
I think that __getitem__ can be protected after the "if node is mru:" check. If two threads are accessing different items concurrently, there is no clear-cut MRU, and at worst the one that thinks it is already 'most recent' will just be 1 step behind.
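A toy model of what John describes might look like this. It is not bzrlib's actual LRUCache (which uses a hand-rolled double-linked list); an `OrderedDict` stands in for the list, the lock defaults to `None` so unlocked users pay nothing, and `__getitem__` takes the fast path when the entry is already most-recently-used:

```python
import threading
from collections import OrderedDict

class LockableLRUCache(object):
    """Toy sketch of an LRU cache with an optional caller-supplied lock,
    honoured only around the MRU bookkeeping. Not bzrlib's implementation."""

    def __init__(self, max_cache=100, lock=None):
        self._cache = OrderedDict()   # stands in for the double-linked list
        self._max_cache = max_cache
        self._lock = lock             # default None: no locking overhead

    def __getitem__(self, key):
        value = self._cache[key]
        # Fast path: already the MRU entry, so no list surgery is needed.
        # At worst a stale answer here leaves an entry one step behind.
        if next(reversed(self._cache)) == key:
            return value
        if self._lock is not None:
            self._lock.acquire()
        try:
            self._cache.move_to_end(key)  # the only mutation we guard
        finally:
            if self._lock is not None:
                self._lock.release()
        return value

    def __setitem__(self, key, value):
        if self._lock is not None:
            self._lock.acquire()
        try:
            self._cache[key] = value
            self._cache.move_to_end(key)
            while len(self._cache) > self._max_cache:
                self._cache.popitem(last=False)  # evict least-recently-used
        finally:
            if self._lock is not None:
                self._lock.release()

cache = LockableLRUCache(max_cache=10, lock=threading.RLock())
```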
If this *does* seem to fix the problem, my guess is that it only does so by playing tricks with threads and the GIL, and causing sync points. (aka, by accident)
| John A Meinel (jameinel) wrote : | # |
Check out: lp:~jameinel/bzr/lru_locking
It still needs tests, and possibly 'live-fire' testing, hammering the cache structure with lots of threads. However, I'm fairly sure that I've gotten a minimal locked section (we lock as little as possible), while still maintaining correctness.
LRUCache.add() is not specifically thread-safe if you have cleanups and are adding different values for the same key. We might want to consider putting a big lock around all of add() for that case, but callers could also do that...
The main reason I poked at it was because you can't get 'fine-grained' locking around the LRU double-linked-list unless it is implemented in the cache itself.
| Max Kanat-Alexander (mkanat) wrote : | # |
Okay, John is right--the bug is not fixed. I just ran into it on bzr.mozilla.org again.
For loggerhead, a quick workaround is to make the lock global instead of a per-instance item.
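One way to realize that workaround, sketched with illustrative names (not the actual loggerhead change): hang the lock on the class instead of creating it in `__init__`, so every per-request instance, and therefore every HTTP thread, contends on the same lock.

```python
import threading

class RevInfoMemoryCache(object):
    """Sketch of the global-lock workaround: the lock lives on the class,
    shared by all instances, rather than one fresh lock per instance."""

    _lock = threading.RLock()  # shared; NOT self._lock = RLock() in __init__

    def __init__(self, cache):
        self._cache = cache

    def set(self, key, revid, data):
        self._lock.acquire()
        try:
            self._cache[key] = (revid, data)
        finally:
            self._lock.release()

# Two per-request instances now really do exclude one another:
shared = {}
a = RevInfoMemoryCache(shared)
b = RevInfoMemoryCache(shared)
assert a._lock is b._lock
```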
We do have cleanups, and I suppose it's possible for us to be updating an existing branch with a new cache, right? So I think we'll need this lock in loggerhead anyway.
| Matt Nordhoff (mnordhoff) wrote : | # |
Fine by me, and I've been running it for a couple of days without issues. But I don't particularly know if it's right -- I approved the original version of this, too. :-\ Still... how could this one not be right?
| John A Meinel (jameinel) wrote : | # |
Matt Nordhoff wrote:
> You have been requested to review the proposed merge of lp:~mkanat/loggerhead/synchronize-lru_cache into lp:loggerhead.
>
> Fixes Bug #420738 by synchronizing all our access to lru_cache, which is not thread-safe.
>
> In testing, I don't see any significant performance degradation under access from many different users accessing several different branches.
>
>
Seems fine to me. Note that the bzr-history-db stuff would supersede
this work.
John
=:->
| Max Kanat-Alexander (mkanat) wrote : | # |
> Seems fine to me. Note that the bzr-history-db stuff would supersede
> this work.
Only for situations where bzr-history-db was installed, though, right?
| Ian Clatworthy (ian-clatworthy) wrote : | # |
Max,
Well done on tracking down the problem and thanks for the patch. Aren't threading problems fun? :-)
My main concern with the proposed change is that I'd prefer the Cache object to have the intelligence to Do The Right Thing wrt locking, rather than it being left up to the client to guard access to it. As it stands, we're deciding to use an LRU cache in apps/branch.py and paying the consequences through increased code complexity several layers deeper.

What do you think about having a decorator class that provides the synchronized access? E.g. changing the cache creation line to be something like ...
graph_cache = ThreadSafeCache(bzrlib.lru_cache.LRUCache(10))
and introducing ThreadSafeCache() as the layer with the RLock?
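A hypothetical sketch of such a `ThreadSafeCache` (this class does not exist in loggerhead or bzrlib; the method set is illustrative): wrap a cache and serialize every access through one RLock, so callers need no locking of their own.

```python
import threading

class ThreadSafeCache(object):
    """Hypothetical decorator class: funnel all access to a wrapped cache
    through a single RLock, keeping locking out of the client code."""

    def __init__(self, cache):
        self._cache = cache
        self._lock = threading.RLock()

    def __getitem__(self, key):
        with self._lock:
            return self._cache[key]

    def __setitem__(self, key, value):
        with self._lock:
            self._cache[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._cache.get(key, default)

# The creation line in apps/branch.py would then read something like:
# graph_cache = ThreadSafeCache(bzrlib.lru_cache.LRUCache(10))
```

A caveat Max raises in his reply above: a wrapper like this only covers the public interface, so it stays correct only as long as all access goes through it.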
OTOH, this bug is critical and perhaps we ought to just land it as is, particularly as John is looking to rework the caching architecture Real Soon Now anyhow.
Any second opinions?