Merge lp:~mkanat/loggerhead/codebrowse-hangs into lp:loggerhead

Proposed by Max Kanat-Alexander
Status: Merged
Merged at revision: not available
Proposed branch: lp:~mkanat/loggerhead/codebrowse-hangs
Merge into: lp:loggerhead
Diff against target: 69 lines (+33/-11)
1 file modified
loggerhead/history.py (+33/-11)
To merge this branch: bzr merge lp:~mkanat/loggerhead/codebrowse-hangs
Reviewer: Matt Nordhoff (Approve)
Reviewer: Michael Hudson-Doyle (Approve)
Review via email: mp+17326@code.launchpad.net
Revision history for this message
Max Kanat-Alexander (mkanat) wrote:

This most likely fixes Bug 118625, though I wanted to get at least a code review before having the LOSAs deploy it.

401. By Max Kanat-Alexander

Address code review comments from mwhudson and Peng.

Michael Hudson-Doyle (mwhudson) wrote:

Looks good to me.

review: Approve
402. By Max Kanat-Alexander

Be super-paranoid about releasing the revision_graph_check_lock.

Matt Nordhoff (mnordhoff) wrote:

I'll land it.

review: Approve

Preview Diff

=== modified file 'loggerhead/history.py'
--- loggerhead/history.py	2010-01-11 03:26:44 +0000
+++ loggerhead/history.py	2010-01-14 00:17:10 +0000
@@ -33,6 +33,7 @@
 import logging
 import re
 import textwrap
+import threading
 
 import bzrlib.branch
 import bzrlib.delta
@@ -188,6 +189,12 @@
         """
         self._cache[key] = (revid, data)
 
+# Used to store locks that prevent multiple threads from building a
+# revision graph for the same branch at the same time, because that can
+# cause severe performance issues that are so bad that the system seems
+# to hang.
+revision_graph_locks = {}
+revision_graph_check_lock = threading.Lock()
 
 class History(object):
     """Decorate a branch to provide information for rendering.
@@ -227,18 +234,33 @@
         def update_missed_caches():
             for cache in missed_caches:
                 cache.set(cache_key, self.last_revid, self._rev_info)
-        for cache in caches:
-            data = cache.get(cache_key, self.last_revid)
-            if data is not None:
-                self._rev_info = data
+
+        # Theoretically, it's possible for two threads to race in creating
+        # the Lock() object for their branch, so we put a lock around
+        # creating the per-branch Lock().
+        revision_graph_check_lock.acquire()
+        try:
+            if cache_key not in revision_graph_locks:
+                revision_graph_locks[cache_key] = threading.Lock()
+        finally:
+            revision_graph_check_lock.release()
+
+        revision_graph_locks[cache_key].acquire()
+        try:
+            for cache in caches:
+                data = cache.get(cache_key, self.last_revid)
+                if data is not None:
+                    self._rev_info = data
+                    update_missed_caches()
+                    break
+                else:
+                    missed_caches.append(cache)
+            else:
+                whole_history_data = compute_whole_history_data(self._branch)
+                self._rev_info, self._rev_indices = whole_history_data
                 update_missed_caches()
-                break
-            else:
-                missed_caches.append(cache)
-        else:
-            whole_history_data = compute_whole_history_data(self._branch)
-            self._rev_info, self._rev_indices = whole_history_data
-            update_missed_caches()
+        finally:
+            revision_graph_locks[cache_key].release()
 
         if self._rev_indices is not None:
             self._revno_revid = {}
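For context, the locking scheme this branch introduces can be sketched in isolation as a generic "compute once per key" helper: a global guard lock serializes creation of per-key locks, and each per-key lock ensures only one thread runs the expensive computation for a given key. This is an illustrative sketch, not loggerhead code; the names compute_once_per_key, _locks, and _locks_guard are hypothetical stand-ins for the patch's revision_graph_locks and revision_graph_check_lock.

```python
import threading

# Hypothetical sketch of the patch's locking scheme.
_locks = {}                      # per-key locks (cf. revision_graph_locks)
_locks_guard = threading.Lock()  # guards _locks (cf. revision_graph_check_lock)

def compute_once_per_key(key, compute, cache):
    """Return cache[key], running compute() under a per-key lock on a miss."""
    # Two threads could race to create the per-key Lock, so its creation
    # is itself protected by the guard lock.
    with _locks_guard:
        lock = _locks.setdefault(key, threading.Lock())
    # Only one thread per key gets past this point at a time; later
    # threads find the cached result instead of recomputing it.
    with lock:
        if key not in cache:
            cache[key] = compute()
        return cache[key]
```

The patch itself uses explicit acquire() with try/finally rather than a with-statement, which is equivalent here: both guarantee the lock is released even if the computation raises.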
