Merge lp:~laurynas-biveinis/percona-server/bug1247021 into lp:percona-server/5.6

Proposed by Laurynas Biveinis
Status: Merged
Approved by: George Ormond Lorch III
Approved revision: no longer in the source branch.
Merged at revision: 526
Proposed branch: lp:~laurynas-biveinis/percona-server/bug1247021
Merge into: lp:percona-server/5.6
Diff against target: 59 lines (+11/-16)
1 file modified
Percona-Server/storage/innobase/buf/buf0buf.cc (+11/-16)
To merge this branch: bzr merge lp:~laurynas-biveinis/percona-server/bug1247021
Reviewer Review Type Date Requested Status
George Ormond Lorch III (community) g2 Approve
Review via email: mp+196898@code.launchpad.net

Description of the change

Fix bug 1247021 (Assertion failure in file buf0buf.cc line 3687,
buf_page_init_for_read() w/ multiple purge threads and compressed
tables).

The issue is that buf_page_init_for_read() was adjusted incorrectly
for the buffer pool mutex split. The failure is visible with a
combination of compressed tables and multiple active purge threads,
and affects 5.6 only.

In upstream, the decrement of the watch page fix count in
buf_pool_watch_unset() is protected by a combination of the relevant
page_hash X latch, the buffer pool mutex, and possibly a block mutex.
In XtraDB, buf_pool_watch_unset() locks the page_hash X latch and then
either a block mutex or the zip mutex.
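
For illustration, a heavily abridged sketch of the two protection
schemes (the function and field names are real 5.6 identifiers, but
the bodies are simplified, not the verbatim sources):

/* Upstream buf_pool_watch_unset(), simplified: the watch page fix
count is decremented while holding the buffer pool mutex and the
page_hash X latch (and possibly a block mutex). */
buf_pool_mutex_enter(buf_pool);
rw_lock_x_lock(hash_lock);
bpage->buf_fix_count--;
rw_lock_x_unlock(hash_lock);
buf_pool_mutex_exit(buf_pool);

/* XtraDB buf_pool_watch_unset(), simplified: no buffer pool mutex
here; only the page_hash X latch plus the block or zip mutex protect
the decrement. */
rw_lock_x_lock(hash_lock);
mutex_enter(buf_page_get_mutex(bpage));
bpage->buf_fix_count--;
mutex_exit(buf_page_get_mutex(bpage));
rw_lock_x_unlock(hash_lock);

The practical consequence: in XtraDB, only the page_hash latch pins a
watch page in place; holding a buffer-pool-level mutex is not enough.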

In upstream, the code path leading to the failing assert temporarily
releases the page_hash latch around the buf_buddy_alloc() call, which
in turn may release the buffer pool mutex too. If that happens, the
watch page is re-read from the page hash.
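
Schematically, the upstream path looks like this (an abridged sketch,
not the verbatim code):

/* Upstream buf_page_init_for_read(), compressed-page branch,
heavily abridged. */
rw_lock_x_unlock(hash_lock);	/* page_hash latch released... */
data = buf_buddy_alloc(buf_pool, zip_size, &lru);
rw_lock_x_lock(hash_lock);	/* ...and reacquired */

if (UNIV_UNLIKELY(lru)) {
	/* buf_buddy_alloc() released buf_pool->mutex; the watch
	page may have been unset meanwhile, so re-read it. */
	watch_page = buf_page_hash_get_low(
		buf_pool, space, offset, fold);
	/* ... */
}

The conditional re-check is safe upstream: as long as buf_pool->mutex
was held continuously (lru == FALSE), buf_pool_watch_unset() could not
have run, even though the page_hash latch was dropped.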

In the same code path in XtraDB, the buffer pool mutex was substituted
by the LRU list mutex and the logic was otherwise kept the same. This
is incorrect: because of the buf_pool_watch_unset() differences above,
unlocking page_hash is by itself enough to necessitate the page_hash
re-check, regardless of what happened to the LRU list mutex.

Fix by reducing the use of the local variable lru in
buf_page_init_for_read() to the absolute minimum, i.e. to an argument
of buf_buddy_alloc(), and performing the watch_page re-check
unconditionally.

http://jenkins.percona.com/job/percona-server-5.6-param/419/

Revision history for this message
George Ormond Lorch III (gl-az) :
review: Approve (g2)
Revision history for this message
Laurynas Biveinis (laurynas-biveinis) wrote :

This was originally a G1 MP, so it shouldn't have been set to Approved with a G2 review only. But in hindsight this is a borderline-G2 MP IMHO, so I am keeping it as Approved.

Preview Diff

=== modified file 'Percona-Server/storage/innobase/buf/buf0buf.cc'
--- Percona-Server/storage/innobase/buf/buf0buf.cc	2013-10-28 13:54:57 +0000
+++ Percona-Server/storage/innobase/buf/buf0buf.cc	2013-11-27 13:09:38 +0000
@@ -3496,7 +3496,7 @@
 	prio_rw_lock_t*	hash_lock;
 	mtr_t		mtr;
 	ulint		fold;
-	ibool		lru	= FALSE;
+	ibool		lru;
 	void*		data;
 	buf_pool_t*	buf_pool = buf_pool_get(space, offset);
 
@@ -3575,7 +3575,6 @@
 	/* The block must be put to the LRU list, to the old blocks */
 	buf_LRU_add_block(bpage, TRUE/* to old blocks */);
 	mutex_exit(&buf_pool->LRU_list_mutex);
-	lru = TRUE;
 
 	/* We set a pass-type x-lock on the frame because then
 	the same thread which called for the read operation
@@ -3626,28 +3625,24 @@
 
 		rw_lock_x_lock(hash_lock);
 
-		/* If buf_buddy_alloc() allocated storage from the LRU list,
-		it released and reacquired buf_pool->LRU_list_mutex. Thus, we
-		must check the page_hash again, as it may have been
+		/* We must check the page_hash again, as it may have been
 		modified. */
-		if (UNIV_UNLIKELY(lru)) {
 
-			watch_page = buf_page_hash_get_low(
+		watch_page = buf_page_hash_get_low(
 				buf_pool, space, offset, fold);
 
-			if (UNIV_UNLIKELY(watch_page
+		if (UNIV_UNLIKELY(watch_page
 			    && !buf_pool_watch_is_sentinel(buf_pool,
 							   watch_page))) {
 
-				/* The block was added by some other thread. */
-				mutex_exit(&buf_pool->LRU_list_mutex);
-				rw_lock_x_unlock(hash_lock);
-				watch_page = NULL;
-				buf_buddy_free(buf_pool, data, zip_size);
+			/* The block was added by some other thread. */
+			mutex_exit(&buf_pool->LRU_list_mutex);
+			rw_lock_x_unlock(hash_lock);
+			watch_page = NULL;
+			buf_buddy_free(buf_pool, data, zip_size);
 
-				bpage = NULL;
-				goto func_exit;
-			}
+			bpage = NULL;
+			goto func_exit;
 		}
 
 		bpage = buf_page_alloc_descriptor();
