Merge lp:~percona-dev/percona-server/5.1.54-fix_expand_import_and_dict_size_limit into lp:percona-server/release-5.1.54-12

Proposed by Yasufumi Kinoshita
Status: Merged
Approved by: Valentine Gostev
Approved revision: no longer in the source branch.
Merged at revision: 192
Proposed branch: lp:~percona-dev/percona-server/5.1.54-fix_expand_import_and_dict_size_limit
Merge into: lp:percona-server/release-5.1.54-12
Diff against target: 307 lines (+133/-104)
3 files modified
innodb_dict_size_limit.patch (+127/-80)
innodb_expand_import.patch (+2/-2)
innodb_split_buf_pool_mutex.patch (+4/-22)
To merge this branch: bzr merge lp:~percona-dev/percona-server/5.1.54-fix_expand_import_and_dict_size_limit
Reviewer Review Type Date Requested Status
Valentine Gostev (community) qa Approve
Fred Linhoss (community) documentation Approve
Percona developers Pending
Review via email: mp+48102@code.launchpad.net
Revision history for this message
Vadim Tkachenko (vadim-tk) wrote :

Looks good to me; asking for documentation approval.

Revision history for this message
Vadim Tkachenko (vadim-tk) wrote :

We also need a test case to cover import with the same space_id.
Asking Valentine.
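
A minimal outline of such a test might look like the following. This is a hypothetical sketch only: the --export option and the innodb_expand_import variable come from this merge proposal, but paths, the table name, and harness variables are illustrative, not the final test case.

# Take a backup and prepare it with --export so the .exp files are generated.
xtrabackup --datadir=$MYSQLD_DATADIR --backup --target-dir=$TMPDIR/backup
xtrabackup --datadir=$MYSQLD_DATADIR --prepare --export --target-dir=$TMPDIR/backup

# Discard the tablespace on the same server instance, so the table keeps its
# dictionary entry and the incoming .ibd carries the same space_id.
mysql -e "SET GLOBAL innodb_expand_import=1; ALTER TABLE test.t1 DISCARD TABLESPACE"

# Copy the exported .ibd/.exp back and re-import; with this fix the extended
# import path should also run when the space_id already matches.
cp $TMPDIR/backup/test/t1.ibd $TMPDIR/backup/test/t1.exp $MYSQLD_DATADIR/test/
mysql -e "ALTER TABLE test.t1 IMPORT TABLESPACE"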

Revision history for this message
Fred Linhoss (fred-linhoss) wrote :

Will include this in the next release notes for 5.1 and 5.5 and update the user documentation pages for innodb_expand_import and innodb_dict_size_limit.
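
For reference, a minimal illustration of the two variables as server startup options; the value shown for innodb_dict_size_limit is illustrative only, and defaults and exact semantics belong on the documentation pages.

# innodb_expand_import enables the extended .ibd import path;
# innodb_dict_size_limit caps the in-memory data dictionary (0 = unlimited).
mysqld --innodb_expand_import=1 --innodb_dict_size_limit=80M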

review: Approve (documentation)
Revision history for this message
Valentine Gostev (longbow) wrote :

When we forget to specify the --export option during prepare, the xb_export.sh test fails with:

2011-02-03 19:50:38: xb_export.sh: ===> xtrabackup --datadir=/root/percona-xtrabackup/test/var/mysql --prepare --export --target-dir=/tmp/xb_export_backup
xtrabackup: cd to /tmp/xb_export_backup
xtrabackup: This target seems to be not prepared yet.
xtrabackup: Temporary instance for recovery is set as followings.
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Compressed tables use zlib 1.2.3
InnoDB: Warning: innodb_file_io_threads is deprecated. Please use innodb_read_io_threads and innodb_write_io_threads instead
110203 19:50:38 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 38470
110203 19:50:38 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Doing recovery: scanned up to log sequence number 58034 (1 %)
110203 19:50:38 InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
110203 19:50:39 Percona XtraDB (http://www.percona.com) 1.0.13-12.4 started; log sequence number 58034
xtrabackup: starting shutdown with innodb_fast_shutdown = 0
110203 19:50:39 InnoDB: Starting shutdown...
110203 19:50:39 InnoDB: Shutdown completed; log sequence number 58034
xtrabackup Ver 1.5 Rev 203 for 5.1.53 unknown-linux-gnu (x86_64)
xtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(38470)
xtrabackup: export option is specified.
xtrabackup: export metadata of table 'incremental_sample/test' to file `./incremental_sample/test.exp` (1 indexes)
xtrabackup: name=GEN_CLUST_INDEX, id.low=15, page=3

[notice (again)]
  If you use binary log and don't use any hack of group commit,
  the binary log position seems to be:

2011-02-03 19:50:39: xb_export.sh: ===> /root/percona-xtrabackup/test/var/test/bin/mysql --no-defaults --socket=/tmp/xtrabackup.mysql.sock --user=root -e alter table test import tablespace incremental_sample
InnoDB: free limit of ./incremental_sample/test.ibd is larger than its real size.
InnoDB: import: extended import of incremental_sample/test is started.
InnoDB: import: 1 indexes are detected.
InnoDB: Progress in %: 16 33 50 66 83 100 done.
110203 19:50:39 InnoDB: Error: file './incremental_sample/test.ibd' seems to be corrupt.
InnoDB: anyway, all not corrupt pages were tried to be converted to salvage.
InnoDB: ##### CAUTION #####
InnoDB: ## The .ibd must cause to crash InnoDB, though re-import would seem to be succeeded.
InnoDB: ## If you don't have knowledge about salvaging data from .ibd, you should not use the f...


review: Needs Fixing (qa)
Revision history for this message
Valentine Gostev (longbow) wrote :

Please disregard my previous comment, I will file a separate bug

Revision history for this message
Valentine Gostev (longbow) wrote :

Please disregard the previous comment; I will file a separate bug.

review: Approve (qa)
Revision history for this message
Yasufumi Kinoshita (yasufumi-kinoshita) wrote :

Thanks, we are making steady progress...

Preview Diff

1=== modified file 'innodb_dict_size_limit.patch'
2--- innodb_dict_size_limit.patch 2010-12-16 11:35:26 +0000
3+++ innodb_dict_size_limit.patch 2011-02-01 02:13:15 +0000
4@@ -8,7 +8,7 @@
5 diff -ruN a/storage/innodb_plugin/btr/btr0sea.c b/storage/innodb_plugin/btr/btr0sea.c
6 --- a/storage/innodb_plugin/btr/btr0sea.c 2010-08-04 02:24:19.000000000 +0900
7 +++ b/storage/innodb_plugin/btr/btr0sea.c 2010-08-27 16:09:42.926020757 +0900
8-@@ -1173,6 +1173,126 @@
9+@@ -1173,6 +1173,173 @@
10 mem_free(folds);
11 }
12
13@@ -36,96 +36,143 @@
14 + ulint i;
15 + mem_heap_t* heap = NULL;
16 + ulint* offsets;
17++ ibool released_search_latch;
18 +
19-+ rw_lock_x_lock(&btr_search_latch);
20-+ buf_pool_mutex_enter();
21++ rw_lock_s_lock(&btr_search_latch);
22 +
23 + table = btr_search_sys->hash_index;
24 +
25-+ bpage = UT_LIST_GET_LAST(buf_pool->LRU);
26-+
27-+ while (bpage != NULL) {
28-+ block = (buf_block_t*) bpage;
29-+ if (block->index == index && block->is_hashed) {
30-+ page = block->frame;
31-+
32-+ /* from btr_search_drop_page_hash_index() */
33-+ n_fields = block->curr_n_fields;
34-+ n_bytes = block->curr_n_bytes;
35-+
36-+ ut_a(n_fields + n_bytes > 0);
37-+
38-+ n_recs = page_get_n_recs(page);
39-+
40-+ /* Calculate and cache fold values into an array for fast deletion
41-+ from the hash index */
42-+
43-+ folds = mem_alloc(n_recs * sizeof(ulint));
44-+
45-+ n_cached = 0;
46-+
47-+ rec = page_get_infimum_rec(page);
48-+ rec = page_rec_get_next_low(rec, page_is_comp(page));
49-+
50-+ index_id = btr_page_get_index_id(page);
51++ do {
52++ buf_chunk_t* chunks = buf_pool->chunks;
53++ buf_chunk_t* chunk = chunks + buf_pool->n_chunks;
54++
55++ released_search_latch = FALSE;
56++
57++ while (--chunk >= chunks) {
58++ block = chunk->blocks;
59++ i = chunk->size;
60++
61++retry:
62++ for (; i--; block++) {
63++ if (buf_block_get_state(block)
64++ != BUF_BLOCK_FILE_PAGE
65++ || block->index != index
66++ || !block->is_hashed) {
67++ continue;
68++ }
69++
70++ page = block->frame;
71++
72++ /* from btr_search_drop_page_hash_index() */
73++ n_fields = block->curr_n_fields;
74++ n_bytes = block->curr_n_bytes;
75++
76++
77++ /* keeping latch order */
78++ rw_lock_s_unlock(&btr_search_latch);
79++ released_search_latch = TRUE;
80++ rw_lock_x_lock(&block->lock);
81++
82++
83++ ut_a(n_fields + n_bytes > 0);
84++
85++ n_recs = page_get_n_recs(page);
86++
87++ /* Calculate and cache fold values into an array for fast deletion
88++ from the hash index */
89++
90++ folds = mem_alloc(n_recs * sizeof(ulint));
91++
92++ n_cached = 0;
93++
94++ rec = page_get_infimum_rec(page);
95++ rec = page_rec_get_next_low(rec, page_is_comp(page));
96++
97++ index_id = btr_page_get_index_id(page);
98 +
99-+ ut_a(0 == ut_dulint_cmp(index_id, index->id));
100-+
101-+ prev_fold = 0;
102-+
103-+ offsets = NULL;
104-+
105-+ while (!page_rec_is_supremum(rec)) {
106-+ offsets = rec_get_offsets(rec, index, offsets,
107-+ n_fields + (n_bytes > 0), &heap);
108-+ ut_a(rec_offs_n_fields(offsets) == n_fields + (n_bytes > 0));
109-+ fold = rec_fold(rec, offsets, n_fields, n_bytes, index_id);
110-+
111-+ if (fold == prev_fold && prev_fold != 0) {
112-+
113-+ goto next_rec;
114-+ }
115-+
116-+ /* Remove all hash nodes pointing to this page from the
117-+ hash chain */
118-+
119-+ folds[n_cached] = fold;
120-+ n_cached++;
121++ ut_a(0 == ut_dulint_cmp(index_id, index->id));
122++
123++ prev_fold = 0;
124++
125++ offsets = NULL;
126++
127++ while (!page_rec_is_supremum(rec)) {
128++ offsets = rec_get_offsets(rec, index, offsets,
129++ n_fields + (n_bytes > 0), &heap);
130++ ut_a(rec_offs_n_fields(offsets) == n_fields + (n_bytes > 0));
131++ fold = rec_fold(rec, offsets, n_fields, n_bytes, index_id);
132++
133++ if (fold == prev_fold && prev_fold != 0) {
134++
135++ goto next_rec;
136++ }
137++
138++ /* Remove all hash nodes pointing to this page from the
139++ hash chain */
140++
141++ folds[n_cached] = fold;
142++ n_cached++;
143 +next_rec:
144-+ rec = page_rec_get_next_low(rec, page_rec_is_comp(rec));
145-+ prev_fold = fold;
146-+ }
147-+
148-+ for (i = 0; i < n_cached; i++) {
149-+
150-+ ha_remove_all_nodes_to_page(table, folds[i], page);
151-+ }
152-+
153-+ ut_a(index->search_info->ref_count > 0);
154-+ index->search_info->ref_count--;
155-+
156-+ block->is_hashed = FALSE;
157-+ block->index = NULL;
158-+
159++ rec = page_rec_get_next_low(rec, page_rec_is_comp(rec));
160++ prev_fold = fold;
161++ }
162++
163++ if (UNIV_LIKELY_NULL(heap)) {
164++ mem_heap_empty(heap);
165++ }
166++
167++ rw_lock_x_lock(&btr_search_latch);
168++
169++ if (UNIV_UNLIKELY(!block->is_hashed)) {
170++ goto cleanup;
171++ }
172++
173++ ut_a(block->index == index);
174++
175++ if (UNIV_UNLIKELY(block->curr_n_fields != n_fields)
176++ || UNIV_UNLIKELY(block->curr_n_bytes != n_bytes)) {
177++ rw_lock_x_unlock(&btr_search_latch);
178++ rw_lock_x_unlock(&block->lock);
179++
180++ mem_free(folds);
181++
182++ rw_lock_s_lock(&btr_search_latch);
183++ goto retry;
184++ }
185++
186++ for (i = 0; i < n_cached; i++) {
187++
188++ ha_remove_all_nodes_to_page(table, folds[i], page);
189++ }
190++
191++ ut_a(index->search_info->ref_count > 0);
192++ index->search_info->ref_count--;
193++
194++ block->is_hashed = FALSE;
195++ block->index = NULL;
196++
197++cleanup:
198 +#if defined UNIV_AHI_DEBUG || defined UNIV_DEBUG
199-+ if (UNIV_UNLIKELY(block->n_pointers)) {
200-+ /* Corruption */
201-+ ut_print_timestamp(stderr);
202-+ fprintf(stderr,
203++ if (UNIV_UNLIKELY(block->n_pointers)) {
204++ /* Corruption */
205++ ut_print_timestamp(stderr);
206++ fprintf(stderr,
207 +" InnoDB: Corruption of adaptive hash index. After dropping\n"
208 +"InnoDB: the hash index to a page of %s, still %lu hash nodes remain.\n",
209-+ index->name, (ulong) block->n_pointers);
210-+ }
211++ index->name, (ulong) block->n_pointers);
212++ }
213 +#endif /* UNIV_AHI_DEBUG || UNIV_DEBUG */
214-+
215-+ mem_free(folds);
216++ rw_lock_x_unlock(&btr_search_latch);
217++ rw_lock_x_unlock(&block->lock);
218++
219++ mem_free(folds);
220++
221++ rw_lock_s_lock(&btr_search_latch);
222++ }
223 + }
224-+
225-+ bpage = UT_LIST_GET_PREV(LRU, bpage);
226-+ }
227-+
228-+ buf_pool_mutex_exit();
229-+ rw_lock_x_unlock(&btr_search_latch);
230++ } while (released_search_latch);
231++
232++ rw_lock_s_unlock(&btr_search_latch);
233 +
234 + if (UNIV_LIKELY_NULL(heap)) {
235 + mem_heap_free(heap);
236
237=== modified file 'innodb_expand_import.patch'
238--- innodb_expand_import.patch 2010-12-16 11:35:26 +0000
239+++ innodb_expand_import.patch 2011-02-01 02:13:15 +0000
240@@ -34,8 +34,8 @@
241 space_id = fsp_header_get_space_id(page);
242 space_flags = fsp_header_get_flags(page);
243
244-+ if (srv_expand_import
245-+ && (space_id != id || space_flags != (flags & ~(~0 << DICT_TF_BITS)))) {
246++ if (srv_expand_import) {
247++
248 + ibool file_is_corrupt = FALSE;
249 + byte* buf3;
250 + byte* descr_page;
251
252=== modified file 'innodb_split_buf_pool_mutex.patch'
253--- innodb_split_buf_pool_mutex.patch 2010-12-16 11:35:26 +0000
254+++ innodb_split_buf_pool_mutex.patch 2011-02-01 02:13:15 +0000
255@@ -48,25 +48,7 @@
256 diff -ruN a/storage/innodb_plugin/btr/btr0sea.c b/storage/innodb_plugin/btr/btr0sea.c
257 --- a/storage/innodb_plugin/btr/btr0sea.c 2010-08-27 16:11:12.151975789 +0900
258 +++ b/storage/innodb_plugin/btr/btr0sea.c 2010-08-27 16:11:40.593021205 +0900
259-@@ -1199,7 +1199,7 @@
260- ulint* offsets;
261-
262- rw_lock_x_lock(&btr_search_latch);
263-- buf_pool_mutex_enter();
264-+ mutex_enter(&LRU_list_mutex);
265-
266- table = btr_search_sys->hash_index;
267-
268-@@ -1285,7 +1285,7 @@
269- bpage = UT_LIST_GET_PREV(LRU, bpage);
270- }
271-
272-- buf_pool_mutex_exit();
273-+ mutex_exit(&LRU_list_mutex);
274- rw_lock_x_unlock(&btr_search_latch);
275-
276- if (UNIV_LIKELY_NULL(heap)) {
277-@@ -1878,7 +1878,8 @@
278+@@ -1925,7 +1925,8 @@
279 rec_offs_init(offsets_);
280
281 rw_lock_x_lock(&btr_search_latch);
282@@ -76,7 +58,7 @@
283
284 cell_count = hash_get_n_cells(btr_search_sys->hash_index);
285
286-@@ -1886,11 +1887,13 @@
287+@@ -1933,11 +1934,13 @@
288 /* We release btr_search_latch every once in a while to
289 give other queries a chance to run. */
290 if ((i != 0) && ((i % chunk_size) == 0)) {
291@@ -92,7 +74,7 @@
292 }
293
294 node = hash_get_nth_cell(btr_search_sys->hash_index, i)->node;
295-@@ -1997,11 +2000,13 @@
296+@@ -2044,11 +2047,13 @@
297 /* We release btr_search_latch every once in a while to
298 give other queries a chance to run. */
299 if (i != 0) {
300@@ -108,7 +90,7 @@
301 }
302
303 if (!ha_validate(btr_search_sys->hash_index, i, end_index)) {
304-@@ -2009,7 +2014,8 @@
305+@@ -2056,7 +2061,8 @@
306 }
307 }
308
