Merge lp:~percona-dev/percona-patches/dict-size-limit into lp:~percona-dev/percona-patches/5.0.75
Status: | Merged |
---|---|
Merged at revision: | not available |
Proposed branch: | lp:~percona-dev/percona-patches/dict-size-limit |
Merge into: | lp:~percona-dev/percona-patches/5.0.75 |
To merge this branch: | bzr merge lp:~percona-dev/percona-patches/dict-size-limit |
Related bugs: | |
Related blueprints: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Percona | Approve | | |
Review via email: mp+3119@code.launchpad.net
Commit message
Description of the change
Vadim Tkachenko (vadim-tk) wrote:
Yasufumi Kinoshita (yasufumi-kinoshita) wrote:
The test will be "CREATE TABLE; FLUSH TABLES; CREATE TABLE; etc." and will check the size via SHOW INNODB STATUS.
But the size may differ between 32-bit and 64-bit builds, or between platforms.
In that case, how should we write the .result file for the test? Do you have any idea?
Vadim Tkachenko (vadim-tk) wrote:
Yasufumi,
mysql-test has a --record option which can generate the .result file.
Yasufumi Kinoshita wrote:
> The test will be "CREATE TABLE; FLUSH TABLES; CREATE TABLE; etc." and will check the size via SHOW INNODB STATUS.
> But the size may differ between 32-bit and 64-bit builds, or between platforms.
> In that case, how should we write the .result file for the test? Do you have any idea?
--
Vadim Tkachenko, CTO
Percona Inc.
ICQ: 369-510-335, Skype: vadimtk153, Phone +1-888-401-3403
MySQL Performance Blog - http://
MySQL Consulting http://
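The portability problem discussed in this thread, a recorded .result file containing platform-dependent byte counts, is usually handled by masking the variable part of the output before comparison, much like mysqltest's --replace_regex directive. A minimal sketch of the idea in Python (the status line and byte totals below are invented for illustration):

```python
import re

def normalize(output: str) -> str:
    """Mask platform-dependent byte counts so a recorded result
    file compares equal on 32-bit and 64-bit builds."""
    # Replace the digits after "dictionary memory allocated" with a
    # placeholder, mimicking mysqltest's --replace_regex.
    return re.sub(r"(dictionary memory allocated )\d+",
                  r"\1<SIZE>", output)

# A 64-bit and a 32-bit server report different sizes...
out64 = "Total memory allocated; dictionary memory allocated 152000"
out32 = "Total memory allocated; dictionary memory allocated 98000"

# ...but both normalize to the same recorded result line.
assert normalize(out64) == normalize(out32)
```

With this approach a single .result file can be recorded once with --record and still match on every platform.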
Yasufumi Kinoshita (yasufumi-kinoshita) wrote:
Sorry, I mean that we would have to write different .result files for each platform...
Hmm... Should I add a new global status, "tables in cache", for mysql-test?
The number of tables should not differ much between platforms.
Vadim Tkachenko (vadim-tk) wrote:
Yasufumi,
You mean 32-bit and 64-bit may differ?
Can we make architecture-independent tests?
Yasufumi Kinoshita wrote:
> Sorry, I mean that we would have to write different .result files for each platform...
> Hmm... Should I add a new global status, "tables in cache", for mysql-test?
> The number of tables should not differ much between platforms.
--
Vadim Tkachenko, CTO
Percona Inc.
ICQ: 369-510-335, Skype: vadimtk153, Phone +1-888-401-3403
MySQL Performance Blog - http://
MySQL Consulting http://
Peter Zaitsev (pz-percona) wrote:
Yasufumi,
Hm. What if we ask Heikki, or ask on internals@, to see how people deal
with such things? There are cases where there is a platform-specific
difference.
Monday, January 26, 2009, 4:40:39 PM, you wrote:
> Sorry, I mean that we would have to write different .result files for each platform...
> Hmm... Should I add a new global status, "tables in cache", for mysql-test?
> The number of tables should not differ much between platforms.
--
Peter Zaitsev, CEO, Percona Inc.
Tel: +1 888 401 3401 ext 501 Skype: peter_zaitsev
24/7 Emergency Line +1 888 401 3401 ext 911
Our Services: http://
Our Blog: http://
Yasufumi Kinoshita (yasufumi-kinoshita) wrote:
struct dict_table_struct contains ulint fields...
OK, I will add a new global status.
It may also be friendly for users.
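The point about ulint matters here: ulint follows the platform word size, so any byte total derived from dict_table_struct differs between 32-bit and 64-bit builds, while a count of cached tables does not. A small illustration (the field count is made up):

```python
import struct

# ulint is the platform word size, so a struct holding, say,
# ten ulint fields occupies a different total on each build:
size_32 = 10 * struct.calcsize("<I")  # ten 4-byte words -> 40 bytes
size_64 = 10 * struct.calcsize("<Q")  # ten 8-byte words -> 80 bytes
assert size_32 != size_64

# A count of dictionary entries, however, is identical everywhere,
# which is why a status counter like Innodb_dict_tables is easier
# to assert on in a portable mysql-test case.
cached_tables = ["t%d" % i for i in range(5)]
assert len(cached_tables) == 5
```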
- 33. By kinoyasu <kinoyasu@gauntlet3>
  innodb_dict_size_limit: fix mistype, add status Innodb_dict_tables, add testcase
Yasufumi Kinoshita (yasufumi-kinoshita) wrote:
New revision 33 is pushed
revno: 33
committer: kinoyasu <kinoyasu@gauntlet3>
branch nick: dict-size-limit
timestamp: Tue 2009-01-27 12:06:04 +0900
message:
innodb_dict_size_limit: fix mistype, add status Innodb_dict_tables, add testcase
Vadim Tkachenko (vadim-tk) wrote:
Yasufumi,
So you are cleaning the dictionary cache only by FLUSH TABLE?
I think that will not work; we should make it automatic.
FLUSH TABLE is a very expensive command that hurts performance.
Yasufumi Kinoshita (yasufumi-kinoshita) wrote:
Vadim,
If a handler for a certain table exists, we don't remove the dictionary entry
even if the handler is not used.
So, we should use "FLUSH TABLES" to close such handlers,
or use a small table_cache to close them automatically.
Vadim Tkachenko wrote:
> Yasufumi,
>
>
> So you are cleaning the dictionary cache only by FLUSH TABLE?
>
>
> I think that will not work; we should make it automatic.
>
> FLUSH TABLE is a very expensive command that hurts performance.
- 34. By kinoyasu <kinoyasu@gauntlet3>
  change default of innodb_dict_size_limit to 0 (unlimited)
Peter Zaitsev (pz-percona) wrote:
Yasufumi,
So with your patch, the InnoDB dictionary can be set not to grow beyond
table_cache in size?
Wednesday, January 28, 2009, 5:15:55 PM, you wrote:
> Vadim,
> If a handler for a certain table exists, we don't remove the dictionary entry
> even if the handler is not used.
> So, we should use "FLUSH TABLES" to close such handlers,
> or use a small table_cache to close them automatically.
> Vadim Tkachenko wrote:
>> Yasufumi,
>>
>>
>> So you are cleaning the dictionary cache only by FLUSH TABLE?
>>
>>
>> I think that will not work; we should make it automatic.
>>
>> FLUSH TABLE is a very expensive command that hurts performance.
--
Peter Zaitsev, CEO, Percona Inc.
Tel: +1 888 401 3401 ext 501 Skype: peter_zaitsev
24/7 Emergency Line +1 888 401 3401 ext 911
Our Services: http://
Our Blog: http://
Yasufumi Kinoshita (yasufumi-kinoshita) wrote:
An entry in the table_cache is not unique per table name, while a dictionary entry is unique per table name.
Each handler in the table_cache opens the dictionary entry by its table name.
Handlers are not the only users of dictionary entries, though.
Internally, InnoDB uses the entries of some system tables.
And the recovery and purge processes also use the entries of the tables they touch.
We cannot remove the entry of a table that is in use.
So the dictionary size may sometimes exceed dict_size_limit.
(It is like the relation between the variable "table_cache" and the status "Open_tables".)
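The soft-limit behavior described above, where only entries nobody holds open can be evicted and so the total may exceed the limit, can be sketched as a reference-counted cache. This is an illustration of the idea only, not InnoDB's code; all names and sizes are invented:

```python
from collections import OrderedDict

class DictCache:
    """Soft size-limited cache: entries with ref_count > 0 are pinned
    and survive eviction, so the total size may exceed the limit."""
    def __init__(self, size_limit):
        self.size_limit = size_limit  # 0 means unlimited
        self.entries = OrderedDict()  # name -> (size, ref_count)
        self.total = 0

    def open_table(self, name, size):
        if name in self.entries:
            sz, rc = self.entries[name]
            self.entries[name] = (sz, rc + 1)
        else:
            self.entries[name] = (size, 1)
            self.total += size
        self._evict()

    def close_table(self, name):
        sz, rc = self.entries[name]
        self.entries[name] = (sz, rc - 1)
        self._evict()

    def _evict(self):
        if not self.size_limit:
            return  # unlimited, matching the default of 0
        # Drop oldest unpinned entries while over the limit.
        for name in list(self.entries):
            if self.total <= self.size_limit:
                break
            size, rc = self.entries[name]
            if rc == 0:  # only unreferenced entries may go
                del self.entries[name]
                self.total -= size

cache = DictCache(size_limit=100)
cache.open_table("t1", 80)
cache.open_table("t2", 80)   # over the limit, but both are pinned
assert cache.total == 160    # size exceeds the limit while in use
cache.close_table("t1")      # like FLUSH TABLES releasing a handler
assert cache.total == 80     # the unpinned entry was evicted
```

This also mirrors the table_cache/Open_tables analogy: the limit is a target, not a hard cap, because pinned entries cannot be dropped.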
Vadim Tkachenko (vadim-tk) wrote:
No documentation yet on page
http://
Yasufumi Kinoshita (yasufumi-kinoshita) wrote:
I have found a memory leak related to the dictionary.
It is not our bug, but I will fix it.
(diff of valgrind output)
+ 152 bytes in 1 blocks are still reachable in loss record 49 of 234
+ at 0x4C21D06: malloc (in /usr/lib64/
+ by 0x786BC3: ut_malloc_low (ut0mem.c:81)
+ by 0x7884FA: os_event_create (os0sync.c:143)
+ by 0x7805F4: mutex_create_func (sync0sync.c:247)
+ by 0x6B2B83: dict_mem_
+ by 0x6AC2E1: dict_load_table (dict0load.c:828)
+ by 0x69D119: dict_create_
+ by 0x69AB48: innobase_
+ by 0x62ACDE: innobase_init() (ha_innodb.cc:1557)
+ by 0x61FCC1: ha_init() (handler.cc:481)
+ by 0x5636AE: init_server_
+ by 0x56676C: main (mysqld.cc:3788)
Yasufumi Kinoshita (yasufumi-kinoshita) wrote:
Sorry, the previous one was a paste mistake...
+ 152 bytes in 1 blocks are still reachable in loss record 83 of 234
+ at 0x4C21D06: malloc (in /usr/lib64/
+ by 0x786BC3: ut_malloc_low (ut0mem.c:81)
+ by 0x7884FA: os_event_create (os0sync.c:143)
+ by 0x77D179: rw_lock_create_func (sync0rw.c:141)
+ by 0x6A314C: dict_tree_create (dict0dict.c:3757)
+ by 0x6A3D15: dict_index_
+ by 0x6AAF5E: dict_load_indexes (dict0load.c:699)
+ by 0x6ACC9F: dict_load_table (dict0load.c:884)
+ by 0x6A6B12: dict_table_
+ by 0x62CF74: ha_innobase:
+ by 0x61F381: handler:
+ by 0x5AF89F: openfrm(THD*, char const*, char const*, unsigned, unsigned, unsigned, st_table*) (table.cc:929)
Yasufumi Kinoshita (yasufumi-kinoshita) wrote:
Sorry for my mistake again...
There may be no memory leak in mysqld.
+ by 0x7884FA: os_event_create (os0sync.c:143)
+ by 0x7805F4: mutex_create_func (sync0sync.c:247)
+ by 0x7884FA: os_event_create (os0sync.c:143)
+ by 0x77D184: rw_lock_create_func (sync0rw.c:142)
+ by 0x7884FA: os_event_create (os0sync.c:143)
+ by 0x77D16F: rw_lock_create_func (sync0rw.c:140)
+ by 0x7884FA: os_event_create (os0sync.c:143)
+ by 0x77D179: rw_lock_create_func (sync0rw.c:141)
This is one set of allocations for a table entry.
I tested and took the difference between "do nothing" and "create 5 tables; select from 5 tables; flush tables; select from 1 table".
The latter creates 5 table entries and removes 4, so the one remaining set is correct...
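The before/after comparison described here, diffing valgrind reports from two runs so that identical backtraces cancel out and only new still-reachable blocks remain, can be sketched like this (simplified record format and a hypothetical helper, not valgrind's actual output):

```python
from collections import Counter

def leak_records(report: str) -> Counter:
    """Split a valgrind-style report into one record per backtrace."""
    records = []
    for block in report.strip().split("\n\n"):
        # Key each record by its call stack, ignoring byte counts,
        # so identical stacks from both runs cancel out in the diff.
        stack = tuple(l for l in block.splitlines()
                      if l.startswith("   by"))
        records.append(stack)
    return Counter(records)

baseline = """152 bytes in 1 blocks
   by ut_malloc_low
   by os_event_create"""

after = """152 bytes in 1 blocks
   by ut_malloc_low
   by os_event_create

304 bytes in 2 blocks
   by ut_malloc_low
   by rw_lock_create_func"""

# Records present only in the second run are the new still-reachable blocks.
diff = leak_records(after) - leak_records(baseline)
assert sum(diff.values()) == 1
```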
Percona (percona-team):
Mistyping: "Valiable innodb_dict_size_limit in bytes"
And please add tests to mysql-test that demonstrate the correct behavior of this feature.