There is a table called transactions with ~6 million rows. The query below computes the current user's balance. Here is the log after enabling slow_query_log = 'on':
# Time: 170406  9:51:48
# User@Host: root[root] @ [xx.xx.xx.xx]
# Thread_id: 13  Schema: main_db  QC_hit: No
# Query_time: 38.924823  Lock_time: 0.000034  Rows_sent: 1  Rows_examined: 773550
# Rows_affected: 0
SET timestamp=1491456108;
SELECT SUM(`transaction`.`amount`) AS total
  FROM `main_db`.`transactions` AS `transaction`
 WHERE `transaction`.`user_id` = 1008
   AND `transaction`.`confirmed` = 1
 LIMIT 1;
As you can see, it took ~38 seconds!
Here is the EXPLAIN for the transactions table:

Sometimes this query runs fast (about ~1 second), and sometimes it is slow!
Any help is greatly appreciated.
P.S.: It's InnoDB, and the transactions table has frequent INSERT and SELECT operations.

I tried running the query with SQL_NO_CACHE, and it is still sometimes fast, sometimes slow.

The transactions table schema is:
CREATE TABLE `transactions` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(10) unsigned NOT NULL,
  `ref_id` varchar(40) COLLATE utf8_persian_ci NOT NULL,
  `payment_id` tinyint(3) unsigned NOT NULL,
  `amount` decimal(10,1) NOT NULL,
  `created` datetime NOT NULL,
  `private_note` varchar(6000) COLLATE utf8_persian_ci NOT NULL,
  `public_note` varchar(200) COLLATE utf8_persian_ci NOT NULL,
  `confirmed` tinyint(3) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `user_id` (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=13133663 DEFAULT CHARSET=utf8 COLLATE=utf8_persian_ci
MySQL is running on a VPS with 12 GB RAM and 9 logical CPU cores. Here is part of my.cnf:
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
default_storage_engine = innodb
# you can't just change log file size, requires special procedure
innodb_buffer_pool_size = 9G
innodb_log_buffer_size = 8M
innodb_file_per_table = 1
innodb_open_files = 400
innodb_io_capacity = 400
innodb_flush_method = O_DIRECT
innodb_thread_concurrency = 0
innodb_read_io_threads = 64
innodb_write_io_threads = 64

# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address = 127.0.0.1
#
# * Fine Tuning
#
max_connections = 500
connect_timeout = 5
wait_timeout = 600
max_allowed_packet = 16M
thread_cache_size = 128
sort_buffer_size = 4M
bulk_insert_buffer_size = 16M
tmp_table_size = 32M
max_heap_table_size = 32M
(Yes, I am adding another answer. Justification: it addresses the underlying problem in a different way.)

The underlying problem seems to be that there is an ever-growing "transactions" table from which various statistics are derived, such as SUM(amount). The performance of this will get worse and worse as the table(s) grow.

The basis for this answer is to look at the data in two ways: "History" and "Current". transactions is the History. A new table, current, would hold the totals for each user. I see multiple ways to do that; each involves some form of subtotal(s) to avoid adding up 773K rows to get the answer.
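A minimal sketch of what such a current table might look like; the table and column names here are assumptions for illustration, not from the original answer:

```sql
-- Hypothetical "current" table: one row per user holding the running total.
CREATE TABLE current (
  user_id INT UNSIGNED NOT NULL,
  balance DECIMAL(14,1) NOT NULL DEFAULT 0,  -- wider than transactions.amount, since it holds sums
  PRIMARY KEY (user_id)
) ENGINE=InnoDB;
```

With that in place, fetching a balance is a single-row primary-key lookup instead of a 773K-row scan.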
- The traditional banking way... Each night, tally the day's transactions and add them to current.
- The materialized view way... Each time a row is added to transactions, increment current.
- Hybrid: Keep daily subtotals in a "summary table". Sum those subtotals through last night.
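The first two approaches could be sketched roughly as follows, assuming a hypothetical current(user_id, balance) table keyed by user_id (names and values are illustrative, not from the original answer):

```sql
-- "Banking" way: a nightly batch folds yesterday's confirmed rows into current.
INSERT INTO current (user_id, balance)
    SELECT user_id, SUM(amount)
      FROM transactions
     WHERE confirmed = 1
       AND created >= CURDATE() - INTERVAL 1 DAY
       AND created <  CURDATE()
     GROUP BY user_id
ON DUPLICATE KEY UPDATE balance = balance + VALUES(balance);

-- "Materialized view" way: instead, keep current in step on every write,
-- e.g. immediately after inserting a confirmed transaction of 250.0 for user 1008:
INSERT INTO current (user_id, balance)
    VALUES (1008, 250.0)
ON DUPLICATE KEY UPDATE balance = balance + VALUES(balance);
```

IODKU (INSERT ... ON DUPLICATE KEY UPDATE) handles both the "first row for this user" and "increment existing row" cases in one statement.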
There is more discussion in my blog on summary tables.
Note that getting an up-to-the-second balance with the banking or hybrid way is a little tricky:

- Get last night's amount.
- Add any transactions that occurred during the day.
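Those two steps might look like this, again assuming a hypothetical current table that is updated through last midnight (names are assumptions):

```sql
-- Up-to-the-second balance = pre-computed total + today's confirmed rows.
SELECT COALESCE((SELECT balance
                   FROM current
                  WHERE user_id = 1008), 0)
     + COALESCE((SELECT SUM(amount)
                   FROM transactions
                  WHERE user_id = 1008
                    AND confirmed = 1
                    AND created >= CURDATE()), 0) AS total;
```

The second subquery touches only today's rows for one user, so it stays cheap even as the history grows.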
Any of these approaches will be a lot faster than scanning all 773K rows for the user, but it is more complex code.