I am troubleshooting a query performance problem. Here's an expected query plan from EXPLAIN:

mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:16';
| id | select_type | table              | type  | possible_keys | key          | key_len | ref  | rows    | Extra       |
|  1 | SIMPLE      | table1             | range | tdcol         | tdcol        | 8       | NULL | 5437848 | Using where | 
1 row in set (0.00 sec)

Which makes sense, because the index named tdcol (KEY tdcol (tdcol)) can be used, and about 5M rows should be selected by this query.

However, if we query for just one more minute of data, we get this query plan:

mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17';
| id | select_type | table              | type | possible_keys | key  | key_len | ref  | rows      | Extra       |
|  1 | SIMPLE      | table1             | ALL  | tdcol         | NULL | NULL    | NULL | 381601300 | Using where | 
1 row in set (0.00 sec)

The optimizer thinks the scan will be better, but it's over 70x more rows to examine, so I have a hard time believing the table scan is really faster.

Also, the 'USE KEY tdcol' syntax doesn't change the query plan.
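For what it's worth, `USE INDEX`/`USE KEY` is only a hint that the optimizer is free to ignore; `FORCE INDEX` goes further and tells the optimizer to treat a table scan as very expensive, so it will use the index whenever it can. A sketch against the query from the question:

```sql
-- FORCE INDEX makes the optimizer avoid a table scan unless the
-- named index genuinely cannot be used for this query.
EXPLAIN SELECT *
FROM table1 FORCE INDEX (tdcol)
WHERE tdcol BETWEEN '2010-04-13 00:00' AND '2010-04-14 03:17';
```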

Thanks in advance for any help, and I'm more than happy to provide more information or answer questions.

5 million index probes are probably more expensive (lots of random disk reads, potentially harder synchronization) than reading all 350 million rows (sequential disk reads).

This situation may be the best case, because most probably the order of the timestamps roughly matches the order of the inserts into the table. But, unless the index on tdcol is a "clustered" index (meaning the database guarantees that the order in the underlying table matches the order of tdcol), it's unlikely the optimizer knows that.

Absent that order-correlation information, it would be reasonable to assume that the 5 million rows you want are roughly evenly distributed among the 350 million rows, and that the index approach would therefore involve reading most or nearly all of the pages of the underlying table anyway (in which case the scan will be much less costly than the index approach: fewer reads outright, and sequential rather than random reads).

MySQL's query optimizer has a cutoff when deciding whether to use an index. As you've correctly recognized, MySQL has decided a table scan will be faster than using the index, and won't be dissuaded from its decision. The irony is that when the key range matches more than about a third of the table, it's probably right. Why in this case?
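One way to see where this range falls relative to that rough one-third cutoff is to count the matching rows for both end times directly (this is a one-off diagnostic that scans the table; names taken from the question, and `SUM` over a boolean condition counts the rows where it is true):

```sql
-- Compare the actual selectivity of the two ranges against the
-- total row count; the optimizer's cutoff matters only if the
-- wider range really covers a large fraction of the table.
SELECT
  COUNT(*) AS total_rows,
  SUM(tdcol BETWEEN '2010-04-13 00:00' AND '2010-04-14 03:16') AS narrow_range,
  SUM(tdcol BETWEEN '2010-04-13 00:00' AND '2010-04-14 03:17') AS wide_range
FROM table1;
```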

I don't have an answer, but I have a suspicion that MySQL doesn't have enough memory to explore the index. I'd be looking at the server memory settings, especially the InnoDB buffer pool and some of the other key buffer settings.
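Those settings can be inspected with standard MySQL commands; the two status counters give a rough idea of how often InnoDB had to go to disk rather than serve a page from the buffer pool:

```sql
-- How much memory InnoDB has for caching data and index pages.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Logical page requests vs. requests that missed the cache
-- and required a disk read.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
```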

What is the distribution of your data like? Try running min(), avg(), and max() on it to see where it lies. It's possible that that one extra minute makes a big difference in how much data is contained in the range.
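A sketch of that check, using the table and column names from the question (MIN and MAX on an indexed column can typically be answered from the index itself, while the range count will scan the index once):

```sql
-- Where does the data start and end?
SELECT MIN(tdcol), MAX(tdcol) FROM table1;

-- Does the extra minute happen to cover a spike of rows?
SELECT COUNT(*) FROM table1
WHERE tdcol BETWEEN '2010-04-14 03:16' AND '2010-04-14 03:17';
```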

It can also just be the configuration of InnoDB. There are a few factors such as page size, and memory, as staticsan said. You might want to explicitly define a B+Tree index.
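Declaring the index type explicitly might look like the following (a sketch that drops and re-creates the question's tdcol index; note that InnoDB secondary indexes are B+Trees anyway, so `USING BTREE` mostly just makes the default explicit):

```sql
ALTER TABLE table1 DROP INDEX tdcol;
ALTER TABLE table1 ADD INDEX tdcol (tdcol) USING BTREE;
```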