I can tell from the PostgreSQL logs that certain simple queries (no joins, matching only on indexed columns) take between one and three seconds to complete. I only log queries that take longer than one second, so you may find similar queries that finish within a second and therefore don't get reported.
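The one-second logging threshold described above is normally set with `log_min_duration_statement`. A minimal sketch (the 1000 ms value matches the cutoff mentioned; `ALTER SYSTEM` assumes a reasonably recent PostgreSQL, and on older versions you would edit postgresql.conf directly instead):

```sql
-- Log any statement that runs longer than one second (value is in ms).
ALTER SYSTEM SET log_min_duration_statement = 1000;

-- Reload the configuration without restarting the server.
SELECT pg_reload_conf();
```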
When I run exactly the same query with EXPLAIN ANALYZE, it takes a couple of milliseconds.
The table has around 8 million records and is written to and queried extensively. I have autovacuum enabled, and recently (a couple of hours ago) I ran VACUUM FULL ANALYZE on that table.
Sample query log entry:

Dec 30 10:14:57 db01 postgres: [20-1] LOG: duration: 3857.322 ms statement: SELECT * FROM "reactions" WHERE ("reactions".contest_id = 17469) AND (user_id IS NOT
Dec 30 10:14:57 db01 postgres: [20-2] NULL) ORDER BY updated_on DESC LIMIT 5
contest_id and user_id are indexed; updated_on is not. If I do index it, the query planner ignores the contest_id index and uses updated_on instead, which slows the query down even further. The query above without the LIMIT would never return more than about 1000 rows.
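One common fix for this query shape (equality filter on one column, ORDER BY ... LIMIT on another) is a composite index covering both the filter and the sort order, so the planner can walk the index and stop after five rows. A hedged sketch — the index name is made up, and whether the partial `WHERE user_id IS NOT NULL` clause pays off depends on the data:

```sql
-- Composite index: equality column first, then the sort column,
-- restricted to the rows this query can actually return.
CREATE INDEX CONCURRENTLY reactions_contest_updated_idx
    ON reactions (contest_id, updated_on DESC)
    WHERE user_id IS NOT NULL;
```

CONCURRENTLY avoids taking a long write-blocking lock on a heavily written table, at the cost of a slower build.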
Any help would be much appreciated.
A few more details might be useful here, depending on whether you can provide them. Most helpful would be the actual output of your EXPLAIN ANALYZE, so that we can see what it actually does to complete the query. The definition of the table being queried might prove useful too, along with the indexes. The more information the better. I can only speculate at this point on what's going on; here are a few blind stabs:
- Lots of other SELECTs are happening on this database at the same time, and periodically the data and/or the result is expiring from some cache somewhere.
- Something else periodically locks this table for up to 3-4 seconds before releasing it again, during which time this query is stuck.
- This table is written to so extensively that the table statistics end up rarely reflecting reality, and so the query planner botches the decision on whether to use the index(es) to perform the query.
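If stale statistics are the culprit, refreshing them by hand (and making autovacuum re-analyze this table more aggressively) is a cheap experiment. A sketch — the scale-factor value is an illustrative guess, not a recommendation:

```sql
-- Rebuild the planner statistics for this table right now.
ANALYZE reactions;

-- Ask autovacuum to re-analyze after roughly 2% of rows change,
-- instead of the default of roughly 10%.
ALTER TABLE reactions SET (autovacuum_analyze_scale_factor = 0.02);
```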
Others may have other ideas, but yeah. More information on what is going on might prove helpful.
This seems to be happening because of the updates.
pgsql-performance is a great mailing list for these kinds of questions.
It sounds like you have two problems here:
1) You want to be able to index updated_on, but when you do, PostgreSQL picks the wrong plan.
My first wild guess is that PostgreSQL is overestimating the number of tuples that match the predicate "(reactions.contest_id = 17469) AND (user_id IS NOT NULL)". If Postgres uses that predicate first, it then has to sort the values to implement the ORDER BY. You say it matches 1000 tuples; if PostgreSQL thinks it matches 100000, maybe it concludes that scanning in order using the updated_on index will be cheaper. Another factor could be your configuration: if work_mem is set low, it may believe that sorting is more expensive than it really is.
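Both guesses are easy to check from a psql session: compare the planner's row estimate against the actual row count, and see whether more sort memory changes the plan. A sketch, assuming the table and columns from the question:

```sql
-- Compare the planner's estimated rows against the actual rows
-- reported for each node in the plan.
EXPLAIN ANALYZE
SELECT * FROM reactions
WHERE contest_id = 17469 AND user_id IS NOT NULL
ORDER BY updated_on DESC LIMIT 5;

-- Then raise work_mem for this session only and re-run the
-- EXPLAIN ANALYZE above to see whether the chosen plan changes.
SET work_mem = '64MB';
```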
You will need to show the EXPLAIN ANALYZE output of a slow query, so that we can understand why it is choosing an index scan on updated_on.
2) Even when it is not indexed, it sometimes takes a while to complete, but you don't know why, because when you run it by hand it works fine.
Use the auto_explain contrib module, new in 8.4. It lets you log the EXPLAIN ANALYZE output of queries that take too long. Just logging the query text puts you in exactly the situation you are in now: every time you run the query yourself, it's fast.
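For a single session, auto_explain can be loaded and configured on the fly; a minimal sketch (for server-wide logging you would instead add it to shared_preload_libraries in postgresql.conf):

```sql
-- Load the module into this session only.
LOAD 'auto_explain';

-- Log the plan of any statement slower than one second...
SET auto_explain.log_min_duration = '1s';

-- ...and include actual timings, as EXPLAIN ANALYZE would.
SET auto_explain.log_analyze = on;
```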
If the identical query takes milliseconds under EXPLAIN ANALYZE and three seconds in the logs (i.e. I assume it only sometimes takes 3 seconds, not every call takes that long), then it almost certainly means it is a locking problem. Possible causes:
- CLUSTER / VACUUM FULL in cron job
- saturated network
- saturated IO
Check iostat, vmstat, iptraf, and so on.
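On the database side, lock waits can be caught in the act from pg_stat_activity. A sketch for modern PostgreSQL — pg_blocking_pids() needs 9.6 or later, so on the 8.4-era server discussed here you would join pg_locks by hand instead:

```sql
-- Show sessions that are currently blocked, and who is blocking them.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```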