Can you retrieve past queries run in Postgres, and is it possible to get the time each query took? I am currently trying to identify slow queries in the application I am working on.
I am using Postgres 8.3.5.
There is no query history in the database itself. If you are using psql, you can use `\s` to view your command history there.
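For example, inside psql (the filename below is just an illustration):

```
\s              -- print the command history to the screen
\s history.txt  -- or save it to a file
```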
You can get future queries and other types of operations into the log files by setting log_statement in the postgresql.conf file. What you probably want instead is log_min_duration_statement, which, if you set it to 0, will log all queries and their durations in the logs. That can be helpful once your application goes live; if you set it to some higher value you'll only see the long-running queries, which can be helpful for optimization (you can run EXPLAIN ANALYZE on the queries you find there to figure out why they're slow).
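A minimal postgresql.conf sketch of the settings mentioned above (the threshold values are just illustrative):

```
# postgresql.conf
log_min_duration_statement = 0      # 0 = log every statement with its duration
#log_min_duration_statement = 500   # or: only log statements slower than 500 ms
log_statement = 'none'              # 'all' logs every statement, but without timing
```

Reload the server after editing (e.g. `pg_ctl reload`) for the change to take effect.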
Another handy thing to know in this area is that if you run psql and tell it `\timing`, it will show how long every statement after that takes. So if you have a sql file that looks like this:
\timing
select 1;
You can run it with the right flags and see each statement interleaved with how long it took. Here's how, and what the result looks like:
$ psql -ef test.sql
Timing is on.
select 1;
 ?column?
----------
        1
(1 row)

Time: 1.196 ms
This is handy because you don't need to be the database superuser to use it, unlike changing the config file, and it's easier to use if you're developing new code and want to try it out.
If you want to identify slow queries, then the way to do it is to use the log_min_duration_statement setting (in postgresql.conf, or set per-database with ALTER DATABASE ... SET).
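The per-database variant looks like this (the database name is hypothetical, and 250 ms is just an example threshold):

```
ALTER DATABASE mydb SET log_min_duration_statement = 250;
```

Unlike the postgresql.conf approach, this only affects new connections to that one database.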
Once you have logged the data, you can then use grep or some specialized tools, like pgFouine or my own analyzer, which lacks proper documentation but nevertheless works very well.
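A minimal grep sketch of what that looks like. The log lines below mimic the format that log_min_duration_statement produces; the log path and statements are made up for the example:

```shell
# Create a tiny sample log (stand-in for your real Postgres log file)
LOG=/tmp/pg-example.log
cat > "$LOG" <<'EOF'
LOG:  duration: 1532.837 ms  statement: SELECT * FROM big_table
LOG:  duration: 12.004 ms  statement: SELECT 1
EOF

# Pull out the duration lines, slowest first (key 3 is the numeric duration)
grep 'duration:' "$LOG" | sort -k3 -rn
```

For anything beyond a quick look, the specialized tools give you aggregated per-query statistics that a grep pipeline won't.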