I'm looking for a way to find the row count for all my tables in Postgres. I know I can do it one table at a time with a

select count(*) from table_name;

but I'd like to see the row count for all the tables, and then order by that count, to get an idea of how big all my tables are.

There are three ways to get this sort of count, each with its own tradeoffs.

If you want a true count, you have to execute a SELECT statement like the one you used against each table. This is because PostgreSQL keeps row visibility information in the row itself, not anywhere else, so any accurate count can only be relative to some transaction. You're getting a count of what that transaction sees at the point in time when it executes. You could automate this to run against every table in the database, but you probably don't need that level of accuracy or want to wait that long.
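
If you do want to automate the exact counts, here's a sketch of one way to run count(*) against every base table in the public schema in a single statement. It leans on query_to_xml() to execute a dynamically built count per table, so it assumes a server version where the SQL/XML functions are available:

SELECT table_name,
       (xpath('/row/cnt/text()',
              query_to_xml('SELECT count(*) AS cnt FROM public.' || quote_ident(table_name),
                           false, true, '')))[1]::text::bigint AS exact_count
  FROM information_schema.tables
 WHERE table_schema = 'public' AND table_type = 'BASE TABLE'
 ORDER BY exact_count DESC;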

The second approach notes that the statistics collector tracks roughly how many rows are "live" (not deleted or obsoleted by later updates) at any time. This value can be off by a bit under heavy activity, but is generally a good estimate:

SELECT schemaname,relname,n_live_tup 
  FROM pg_stat_user_tables 
  ORDER BY n_live_tup DESC;

That view can also show you how many rows are dead, which is itself an interesting number to monitor.
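
For example, a minimal variation that pulls the dead-tuple counter from the same view:

SELECT schemaname,relname,n_live_tup,n_dead_tup 
  FROM pg_stat_user_tables 
  ORDER BY n_dead_tup DESC;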

The third way is to note that the system ANALYZE command, which the autovacuum process runs regularly as of PostgreSQL 8.3 to update table statistics, also computes a row estimate. You can grab that one like this:

SELECT 
  nspname AS schemaname,relname,reltuples
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE 
  nspname NOT IN ('pg_catalog', 'information_schema') AND
  relkind='r' 
ORDER BY reltuples DESC;
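
If those estimates look stale, running ANALYZE refreshes reltuples. A quick sketch (my_table is a placeholder name):

ANALYZE;          -- update statistics for every table in the current database
ANALYZE my_table; -- or just the one table you care about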

Which of these queries is better to use is hard to say. Normally I make that decision based on whether there's more useful information I also want to use inside pg_class or inside pg_stat_user_tables. For basic counting purposes, just to see how big things are in general, either should be accurate enough.
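
If you want to compare the two estimates side by side, a sketch like this joins the statistics view back to pg_class through its relid column:

SELECT s.schemaname, s.relname, s.n_live_tup, c.reltuples::bigint AS analyze_estimate
  FROM pg_stat_user_tables s
  JOIN pg_class c ON c.oid = s.relid
  ORDER BY s.n_live_tup DESC;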

If you don't mind potentially stale data, you can access the same statistics used by the query optimizer.

Something like:

SELECT relname, n_tup_ins - n_tup_del as rowcount FROM pg_stat_all_tables;

I don't remember the URL where I found this, but I hope it helps:

CREATE TYPE table_count AS (table_name TEXT, num_rows INTEGER); 

CREATE OR REPLACE FUNCTION count_em_all () RETURNS SETOF table_count AS $$
DECLARE 
    the_count RECORD; 
    t_name RECORD; 
    r table_count%ROWTYPE; 

BEGIN
    -- loop over every ordinary table in the public schema
    FOR t_name IN 
        SELECT 
            c.relname
        FROM
            pg_catalog.pg_class c LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE 
            c.relkind = 'r'
            AND n.nspname = 'public' 
        ORDER BY 1 
        LOOP
            -- run an exact count against the current table; the empty loop
            -- simply captures the single result row into the_count
            FOR the_count IN EXECUTE 'SELECT COUNT(*) AS "count" FROM ' || quote_ident(t_name.relname) 
            LOOP 
            END LOOP; 

            r.table_name := t_name.relname; 
            r.num_rows := the_count.count; 
            RETURN NEXT r; 
        END LOOP; 
        RETURN; 
END;
$$ LANGUAGE plpgsql; 

Executing select count_em_all(); should get you the row count for all your tables.

Not sure if an answer in bash is acceptable to you, but FWIW...

PGCOMMAND=" psql -h localhost -U fred -d mydb -At -c \"
            SELECT   table_name
            FROM     information_schema.tables
            WHERE    table_type='BASE TABLE'
            AND      table_schema='public'
            \""
TABLENAMES=$(export PGPASSWORD=test; eval "$PGCOMMAND")

# run an exact count against each table in turn
for TABLENAME in $TABLENAMES; do
    PGCOMMAND=" psql -h localhost -U fred -d mydb -At -c \"
                SELECT   '$TABLENAME',
                         count(*) 
                FROM     $TABLENAME
                \""
    eval "$PGCOMMAND"
done
select count_em_all(); didn't work for me, BUT

select * from count_em_all() did.

Regards!