Referring to this question, I have decided to copy the tables each year, creating new tables holding that year's data, something like, for instance:

orders_2008
orders_2009
orders_2010
etc...

Well, I understand that the speed problem could most likely be solved with only two tables per entity, like orders_history and orders_actual, but I figured that once the code that manages them is written, there won't be any difference... just more tables.

Those tables may even have children with foreign keys; for instance, orders_2008 may have the child items_2008:

CREATE TABLE orders_2008 (
    id serial NOT NULL,
    code character(5),
    customer text
);

ALTER TABLE ONLY orders_2008
    ADD CONSTRAINT orders_2008_pkey PRIMARY KEY (id);

CREATE TABLE items_2008 (
    id serial NOT NULL,
    order_id integer,
    item_name text,
    price money
);

ALTER TABLE ONLY items_2008
    ADD CONSTRAINT items_2008_pkey PRIMARY KEY (id);

ALTER TABLE ONLY items_2008
    ADD CONSTRAINT items_2008_order_id_fkey FOREIGN KEY (order_id) REFERENCES orders_2008(id) ON DELETE CASCADE;

So, my question is: in your opinion, what is the best way to replicate those tables every first of January while, of course, keeping the table dependencies?

A PHP/Python script that, query after query, recreates the structure for the new year (called by a cron job)? Can PostgreSQL functions be used in that way? If so, how (a little example would be nice)?
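For illustration, a PostgreSQL function along these lines might look like the following sketch (clone_year is an invented name; format() requires PostgreSQL 9.1+). Note that LIKE ... INCLUDING ALL does not copy foreign keys, so the FK is re-added by hand:

CREATE OR REPLACE FUNCTION clone_year(y integer) RETURNS void AS $$
BEGIN
    -- LIKE ... INCLUDING ALL copies columns, defaults and indexes,
    -- but NOT foreign keys (and the copied serial default still points
    -- at the previous year's sequence unless you change it)
    EXECUTE format('CREATE TABLE orders_%s (LIKE orders_%s INCLUDING ALL)', y, y - 1);
    EXECUTE format('CREATE TABLE items_%s (LIKE items_%s INCLUDING ALL)', y, y - 1);

    -- re-add the foreign key, pointing at the NEW orders table
    EXECUTE format(
        'ALTER TABLE items_%s ADD CONSTRAINT items_%s_order_id_fkey
         FOREIGN KEY (order_id) REFERENCES orders_%s (id) ON DELETE CASCADE',
        y, y, y);
END;
$$ LANGUAGE plpgsql;

-- a cron job could then simply run, e.g.:
--   psql -d yourdb -c "SELECT clone_year(2011)"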

Actually I'm going for the first way (a .sql file containing the structure, and a php/python script run by a cron job that recreates the structure), but I'm wondering if this is the best way.

edit: I've seen PostgreSQL's CREATE TABLE ... LIKE, but the foreign keys have to be added again, otherwise the new tables will keep referencing the old ones.

PostgreSQL has a feature that allows you to create a table that inherits fields from another table. The documentation can be found in their manual. That may simplify your process a bit.
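For illustration only, a minimal sketch of that feature (the parent table orders below is an assumption, not part of the original schema):

CREATE TABLE orders (
    id       serial NOT NULL,
    code     character(5),
    customer text
);

-- each yearly table inherits the parent's columns; note that primary
-- keys, foreign keys and indexes are NOT inherited and must be added
-- on each child
CREATE TABLE orders_2009 () INHERITS (orders);
CREATE TABLE orders_2010 () INHERITS (orders);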

You should look into Partitioning in PostgreSQL. It's the standard way of doing what you want to do. It uses inheritance, as John Downey suggested.
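On PostgreSQL 10 and later there is also declarative partitioning, which does this without hand-written inheritance. A minimal sketch, assuming a date column to partition on (placed_on is invented here, since the original schema has none):

CREATE TABLE orders (
    id        bigserial,
    code      character(5),
    customer  text,
    placed_on date NOT NULL
) PARTITION BY RANGE (placed_on);

-- one partition per year; rows inserted into "orders" are routed automatically
CREATE TABLE orders_2008 PARTITION OF orders
    FOR VALUES FROM ('2008-01-01') TO ('2009-01-01');
CREATE TABLE orders_2009 PARTITION OF orders
    FOR VALUES FROM ('2009-01-01') TO ('2010-01-01');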

Very bad idea.

Take a look at partitioning, and keep your eyes on your real goal:

  • You don't want a table set for each year, since this is not your problem. Plenty of systems work perfectly fine without them :)
  • You want to solve some performance and/or storage space issues.

I'd recommend orders and order_history... just periodically roll the old orders into the history, which becomes a read-only dataset, so you can add an index to cater for every query you need, and it should (if your data structures are half decent) remain performant.
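The roll-over itself can be one atomic statement; a sketch, assuming order_history has the same columns as orders and that there is some date column to filter on (placed_on is invented here; writable CTEs need PostgreSQL 9.1+):

-- move everything older than a year into the history in a single statement
WITH moved AS (
    DELETE FROM orders
    WHERE placed_on < now() - interval '1 year'
    RETURNING *
)
INSERT INTO order_history
SELECT * FROM moved;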

If your history table starts getting "too big", it's probably time to start thinking about data warehousing... which is really wonderful, but it certainly isn't cheap.

As others pointed out in your previous question, this is probably a bad idea. Nevertheless, if you're dead set on doing it this way, why not just create all of the tables in advance (say, 2008-2050)?
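If you do go that route, a DO block can stamp them all out in one pass. A sketch reusing the question's 2008 tables as templates (DO needs PostgreSQL 9.0+ and format() 9.1+; remember that LIKE does not copy the foreign key):

DO $$
DECLARE
    y integer;
BEGIN
    FOR y IN 2009..2050 LOOP  -- the 2008 tables already exist
        EXECUTE format('CREATE TABLE orders_%s (LIKE orders_2008 INCLUDING ALL)', y);
        EXECUTE format('CREATE TABLE items_%s (LIKE items_2008 INCLUDING ALL)', y);
        -- re-add each year's foreign key by hand
        EXECUTE format(
            'ALTER TABLE items_%s ADD CONSTRAINT items_%s_order_id_fkey
             FOREIGN KEY (order_id) REFERENCES orders_%s (id) ON DELETE CASCADE',
            y, y, y);
    END LOOP;
END;
$$;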