I have a data structure that looks like this:

Model Place
    primary key "id"

    foreign key "parent" -> Place
    foreign key "neighbor" -> Place (symmetric)
    foreign key "belongtos" -> Place (asymmetric)

    a bunch of scalar fields ...
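In SQL terms, the model might look roughly like this (the table and column names below are my guesses from the description; the question only names `place_belongtos` explicitly):

```sql
-- Hypothetical DDL sketch of the model described above
CREATE TABLE place (
    id     bigint PRIMARY KEY,
    parent bigint REFERENCES place (id)
    -- ... plus the scalar fields ...
);

-- Self-referential link tables for "neighbor" and "belongtos";
-- each will hold ~50 million rows
CREATE TABLE place_belongtos (
    from_place_id bigint NOT NULL REFERENCES place (id),
    to_place_id   bigint NOT NULL REFERENCES place (id)
);
```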

I have over 5 million rows in the model table, and I need to put ~50 million rows into each of the two foreign key tables. I have SQL files that look like this:

INSERT INTO place_belongtos (from_place_id, to_place_id) VALUES (123, 456);

and they are about 7 GB each. The problem is, when I do psql < belongtos.sql, it takes about 12 hours to import ~4 million rows on my AMD Turion64x2 CPU. The OS is Gentoo ~amd64, PostgreSQL is version 8.4, compiled locally. The data dir is a bind mount, located on my second extended partition (ext4), which I believe is not the bottleneck.

I suspect it takes so long to insert the foreign key relations because psql checks the key constraints for every row, which probably adds some unnecessary overhead, as I know for sure that the data is valid. Is there a way to speed up the import, i.e. temporarily disabling the constraint check?

  1. Make sure both foreign key constraints are DEFERRABLE
  2. Use COPY to load your data
  3. If you can't use COPY, use a prepared statement for the INSERT.
  4. Proper configuration settings will also help; check the WAL settings.
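For example, assuming the `place_belongtos` table from the question and a CSV file of plain `123,456` lines (both the file path and the constraint name below are made up), a bulk load could look like this. Since you say the data is known to be valid, the fastest route is usually to drop the foreign key constraints entirely, COPY the data in, and re-add the constraints afterwards, so the check runs once in a single pass instead of once per row:

```sql
BEGIN;

-- Drop the per-row FK check for the duration of the load
-- (constraint name is assumed; check \d place_belongtos for the real one)
ALTER TABLE place_belongtos
    DROP CONSTRAINT place_belongtos_from_place_id_fkey;

-- One COPY instead of ~50 million single-row INSERTs
COPY place_belongtos (from_place_id, to_place_id)
    FROM '/path/to/belongtos.csv' WITH CSV;

-- Re-adding the constraint validates all rows in one pass
ALTER TABLE place_belongtos
    ADD CONSTRAINT place_belongtos_from_place_id_fkey
    FOREIGN KEY (from_place_id) REFERENCES place (id);

COMMIT;
```

If you'd rather keep the constraints in place, the gentler variant is `SET CONSTRAINTS ALL DEFERRED;` at the start of the transaction (which requires the constraints to be DEFERRABLE); that postpones the checks to COMMIT but still runs them row by row. On the configuration side, raising `checkpoint_segments` and `maintenance_work_mem` in postgresql.conf (8.4-era parameter names) helps both the COPY and the constraint re-validation.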

The answer is yes... Depesz wrote a blog post here on deferrable uniqueness. Unfortunately it seems to be a 9.0 feature.

Hmm... maybe it doesn't apply to your situation? It seems we've been able to set constraints to deferred for a while... I'm guessing that unique is a special case (pun intended).
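To make the distinction concrete: foreign key constraints have accepted DEFERRABLE since well before 9.0, while UNIQUE and PRIMARY KEY constraints only gained it in PostgreSQL 9.0. A sketch, again with assumed constraint and column names:

```sql
-- Works on 8.4: a DEFERRABLE foreign key, deferred per transaction
ALTER TABLE place_belongtos
    ADD CONSTRAINT place_belongtos_to_place_id_fkey
    FOREIGN KEY (to_place_id) REFERENCES place (id)
    DEFERRABLE INITIALLY IMMEDIATE;

-- Only works from 9.0 on: a deferrable uniqueness check
-- ("some_code" is a hypothetical scalar field)
ALTER TABLE place
    ADD CONSTRAINT place_some_code_key UNIQUE (some_code)
    DEFERRABLE INITIALLY DEFERRED;
```

So for this question — deferring the *foreign key* checks — the 8.4 server is not a blocker; the 9.0 limitation only bites for deferrable unique constraints.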