I have a utility in my application where I have to perform a bulk load of INSERT, UPDATE & DELETE operations. I'm trying to wrap this in a transaction to ensure that once this method is invoked and the data is handed to it, either all of it or none of it ends up in the database.

The concern I have is: what are the boundary conditions here? How many INSERT, UPDATE & DELETE statements can one have in a single transaction? Is the transaction size configurable?

Any help would be appreciated.

-Thanks

I don't think there is a maximum amount of work that can be performed inside a transaction. Data just gets written to the table files, and eventually the transaction either commits or rolls back; AIUI that outcome gets recorded in pg_clog, and if it rolls back, the space will eventually be reclaimed by vacuum. So it's not as if the ongoing transaction's work is held in memory and flushed at commit time, for example.
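To make the all-or-nothing behaviour concrete, here is a minimal sketch in Python with psycopg2, assuming a hypothetical `places` table and connection string (neither is from the original post): the whole bulk load runs in one transaction, so a commit makes every change visible at once and a rollback discards them all.

```python
import psycopg2

def bulk_load(rows):
    # Connection string and table layout are placeholders.
    conn = psycopg2.connect("dbname=mydb")
    try:
        with conn.cursor() as cur:
            # psycopg2 starts a transaction implicitly on the first statement;
            # nothing below is visible to other sessions until commit().
            cur.executemany(
                "INSERT INTO places (id, name) VALUES (%s, %s)", rows)
            cur.execute("DELETE FROM places WHERE name IS NULL")
        conn.commit()      # all of it lands in the database...
    except Exception:
        conn.rollback()    # ...or none of it does
        raise
    finally:
        conn.close()
```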

A single transaction can run roughly two billion commands in it (2^31, minus IIRC a small bit of overhead. Actually, come to think of it, that might be 2^32, since the command counter is unsigned, I believe).

Each of those commands can modify multiple rows, of course.
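In other words, the limit is on the number of commands, not rows: a single statement that touches a million rows only advances the command counter once. A rough illustration (Python/psycopg2; the table `t` is hypothetical):

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb")
with conn.cursor() as cur:
    # One command, one million rows: the command counter ticks once.
    cur.execute("INSERT INTO t (n) SELECT generate_series(1, 1000000)")

    # One million commands, one row each: the counter ticks a million times.
    # It is this style of loop (shown only to illustrate the point) that
    # could eventually approach the per-transaction command limit.
    for n in range(1_000_000):
        cur.execute("INSERT INTO t (n) VALUES (%s)", (n,))
conn.commit()
conn.close()
```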

For a project I work on, I perform 20 million INSERTs. I tried both one big transaction and one transaction per million INSERTs, and the performance seems to be identical.
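For reference, a batched version along those lines might look like this (Python/psycopg2 sketch; the million-row batch size, the `places` table and the column list are assumptions):

```python
import psycopg2

BATCH_SIZE = 1_000_000

def load_in_batches(rows):
    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    for i, row in enumerate(rows, start=1):
        cur.execute("INSERT INTO places (id, name) VALUES (%s, %s)", row)
        if i % BATCH_SIZE == 0:
            conn.commit()   # one transaction per million INSERTs
    conn.commit()           # commit whatever is left in the final batch
    conn.close()
```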

PostgreSQL 8.3

I believe all of this is limited by your log size. The database will never allow itself to get into a state where it cannot roll back, so if you use up all of your log space during the transaction, it will halt until you provide more space or roll back. This is generally true for all databases.

I would suggest chunking your updates into workable portions that take at most a few minutes of execution time each; that way you find out earlier if there is a problem (e.g. what usually takes one minute is still running after ten minutes... err, did someone drop an index?).
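Something along these lines, as a rough sketch (Python/psycopg2; the chunking scheme, the expected one-minute runtime and the `places` UPDATE are all assumptions, not from the thread):

```python
import time
import psycopg2

EXPECTED_SECONDS = 60   # what a chunk "usually takes"

def run_in_chunks(chunks):
    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    for chunk_no, params in enumerate(chunks, start=1):
        started = time.monotonic()
        cur.executemany("UPDATE places SET name = %s WHERE id = %s", params)
        conn.commit()                       # one transaction per chunk
        elapsed = time.monotonic() - started
        if elapsed > 10 * EXPECTED_SECONDS:
            # The usual one-minute chunk took ten times longer than expected:
            # time to go looking for that dropped index.
            print(f"chunk {chunk_no} took {elapsed:.0f}s, "
                  f"expected ~{EXPECTED_SECONDS}s")
    conn.close()
```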