Hello,

I'm writing a database application that does lots of inserts and updates with a "fake" serializable isolation level (snapshot isolation).

To avoid lots of network round trips, I am batching the inserts and updates into a single transaction using PreparedStatements. They should fail very rarely, because the inserts are pre-checked and nearly conflict-free with respect to other transactions, so rollbacks don't happen often.
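Roughly what that batching looks like in JDBC (a minimal sketch; the table "cards" and its columns are made-up placeholders):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class BatchInsert {
        // Minimal sketch: many inserts queued client-side and sent as one batch
        // inside a single transaction. Table/column names are placeholders.
        static void insertBatch(Connection conn, List<int[]> rows) throws SQLException {
            conn.setAutoCommit(false);  // one transaction for the whole batch
            String sql = "INSERT INTO cards (id, value) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int[] row : rows) {
                    ps.setInt(1, row[0]);
                    ps.setInt(2, row[1]);
                    ps.addBatch();      // queue locally instead of one round trip per row
                }
                ps.executeBatch();      // send everything at once
                conn.commit();          // single commit for the whole batch
            } catch (SQLException e) {
                conn.rollback();        // should be rare, since inserts are pre-checked
                throw e;
            }
        }
    }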

Large transactions should also be good for the WAL, since it can flush big chunks at once instead of having to flush for many small transactions.

1.) I can only see benefits of large transactions, yet I frequently read that they are bad. Why would they be bad in my use case?

2.) Is the conflict check very expensive when the local snapshot is merged into the real database? The database would have to compare the write sets of all potentially conflicting (parallel) transactions. Or does it use some fast shortcut? Or is it quite cheap anyway?

[EDIT] It would be interesting if someone could bring some clarity into how a snapshot isolation database checks whether transactions that overlap on the timeline have disjoint write sets, because that is what the "fake" serializable isolation level is all about.
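For reference, this is roughly how I handle a conflict reported at commit time (a JDBC sketch, assuming PostgreSQL-style snapshot isolation where write-write conflicts surface as serialization failures with SQLState 40001; the retry helper itself is made up):

    import java.sql.Connection;
    import java.sql.SQLException;

    public class RetryOnConflict {
        // Hypothetical helper: the transaction body is retried when the database
        // reports that an overlapping transaction wrote the same rows.
        interface TxBody { void run(Connection conn) throws SQLException; }

        static void runWithRetry(Connection conn, TxBody body) throws SQLException {
            while (true) {
                try {
                    conn.setAutoCommit(false);
                    body.run(conn);
                    conn.commit();
                    return;                                 // no conflicting writer
                } catch (SQLException e) {
                    conn.rollback();
                    if (!"40001".equals(e.getSQLState())) { // 40001 = serialization_failure
                        throw e;                            // real error, don't retry
                    }
                    // write set overlapped with a parallel transaction: retry the batch
                }
            }
        }
    }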

The real issues here are twofold. The first possible issue is bloat: large transactions can produce a lot of dead tuples all at once. The other possible issue comes from long-running transactions: as long as a long-running transaction is running, the tables it touches can't be vacuumed, so they can accumulate lots of dead tuples too.

I'd say just use check_postgresql.pl to check for bloat issues. As long as you aren't seeing a lot of table bloat after your long transactions, you're OK.

1) The manual says it's good: http://www.postgresql.org/docs/current/interactive/populate.html

I can also recommend: use COPY, remove indexes (but test it), increase maintenance_work_mem, increase checkpoint_segments, and run ANALYZE (or VACUUM ANALYZE) afterwards.

I would not recommend, unless you are sure: removing foreign key constraints, or disabling WAL archival and streaming replication.
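If you drive COPY from Java, the PostgreSQL JDBC driver exposes it through CopyManager; a rough sketch (the target table cards(id, value) is just a placeholder):

    import java.io.IOException;
    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.SQLException;
    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    public class CopyLoad {
        // Sketch: bulk-load rows with COPY instead of many INSERT statements.
        // The table "cards (id, value)" is a placeholder.
        static long bulkLoad(Connection conn, String csv) throws SQLException, IOException {
            CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
            // e.g. csv = "1,10\n2,20\n"; returns the number of rows loaded
            return copy.copyIn("COPY cards (id, value) FROM STDIN WITH (FORMAT csv)",
                               new StringReader(csv));
        }
    }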

2) Data are always merged on commit; there are no extra checks at that point, the data are simply written. Read again: http://www.postgresql.org/docs/current/interactive/transaction-iso.html

If your inserts/updates don't depend on other inserts/updates, you don't need a "wholly consistent view". You can use READ COMMITTED and the transaction will never fail.
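In JDBC terms that's just (minimal sketch):

    import java.sql.Connection;
    import java.sql.SQLException;

    public class IsolationChoice {
        // If the inserts/updates are independent of each other, READ COMMITTED
        // is enough and the transaction cannot fail with a serialization error.
        static void useReadCommitted(Connection conn) throws SQLException {
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        }
    }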