How much do you rely on database transactions?
Do you prefer small or large transaction scopes?
Do you prefer client-side transaction handling (e.g. TransactionScope in .NET) over server-side transactions, or vice versa?
How about nested transactions?
Do you have any tips and tricks related to transactions?
Any gotchas you've encountered dealing with transactions?
All kinds of answers are welcome.
I always wrap a transaction in a using statement.
using (IDbTransaction transaction = connection.BeginTransaction())
When the transaction goes out of scope, it is disposed. If the transaction is still active, it is rolled back. This behaviour fail-safes you from accidentally locking up the database. Even if an unhandled exception is thrown, the transaction will still roll back.
In my code I actually omit explicit rollbacks and rely on the using statement to do the work for me. I only explicitly perform commits.
I have found this pattern has drastically reduced record locking issues.
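A minimal sketch of this pattern, assuming an already-open IDbConnection; the method, table, and parameter names are illustrative, not from the original answer:

```csharp
using System.Data;

public static class AccountRepository
{
    // Hypothetical write operation showing the commit-or-implicitly-rollback pattern.
    public static void UpdateBalance(IDbConnection connection, int accountId, decimal delta)
    {
        using (IDbTransaction transaction = connection.BeginTransaction())
        {
            IDbCommand command = connection.CreateCommand();
            command.Transaction = transaction;
            command.CommandText =
                "UPDATE Accounts SET Balance = Balance + @delta WHERE Id = @id";

            IDbDataParameter deltaParam = command.CreateParameter();
            deltaParam.ParameterName = "@delta";
            deltaParam.Value = delta;
            command.Parameters.Add(deltaParam);

            IDbDataParameter idParam = command.CreateParameter();
            idParam.ParameterName = "@id";
            idParam.Value = accountId;
            command.Parameters.Add(idParam);

            command.ExecuteNonQuery();

            // The only explicit transaction call: commit on success.
            transaction.Commit();
        }
        // If an exception is thrown before Commit, Dispose (via using)
        // rolls the transaction back automatically -- no explicit Rollback needed.
    }
}
```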
Personally, developing a high-traffic, performance-driven website, I avoid database transactions whenever possible. Obviously they are sometimes necessary, so I use an ORM and page-level object variables to minimize the number of server-side calls I have to make.
Nested transactions are an awesome way to minimize your calls; I steer in that direction whenever I can, as long as they are quick queries that won't cause locking. NHibernate is a godsend in these cases.
I use transactions on every write operation to the database.
So there are quite a few small "transactions" wrapped in a bigger transaction, and essentially there is an outstanding transaction count in the nesting code. If there are any outstanding children when you end the parent, it is all rolled back.
I prefer client-side transaction handling where available. If you're relegated to doing sprocs or other server-side logical units of work, server-side transactions are fine.
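Client-side handling with the TransactionScope the question mentions can be sketched as follows; DebitAccount and CreditAccount are hypothetical stand-ins for real data-access calls:

```csharp
using System;
using System.Transactions;

public static class TransferService
{
    // Hypothetical per-operation methods; in real code each would open its own
    // connection, which enlists in the ambient transaction automatically.
    static void DebitAccount(int id, decimal amount) { /* data access here */ }
    static void CreditAccount(int id, decimal amount) { /* data access here */ }

    // Client-side demarcation: both operations commit or roll back together.
    public static void Transfer(int fromId, int toId, decimal amount)
    {
        using (var scope = new TransactionScope())
        {
            DebitAccount(fromId, amount);
            CreditAccount(toId, amount);

            // Without Complete(), disposing the scope rolls everything back.
            scope.Complete();
        }
    }
}
```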
Wow! Plenty of questions!
Until a year ago I relied 100% on transactions. Now it's only 98%. In special cases of high-traffic websites (as Sara mentioned) and highly partitioned data, which would force the use of distributed transactions, a transactionless architecture can be adopted. The trade-off is that you now have to code referential integrity in the application.
Also, I like to manage transactions declaratively using annotations (I'm a Java guy) and aspects. This is a very clean way to define transaction boundaries, and it includes transaction propagation functionality.
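In practice this usually means something like Spring's @Transactional plus AOP; the underlying mechanism can be sketched without any framework by defining a local annotation and using a JDK dynamic proxy as a stand-in for the aspect (all names here are illustrative):

```java
import java.lang.annotation.*;
import java.lang.reflect.*;

// Marks methods that should run inside a transaction (stand-in for a framework annotation).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Transactional {}

// Toy transaction manager so the example is self-contained.
class TxManager {
    static String lastOutcome = "none";
    static void begin()    { lastOutcome = "begun"; }
    static void commit()   { lastOutcome = "committed"; }
    static void rollback() { lastOutcome = "rolled back"; }
}

interface AccountService {
    @Transactional
    void transfer(boolean fail);
}

public class DeclarativeTxDemo {
    // Wraps a target in a proxy that begins/commits/rolls back around @Transactional methods.
    static AccountService wrap(AccountService target) {
        return (AccountService) Proxy.newProxyInstance(
            AccountService.class.getClassLoader(),
            new Class<?>[] { AccountService.class },
            (proxy, method, args) -> {
                if (method.isAnnotationPresent(Transactional.class)) {
                    TxManager.begin();
                    try {
                        Object result = method.invoke(target, args);
                        TxManager.commit();
                        return result;
                    } catch (InvocationTargetException e) {
                        TxManager.rollback();
                        throw e.getCause();
                    }
                }
                return method.invoke(target, args);
            });
    }

    public static void main(String[] mainArgs) {
        AccountService service = wrap(fail -> {
            if (fail) throw new RuntimeException("boom");
        });
        service.transfer(false);
        System.out.println(TxManager.lastOutcome); // committed
        try { service.transfer(true); } catch (RuntimeException e) { /* expected */ }
        System.out.println(TxManager.lastOutcome); // rolled back
    }
}
```

The business code never mentions begin/commit/rollback; the boundary lives entirely in the annotation plus the interceptor, which is the cleanliness the answer is describing.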
Just as an FYI... nested transactions can be dangerous. They increase the chances of getting a deadlock. So, though they are good and necessary, how they are implemented matters in higher-volume situations.
Here is an interesting link on nesting T-SQL transactions: http://aleemkhan.wordpress.com/2006/07/21/t-sql-error-handling-pattern-for-nested-transactions-and-saved-methods/
As Sara Chipps said, transactions are overkill for high-traffic applications. So we should avoid them as much as possible. In other words, we use a BASE architecture rather than ACID. eBay is a typical case: distributed transactions are not used at all in the eBay architecture. But to get eventual consistency, you have to do some tricks on your own.
Server-side transactions, 35,000 transactions per second, SQL Server: 10 lessons from 35K tps
We only use server-side transactions:
- can start later and finish sooner
- not distributed
- can do work before and after
- SET XACT_ABORT ON means immediate rollback on error
- client/OS/driver agnostic
- we nest calls but use @@TRANCOUNT to detect already-started TXNs
- each DB call is always atomic
We deal with millions of INSERT rows daily (some batched via staging tables), full OLTP, no problems. Not 35k tps, though.
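The @@TRANCOUNT nesting pattern from the list above can be sketched like this (procedure and table names are made up for illustration):

```sql
CREATE PROCEDURE dbo.SaveOrder @OrderId int AS
BEGIN
    SET XACT_ABORT ON;  -- any runtime error rolls back the whole transaction immediately
    SET NOCOUNT ON;

    -- Only open a new transaction if the caller has not already started one.
    DECLARE @StartedTran bit = 0;
    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRANSACTION;
        SET @StartedTran = 1;
    END

    UPDATE dbo.Orders SET Processed = 1 WHERE OrderId = @OrderId;
    -- ... more work, possibly calling other procs that use the same pattern ...

    -- Only commit if this call opened the transaction; otherwise the
    -- outermost caller owns the commit/rollback decision.
    IF @StartedTran = 1
        COMMIT TRANSACTION;
END
```

This avoids the trap of nested COMMITs that merely decrement @@TRANCOUNT: only the outermost caller ever really commits, and each DB call stays atomic.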