It isn't entirely obvious to me what transactions in database systems do. I know they can be used to roll back a list of updates completely (e.g. subtract money from one account and add it to another), but is that all they do? Specifically, can they be used to prevent race conditions? For instance:
// Java/JPA example
em.getTransaction().begin();
User u = em.find(User.class, 123);
u.credits += 10;
em.persist(u);
em.getTransaction().commit();
(I realize this could probably be written as a single update query, but that isn't always the case.)
Is this code protected from race conditions?
I am mostly thinking of MySQL 5 + InnoDB, but general answers are welcome too.
The database tier supports atomicity of transactions to varying degrees, known as isolation levels. Check the documentation of your database management system for the isolation levels it supports and their trade-offs. The strongest isolation level, Serializable, requires transactions to execute as if they were performed one by one. This is implemented by taking exclusive locks in the database. It can cause deadlocks, which the DBMS detects and resolves by rolling back some of the transactions involved. This approach is often referred to as pessimistic locking.
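As a concrete sketch of the pessimistic approach (my addition, not part of the original answer), this is roughly how you would request the Serializable level through plain JDBC. The table and column names mirror the question; obtaining the `Connection` (driver, URL, credentials) is omitted:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SerializableTransfer {
    // Runs the credit update under SERIALIZABLE isolation.
    // The caller supplies an open Connection; driver setup is not shown.
    static void addCredits(Connection conn, int userId, int amount) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "update user set credits = credits + ? where id = ?")) {
            ps.setInt(1, amount);
            ps.setInt(2, userId);
            ps.executeUpdate();
            conn.commit();
        } catch (SQLException e) {
            // e.g. the DBMS chose this transaction as a deadlock victim
            conn.rollback();
            throw e;
        }
    }
}
```

Note that with JPA you would normally set the isolation level on the underlying data source or connection pool rather than per statement.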
Many object-relational mappers (including JPA providers) also support optimistic locking, where update conflicts are not prevented in the database but detected in the application tier, which then rolls back the transaction. With optimistic locking enabled, a typical execution of your example code would emit the following SQL queries:
select id, version, credits from user where id = 123;
Let's say this returns (123, 13, 100).
update user set version = 14, credits = 110 where id = 123 and version = 13;
The database tells us how many rows were updated. If it was one, there was no conflicting update. If it was zero, a conflicting update occurred, and the JPA provider will roll back the transaction and throw the appropriate exception so that application code can handle the failed transaction, for example by retrying.
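To make the version check concrete, here is a small self-contained sketch (plain Java, no database) of the logic behind those two queries. The row contents and version numbers are taken from the example above; the class and method names are my own invention:

```java
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticLockDemo {
    // Immutable snapshot of the row (id, version, credits), as read by the SELECT.
    record UserRow(int id, int version, int credits) {}

    // Stands in for the database row `where id = 123`.
    static final AtomicReference<UserRow> stored =
            new AtomicReference<>(new UserRow(123, 13, 100));

    // Mimics "update ... where id = 123 and version = ?":
    // returns the number of rows updated (1 on success, 0 on a conflict).
    static int conditionalUpdate(int expectedVersion, int newCredits) {
        UserRow current = stored.get();
        if (current.version() != expectedVersion) {
            return 0; // someone else bumped the version in the meantime
        }
        UserRow updated = new UserRow(current.id(), expectedVersion + 1, newCredits);
        return stored.compareAndSet(current, updated) ? 1 : 0;
    }

    public static void main(String[] args) {
        UserRow read = stored.get();  // the SELECT: (123, 13, 100)
        int rows = conditionalUpdate(read.version(), read.credits() + 10);
        System.out.println(rows == 1 ? "committed" : "conflict, retry");

        // A second update based on the stale version 13 now affects zero rows:
        int stale = conditionalUpdate(13, 999);
        System.out.println(stale == 0 ? "conflict detected" : "unexpected");
    }
}
```

The first update succeeds and bumps the version to 14; the second, still carrying version 13, matches zero rows, which is exactly the signal the JPA provider turns into an optimistic-lock exception.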
Summary: With either approach, your statement can be made safe from race conditions.
It depends on the isolation level. Serializable will prevent the race condition, since in that isolation level transactions are generally processed in sequence, not in parallel (or at least exclusive locking is used, so transactions that modify the same rows are executed in sequence). To avoid the race condition at lower isolation levels, it is better to lock the record manually (MySQL, for example, supports the 'select ... for update' statement, which acquires a write lock on the selected records).
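A minimal JDBC sketch of that manual-locking approach, assuming the same `user` table as in the question (connection setup omitted; the class and method names are hypothetical):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SelectForUpdateDemo {
    // Reads the row while taking an exclusive (write) lock, then updates it.
    // Other transactions touching the same row block until we commit.
    static void addCreditsLocked(Connection conn, int userId, int amount) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement sel = conn.prepareStatement(
                "select credits from user where id = ? for update")) {
            sel.setInt(1, userId);
            try (ResultSet rs = sel.executeQuery()) {
                if (!rs.next()) {
                    conn.rollback(); // no such user; nothing to lock
                    return;
                }
                int credits = rs.getInt("credits");
                try (PreparedStatement upd = conn.prepareStatement(
                        "update user set credits = ? where id = ?")) {
                    upd.setInt(1, credits + amount);
                    upd.setInt(2, userId);
                    upd.executeUpdate();
                }
            }
            conn.commit(); // releases the row lock
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```

Because the lock is held from the SELECT to the COMMIT, no other transaction can read-modify-write the same row in between, which closes the race in the original JPA snippet.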
It depends on the specific RDBMS. Generally, transactions acquire locks as decided during query evaluation planning. Some may request table-level locks, others column-level, others record-level; the last is preferred for performance. The short answer to your question is yes.
In other words, a transaction is meant to group several queries and represent them as an atomic operation. If the operation fails, the changes are rolled back. I don't know exactly what the adapter you are using does, but if it conforms to this definition of transactions you should be fine.
While this guarantees protection against race conditions, it doesn't explicitly prevent starvation or deadlocks. The transaction lock manager takes care of that. Table locks are sometimes used, but they come at a hefty price: reducing the number of concurrent operations.