I am working on a credit card application that's designed to create items (like shipping insurance policies) when PayPal Instant Payment Notifications are received. Unfortunately, PayPal sometimes sends duplicate notifications. In addition, there's another third party performing web-service updates concurrently once they get updates from PayPal too.

Here is a basic outline of the database tables involved:

// table "package"
// columns packageID, policyID, other data...
// table "insurancepolicy"
// columns policyID, coverageAmount, other data...

Here is a basic outline of what I want to do:

using (SqlConnection conn = new SqlConnection(...))
{
  conn.Open();
  SqlTransaction sqlTrans = conn.BeginTransaction(IsolationLevel.RepeatableRead);

  // Calls a stored procedure that checks if the foreign key in the package table has a value.
  if (PackageDB.HasInsurancePolicy(packageID, conn, sqlTrans))
  {
    sqlTrans.Rollback();
    return false;
  }

  // Insert row in foreign table.
  int policyID = InsurancePolicyDB.Insert(coverageAmount, conn, sqlTrans);
  if (policyID <= 0)
  {
    sqlTrans.Rollback();
    return false;
  }

  // Assign foreign key to parent table.  If this fails, roll back everything.
  bool assigned = PackageDB.AssignPolicyID(packageID, policyID, conn, sqlTrans);
  if (!assigned)
  {
    sqlTrans.Rollback();
    return false;
  }

  sqlTrans.Commit();
  return true;
}

If there are two (or more) threads (or processes or applications) executing this simultaneously, I want the first thread to lock the "package" row while it has no policyID, before the policy is created and the policyID is assigned to the package table. The lock would then be released after the policyID is assigned to the package table. My hope is that the other thread calling this same code will pause when it reads the package row to check whether it already has a policyID. Once the first transaction's lock is released, my hope is that the second transaction will see that the policyID is there and therefore return without inserting any rows into the policy table.

Note: Because of the CRUD database design, each of the stored procedures involved does either a Read (select), Create (insert), or Update.

Is this the best use of the RepeatableRead transaction isolation level?


I believe you actually want the Serializable isolation level. The problem is that two threads can get past the HasInsurancePolicy check (though I have no idea what InsurancePolicyDB.Insert would do in that case, or why it might return 0).
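To illustrate why the read-then-write pattern is still troublesome at higher isolation levels, here is a sketch of one possible interleaving (illustrative only, not runnable as-is):

```sql
-- Both connections: SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

-- Thread A                               -- Thread B
BEGIN TRAN                                BEGIN TRAN
SELECT policyID FROM package              SELECT policyID FROM package
  WHERE packageID = @id  -- S lock          WHERE packageID = @id  -- S lock too
-- sees NULL, proceeds                    -- also sees NULL, proceeds
UPDATE package SET policyID = @p1         UPDATE package SET policyID = @p2
  WHERE packageID = @id                     WHERE packageID = @id
-- blocks on B's shared lock              -- blocks on A's shared lock: deadlock
```

One of the two is chosen as a deadlock victim rather than waiting politely and then seeing the other's policyID. A common hint-based fix is to do the initial read WITH (UPDLOCK), so the first reader serializes the second at the SELECT.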

There are several other options here too. One is to use a message queue and process these requests serially yourself. Another is to use sp_getapplock and lock on some key unique to that particular package. That way you don't lock any more rows or tables than you have to.
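A sketch of the sp_getapplock approach; the resource-name convention and timeout here are illustrative assumptions, not anything from the question:

```sql
BEGIN TRAN

-- Acquire an application lock scoped to this package.  Other callers
-- using the same resource name block until this transaction ends.
DECLARE @result int
EXEC @result = sp_getapplock
    @Resource    = 'package_policy_' + @package_id_str,  -- one lock per package
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 5000          -- ms to wait before giving up

IF @result < 0      -- 0 or 1 means the lock was granted
BEGIN
    ROLLBACK TRAN
    RETURN
END

-- Safe to check for an existing policy, insert, and assign here;
-- no other transaction holding this applock can interleave.

COMMIT TRAN         -- a transaction-owned applock is released on commit/rollback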

It would be safer and cleaner if the insert into Policy simply hit a uniqueness constraint on an attempt to insert a duplicate. Raising the isolation level can lower concurrency and lead to other nasty issues like deadlocks.
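For the constraint to catch the duplicate, the Policy table would need to know which package it belongs to; that is a schema change from the question, where only Package points at Policy. A sketch under that assumption:

```sql
-- Assumes insurancepolicy gains a packageID column (not in the
-- question's schema) so "one policy per package" is enforceable.
ALTER TABLE insurancepolicy
    ADD packageID int NULL
        CONSTRAINT UQ_policy_package UNIQUE;

-- A second concurrent insert for the same package now fails with
-- error 2627 (unique constraint violation), which the caller can
-- catch and treat as "policy already exists" instead of a real error.
```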

Another way is to always insert the Policy row, then roll it back if the Package is already attached to a policy:

begin tran  -- (read committed)

/* tentatively insert new Policy */
insert Policy
set @policy_id = scope_identity()   -- assuming Policy has an identity key

/* attach Package to Policy if it's still free */
update Package
  set Package.policy_id = @policy_id
  where Package.package_id = @package_id and Package.policy_id is null

if @@rowcount > 0
  commit tran    -- this thread won the race; keep the new Policy row
else
  rollback tran  -- another thread attached a policy first; undo the insert

This works best when conflicts are rare, which appears to be your situation.

I agree with the "message queue" idea in aaronjensen's answer. If you're worried about multiple concurrent threads trying to update the same row of data simultaneously, you should instead have the threads insert their data into a work queue, which is then processed sequentially by a single thread. This significantly reduces contention on the database, because the target table is updated by only one thread instead of "N", and the work-queue operations are limited to inserts by the messaging threads, and a read/update by the single processing thread.
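A minimal sketch of such a work queue in T-SQL; the table and column names are illustrative, not from the question:

```sql
-- Messaging threads (IPN handlers, the third party's updates) only insert here.
CREATE TABLE PolicyWorkQueue (
    queue_id        int IDENTITY PRIMARY KEY,
    packageID       int   NOT NULL,
    coverageAmount  money NOT NULL,
    processed       bit   NOT NULL DEFAULT 0
);

-- The single worker thread claims the oldest unprocessed item.
-- UPDLOCK + READPAST let a claim skip rows another claim has locked,
-- which also keeps this safe if you later run more than one worker.
;WITH next AS (
    SELECT TOP (1) *
    FROM PolicyWorkQueue WITH (UPDLOCK, READPAST)
    WHERE processed = 0
    ORDER BY queue_id
)
UPDATE next
SET processed = 1
OUTPUT deleted.packageID, deleted.coverageAmount;
```

The worker then creates the policy and assigns the policyID for the row it claimed; duplicates from PayPal simply become two queue items, and the second one finds the package already has a policy.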