I'm working on an online file management project. We store the references in the database (SQL Server) and the file data on the file system.

We're facing a coordination problem between the file system and the database when we upload files, and also when we delete them. Either we create the reference in the database first, or we store the file on the file system first.

Currently we create the reference in the database first and then store the file on the file system. If any error occurs while storing the file, the reference for that file exists in the database but no file data exists on the file system.

Please suggest some solution for how to handle such a situation. I badly need one.

The same situation can also arise when we delete files.

Access to the file system is indeed not transactional. You will have to simulate an all-or-nothing distributed transaction yourself: if the commit in the database fails, delete the file on the file system. Conversely, if writing the file fails, roll back the database transaction (that one will be a little more difficult, but that's the rough sketch).
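A minimal sketch of that compensation pattern in Java/JDBC, assuming a FileRef table with name and path columns and an already-open Connection (these names are my own, not from the question):

    import java.nio.file.*;
    import java.sql.*;

    public class FileUploader {

        // Write the reference and the file as one logical unit: the database
        // transaction is only committed once the file has been written, and
        // the file is removed again if the commit fails.
        public void upload(Connection conn, Path target, byte[] data, String name) throws Exception {
            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO FileRef (name, path) VALUES (?, ?)")) {
                ps.setString(1, name);
                ps.setString(2, target.toString());
                ps.executeUpdate();

                // Write the file only after the INSERT succeeded; if this
                // throws, the catch block rolls the INSERT back.
                Files.write(target, data);

                conn.commit();
            } catch (Exception e) {
                conn.rollback();                 // undo the database reference
                Files.deleteIfExists(target);    // undo the file write, if any
                throw e;
            } finally {
                conn.setAutoCommit(true);
            }
        }
    }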

Note that it may get pretty complicated when a file is updated. You first need to copy it, so that if the database transaction fails after you have overwritten the file you can still restore the old version. Whether you want to do this depends on the level of robustness required.
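A rough sketch of that update case, under the same assumptions as above: the current file is copied to a backup before being overwritten, and the backup is restored if the database transaction fails.

    import java.nio.file.*;
    import java.sql.*;

    public class FileUpdater {

        // Overwrite an existing file, keeping a backup so the old version can
        // be restored if the database update does not go through.
        public void update(Connection conn, Path target, byte[] newData, long fileId) throws Exception {
            Path backup = target.resolveSibling(target.getFileName() + ".bak");
            Files.copy(target, backup, StandardCopyOption.REPLACE_EXISTING);

            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                     conn.prepareStatement("UPDATE FileRef SET modified = ? WHERE id = ?")) {
                ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
                ps.setLong(2, fileId);
                ps.executeUpdate();

                Files.write(target, newData);    // overwrite only after the UPDATE succeeded
                conn.commit();

                Files.deleteIfExists(backup);    // all good, backup no longer needed
            } catch (Exception e) {
                conn.rollback();
                // put the old version back
                Files.copy(backup, target, StandardCopyOption.REPLACE_EXISTING);
                throw e;
            } finally {
                conn.setAutoCommit(true);
            }
        }
    }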

Try to enforce that all manipulations go through the application (create, write, delete of files). If you can't do that and you cannot prevent files from being accessed (and perhaps deleted) directly on the file system, I see no other way than to periodically synchronize the database with the file system: check which files were removed and delete the corresponding entries in the database. You could create a job that runs every X minutes for that, roughly as sketched below.
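Such a synchronization job could look roughly like this (again assuming the hypothetical FileRef table with id and path columns); it deletes references whose files no longer exist:

    import java.nio.file.*;
    import java.sql.*;
    import java.util.*;

    public class SyncJob {

        // Run periodically (e.g. every X minutes from a scheduler): remove
        // database references whose file is gone from the file system.
        public void removeOrphanedReferences(Connection conn) throws SQLException {
            List<Long> orphaned = new ArrayList<>();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, path FROM FileRef")) {
                while (rs.next()) {
                    if (!Files.exists(Paths.get(rs.getString("path")))) {
                        orphaned.add(rs.getLong("id"));
                    }
                }
            }
            try (PreparedStatement del = conn.prepareStatement("DELETE FROM FileRef WHERE id = ?")) {
                for (long id : orphaned) {
                    del.setLong(1, id);
                    del.executeUpdate();
                }
            }
        }
    }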

I'd also suggest storing a hash (e.g. MD5) of the file in the database. It takes a bit of time to compute, but it has been hugely helpful to me for identifying problems, e.g. when a file was renamed on the file system by mistake but not in the database. It also lets you run an integrity check periodically, to verify nothing was corrupted.
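Computing the MD5 with the standard MessageDigest API is simple enough; a small sketch for hashing a file at upload time and re-checking it against the value stored in the database later:

    import java.nio.file.*;
    import java.security.MessageDigest;

    public class FileHasher {

        // Compute the MD5 hash of a file so it can be stored in the database
        // alongside the reference and re-checked later.
        public static String md5Of(Path file) throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(Files.readAllBytes(file));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        // Periodic integrity check: compare the stored hash with the current one.
        public static boolean matchesStoredHash(Path file, String storedMd5) throws Exception {
            return md5Of(file).equalsIgnoreCase(storedMd5);
        }
    }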

If this approach isn't sufficient (e.g. you want it to be more robust), there is no other way than to keep the binary in the database as a LOB. Then it will be truly transactional and safe.
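If you go that way, inserting the content as a LOB over JDBC is a one-statement affair (e.g. into a VARBINARY(MAX) column on SQL Server; the FileContent table and its columns are assumptions of this sketch):

    import java.io.InputStream;
    import java.nio.file.*;
    import java.sql.*;

    public class LobStore {

        // Store the file content itself in the database, so inserting the
        // reference and the data is a single atomic operation.
        public void store(Connection conn, String name, Path source) throws Exception {
            try (InputStream in = Files.newInputStream(source);
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO FileContent (name, data) VALUES (?, ?)")) {
                ps.setString(1, name);
                ps.setBinaryStream(2, in, Files.size(source));
                ps.executeUpdate();
            }
        }
    }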

Treat the two operations (managing the reference and managing the file) as a single transaction. If either one fails, back the other one out. Then you should find it hard to get into a situation where the two aren't synchronized. It's easier to roll back database operations than file system operations.
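Applied to the delete case the question asks about, that could look like the following sketch (same assumed FileRef table as in the earlier examples): delete the reference inside a transaction, attempt the file delete, and commit only if the file is actually gone.

    import java.nio.file.*;
    import java.sql.*;

    public class FileRemover {

        // Delete the database reference and the file as one logical unit; the
        // database delete is rolled back if the file cannot be removed.
        public void remove(Connection conn, long fileId, Path file) throws Exception {
            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                     conn.prepareStatement("DELETE FROM FileRef WHERE id = ?")) {
                ps.setLong(1, fileId);
                ps.executeUpdate();

                Files.deleteIfExists(file);   // if this throws, the catch rolls back
                conn.commit();
            } catch (Exception e) {
                conn.rollback();              // the reference stays until the file is really gone
                throw e;
            } finally {
                conn.setAutoCommit(true);
            }
        }
    }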

An old question, I know, but for the benefit of other visitors:

Depending on your operating system, you might be able to use Transactional NTFS (TxF):

http://msdn.microsoft.com/en-us/magazine/cc163388.aspx