We're currently devising a new strategy for our in-house applications. We have about 10-15 applications that all access exactly the same database directly. This is clearly not good, and we want to evaluate our options. As far as I can tell, we have two options:

  • Duplicate the database and use replication etc. to keep the copies synchronized
  • Create a new application layer between the database and the 10-15 applications on top.
  • Other?

I'd greatly like to hear your opinions on this. I believe the second option is the way to go, since it also gives us an effective layer in which to implement caching. But how would you expose this layer to the applications? Are web services/REST the way to go, or are there other, possibly better ways to do this?

I believe the buzzword answer to this is "Service Oriented Architecture", which is indeed option 2.

It's a large-scale undertaking, with lots of exciting dead ends to explore, but essentially, instead of thinking about databases and tables, think about the services those 15 applications need. Some may be shared, some may be specific to one application. Find a way of exposing those services to the applications, but remember that a web service call can be considerably slower than the equivalent direct database call, so don't take "service oriented architecture" to mean you have to introduce web services everywhere: it's more of a mindset than a product spec.
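To make the mindset concrete, here is a minimal sketch of the idea: applications depend on a service interface rather than on the database schema. All the names here (`CustomerService`, `Customer`, the dict-backed storage) are invented for illustration; in a real system the storage would be your shared database and the service might or might not sit behind a web service boundary.

```python
from dataclasses import dataclass


@dataclass
class Customer:
    """What the applications see: a business object, not a table row."""
    id: int
    name: str


class CustomerService:
    """A shared service that hides whatever storage backs it."""

    def __init__(self, storage):
        # Stand-in for the real database; could later become a cache,
        # a different schema, or a remote call without clients changing.
        self._storage = storage

    def get_customer(self, customer_id: int) -> Customer:
        row = self._storage[customer_id]
        return Customer(id=customer_id, name=row["name"])


# An application talks to the service, never to the tables:
service = CustomerService({1: {"name": "Alice"}})
print(service.get_customer(1).name)  # Alice
```

The point is the dependency direction: fifteen applications depending on one interface is manageable; fifteen applications depending on one schema is not.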

Replication and synchronisation, in my experience, create very brittle systems, with failure modes that hurt your head to even think about.

Other: well, you don't really say what the specific problem is that you are trying to solve. If it's performance, the cheapest way of solving performance problems is throwing hardware at them. If it's manageability, an SOA can help with that, but it often also introduces additional infrastructure into the mix which must itself be maintained. Make sure you're really clear about what's driving your architecture choices, because there is no single "best" solution: it's all about trade-offs.

I fail to see why 10-15 applications must share one database. Do all the applications really need to use all the tables?

Start by moving all tables that are unique to each application into a separate database.

Once that is done, it should be quite easy to see whether there will be any problems with multiple applications accessing the same database. Normally it doesn't matter, unless the applications start to cache data (since you can never know whether another application has updated data that you have cached). The most common approach to that is to use some kind of messaging.
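The messaging idea above can be sketched as follows. This is an in-process stand-in for a real message broker, purely to show the pattern: a writer publishes a change notification, and any application that caches that key drops its stale entry. All names (`Bus`, `CachingApp`) are invented.

```python
class Bus:
    """Toy message bus: a real system would use a broker or pub/sub."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        self._subscribers.append(handler)

    def publish(self, key):
        for handler in self._subscribers:
            handler(key)


class CachingApp:
    """An application that caches reads and invalidates on messages."""

    def __init__(self, bus, db):
        self._cache = {}
        self._db = db
        bus.subscribe(self._invalidate)

    def read(self, key):
        if key not in self._cache:
            self._cache[key] = self._db[key]  # cache miss: hit the DB
        return self._cache[key]

    def _invalidate(self, key):
        self._cache.pop(key, None)  # drop the stale entry, if cached


db = {"price": 10}
bus = Bus()
app_a = CachingApp(bus, db)
app_a.read("price")      # app_a now caches 10

# Another application updates the row and announces the change:
db["price"] = 12
bus.publish("price")     # app_a drops its stale entry

print(app_a.read("price"))  # 12
```

Without the `publish` call, `app_a` would keep serving the stale value 10 forever, which is exactly the failure mode described above.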

Examining the options you propose:

Duplicate the database and use replication etc. to keep the copies synchronized

If all of the applications need to access the same database, I find duplicating it a bad decision, since you would face synchronization issues (stale data, etc.).

Create a new application layer between the database and the 10-15 applications on top.

This is indeed a good possibility. If you don't want each of these applications to depend on the implementation details of the database (for instance, a change to a single table would otherwise affect the code of many of them), you can "hide" the database behind an application that offers business-meaningful operations to those "client applications".
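A rough sketch of what "business-meaningful operations" might look like, under invented names (`OrderFacade`, `place_order`): clients call operations, never SQL, so a later schema change stays confined to the façade.

```python
class OrderFacade:
    """Façade exposing business operations instead of tables.

    The internal representation (here a plain dict) could be one table
    today and two tomorrow; the client applications never see it.
    """

    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def place_order(self, customer: str, amount: float) -> int:
        """Business operation: record an order, return its id."""
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = {"customer": customer, "amount": amount}
        return order_id

    def order_total(self, customer: str) -> float:
        """Business operation: total ordered by one customer."""
        return sum(
            order["amount"]
            for order in self._orders.values()
            if order["customer"] == customer
        )


facade = OrderFacade()
facade.place_order("acme", 100.0)
facade.place_order("acme", 50.0)
print(facade.order_total("acme"))  # 150.0
```

If the order storage were split across tables later, only `OrderFacade` would change; the 10-15 client applications would keep calling `place_order` and `order_total` unchanged.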

If you're just facing performance issues, I'd recommend clustering the database (rather than duplicating it manually).

An unmentioned option is to put part of the logic in stored procedures / triggers etc. Not recommended, but an idea nonetheless.

I'd say that your second option is the best choice here. If you're on the .NET platform, WCF is a really simple and effective way to go.