When building a transactional system with a highly normalized DB, running reporting-style queries, as well as queries to display data in the UI, can involve several joins, which in a data-heavy scenario can (and in most cases does) hurt performance. Joins are costly.

Frequently, the guidance espoused is that you should not run these queries off your transactional DB model; instead, you should use a denormalized, flattened model that is tailored to specific UI views or reports, which removes the need for many joins. Data duplication isn't a problem in this scenario.

This idea makes sense, but what I rarely see when experts make these claims is exactly how to implement it. For instance (and to be honest, I'd appreciate a good example using any platform), in a mid-sized system running on a SQL Server back end, you have a normalized transactional model. You also have some reports and a website that need queries. So you create a "reporting" database that flattens out the normalized data. How do you keep this synchronized? Transaction log shipping? If so, how do you transform the data to fit the reporting model?
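For what it's worth, the transform step the question asks about can be as simple as a join query whose result is materialized into a wide table. A minimal sketch, using SQLite in place of SQL Server and an invented order/customer schema (none of these names come from the question):

```python
import sqlite3

# SQLite stands in for SQL Server; schema and names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Normalized transactional model.
    CREATE TABLE Customers (CustomerId INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Orders (
        OrderId INTEGER PRIMARY KEY,
        CustomerId INTEGER REFERENCES Customers,
        OrderDate TEXT);
    CREATE TABLE OrderLines (
        OrderId INTEGER REFERENCES Orders,
        Product TEXT, Qty INTEGER, Price REAL);

    INSERT INTO Customers VALUES (1, 'Acme');
    INSERT INTO Orders VALUES (10, 1, '2024-01-15');
    INSERT INTO OrderLines VALUES (10, 'Widget', 3, 2.50);
""")

# The "transform": one join query, materialized into a flat reporting
# table so report queries need no joins at all.
conn.executescript("""
    CREATE TABLE rpt_OrderLines AS
    SELECT o.OrderId, o.OrderDate, c.Name AS CustomerName,
           l.Product, l.Qty, l.Price, l.Qty * l.Price AS LineTotal
    FROM OrderLines l
    JOIN Orders    o ON o.OrderId = l.OrderId
    JOIN Customers c ON c.CustomerId = o.CustomerId;
""")

row = conn.execute(
    "SELECT CustomerName, Product, LineTotal FROM rpt_OrderLines").fetchone()
print(row)  # ('Acme', 'Widget', 7.5)
```

Rerunning that materialization on a schedule (or feeding it from replicated tables, as the answers below suggest) is one way to keep the flat copy current.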

In our shop, we set up continuous transactional replication from the OLTP system to another DB server used for reporting. You don't want to use log shipping for this purpose because it takes an exclusive lock on the database each time it restores a log, which would stop your users from running reports.

With the optimizer in SQL Server today, I think the idea that joins on a normalized database are "too expensive" for reporting is a little outdated. Our design is fully third normal form, with millions of rows in our main tables, and we have no problems running any of our reports. That said, if push comes to shove, you could look at creating some indexed views on your reporting server to help.
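To illustrate why indexed joins on a normalized model stay cheap: with an index on the join key, the optimizer turns the join into index seeks rather than table scans. A toy demonstration using SQLite's query plan output in place of SQL Server's (the schema and index name are invented for the example):

```python
import sqlite3

# Hypothetical normalized tables; the point is the index on the FK.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (CustomerId INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Orders (OrderId INTEGER PRIMARY KEY,
                         CustomerId INTEGER, OrderDate TEXT);
    -- Index the foreign key the join will use.
    CREATE INDEX IX_Orders_CustomerId ON Orders(CustomerId);
""")

plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT c.Name, o.OrderDate
    FROM Customers c
    JOIN Orders o ON o.CustomerId = c.CustomerId
    WHERE c.CustomerId = 1
""").fetchall()

for step in plan:
    # Each step reports SEARCH (seek) rather than SCAN, e.g.
    # "SEARCH o USING INDEX IX_Orders_CustomerId (CustomerId=?)"
    print(step[-1])
```

SQL Server's plans look different, of course, but the principle carries over: a seek per joined row is far from the "joins are too costly" worst case.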

We use transactional replication to a different database.

We filter the data so we only get the data we need in our replication database.

We only select the columns we want, so the tables are 'smaller'.

Then we combine the data in the replication database, either via views or by building triggers that copy data from one table to another.
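The trigger variant of that last step can be sketched as follows, again with SQLite standing in for SQL Server and invented table names: an AFTER INSERT trigger on the replicated table does the join once, up front, and appends the wide row to the reporting table.

```python
import sqlite3

# Hypothetical replicated tables plus a flattened target kept in sync
# by a trigger; SQLite syntax, but the pattern translates to T-SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (CustomerId INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Orders (OrderId INTEGER PRIMARY KEY,
                         CustomerId INTEGER, OrderDate TEXT);

    -- Flattened table the reports query directly.
    CREATE TABLE rpt_Orders (OrderId INTEGER, CustomerName TEXT,
                             OrderDate TEXT);

    -- On each insert into Orders, look up the customer once and
    -- append the denormalized row to the reporting table.
    CREATE TRIGGER trg_Orders_flatten AFTER INSERT ON Orders
    BEGIN
        INSERT INTO rpt_Orders (OrderId, CustomerName, OrderDate)
        SELECT NEW.OrderId, c.Name, NEW.OrderDate
        FROM Customers c
        WHERE c.CustomerId = NEW.CustomerId;
    END;

    INSERT INTO Customers VALUES (1, 'Acme');
    INSERT INTO Orders VALUES (10, 1, '2024-01-15');
""")

print(conn.execute("SELECT * FROM rpt_Orders").fetchall())
# [(10, 'Acme', '2024-01-15')]
```

A real version would also need UPDATE/DELETE triggers to keep the flat table honest; this only shows the insert path.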