I am currently in the process of creating a repository layer for a project that will be DB intensive (performance tests have been completed and caching is required, hence why I am asking).

The way I have it set up now is that every object is cached individually. If I want to perform a query for these objects, I pass the query to the database and it returns only the IDs needed. (For some simple queries I cache and manage the IDs myself.)

I then hit the cache with these IDs and pull the objects out; any missing objects are bundled into a "where in" statement and fired at the database, at which point I repopulate the cache with the results for the missing IDs. A sketch of this is shown below.
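
To make the shape of this concrete, here is a minimal C# sketch of the strategy described above. The `ICache` and `IDatabase` interfaces, the key format, and the method names are hypothetical stand-ins for illustration, not any particular library's API:

```csharp
using System.Collections.Generic;

// Hypothetical abstractions, used for illustration only.
public interface ICache
{
    T Get<T>(string key) where T : class;
    void Set(string key, object value);
}

public interface IDatabase
{
    IList<int> QueryIds(string query);
    IList<Customer> GetCustomersWhereIdIn(IList<int> ids);
}

public class Customer
{
    public int Id { get; set; }
}

public class CustomerRepository
{
    private readonly ICache _cache;
    private readonly IDatabase _db;

    public CustomerRepository(ICache cache, IDatabase db)
    {
        _cache = cache;
        _db = db;
    }

    public IList<Customer> GetByQuery(string query)
    {
        // The query itself only returns the matching IDs.
        IList<int> ids = _db.QueryIds(query);

        var results = new List<Customer>();
        var missing = new List<int>();

        // Try the cache first for each ID.
        foreach (int id in ids)
        {
            var cached = _cache.Get<Customer>("customer:" + id);
            if (cached != null)
                results.Add(cached);
            else
                missing.Add(id);
        }

        // Batch the misses into one "WHERE IN" query, then
        // repopulate the cache with whatever comes back.
        if (missing.Count > 0)
        {
            foreach (var customer in _db.GetCustomersWhereIdIn(missing))
            {
                _cache.Set("customer:" + customer.Id, customer);
                results.Add(customer);
            }
        }

        return results;
    }
}
```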

The queries themselves are mostly about paging / ordering the data.

Is this an appropriate strategy? Or are there better techniques out there?

Thanks, Tony

This is a reasonable approach; I have gone this route before, and it works well for straightforward caching.

However, when you are updating or writing to the database you will run into some interesting problems, and you should handle these situations carefully.

For instance, your cached data will become stale when a user updates a record in the database. In that scenario you will either have to update the in-memory cache at the same time or purge the cache entry so that it can be refreshed on the next fetch query.
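
Continuing the hypothetical repository above, one way to handle this is to do the invalidation in the same code path that performs the write. `SaveCustomer` and `Remove` are assumed additions to the illustrative interfaces, not real API calls:

```csharp
// Sketch: keep the cache consistent on writes, either by
// overwriting the entry in place or by purging it so the
// next read repopulates it from the database.
public void UpdateCustomer(Customer customer)
{
    _db.SaveCustomer(customer); // hypothetical write method

    // Option 1: write-through - overwrite the cached copy now.
    _cache.Set("customer:" + customer.Id, customer);

    // Option 2: invalidate - remove the entry instead and let
    // the next fetch repopulate it from the database.
    // _cache.Remove("customer:" + customer.Id);
}
```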

Things can also get tricky if, for instance, the user updates a customer's email address that lives in a separate table but is connected via a foreign key.

Besides database caching you should also consider output caching. This works very well if, for instance, you have a table that shows sales data for the previous month. The table might be stored in a separate file that gets included in many other pages that want to show it. If you cache the file with the sales data table, then when those other pages request it, the caching engine can fetch it straight from disk and the business logic layer does not even get hit. This is not applicable all the time, but it is quite useful for custom controls.
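
You did not say which platform you are on, but if this were ASP.NET MVC, for example, output caching on the action that renders the sales table might look roughly like this. The controller, view name, duration, and the `LoadLastMonthsSales` helper are all illustrative:

```csharp
using System.Web.Mvc;

// Illustrative ASP.NET MVC sketch: cache the rendered sales
// table for an hour so repeated requests are served from the
// output cache and never hit the business logic layer.
public class ReportsController : Controller
{
    [OutputCache(Duration = 3600, VaryByParam = "none")]
    public ActionResult MonthlySalesTable()
    {
        object salesData = LoadLastMonthsSales(); // hypothetical helper
        return PartialView("_MonthlySalesTable", salesData);
    }

    private object LoadLastMonthsSales()
    {
        // ... fetch and aggregate last month's sales (omitted) ...
        return new object();
    }
}
```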

Unit of Work Pattern

It also helps to know about the Unit of Work pattern.

When you are pulling data in and out of a database, you need to keep track of what you have changed; otherwise, that data will not be written back to the database. Similarly, you have to insert new objects you create and remove any objects you delete.

You can change the database with each change to your object model, but this can lead to lots of very small database calls, which ends up being very slow. Furthermore, it requires you to have a transaction open for the whole interaction, which is impractical if you have a business transaction that spans multiple requests. The situation is even worse if you need to keep track of the objects you have read so you can avoid inconsistent reads.

A Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you are done, it figures out everything that needs to be done to alter the database as a result of your work.
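
A minimal sketch of the idea, reusing the hypothetical `IDatabase` from above; the `BeginTransaction`, `Insert`, `Update`, and `Delete` calls are assumed additions to that illustrative interface:

```csharp
using System.Collections.Generic;

// Minimal Unit of Work sketch: register new, dirty, and removed
// objects during a business transaction, then commit all of the
// changes in one batch inside a single database transaction.
public class UnitOfWork
{
    private readonly IDatabase _db;
    private readonly List<object> _new = new List<object>();
    private readonly List<object> _dirty = new List<object>();
    private readonly List<object> _removed = new List<object>();

    public UnitOfWork(IDatabase db) { _db = db; }

    public void RegisterNew(object entity)     { _new.Add(entity); }
    public void RegisterDirty(object entity)   { _dirty.Add(entity); }
    public void RegisterRemoved(object entity) { _removed.Add(entity); }

    // Figures out everything that needs to be written to the
    // database as a result of the work done so far, then clears
    // the tracking lists for the next business transaction.
    public void Commit()
    {
        using (var tx = _db.BeginTransaction()) // hypothetical
        {
            foreach (var e in _new)     _db.Insert(e);
            foreach (var e in _dirty)   _db.Update(e);
            foreach (var e in _removed) _db.Delete(e);
            tx.Commit();
        }
        _new.Clear();
        _dirty.Clear();
        _removed.Clear();
    }
}
```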

If you are using SQL Server, you can use SqlCacheDependency, where your cache will be automatically repopulated when the data table changes in the database. Here is the link for SqlCacheDependency.
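
A rough sketch of the usage is below. The database entry name "MyDatabase" and table name "Customers" are placeholders; the polling model also assumes a <sqlCacheDependency> section has been configured in web.config and the table has been enabled for notifications (e.g. via aspnet_regsql):

```csharp
using System.Web;
using System.Web.Caching;

// SqlCacheDependency sketch (polling model). Assumes web.config
// defines a <sqlCacheDependency> database entry named "MyDatabase"
// and the Customers table has been enabled for notifications.
public static class CustomerCacheHelper
{
    public static void CacheCustomerList(HttpContext context, object customers)
    {
        var dependency = new SqlCacheDependency("MyDatabase", "Customers");

        // The entry is evicted automatically when the Customers table
        // changes, so the next fetch repopulates the cache from the DB.
        context.Cache.Insert("customers:all", customers, dependency);
    }
}
```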

This link contains a similar cache dependency solution. (It's for a file rather than a DB; you will have to make changes based on the MSDN link above to have a cache dependency on the DB.)

Hope this helps :)

I do not advise a custom caching strategy. Caching is hard. Depending on your platform of choice, you may want to pick a third-party caching library/tool.