Can someone give me a relative idea of whether it works better to hit the database many times for small query results, versus caching a large number of rows and querying those?

For example, say I have a query returning 2,000 results, and then I have additional queries on those results that take maybe 10-20 items each. Would it be better to cache the 2,000 results, or to hit the database each time for every set of 10 or 20 results?

Other answers here are correct -- the RDBMS and your data are important factors. However, another major factor is how long it takes to sort and/or index your data in memory versus in the database. We have one application where, for performance, we added code to grab about 10,000 records into an in-memory DataSet and then do subqueries on that. As it turns out, keeping that data current and selecting out subsets is actually slower than just leaving all the data in the database.
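To make the trade-off concrete, here's a minimal sketch of the two approaches in Python with sqlite3 (the records table, its category column, and the file name are all invented for illustration; we used a .NET DataSet, but the shape of the code is the same):

    import sqlite3

    conn = sqlite3.connect("example.db")  # hypothetical database

    def subsets_via_cache(categories):
        # Approach 1: grab everything once, then "subquery" in memory.
        rows = conn.execute("SELECT id, category, payload FROM records").fetchall()
        cache = {}
        for row in rows:
            cache.setdefault(row[1], []).append(row)  # index by category
        return [cache.get(c, []) for c in categories]

    def subsets_via_db(categories):
        # Approach 2: one small query per subset, letting the database's
        # index on category do the selection.
        return [
            conn.execute(
                "SELECT id, category, payload FROM records WHERE category = ?",
                (c,),
            ).fetchall()
            for c in categories
        ]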

So my advice: do it the simplest possible way first, then profile it and see if you need to optimize for performance.
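For the profiling step, something as simple as timing a few runs of each function from the sketch above is enough to start with:

    import time

    def best_of(fn, *args, repeat=5):
        # Keep the fastest of a few runs; the minimum filters out noise.
        timings = []
        for _ in range(repeat):
            start = time.perf_counter()
            fn(*args)
            timings.append(time.perf_counter() - start)
        return min(timings)

    wanted = ["books", "tools", "toys"]  # made-up subset keys
    print("cache + in-memory subqueries:", best_of(subsets_via_cache, wanted))
    print("one small query per subset:  ", best_of(subsets_via_db, wanted))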

Generally, network round-trip latency is several orders of magnitude greater than the capacity of the database to generate and feed data onto the network, and the capacity of the client box to consume it from the network connection.

But look at the width of your network bus (bits/sec) and compare that to the average round-trip time for a database call...

On 100BaseT Ethernet, for example, you're looking at roughly a 12 MBytes/sec bandwidth rate. If your average round-trip time is, say, 200 ms, your network bus can deliver about 2.4 MBytes during each 200 ms round-trip call...

If you're on gigabit Ethernet, that number jumps to about 24 MBytes per round trip...

So if you split a request for data into two round trips, well, that's 400 ms, and each query would have to return over 2.4 MB (or 24 MB for gigabit) before the split could possibly be faster...
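A quick back-of-the-envelope version of that arithmetic in Python (the 200 ms round trip is just the assumed figure from above):

    # How many bytes can the wire move during one round trip's latency?
    ROUND_TRIP_S = 0.200  # assumed 200 ms per database call

    for name, mbytes_per_s in [("100BaseT", 12), ("gigabit", 120)]:
        breakeven_mb = mbytes_per_s * ROUND_TRIP_S
        print(f"{name}: an extra round trip costs {ROUND_TRIP_S * 1000:.0f} ms,"
              f" the time it takes to transfer {breakeven_mb:.1f} MBytes")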

Unless there's a big performance problem (e.g. a highly latent db connection), I'd stick with leaving the data in the database and letting the db take care of things for you. Many things are done efficiently at the database level, for example:

  • isolation levels (what happens if other transactions update the data you're caching; see the sketch after this list)
  • direct access using indexes (the db may be faster at accessing a few rows than you searching through your cached items, especially if that data is already in the db cache, as in your scenario)
  • updates within your transaction to the cached data (do you want to deal with updating your cached data as well, or do you "refresh" from the db?)
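To illustrate the first and third points: once rows are copied out of the database, nothing keeps them current. A toy Python example with an in-memory sqlite3 table (the table is made up, and a real case would involve a second connection or transaction):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
    db.execute("INSERT INTO records VALUES (1, 'original')")
    db.commit()

    cached = db.execute("SELECT * FROM records").fetchall()  # app-side snapshot

    # meanwhile, another writer changes the row (same connection here for brevity)
    db.execute("UPDATE records SET payload = 'changed' WHERE id = 1")
    db.commit()

    print(cached)                                          # [(1, 'original')] -- stale
    print(db.execute("SELECT * FROM records").fetchall())  # [(1, 'changed')] -- current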

There are lots of potential issues like these that you can run into if you do your own caching. You need a very good performance reason before taking on all that complexity.

So, the quick answer: it depends, but unless you have some good reasons, this smells like premature optimization to me.

This likely differs from RDBMS to RDBMS, but my experience has been that pulling in bulk is almost always better. After all, you're going to have to pull the 2,000 records anyway, so you might as well do it all at once. And 2,000 records isn't really that many, but that depends largely on what you're doing.
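One way to get the bulk behavior without caching anything yourself is to batch the small lookups into a single round trip. A sketch, reusing the invented records table and DB-API connection conn from the earlier examples:

    # Instead of one query per 10-20 item subset, fetch all the wanted
    # subsets in one round trip with an IN clause.
    wanted = ["books", "tools", "toys"]  # made-up subset keys
    placeholders = ",".join("?" * len(wanted))
    rows = conn.execute(
        "SELECT id, category, payload FROM records"
        f" WHERE category IN ({placeholders})",
        wanted,
    ).fetchall()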

My advice is to profile and see what works best. RDBMSes can be tricky beasts performance-wise, and caching can be just as tricky.