I've got a Dictionary<int, string>, cached for 25 minutes, which holds ~120 ID/Title pairs for a given reference table. I iterate over this collection when populating dropdown lists, and I'm confident this really is faster than querying the DB for the full list every time.
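A minimal sketch of that kind of time-limited cache (the loader delegate and the 25-minute TTL are taken from the question; the class and member names are my own invention):

```csharp
using System;
using System.Collections.Generic;

public class CachedReferenceTable
{
    private readonly Func<Dictionary<int, string>> _loadFromDb; // stand-in for the real DB query
    private readonly TimeSpan _ttl;
    private Dictionary<int, string> _data;
    private DateTime _expiresAt = DateTime.MinValue;

    public CachedReferenceTable(Func<Dictionary<int, string>> loadFromDb, TimeSpan ttl)
    {
        _loadFromDb = loadFromDb;
        _ttl = ttl;
    }

    // Returns the cached ID/Title pairs, reloading from the DB once the TTL elapses.
    public Dictionary<int, string> Get()
    {
        if (DateTime.UtcNow >= _expiresAt)
        {
            _data = _loadFromDb();
            _expiresAt = DateTime.UtcNow + _ttl;
        }
        return _data;
    }
}
```

In a real app you would more likely use `MemoryCache` or `IMemoryCache` rather than rolling your own, but the shape is the same.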
My real question is whether it makes sense to use this cached dictionary when displaying records that have a foreign key into the reference table.
Say this cached reference table is an EmployeeType table. If I query and display a list of employee names and types, should I query for EmployeeName and EmployeeTypeID and use my cached dictionary to look up each EmployeeTypeID's title as the records are displayed, or is it faster to simply have the DB fetch the EmployeeName and JOIN to get the EmployeeType string, skipping the cached Dictionary altogether?
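For concreteness, the two alternatives look roughly like this (the table and column names are the ones from the question; `EmployeeRow` and `Title` are assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record EmployeeRow(string EmployeeName, int EmployeeTypeID);

public static class Display
{
    // Option A: query only the FK (SELECT EmployeeName, EmployeeTypeID FROM Employee)
    // and resolve each type's title from the cached dictionary.
    public static IEnumerable<string> WithCachedLookup(
        IEnumerable<EmployeeRow> rows, IReadOnlyDictionary<int, string> types)
        => rows.Select(r =>
            $"{r.EmployeeName}: {(types.TryGetValue(r.EmployeeTypeID, out var t) ? t : "(unknown)")}");

    // Option B: skip the dictionary and let the database resolve it with a JOIN:
    //   SELECT e.EmployeeName, t.Title
    //   FROM Employee e
    //   JOIN EmployeeType t ON t.EmployeeTypeID = e.EmployeeTypeID;
}
```

Note that option A should use `TryGetValue` rather than indexing, so a stale or missing ID doesn't throw.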
I know both work, but I'm interested in which will perform the quickest. Thanks for any help.
I know you won't like the answer, but common sense dictates that you do the simplest thing first, and only put effort into remedying it if it turns out not to be fast enough.
Let me explain. As a matter of fact, if you cache it, it will probably be faster, since you wouldn't be hitting the database every time you load the page. But the gain may not be noticeable for what you're doing (i.e. you may have another bottleneck that makes that gain minor), defeating the purpose of caching in the first place.
The only way to know, again, is to do it the simplest way (no caching) and, only if you're not happy with the performance, go the extra mile.
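If you do want numbers before deciding between the two paths, a quick `Stopwatch` measurement of each is enough; a sketch (the helper name and iteration count are arbitrary):

```csharp
using System;
using System.Diagnostics;

public static class Timing
{
    // Runs an action repeatedly and returns the average milliseconds per call.
    public static double MeasureMs(Action action, int iterations = 1000)
    {
        action(); // one warm-up call so JIT compilation isn't counted
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) action();
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds / iterations;
    }
}
```

Time the dictionary-lookup version and the JOIN version against realistic data, and let the measurements settle the argument.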
Optimisation 101 says don't do it unless you have to: http://stackoverflow.com/questions/2473666/tips-for-optimizing-c-net-programs/2473875#2473875
But yes, if this is an entirely static lookup for the lifetime of the application AND it occupies very little RAM, then caching it seems fairly harmless, and a Dictionary lookup from RAM will be faster than a trip to the database.
For the second part, you may as well let the database perform the join; it will probably have that table in RAM already, and the increased network payload should be small.
However, if you don't have to do it, don't do it! The danger here is that you do this one, then another, then another; the code grows increasingly complex, and RAM fills up with things you think you'll need but which are actually used rarely, leaving less room for the OS/ORM/DB to do its work. Let the compiler, ORM and database decide what to keep in memory instead - they have a bigger team focused on optimisation!