The website I'm developing in PHP makes many MySQL queries per page view. Most are small queries against correctly designed indexes, but I don't know whether it would be worthwhile to build a caching script for these pages.

1) Is file I/O generally faster than database queries? Does this depend on the server? Is there a way to test how many of each your server can handle?
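
For what it's worth, one crude way I could answer this on my own hardware is a microbenchmark like the sketch below; the credentials, table name, and file path are placeholders, not my real setup:

    <?php
    // Crude microbenchmark: time N small indexed SELECTs against N small
    // file reads on this server. Credentials, table and path are placeholders.
    $db = new mysqli('localhost', 'user', 'pass', 'mydb');
    $n  = 1000;

    $start = microtime(true);
    for ($i = 0; $i < $n; $i++) {
        $db->query("SELECT filename FROM files WHERE id = 1")->fetch_row();
    }
    printf("DB:   %.0f queries/sec\n", $n / (microtime(true) - $start));

    $start = microtime(true);
    for ($i = 0; $i < $n; $i++) {
        file_get_contents('/tmp/cache/sample.html');
    }
    printf("File: %.0f reads/sec\n", $n / (microtime(true) - $start));

Both loops benefit from OS and database caching after the first iteration, so the numbers are best-case throughput rather than real-world latency.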

2) One of the pages checks the database for a filename, then checks the server to see if that file exists, then decides what to display. I would assume this would benefit from a cached page view?
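
For reference, a minimal sketch of that flow (the query, table, and paths are made-up examples, not my real code):

    <?php
    // Sketch of the flow described above; query and paths are examples only.
    $db     = new mysqli('localhost', 'user', 'pass', 'mydb');
    $pageId = isset($_GET['id']) ? (int)$_GET['id'] : 0;

    $stmt = $db->prepare("SELECT filename FROM files WHERE page_id = ?");
    $stmt->bind_param('i', $pageId);
    $stmt->execute();
    $stmt->bind_result($filename);
    $found = $stmt->fetch();
    $stmt->close();

    if ($found && file_exists("/var/www/uploads/$filename")) {
        // render the page that uses the file
    } else {
        // render the fallback page
    }

Since both the DB lookup and the file_exists() check repeat on every view, the end result seems like a good candidate for caching.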

And if there's any other information on this subject you could point me to, it would be greatly appreciated.

Thanks

It depends on how the data is structured, how much of it there is, and how frequently it changes.

If you have a relatively small amount of relatively static data with simple relationships, then flat files are the right tool for the job.

Relational databases come into their own when the relationships between your data are more complex. For basic lookup tables they can be a bit of overkill.

But if the data is constantly changing, it may be simpler to just use a database rather than handle the consistency management manually. And for large amounts of data, flat files leave you with the extra problem of finding the one bit you need efficiently.

This depends on many factors. If you have a fast database with much of its data cached in RAM, or a fast RAID system, chances are you won't gain much from simple file-system caching on the web server. Also consider scalability: under high load, a simple caching mechanism can itself become a bottleneck, while a database is designed to handle heavy workloads.
If there are fewer requests and you (or the operating system) can keep the cache in RAM, you might gain some performance. But then the question is whether caching is really necessary under low load at all.
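
To be concrete, the kind of simple caching mechanism I mean is something like this sketch; the cache directory and TTL are assumptions:

    <?php
    // Naive file-based page cache; cache directory and TTL are assumptions.
    $cacheFile = '/tmp/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
    $ttl = 60; // seconds a cached copy stays fresh

    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        readfile($cacheFile);  // serve the cached copy, skip the DB entirely
        exit;
    }

    ob_start();                // capture the normal page output
    // ... run the usual queries and render the page here ...
    file_put_contents($cacheFile, ob_get_contents(), LOCK_EX);
    ob_end_flush();            // send the freshly rendered page

Note the LOCK_EX: without it, concurrent requests writing the same cache file can interleave, which is exactly the kind of bottleneck this approach runs into under high load.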

From a pure performance perspective, it's smarter to tune the database server than to complicate the data-access logic with intermediate file caches. A good database server will do the caching by itself when the results are cacheable. (I'm not sure what the situation is with MySQL.)
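
(For what it's worth, MySQL versions before 8.0 did ship a built-in query cache; a sketch for checking whether it is enabled on your server, with placeholder connection details:)

    <?php
    // Report the query-cache settings; connection details are placeholders.
    // Note: the built-in query cache only exists in MySQL before 8.0.
    $db  = new mysqli('localhost', 'user', 'pass', 'mydb');
    $res = $db->query("SHOW VARIABLES LIKE 'query_cache%'");
    while ($row = $res->fetch_assoc()) {
        echo "{$row['Variable_name']} = {$row['Value']}\n";
    }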

If you have performance problems, you should profile the pages to find the actual bottlenecks. Even if you are, like me, a fan of optimized code, throwing stronger or more hardware at the problem is often cheaper in the long run.

If you still want to use caches, consider using an existing solution, like memcached.

If you're doing read-heavy access (looking up filenames, etc.), you may benefit from memcached. You can keep the "hottest" data (most recently created or most recently used, depending on your application) in memory, and only query the DB (and possibly the files) when the cache misses. Memory access is far, far faster than database or file access.
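
A minimal read-through sketch using the PECL memcached extension; the server address, key scheme, query, and TTL are all assumptions:

    <?php
    // Read-through cache for the filename lookup; names and TTL are assumptions.
    $mc = new Memcached();
    $mc->addServer('localhost', 11211);
    $db = new mysqli('localhost', 'user', 'pass', 'mydb');

    function lookup_filename($id, Memcached $mc, mysqli $db) {
        $key = "filename:$id";
        $filename = $mc->get($key);
        if ($mc->getResultCode() === Memcached::RES_SUCCESS) {
            return $filename;            // cache hit: no DB round trip
        }
        // Cache miss: fall back to the database, then populate the cache.
        $stmt = $db->prepare("SELECT filename FROM files WHERE id = ?");
        $stmt->bind_param('i', $id);
        $stmt->execute();
        $stmt->bind_result($filename);
        $stmt->fetch();
        $stmt->close();
        $mc->set($key, $filename, 300);  // expire after five minutes
        return $filename;
    }

Checking the result code rather than the return value matters here: get() returns false on a miss, which would be ambiguous if false were ever a legitimate cached value.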

If you need write-heavy access, a database is the way to go. If you're using MySQL, use InnoDB tables, or another engine that supports row-level locking. That will avoid readers blocking while someone else writes (or worse, writing anyway).
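
If your tables are currently MyISAM (the old MySQL default), converting is a one-liner; a sketch, with "files" standing in for your table name:

    <?php
    // Check a table's storage engine and convert it to InnoDB for
    // row-level locking; 'files' is a placeholder table name.
    $db  = new mysqli('localhost', 'user', 'pass', 'mydb');
    $res = $db->query("SHOW TABLE STATUS LIKE 'files'");
    $status = $res->fetch_assoc();

    if ($status['Engine'] !== 'InnoDB') {
        // MyISAM locks the whole table on every write; InnoDB locks only
        // the affected rows, so readers are not blocked by one writer.
        $db->query("ALTER TABLE files ENGINE=InnoDB");
    }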

But ultimately, it depends on the data.