I am working on a PHP CMS and, in testing, have observed that a number of the system's MySQL tables are queried on nearly every page but are hardly ever written to. What I am wondering is: will this begin to weigh heavily on the database as traffic increases, and how can one solve/prevent this?

My first idea was to store some of the more static data in files (using PHP serialization), but does this really reduce server load? What I am concerned about is that I'd simply be moving the main load from the database to the file system!
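To be concrete, here is roughly what I have in mind; a minimal sketch where the `countries` table, the file path and the one-hour lifetime are just placeholders:

```php
<?php
// Sketch: cache a rarely-changing table as a serialized PHP array on disk.
function get_countries(PDO $db): array
{
    $cacheFile = __DIR__ . '/cache/countries.ser';

    // Serve the cached copy if it is less than an hour old.
    if (is_readable($cacheFile) && time() - filemtime($cacheFile) < 3600) {
        return unserialize(file_get_contents($cacheFile));
    }

    // Cache miss: query MySQL and rewrite the file.
    $rows = $db->query('SELECT iso_code, name FROM countries')
               ->fetchAll(PDO::FETCH_ASSOC);
    file_put_contents($cacheFile, serialize($rows), LOCK_EX);

    return $rows;
}
```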

If somebody could clue me in on the better approach, that would be great. In case the nature of the data itself has a large effect, I have listed some of the data I'll be storing below:

  • Full list of countries (including ISO country codes)
  • Site options (skin, admin email, support URLs, etc.)
  • Usergroups (including permissions)

Be aware that reading a table from the database on a capable server and over a fast connection is likely to be faster than reading it from disk on your local machine. The database will cache the whole of these small, frequently used tables in memory.

By implementing the same functionality yourself in the file system there is only a small possible speedup, but a huge opportunity to mess it up and make it slower.

It's probably best to stick with using the database.

  1. Optimize your queries (using the MySQL slow query log and the EXPLAIN statement).

  2. If the tables really are rarely written to, you can use native MySQL query caching. There is nothing to change in your code; just enable the query cache in my.cnf.

  3. Take a look at a template engine like Smarty (smarty.net). It has its own caching system that works pretty well and can REALLY reduce server load (see the sketch after this list).

  4. You can also use Memcache, but it is only worth it for websites with very high load. (I think Smarty will be enough.)
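To illustrate point 3, here is a minimal sketch of Smarty's built-in page caching using the Smarty 3 API; the template name, the ten-minute lifetime, and the `load_countries_from_db()` helper are stand-ins, not part of the original answer:

```php
<?php
require_once 'libs/Smarty.class.php';

$smarty = new Smarty();
$smarty->setCaching(Smarty::CACHING_LIFETIME_CURRENT);
$smarty->setCacheLifetime(600); // keep cached pages for 10 minutes

// Only run the expensive queries when no valid cached page exists.
if (!$smarty->isCached('page.tpl')) {
    $smarty->assign('countries', load_countries_from_db());
}

$smarty->display('page.tpl');
```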

Databases are much better at handling large data volumes than the native file system.

Don't worry about optimizing your site to reduce server load until you actually have a server load problem. :-)

The tables you mentioned (countries and users) will normally be cached in memory by MySQL directly, unless you are expecting several million records in these tables.

In the case where these tables won't fit in memory, you may want to consider a general-purpose distributed memory caching system such as memcached.
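A minimal sketch of that read-through pattern with PHP's Memcached extension; the key name, the lifetime, and the connection details are placeholders:

```php
<?php
// Try memcached first; fall back to MySQL on a miss and repopulate the cache.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$countries = $cache->get('countries');
if ($countries === false && $cache->getResultCode() === Memcached::RES_NOTFOUND) {
    $db = new PDO('mysql:host=localhost;dbname=cms', 'user', 'password');
    $countries = $db->query('SELECT iso_code, name FROM countries')
                    ->fetchAll(PDO::FETCH_ASSOC);
    $cache->set('countries', $countries, 3600); // refresh at most hourly
}
```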

If your database is properly indexed, it will be considerably faster to query data from the database. If you want to speed that up further, consider memcached or similar.

Databases exist exactly for this purpose: to store and provide data. The filesystem is for scripts and programs.

If you run into load problems, consider using Memcached or another caching utility in front of the database.

You might also consider caching various parts of your page directly in the database as whole sections (e.g. a sidebar that doesn't change too much, a generated header section, ...).
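A rough sketch of that idea, assuming a hypothetical `html_cache` table and a made-up `render_sidebar()` helper:

```php
<?php
// Hypothetical table:
//   CREATE TABLE html_cache (
//     name VARCHAR(64) PRIMARY KEY,
//     body MEDIUMTEXT NOT NULL,
//     updated_at DATETIME NOT NULL
//   );
function cached_section(PDO $db, string $name, int $ttl, callable $render): string
{
    $stmt = $db->prepare('SELECT body FROM html_cache WHERE name = ? AND updated_at > ?');
    $stmt->execute([$name, date('Y-m-d H:i:s', time() - $ttl)]);
    $body = $stmt->fetchColumn();

    if ($body === false) {
        $body = $render(); // regenerate the stale or missing fragment
        $db->prepare('REPLACE INTO html_cache (name, body, updated_at) VALUES (?, ?, NOW())')
           ->execute([$name, $body]);
    }

    return $body;
}

// Usage: echo cached_section($db, 'sidebar', 600, fn() => render_sidebar($db));
```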

You can cache output (flush(), ob_flush(), etc.) to a file and include that instead of doing multiple MySQL reads. Caching is definitely faster than hitting MySQL multiple times.

Reading a static file is much faster than adding overhead via PHP and MySQL processing.
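A minimal sketch of that whole-page approach with PHP's output-buffering functions (ob_start()/ob_get_contents() rather than the flush calls mentioned above, since the buffer has to be captured to disk); the cache/ directory and the five-minute lifetime are assumptions:

```php
<?php
// Serve the cached copy if it is fresh; otherwise buffer the generated
// output and write it to disk for the next request.
$cacheFile = __DIR__ . '/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';

if (is_readable($cacheFile) && time() - filemtime($cacheFile) < 300) {
    readfile($cacheFile);
    exit;
}

ob_start();

// ... normal page generation, including its MySQL reads, goes here ...

file_put_contents($cacheFile, ob_get_contents(), LOCK_EX);
ob_end_flush(); // send the buffered page to the browser
```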

You need to assess the performance via load testing to avoid prematurely optimising.

It would be foolish, and could potentially increase overall load, to store data in files with serialization; databases are very good at retrieving data.

If after analysis there is a genuine performance hit (which I doubt unless you're talking about massive load), then caching is a better solution.

It's more important to have a well-designed system that facilitates change as needs arise.