I am designing my DB for functionality and performance for realtime AJAX web applications, and I don't currently have the resources to add DB server redundancy or load-balancing.

Unfortunately, I have a table in my DB that could potentially end up storing hundreds of millions of rows, and it will need to be read quickly to avoid lagging the web interface.

Most, if not all, of the columns in this table are individually indexed, and I'd like to know if there are other ways to ease the load on the server when running queries on large tables. But is there eventually a cap on the size (in rows or GB) of a table before a single unclustered SQL server starts to choke?

My DB has only a dozen tables, with maybe a couple dozen foreign key relationships. None of my tables have more than 8 or so columns, and only one or two of these tables will end up storing a large number of rows. Hopefully the simplicity of my DB will make up for the huge amounts of data in these couple of tables ...

The only real limit is the size of your primary key. Is it an INT or a BIGINT?
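
Roughly speaking, INT tops out at 2,147,483,647 values while BIGINT goes up to 9,223,372,036,854,775,807, so something like this (table and column names made up for illustration) will comfortably cover hundreds of millions of rows:

    -- Hypothetical sketch: a surrogate key sized for hundreds of millions of rows.
    -- INT maxes out at 2,147,483,647; BIGINT at 9,223,372,036,854,775,807.
    CREATE TABLE dbo.Orders
    (
        OrderId   BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        OrderDate DATETIME2 NOT NULL,
        Amount    MONEY NOT NULL
    );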

SQL will happily store the data with no problem. However, with 100 million rows, you're best off partitioning the data. There are many good articles on this, like this article.

With partitions, you can have 1 thread per partition working at the same time to parallelise the query much more than is possible without partitioning.
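
Something along these lines, as a rough sketch (the function, scheme, table, and boundary values are all invented, so adjust them to your own data):

    -- Rough sketch of partitioning a large table by date.
    CREATE PARTITION FUNCTION pfOrdersByYear (DATETIME2)
        AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01');

    CREATE PARTITION SCHEME psOrdersByYear
        AS PARTITION pfOrdersByYear ALL TO ([PRIMARY]);

    -- The table is created on the partition scheme instead of a single filegroup.
    CREATE TABLE dbo.Orders
    (
        OrderId   BIGINT IDENTITY(1,1) NOT NULL,
        OrderDate DATETIME2 NOT NULL,
        Amount    MONEY NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderDate, OrderId)
    ) ON psOrdersByYear (OrderDate);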

Rows are limited strictly by the amount of disk space you have available. We have SQL Servers with hundreds of millions of rows of data in them. Of course, those servers are big.

In order to keep the web interface snappy you will have to think about how you access that data.

One example is to stay away from any kind of aggregate queries which require processing large swaths of data. Things like SUM() can be a killer depending on how much data they are trying to process. In these situations you are much better off calculating any summary or grouped data aheadead of time and letting your site query these analytic tables.
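
A crude illustration of the idea (table and column names are made up):

    -- Precompute daily totals in a nightly job instead of running SUM()
    -- over the big table on every page load.
    CREATE TABLE dbo.DailySalesSummary
    (
        SaleDate   DATE  NOT NULL PRIMARY KEY,
        TotalSales MONEY NOT NULL,
        OrderCount INT   NOT NULL
    );

    -- Nightly refresh...
    TRUNCATE TABLE dbo.DailySalesSummary;
    INSERT INTO dbo.DailySalesSummary (SaleDate, TotalSales, OrderCount)
    SELECT CAST(OrderDate AS DATE), SUM(Amount), COUNT(*)
    FROM dbo.Orders
    GROUP BY CAST(OrderDate AS DATE);

    -- ...so the web interface only ever touches the small summary table.
    SELECT TotalSales FROM dbo.DailySalesSummary WHERE SaleDate = '2012-06-01';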

Next you will want to partition the data. Split those partitions across different drive arrays. When SQL needs to go to disk it makes it easier to parallelize the reads. (@Simon touched on this.)
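
For example (filegroup names and file paths are invented, and this assumes a partition function with two boundary values like the pfOrdersByYear sketched earlier):

    -- Put each partition's filegroup on a different drive array so reads
    -- can be spread across spindles.
    ALTER DATABASE MyDb ADD FILEGROUP FG_Archive;
    ALTER DATABASE MyDb ADD FILEGROUP FG_2011;
    ALTER DATABASE MyDb ADD FILEGROUP FG_2012;

    ALTER DATABASE MyDb ADD FILE (NAME = OrdersArchive, FILENAME = 'E:\Data\Orders_Archive.ndf') TO FILEGROUP FG_Archive;
    ALTER DATABASE MyDb ADD FILE (NAME = Orders2011,    FILENAME = 'F:\Data\Orders_2011.ndf')    TO FILEGROUP FG_2011;
    ALTER DATABASE MyDb ADD FILE (NAME = Orders2012,    FILENAME = 'G:\Data\Orders_2012.ndf')    TO FILEGROUP FG_2012;

    -- Map partitions to those filegroups instead of ALL TO ([PRIMARY]).
    CREATE PARTITION SCHEME psOrdersByDrive
        AS PARTITION pfOrdersByYear
        TO (FG_Archive, FG_2011, FG_2012);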

Basically, the issue boils down to how much data you need to access at any one time. This is the main problem regardless of the amount of data you have on disk. Even small databases can be choked if the drives are slow and the amount of available RAM in the DB server isn't enough to keep enough of the DB in memory.

Usually for systems like this, large amounts of data are basically inert, meaning they are rarely accessed. For instance, a PO system might keep a history of all invoices ever created, but it really only deals with the active ones.

If your system has similar needs, then you might have a table that is for active records and simply archive them to another table as part of a nightly process. You could even have statistics like monthly earnings (for example) recomputed as part of that archival.
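
A minimal sketch of that kind of nightly archival (table and column names are invented):

    -- Move closed invoices out of the active table and into an archive table
    -- in a single atomic statement; run this from the nightly job.
    DELETE FROM dbo.Invoice
    OUTPUT DELETED.InvoiceId, DELETED.CustomerId, DELETED.InvoiceDate, DELETED.Total
        INTO dbo.InvoiceArchive (InvoiceId, CustomerId, InvoiceDate, Total)
    WHERE Status = 'Closed';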

Some ideas.

My gut tells me that you will probably be okay, but you will have to deal with performance. It is going to depend on the acceptable time-to-retrieve results from queries.

For the table with the "hundreds of millions of rows", what percentage of the data is accessed regularly? Is some of the data rarely accessed? Do some users access selected data while other users select different data? You may benefit from data partitioning.