I ran a test against an indexed MySQL table containing 20,000,000 records, and based on my results, it takes 0.004 seconds to retrieve a record given its id--even when joining against another table containing 4,000 records. This was on a 3GHz dual-core machine, with just one user (me) accessing the database. Inserts were also fast: the table took under 10 minutes to populate with all 20,000,000 records.
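For reference, the lookup I timed was roughly this shape (table and column names here are made up for illustration, not my real schema):

    -- Point lookup by primary key, joined against a small lookup table
    SELECT r.*, c.name
    FROM records r
    JOIN categories c ON c.id = r.category_id
    WHERE r.id = 12345678;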

Assuming my test was accurate, can I expect performance to be just as snappy on the production server, with, say, 200 users reading from and writing to this table at the same time?

I suppose InnoDB might be best?

That depends on the storage engine you're going to use and on the read/write ratio.

InnoDB will be better if there are a lot of writes. If it's mostly reads with only occasional writes, MyISAM may be faster. MyISAM uses table-level locking, so it locks the whole table whenever you need to update. InnoDB uses row-level locking, so you can have concurrent updates on different rows.
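For example (a hypothetical counter table; the point is just the ENGINE clause and what each engine locks):

    -- MyISAM: every UPDATE takes a table-level lock, blocking all other writers
    CREATE TABLE hits_myisam (id INT PRIMARY KEY, counter INT) ENGINE=MyISAM;

    -- InnoDB: row-level locks, so two sessions can update different rows at once
    CREATE TABLE hits_innodb (id INT PRIMARY KEY, counter INT) ENGINE=InnoDB;
    -- session 1:
    BEGIN; UPDATE hits_innodb SET counter = counter + 1 WHERE id = 1;
    -- session 2 is not blocked, because it touches a different row:
    BEGIN; UPDATE hits_innodb SET counter = counter + 1 WHERE id = 2;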

InnoDB is definitely safer, so I'd stick with it anyhow.

BTW, keep in mind that RAM is very cheap these days, so buy a lot.

Depends on a number of factors:

  • Server hardware (especially RAM)
  • Server configuration
  • Data size
  • Number of indexes and index size
  • Storage engine
  • Writer/reader ratio

I wouldn't expect it to scale that well. More to the point, this kind of thing is too important to speculate about. Benchmark it and find out for yourself.

Regarding the storage engine, I wouldn't dare use anything but InnoDB for a table of this size that is both read from and written to. With MyISAM, if you run any write query that isn't a primitive insert or a single-row update, you'll end up locking the whole table, which yields terrible performance as a result.
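To illustrate the kind of statement I mean (table and column names are hypothetical): on MyISAM this locks the entire table for its full duration, while InnoDB only locks the rows it touches.

    -- A multi-row write: table-locked under MyISAM, row-locked under InnoDB
    UPDATE archive
    SET    status = 'expired'
    WHERE  created_at < '2010-01-01';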

There's no reason why MySQL couldn't handle that kind of load without any significant issues. There are many other variables involved, though (otherwise it's a 'how long is a piece of string' question). Personally, I've had numerous tables in various databases that are well beyond that size.

  • How big is each record (on average)?
  • How much RAM does the database server have, and how much of it is allocated to the various parts of MySQL/InnoDB?

A default configuration may only allow a default 8MB buffer between disk and client (which might work acceptably for a single user), but trying to squeeze a 6GB+ database through that is doomed to failure. This problem was real, btw, and was causing several crashes a day of the database/website until I was brought in to troubleshoot it.
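If the setting in question is the InnoDB buffer pool (whose default really was 8MB for a long time), checking and raising it looks something like this; the 4G figure is only an example, size it to your RAM and data:

    -- Check the current value (reported in bytes)
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

    -- Then raise it in my.cnf under [mysqld] (a server restart is needed
    -- on older MySQL versions):
    --   innodb_buffer_pool_size = 4G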

If you're likely to do a great deal more with this database, I'd recommend getting someone with a little more experience, or at least doing what you can to be able to apply some optimisations. Reading 'High Performance MySQL, 2nd Edition' is a great start, as is looking at some tools like Maatkit.

As long as your schema design and DAL are built well enough, you understand query optimisation thoroughly, can tune all the server configuration settings to a high standard, and have "enough" hardware properly set up, then yes (except for sufficiently pathological cases).

Same answer for both engines.

You should probably perform a load test to be sure, but as long as the indexes were created correctly (meaning they are optimised for your query statements), the SELECT queries should perform at an acceptable speed (the INSERTs and/or UPDATEs may be more of a speed problem, though, depending on how many indexes you have and how large those indexes get).
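A quick sanity check before the load test is to run EXPLAIN on your hot queries and confirm the optimiser actually uses the index you expect (names below are hypothetical):

    -- You want to see your index in the 'key' column and a join type like
    -- 'const' or 'eq_ref'; 'type: ALL' means a full table scan.
    EXPLAIN
    SELECT r.*, c.name
    FROM records r
    JOIN categories c ON c.id = r.category_id
    WHERE r.id = 12345678;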