I have a database that needs to be able to scale to a very large number of records or rows.
- Can this many rows be supported in a single table? Is it advisable?
- Would a single table be split across several nodes if used with NDBCLUSTER?
- Other load balancing techniques?
- What are some recommended techniques for implementing this kind of database?
- What are the best practices for getting more performance out of a database with this many rows?
- Will MySQL do, or should I look elsewhere?
We have tables with 22 million rows, and there is no bottleneck on the horizon. At least none that enough RAM can't fix. In general there's hardly a clear good or bad: it depends on the nature of the data, the table engine, and so on.
If you reveal more about what kind of data you're storing, the answer can be more detailed.
My only general advice for big databases is that I'd exhaust the hardware options before going into replication and/or sharding (for performance reasons, that is -- keeping a slave for backup is a different story). You also need to know your index-fu and the obvious switches/options for tuning the database server.
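To make the "obvious switches" point concrete, the usual starting place is `my.cnf`. The variable names below are real MySQL/InnoDB options, but the values are purely illustrative assumptions -- they have to be sized to your hardware and workload:

```ini
# my.cnf -- illustrative sketch, not recommended values
[mysqld]

# InnoDB caches data and indexes here; on a dedicated DB box
# this is commonly sized to a large fraction of available RAM.
innodb_buffer_pool_size = 12G

# 1 = fully durable on every commit; 2 trades up to ~1s of
# durability for noticeably better write throughput.
innodb_flush_log_at_trx_commit = 1

# Log slow statements so missing indexes show themselves.
slow_query_log   = 1
long_query_time  = 1
```

From there, the "index-fu" half is mostly a matter of running `EXPLAIN` on the queries that show up in the slow log and adding or adjusting indexes until the full-table scans disappear.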
I can say more if you let me know what kind of data you're dealing with.