I have built a nice website system that suits the needs of a small, specialized niche. Over the past year I have been selling these websites by deploying copies of the software to my web server using Capistrano.
It occurs to me that the only differences between these websites are the database, the CSS file, and a small set of images used for the individual client's branding.
Everything else is the same, or should be... Since I have about 20 of these sites deployed, it is getting to be a hassle to keep them all up to date with the same code. And this problem will only get worse.
I am convinced that I should refactor this system so that I can run one set of deployed Ruby code, dynamically selecting the correct database, etc., based on the URL of the incoming request.
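As a minimal sketch of that idea, assuming each client is served from its own subdomain (e.g. `acme.example.com`) and the subdomain names the client's database (the `myapp_` prefix and the host names here are hypothetical):

```ruby
# Map the request's host to a per-client database name.
# Assumes the convention "<client>.example.com" -> database "myapp_<client>".
def database_for(host)
  subdomain = host.split(".").first
  "myapp_#{subdomain}"
end

# In a Rails before_action you might then do something like:
#   ActiveRecord::Base.establish_connection(
#     base_config.merge("database" => database_for(request.host))
#   )
# where base_config holds your usual adapter/credentials settings.

database_for("acme.example.com")  # => "myapp_acme"
```

The lookup itself is trivial; the care goes into doing the connection switch safely per request.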
It seems there are two ways of handling the database:
- use multiple databases, one for each client
- use one database, with a client_id field in each table, plus an extra 'client' table
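To make the second option concrete: with a shared database, every query is scoped to the current client's client_id. In Rails this usually falls out of an association rather than touching every CRUD call, e.g. `current_client.items.find(params[:id])`. The same filtering idea in plain Ruby, with an in-memory stand-in for the items table:

```ruby
# A row in the shared 'items' table carries a client_id column.
Item = Struct.new(:id, :client_id, :name)

ITEMS = [
  Item.new(1, 10, "widget"),
  Item.new(2, 11, "gadget"),
  Item.new(3, 10, "gizmo"),
]

# Every read is filtered by the current client's id.
def items_for(client_id)
  ITEMS.select { |item| item.client_id == client_id }
end

items_for(10).map(&:name)  # => ["widget", "gizmo"]
```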
The multiple-database approach would be the easiest for me right now, since I wouldn't have to refactor every model in my application to add the client_id field to all the CRUD operations.
However, it would be a hassle to have to run 'rake db:migrate' for tens or hundreds of different databases every time I want to migrate the database(s). Obviously this could be done by a script, but it doesn't smell very good.
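Such a script could be as simple as looping over the client list and overriding the database per run. A sketch, assuming a `myapp_<client>_<env>` naming convention and using the `DATABASE_URL` override that rake/Rails honors (the client names and connection URL here are placeholders):

```ruby
# Stand-ins for the real client list.
CLIENTS = %w[acme globex initech]

# Build one "rake db:migrate" invocation per client database.
def migration_commands(clients, env: "production")
  clients.map do |client|
    "DATABASE_URL=postgres://localhost/myapp_#{client}_#{env} " \
      "bundle exec rake db:migrate"
  end
end

migration_commands(CLIENTS).each do |cmd|
  puts cmd  # swap `puts` for `system(cmd)` to actually run the migrations
end
```

It works, but as noted it smells: every migration run is O(number of clients), and one failed database leaves the fleet in a mixed state.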
On the other hand, each client will have 20K-50K items in an 'items' table. I am worried about the speed of fulltext searches when the items table has half a million or a million items in it. Even with an index on the client_id field, I suspect that searches would be faster if the items were separated into per-client databases.
If anyone has an informed opinion on the best way to approach this problem, I would very much like to hear it. Thanks much in advance...
There are advantages to using separate DBs (including the ones you already listed):
- Fulltext searches will get slow (depending on your server's capabilities) when you have millions of large text blobs to search.
- Separating the DBs will keep your table indexing speed faster for each client. In particular, it could upset some of your earlier clients for you to take on a new, large client: suddenly their applications suffer for (to them) no apparent reason. Again, if you stay under your hardware's capacity, this might not be an issue.
- If you ever drop a client, it would be marginally cleaner to simply archive their DB rather than delete all their associated rows by client_id. And equally clean to restore them if they change their minds later.
- If any clients ask for additional functionality that they are willing to pay for, you can fork their DB structure without modifying anyone else's.
- For the pessimists: less chance that you accidentally destroy all client data with one mistake, as opposed to just one client's data. :)
All that being said, the single-DB solution is probably better if:
- Your DB server's capabilities make the large single table a non-issue.
- Your clients' databases are guaranteed to remain identical.
- You are not worried about being able to keep each client's data compartmentalized for purposes of archiving/restoring, or in case of disaster.