I have had a difficult time finding good examples of how to handle database schemas and data between development, test, and production servers.
Here's our setup. Each developer has a virtual machine running our application and the MySQL database. It's their personal sandbox to do whatever they want. Currently, developers make a change to the SQL schema and do a dump of the database to a text file that they commit into SVN.
We want to deploy a continuous integration development server that will always be running the latest committed code. If we do that now, it will reload the database from SVN for each build.
We have a test (virtual) server that runs "release candidates." Deploying to the test server is currently a very manual process, and usually involves me loading the latest SQL from SVN and tweaking it. Also, the data on the test server is inconsistent. You end up with whatever test data the last developer to commit had on his sandbox server.
Where everything breaks down is the deployment to production. Since we can't overwrite the live data with test data, this involves manually re-creating all of the schema changes. If there were a large number of schema changes or conversion scripts to manipulate the data, this can get really hairy.
If the problem were just the schema, it would be easier, but there is "base" data in the database that is updated during development as well, such as meta-data in the security and permissions tables.
This is the biggest barrier I see in moving toward continuous integration and one-step builds. How do you solve it?
A follow-up question: how do you track database versions, so you know which scripts to run to upgrade a given database instance? Is a version table like Lance mentions below the standard procedure?
I'm going to write a Python script that checks the names of the *.sql scripts in a given directory against a table in the database, and runs the ones that aren't there, in order based on an integer that forms the first part of the filename. If it turns out to be a pretty simple solution, as I suspect it will be, then I'll post it here.
I've got a working script for this. It handles initializing the DB if it doesn't exist and running upgrade scripts as necessary. There are also switches for wiping an existing database and importing test data from a file. It's about 200 lines, so I won't post it here (though I might put it on pastebin if there's interest).
There are a couple of good options. I wouldn't use the "restore a backup" strategy.
Script all of your schema changes, and have your CI server run those scripts on the database. Have a version table to keep track of the current database version, and only execute a script if it is for a newer version.
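The version-table check can be sketched like this (sqlite3 stands in for MySQL; the `schema_version` table name and helper names are made up for illustration):

```python
import sqlite3


def current_version(conn):
    """Return the highest applied version, or 0 for a fresh database."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY)"
    )
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] or 0


def apply_if_newer(conn, version, sql):
    """Execute the change script only when it targets a newer version."""
    if version <= current_version(conn):
        return False
    conn.executescript(sql)
    conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
    conn.commit()
    return True
```

On the CI server you would loop over the checked-in scripts in order and call `apply_if_newer` for each; already-applied scripts are skipped, so every build converges on the same schema.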
Use a migration solution. These solutions vary by language, but for .NET I use Migrator.NET. This allows you to version your database and move up and down between versions. Your schema is specified in C# code.
Your developers need to write change scripts (schema and data changes) for each bug/feature they work on, not just dump the entire database into source control. These scripts will upgrade the current production database to the latest version in development.
Your build process can restore a copy of the production database into an appropriate environment and run all the scripts from source control on it, which will update the database to the current version. We do this daily to make sure all the scripts run correctly.
Take a look at how Ruby on Rails does this.
First there are so-called migration files, which basically transform the database schema and data from version N to version N+1 (or, in the case of a downgrade, from version N+1 to N). The database has a table which records the current version.
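Rails migrations are Ruby classes; the sketch below only illustrates the N → N+1 / N+1 → N contract in Python terms (the `MIGRATIONS` table, `schema_info` name, and `migrate` helper are invented for the example, not Rails APIs):

```python
import sqlite3

# Each entry knows how to go up (N-1 -> N) and back down (N -> N-1).
MIGRATIONS = {
    1: ("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
        "DROP TABLE users"),
    2: ("CREATE TABLE permissions (id INTEGER PRIMARY KEY, role TEXT)",
        "DROP TABLE permissions"),
}


def migrate(conn, target):
    """Walk the recorded schema version up or down to `target`, one step at a time."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_info (version INTEGER)")
    row = conn.execute("SELECT version FROM schema_info").fetchone()
    if row is None:
        conn.execute("INSERT INTO schema_info (version) VALUES (0)")
        current = 0
    else:
        current = row[0]
    while current < target:              # upgrading: apply each "up" step
        conn.execute(MIGRATIONS[current + 1][0])
        current += 1
    while current > target:              # downgrading: apply each "down" step
        conn.execute(MIGRATIONS[current][1])
        current -= 1
    conn.execute("UPDATE schema_info SET version = ?", (current,))
    conn.commit()
    return current
```

Because every step is reversible, `migrate(conn, 0)` after `migrate(conn, 2)` restores the original empty schema.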
Test databases are always wiped clean before unit tests and populated with fixed data from files.
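The wipe-and-reload step might look like this (sqlite3 for illustration; Rails reads its fixtures from YAML files, whereas this sketch takes an in-memory dict, and the function name is made up):

```python
import sqlite3


def reset_test_db(conn, schema_sql, fixtures):
    """Drop every table, rebuild the schema, and load fixed test data.

    `fixtures` maps table name -> list of row dicts, standing in for
    fixture files on disk.
    """
    # Fetch the table list first, then drop, so we don't mutate while iterating.
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'"
    ).fetchall()
    for (name,) in tables:
        conn.execute(f"DROP TABLE {name}")
    conn.executescript(schema_sql)
    for table, rows in fixtures.items():
        for row in rows:
            cols = ", ".join(row)
            marks = ", ".join("?" for _ in row)
            conn.execute(
                f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                tuple(row.values()),
            )
    conn.commit()
```

Because the test suite starts from the same fixtures every run, tests never depend on whatever data the last developer happened to have in his sandbox.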