While you develop an application, database changes inevitably appear. The trick I've found is keeping your database build in sync with your code. Previously I've had a build step that ran SQL scripts against the target database, but that's dangerous in a number of ways, since you could inadvertently add bogus data or worse.

My real question is: do you know of tips and techniques to keep the database in sync with the code? What about when you roll back the code? Branching?

Version numbers baked into the database are useful. You have two options: embedding values in a table (allows versioning multiple items) that can be queried, or having an explicitly named object (such as a table or somesuch) you can test for.
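A minimal sketch of the table-embedded option; the table, column, and product names here are all made up, and sqlite3 stands in for whatever database you actually use:

```python
import sqlite3

# Hypothetical version table the build script can query before upgrading.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE schema_version (
        product TEXT PRIMARY KEY,   -- keying by product allows versioning multiple items
        version INTEGER NOT NULL
    )
""")
conn.execute("INSERT INTO schema_version (product, version) VALUES ('billing', 4)")

# The build tooling reads the recorded version to decide which upgrade scripts to apply.
(current,) = conn.execute(
    "SELECT version FROM schema_version WHERE product = 'billing'"
).fetchone()
print(current)  # → 4
```

The second option (an explicitly named object) trades the query for a simple existence check against the system catalog.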

When you release to production, do you have a rollback plan in the event of unexpected catastrophe? If you do, is it the application of a schema rollback script? Use your rollback script to roll the database back to the previous code version.
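A sketch of that idea: each upgrade script ships alongside a matching rollback script, both in source control. The v5 upgrade and its rollback below are invented, with sqlite3 as a stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# Hypothetical paired scripts for release v5: the rollback undoes exactly
# what the upgrade did, so either can be replayed on demand.
upgrade_v5 = "CREATE TABLE loyalty (customer_id INTEGER, tier TEXT)"
rollback_v5 = "DROP TABLE loyalty"

conn.execute(upgrade_v5)    # deploy v5
conn.execute(rollback_v5)   # catastrophe: roll the schema back to v4

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # → ['customers']
```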

You want to be able to create your database from scratch into a known state.
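One common way to get there is replaying numbered, source-controlled scripts in order; a toy sketch of that, with invented file names and SQL:

```python
import sqlite3

# Hypothetical scripts directory: lexicographic file order is version order.
scripts = {
    "001_create_users.sql":  "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)",
    "002_create_orders.sql": "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)",
    "003_seed_reference.sql": "INSERT INTO users (email) VALUES ('admin@example.com')",
}

conn = sqlite3.connect(":memory:")
for name in sorted(scripts):        # replay every script, in order, from scratch
    conn.executescript(scripts[name])

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # → 1
```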

While doing so is valuable (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database.

I have definitely been interested in some kind of DB versioning system, but I haven't found anything yet. So, instead of an answer, you get my vote. :-P

You want to be able to take a clean machine, get the latest version from source control, build in one step, and run all tests in one step. Making this fast lets you produce good software faster.

Just like external libraries, database configuration must also be in source control.

Note that I'm not saying that all your live database content should be in the same source control, just enough to get to a clean state. (Do back up your database content, though!)

I like the way Django does it. You build models, and when you run syncdb it applies the models you have created. If you add a model you just have to run syncdb again. This would be easy to have your build script do every time you make a push.
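This isn't Django's actual implementation, but a toy imitation of what syncdb roughly does (create tables for models that don't have one yet, leave existing ones alone), with invented model names:

```python
import sqlite3

# Stand-in "models": table name mapped to the DDL that would create it.
models = {
    "author": "CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)",
    "book":   "CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT)",
}

def syncdb(conn, models):
    """Create tables for any model that doesn't already have one."""
    existing = {r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")}
    for table, ddl in models.items():
        if table not in existing:
            conn.execute(ddl)

conn = sqlite3.connect(":memory:")
syncdb(conn, models)               # first push: creates both tables
models["review"] = "CREATE TABLE review (id INTEGER PRIMARY KEY, stars INTEGER)"
syncdb(conn, models)               # after adding a model: creates only the new one
```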

The problem comes when you need to change a table that's already made. I don't believe syncdb handles that. That would require you to go in and manually alter the table, and also add a property to the model. You'd probably want to version that ALTER statement. The models would always be under version control though, so if you needed to, you could get a db schema up and running on a new box without running the SQL scripts. Another problem with this is keeping track of static data that you always want in the db.
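One way to version those ALTER statements (and seed the static data) is a bookkeeping table so each numbered migration runs at most once; a sketch, with all names invented:

```python
import sqlite3

# Hypothetical ordered migration list, kept in source control.
migrations = [
    (1, "CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE product ADD COLUMN price REAL"),
    (3, "INSERT INTO product (name, price) VALUES ('default', 0.0)"),  # static data
]

def migrate(conn):
    """Apply any migration whose version isn't recorded yet."""
    conn.execute("CREATE TABLE IF NOT EXISTS applied (version INTEGER PRIMARY KEY)")
    done = {r[0] for r in conn.execute("SELECT version FROM applied")}
    for version, sql in migrations:
        if version not in done:
            conn.execute(sql)
            conn.execute("INSERT INTO applied (version) VALUES (?)", (version,))

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # safe to run again: nothing is applied twice
```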

Rails migration scripts are pretty nice too.

A DB versioning system would be great, but I don't really know of such a thing.

While doing so is valuable (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database.

Backups and compression can help you there. Sorry - there's no excuse not to be able to get a good set of data to develop against. Even if it's just a subset.

Define your schema objects and your reference data in version-controlled text files. For example, you can define the schema in Torque format, and the data in DBUnit format (both use XML). You can then use tools (we wrote our own) to generate the DDL and DML that take you from one version of your application to another. Our tool can take as input either (a) the previous version's schema & data XML files or (b) an existing database, so you can always get a database in any state into the correct state.
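A much-simplified sketch of the kind of diff tool described, using plain dicts as stand-ins for the Torque/DBUnit XML files; the table names and the emitted DDL shapes are illustrative only, not the actual tool:

```python
# Hypothetical schema definitions for two application versions:
# table name -> list of column names.
v1 = {"account": ["id", "name"]}
v2 = {"account": ["id", "name", "email"], "audit_log": ["id", "message"]}

def diff_ddl(old, new):
    """Emit the DDL statements that take a database from `old` to `new`."""
    statements = []
    for table, cols in new.items():
        if table not in old:
            statements.append(f"CREATE TABLE {table} ({', '.join(cols)})")
        else:
            for col in cols:
                if col not in old[table]:
                    statements.append(f"ALTER TABLE {table} ADD COLUMN {col}")
    return statements

for stmt in diff_ddl(v1, v2):
    print(stmt)
```

A real tool also has to handle dropped or renamed objects, column types, and the DML for reference data, which is where most of the work lives.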