We have a large DB schema that's constantly changing - every application release consists of the application itself, along with a migration script to apply to the live DB as needed.

Now, in parallel, we maintain a schema creation script that we can use at any time to build a DB from scratch (for testing purposes).

The one thing that bothers me slightly is that this appears to be a violation of DRY - to add a column to a table, I have to write a migration script to do it and then make the same change to the schema creation script.
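To make the duplication concrete, here is a minimal sketch (using SQLite purely for illustration - the table and column names are invented): adding one column means writing the ALTER statement in the migration script *and* repeating the column in the create script, and both paths must end up producing the same schema.

```python
import sqlite3

# Release N create script: builds a fresh DB from scratch (for tests).
CREATE_SCRIPT = """
CREATE TABLE customer (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT            -- new column: has to be added here...
);
"""

# Migration script for the same release: upgrades a live DB in place.
MIGRATION_SCRIPT = """
ALTER TABLE customer ADD COLUMN email TEXT;  -- ...and repeated here (the DRY violation)
"""

def schema_columns(conn):
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
    return [row[1] for row in conn.execute("PRAGMA table_info(customer)")]

# A fresh DB built from the create script...
fresh = sqlite3.connect(":memory:")
fresh.executescript(CREATE_SCRIPT)

# ...versus an old live DB upgraded by the migration script.
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
live.executescript(MIGRATION_SCRIPT)

# Both paths must agree, which is exactly the invariant you maintain by hand.
assert schema_columns(fresh) == schema_columns(live)
print(schema_columns(fresh))  # ['id', 'name', 'email']
```

A test like this (build fresh, migrate old, compare) is one cheap way to at least *detect* when the two scripts drift apart, even if it doesn't remove the duplication.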

Is there any way to avoid this? I thought of perhaps having a 'reference DB' with no dynamic data in it that we could simply export after every build is deployed. So we'd write a migration script and then, once the build is live, export the schema into the 'create' script.
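The export step could be sketched roughly like this (again assuming SQLite; most databases have an equivalent schema-dump facility, e.g. `pg_dump --schema-only` or `mysqldump --no-data`): apply the migrations to an empty reference DB, then dump its schema to regenerate the create script.

```python
import sqlite3

# The "reference DB": every migration applied, but no dynamic data in it.
ref = sqlite3.connect(":memory:")
ref.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
ref.execute("ALTER TABLE customer ADD COLUMN email TEXT")  # this release's migration

def export_create_script(conn):
    # sqlite_master stores the current CREATE statement for each object,
    # already reflecting any ALTERs that have been applied.
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL ORDER BY name"
    )
    return ";\n".join(row[0] for row in rows) + ";\n"

# After the build goes live, this output replaces the hand-maintained create script.
print(export_create_script(ref))
```

The migration scripts then become the single source of truth, and the create script is a derived artifact rather than something edited by hand.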

I'm not convinced that wouldn't be more work than the process it replaces, though...

I have a similar problem/concern. The only way I have found to follow DRY is to use a database modeling tool (e.g. CA ERwin, Sybase PowerDesigner) where all the modeling work is done to build and maintain a reference schema. You can then leverage the change management capabilities of the tool to generate both the diff script that gets run to go from release A to release B, and the reference implementation itself.

You may, of course, find there are places where you didn't repeat yourself, but the comparison tools will show all the differences, so you can pull those "Oh, I needed to change that quickly" changes back into the reference implementation too.
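The core of what those comparison tools do can be sketched very simply (SQLite again, with invented table/column names; real tools also compare indexes, constraints, and so on): introspect both schemas and report what exists in the live DB but not in the reference model.

```python
import sqlite3

def columns(conn, table):
    # Map column name -> declared type for one table.
    return {row[1]: row[2] for row in conn.execute(f"PRAGMA table_info({table})")}

# Reference schema, generated from the model...
ref = sqlite3.connect(":memory:")
ref.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# ...versus the live DB, where a hotfix added a column out of band.
live = sqlite3.connect(":memory:")
live.execute(
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT, phone TEXT)"
)

ref_cols, live_cols = columns(ref, "customer"), columns(live, "customer")
drift = {name: typ for name, typ in live_cols.items() if name not in ref_cols}
print(drift)  # columns present in live but missing from the model
```

Anything the diff surfaces is a change that needs to be folded back into the model so the reference stays authoritative.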

Of course, everything -- the diff script, the reference implementation, and the model itself -- then gets kept under source control.