I'm working on a new database project (in VS2008), and since I have never created a database on my own before, I immediately started wondering how to keep a database under source control (in this case, Subversion).

I found some good information on SO, including this post: Keeping development databases in multiple environments synchronized. One of the answers in particular pointed to a number of links, which had good, helpful information.

I have been reading through a number of posts by K. Scott Allen that describe how he handles database change. From that reading (and please pardon the noobishness of my question), it appears that the database itself is never checked into a repository. Instead, scripts that can build the database, along with test data (which is also populated from scripts), are checked into the repository. Ultimately, this means that whenever a developer tests their application, these scripts, which are part of the build process, are run. This guarantees the database is up to date, and the scripts can be run locally on every developer's machine.
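If I understand it correctly, the repository would contain plain SQL files along these lines (just a sketch I put together from my reading; the table and seed data are made up):

    -- schema script, checked into the repository and run as part of the build
    CREATE TABLE Customers (
        CustomerId INT IDENTITY(1,1) PRIMARY KEY,
        Name       NVARCHAR(100) NOT NULL,
        Email      NVARCHAR(255) NULL
    );

    -- test data script, also checked in and run after the schema scripts
    INSERT INTO Customers (Name, Email)
    VALUES ('Test Customer', 'test@example.com');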

This makes sense to me (if I am indeed reading that correctly). However, if I am missing something, I would appreciate correction or additional guidance. In addition, one other question I wanted to ask: does this also mean that I should NOT check in the .mdf or .ldf files that are produced by Visual Studio?

Thank you for any help and extra insight. Always appreciated.

That's correct: you should check in the scripts, not the database file itself.

I'm not keen on building from test data unless the data itself mimics the size of the data that production has (or, in the case of new databases, is expected to have). Why? Because writing code against a table with 100 records doesn't tell you whether it will run in a timely fashion when you have 10,000,000 records. I have seen too many bad design choices made by people who think a small data set is fine for development.
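If you do seed from scripts, at least seed at a realistic volume. A rough sketch of the idea (the table, columns, and row count are made up):

    -- generate millions of rows of throwaway data instead of a handful,
    -- so performance problems surface during development
    INSERT INTO dbo.Orders (CustomerId, OrderDate)
    SELECT TOP (10000000)
           ABS(CHECKSUM(NEWID())) % 100000,                         -- random customer id
           DATEADD(DAY, -(ABS(CHECKSUM(NEWID())) % 365), GETDATE()) -- random date within the last year
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    CROSS JOIN sys.all_objects c;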

Here, we don't allow devs to have a separate database on their own box (which typically limits how big the database can be, by virtue of not being on a server attached to the SAN); instead, they have to develop against the dev database, which is periodically refreshed from prod (after which all the new dev scripts are run) to keep the data the right size. I think it is critical that your dev database environment match prod as closely as possible, including hardware configuration, size of the database, etc. Nothing is more frustrating than spending a long time developing something that either won't work at all on prod or has to be taken down immediately because it slows the system down too much.
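The refresh itself can be as simple as restoring the latest production backup over the dev database and then replaying the new change scripts; a minimal sketch (the database names, logical file names, and paths are all assumptions):

    -- overwrite the dev database with the most recent production backup
    RESTORE DATABASE DevDB
    FROM DISK = N'\\backupserver\backups\ProdDB_latest.bak'
    WITH REPLACE,
         MOVE N'ProdDB_Data' TO N'D:\Data\DevDB.mdf',
         MOVE N'ProdDB_Log'  TO N'D:\Data\DevDB_log.ldf';
    -- ...then run whatever dev change scripts have not yet been applied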

Jumping down off my soapbox now.

It's a good idea to check in scripts, since source code control is best suited to dealing with text files rather than binary files. Differences in the script files can easily be examined as part of the rest of the code changes associated with the database change. In addition to checking in the database scripts, we check in a database schema snapshot. This schema snapshot allows us to verify that the schema in production matches the expected schema for a given version of the product. On top of that, the schema snapshot is a handy way to search for columns and tables using a plain text editor.
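One cheap way to produce such a snapshot is to dump the schema metadata to a text file as part of the build; a minimal sketch against SQL Server's INFORMATION_SCHEMA views:

    -- emit a plain-text schema snapshot that can be diffed between versions
    -- and searched in any text editor
    SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE
    FROM INFORMATION_SCHEMA.COLUMNS
    ORDER BY TABLE_NAME, ORDINAL_POSITION;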

I use DataConstructor, but I am biased because I wrote it.

You could use something like Liquibase to manage the database scripts. It is really a database upgrade framework, so it keeps track of the change steps that have already been executed; when you want to upgrade production, for example, it only executes the new steps.
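For instance, Liquibase can read a plain SQL file annotated with changeset comments; it records each changeset in a tracking table in the target database and skips anything it has already run. A minimal sketch (the author name and statements are made up):

    --liquibase formatted sql

    --changeset alice:1
    CREATE TABLE Customers (
        CustomerId INT PRIMARY KEY,
        Name       VARCHAR(100) NOT NULL
    );

    --changeset alice:2
    ALTER TABLE Customers ADD Email VARCHAR(255);

Running "liquibase update" against a database then applies only the changesets that are not yet recorded in its tracking table.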