So I've been given a task, and I've also got the db team sold on source control for that db (strange, right?). Anyway, the db already exists, it's massive, and the application is extremely data-dependent. The developers need as many as three different flavors of the data to work against when writing SPROCs and so forth.

Obviously I could script out the data as INSERT statements.
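As a rough illustration of what "scripting out the data" means, here is a minimal sketch in Python using SQLite as a stand-in for the real database; the `script_inserts` helper and the `lookup` table are made up for the example:

```python
import sqlite3

def script_inserts(conn, table):
    """Generate an INSERT statement for every row in `table` (a trusted name)."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    col_list = ", ".join(cols)
    for row in conn.execute(f"SELECT {col_list} FROM {table}"):
        values = ", ".join(
            "NULL" if v is None
            else str(v) if isinstance(v, (int, float))
            else "'" + str(v).replace("'", "''") + "'"  # escape embedded quotes
            for v in row
        )
        yield f"INSERT INTO {table} ({col_list}) VALUES ({values});"

# Example: a small lookup table scripted out as INSERTs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup (id INTEGER, label TEXT)")
conn.executemany("INSERT INTO lookup VALUES (?, ?)", [(1, "open"), (2, "closed")])
for stmt in script_inserts(conn, "lookup"):
    print(stmt)
```

This is fine for small lookup tables, but as the answer below notes, it does not scale to millions of rows.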

But my real question is: what tools or methods would you use to build a db from source control and populate it with multiple large sets of data?

Good for you, getting your database under source control.

We have our database objects in source control but not the data (aside from some lookup values). To maintain the data on dev, we refresh it by restoring the most recent prod backup, then rerunning the scripts for any database changes. If what we are doing requires special data (say new lookup values that aren't on prod yet, or test logins), we have a script for that too, which is part of source control and can be run at the same time. You don't want to script out all of the data though, as it would be very time-consuming to recreate ten million records via a script (and if you have ten million records, you don't want developers developing against a database with ten test records!). Restoring prod data is considerably faster.

Since all of our deployments are done only through source-controlled scripts, we don't have trouble getting people to script what they need. Hopefully you won't either. When we first started (and when devs could still do their own deployments to prod), we did have to go through a few times and remove any objects that weren't in source control. We learned very quickly to put all db objects in source control.

Usually, we just put in source control the .sql files for (re-)building the schema.

Then we also put in source control the script that can read a production or integration database, in order to extract a relevant set of data and load it into the database resulting from the previous .sql execution.

The idea is to get the latest data with a script robust enough to read it from a database that is not necessarily at the same version as the one being built. (In practice though, the difference is never that large, and the data can easily be read.)
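One way to get that robustness to version drift is to copy only the columns the two databases have in common, so a column added or dropped on one side doesn't break the extraction. A minimal sketch, again using SQLite as a stand-in; the `copy_common_columns` helper is hypothetical, not the answerer's actual script:

```python
import sqlite3

def copy_common_columns(src, dst, table):
    """Copy rows of `table` from src to dst using only the columns both
    databases share, so small schema drift doesn't break the extraction."""
    def cols_of(conn):
        return [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
    common = [c for c in cols_of(src) if c in cols_of(dst)]
    col_list = ", ".join(common)
    placeholders = ", ".join("?" for _ in common)
    rows = src.execute(f"SELECT {col_list} FROM {table}").fetchall()
    dst.executemany(f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})", rows)
    dst.commit()
```

Here the freshly built target can have a newer schema than the production source (or vice versa), and the copy still succeeds on the shared columns.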