I have a question regarding your suggestions for unit testing against large databases.

I want to write unit tests for an application that is mostly implemented in T-SQL, so mocking the database is not an option. The database is quite large (approximately 10 GB), so restoring it after each test run is also impractical.

The application's purpose is to manage the processing of applications for credit agreements. There are users in specific roles who change the state of agreement objects, and my job is to test part of this process.

I am considering two approaches:

First Approach

Create agreements that meet specific conditions, then test the changes of agreement state (e.g. the transition from waiting in some office to handled in that specific office). The agreements will be created in the application itself, and they will serve as my test cases. All the tests run inside transactions that are rolled back after the tests complete.


The advantage of this approach is very simple tests. The expected data can be described easily, because I know exactly what the object should look like after the transition.


The disadvantage is that the database cannot change in a way that would break the tests. The users and agreements used in the test cases must always look exactly the same, and if the database ever needs to change, the preparation process has to be repeated.

Second Approach

Create agreements in the unit tests themselves: programmatically create agreements that meet specific conditions. The data used to create an agreement will be chosen at random, and the users who change the agreement state will also be generated at random.


The advantage of this approach is the ease of making changes to the objects, and the ability to run the tests against databases containing different data.


Both objects (agreement and user) have a lot of fields and related data, and I am afraid it would take some time to implement the creation of these objects (I am also afraid those objects may contain errors, because the creation logic will be quite hard to implement correctly).
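One way to keep that creation logic manageable is a single "object mother" procedure that fills every required field with a sensible default, so each test only overrides the fields it actually cares about. A minimal sketch of the idea, with entirely invented table and column names (`dbo.Agreement`, `State`, `Amount`, etc. are placeholders for your real schema):

```sql
-- Hypothetical helper: creates one agreement with randomized filler data.
-- Tests override only the parameters relevant to the scenario under test.
CREATE PROCEDURE dbo.CreateTestAgreement
    @State       varchar(20) = 'WAITING',   -- the field most tests vary
    @AgreementId int OUTPUT
AS
BEGIN
    INSERT INTO dbo.Agreement (State, CustomerName, Amount, CreatedAt)
    VALUES (
        @State,
        'Customer-' + CONVERT(varchar(36), NEWID()),  -- random but unique name
        ABS(CHECKSUM(NEWID()) % 100000),              -- random amount
        GETDATE()
    );
    SET @AgreementId = SCOPE_IDENTITY();
END
```

Concentrating all the defaults in one place means a schema change breaks one procedure rather than every test.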

What do you think about these two approaches?

Do any Stack Overflow readers think it is worth the effort to create objects as described in the second approach?

Does anyone here have experience creating such tests?

I am not sure I entirely agree with your assumption that you cannot restore the database after a test run. While I certainly agree that some tests should be run on a full-size, multi-TB database, I do not see why you cannot run most of your tests on a much, much smaller test database. Are there constraints that need to be tested, like "cannot be more than a billion identical rows"?

My recommendation would actually be to use a smaller test database for most of your functional specs, and to create and drop all of its tables with every test, with as little sample data as is necessary to test your functionality.
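A per-test setup script along those lines might look like the following sketch (all table and column names are invented; your real schema will differ):

```sql
-- Hypothetical per-test setup against a small, dedicated test database.
-- Each test recreates only the tables it needs, with the minimum sample
-- data required to exercise the state transition under test.
IF OBJECT_ID('dbo.Agreement', 'U') IS NOT NULL
    DROP TABLE dbo.Agreement;

CREATE TABLE dbo.Agreement (
    AgreementId int IDENTITY PRIMARY KEY,
    State       varchar(20) NOT NULL,
    CustomerId  int         NOT NULL
);

-- Just enough data for one transition test: a single waiting agreement.
INSERT INTO dbo.Agreement (State, CustomerId) VALUES ('WAITING', 1);
```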

How about testing everything in a transaction and then rolling it back? E.g.:
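A minimal sketch of that pattern (the procedure and table names here are placeholders, not your actual schema):

```sql
BEGIN TRANSACTION;

-- Exercise the code under test, e.g. a state-transition procedure.
EXEC dbo.HandleAgreement @AgreementId = 42;

-- Inspect the result to verify the expected state.
SELECT State FROM dbo.Agreement WHERE AgreementId = 42;

-- Undo everything, leaving the 10 GB database untouched.
ROLLBACK TRANSACTION;
```

Note that this works cleanly only as long as the code under test does not commit or open its own transactions internally.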