I am currently in the process of testing our solution, which has the whole "gamut" of layers: UI, Middle, and the ever-present Database.

Before I arrived on my small current team, query testing was done by the testers hand-crafting queries that would theoretically return a result set that the stored procedure should return, based on various relevancy rules, sorting, and so on.

This had the side effect of bugs being filed against the tester's query more often than against the actual query under consideration.

I suggested instead using a known result set, where you can simply infer what should come back because you control the data present -- previously, data was pulled from production, sanitized, and then loaded into our test databases.
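
To make the idea concrete, here is a minimal sketch of a test over a known, controlled dataset. It uses an in-memory H2 database over JDBC, and the `products` table, column names, and expected ordering are all made up for illustration; a real test would call the stored procedure under test (e.g. via `Connection.prepareCall`) instead of the inline query.

```java
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class KnownDataTest {
    public static void main(String[] args) throws SQLException {
        // In-memory H2 database as a stand-in for the real test database.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement st = conn.createStatement()) {
            // Seed a tiny, fully controlled dataset instead of sanitized production data.
            st.execute("CREATE TABLE products(id INT, name VARCHAR(50), score INT)");
            st.execute("INSERT INTO products VALUES (1,'apple',3), (2,'pear',1), (3,'plum',2)");

            // Because we control the data, the expected ordering is known up front.
            List<Integer> expected = List.of(2, 3, 1); // ids ordered by score ascending

            // Stand-in query; in practice this would be conn.prepareCall("{call my_proc(...)}").
            List<Integer> actual = new ArrayList<>();
            try (ResultSet rs = st.executeQuery("SELECT id FROM products ORDER BY score")) {
                while (rs.next()) actual.add(rs.getInt("id"));
            }
            if (!actual.equals(expected))
                throw new AssertionError("Expected " + expected + " but got " + actual);
        }
    }
}
```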

People were still insistent on creating their very own queries to test what the developers had produced. I suspect that many still are. I have it in my head that this is not ideal at all, and merely increases our testing footprint needlessly.

So, I am curious: what practices do you use to test scenarios like this, and what would be considered ideal for the best end-to-end coverage you can get, without introducing chaotic data?

The problem I have is figuring out where to do what testing. Do I just poke the service directly and compare that dataset to the one I can pull from the stored procedure? I have a rough idea, and have been successful enough so far, but I feel like we are still missing something important here, so I am looking to the community to see if they have any valuable experience that might help shape my testing approach better.

Testing stored procs will require that each individual who tests has a separate instance of the db. This is a requirement. If you share environments, you won't be able to rely on the results of your tests. They will be useless.

You also need to make sure that you roll back the db to its previous state after every test in order to make the results predictable and stable. Because of this need to roll back the state after every test, these tests will take considerably longer to complete than standard unit tests, so they'll most likely be something you want to run overnight.
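
For the rollback, one common approach with plain JDBC is to open a transaction in test setup and roll it back in teardown. A minimal sketch, assuming a dedicated in-memory H2 test database (the URL is hypothetical):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Base class a DB test could extend; call setUp() before and tearDown() after each test.
public abstract class RollbackTestBase {
    protected Connection conn;

    public void setUp() throws SQLException {
        conn = DriverManager.getConnection("jdbc:h2:mem:testdb"); // hypothetical test DB
        conn.setAutoCommit(false); // everything the test does stays in one open transaction
    }

    public void tearDown() throws SQLException {
        conn.rollback(); // restore the previous state so the next test is predictable
        conn.close();
    }
}
```

Note that rollback only undoes changes the test itself made, and DDL auto-commits on some engines, which is one more reason to keep a dedicated instance per tester.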

There are a few tools out there that can help you with this. DbUnit is one, I believe, and Microsoft had a tool, Visual Studio for Database Professionals, that contained some support for DB testing.
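
For reference, DbUnit's usual pattern is to seed a known dataset before each test and clean up afterwards. A rough sketch of its standard API (the dataset file and connection details here are assumptions):

```java
import java.io.File;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class DbUnitSketch {
    public static void main(String[] args) throws Exception {
        IDatabaseTester tester = new JdbcDatabaseTester(
                "org.h2.Driver", "jdbc:h2:mem:testdb", "sa", ""); // hypothetical test DB

        // Known seed data loaded from a flat XML dataset (hypothetical file).
        IDataSet seed = new FlatXmlDataSetBuilder().build(new File("seed-data.xml"));
        tester.setDataSet(seed);

        tester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT); // wipe tables, insert seed rows
        tester.setTearDownOperation(DatabaseOperation.DELETE_ALL); // leave the DB clean afterwards

        tester.onSetup();
        try {
            // ... run the stored procedure under test and assert on its result set ...
        } finally {
            tester.onTearDown();
        }
    }
}
```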

Here are a few recommendations:

  1. Use an isolated database for unit testing (i.e. no other test runs or activity)
  2. Always insert all of the test data you intend to query within the same test
  3. Write the tests to randomly create different volumes of data, e.g. a random number of inserts, say between 1 and 10 rows
  4. Randomize the data, e.g. for a boolean field, randomly insert true or false
  5. Keep a count within the test of the variables (e.g. number of rows, number of trues)
  6. For the asserts, execute the query and compare against the local test variables (see the sketch after this list)
  7. Use Enterprise Services transactions to roll back the database to its previous state
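
Here is a sketch that combines recommendations 2 through 6: insert a random volume of rows with a randomized boolean field, keep local counts, and then assert that the query agrees with them. The in-memory H2 database and the `flags` table are stand-ins for illustration.

```java
import java.sql.*;
import java.util.Random;

public class RandomizedCountTest {
    public static void main(String[] args) throws SQLException {
        Random rnd = new Random();
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE flags(id INT, active BOOLEAN)");

            int rows = 1 + rnd.nextInt(10); // random volume: 1 to 10 rows (rec. 3)
            int trues = 0;                  // local count of the tracked variable (rec. 5)
            PreparedStatement ins = conn.prepareStatement("INSERT INTO flags VALUES (?, ?)");
            for (int i = 0; i < rows; i++) {
                boolean active = rnd.nextBoolean(); // randomized boolean field (rec. 4)
                if (active) trues++;
                ins.setInt(1, i);
                ins.setBoolean(2, active);
                ins.executeUpdate();
            }

            // Assert: execute the query and compare against the local counts (rec. 6).
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM flags WHERE active")) {
                rs.next();
                if (rs.getInt(1) != trues)
                    throw new AssertionError("Expected " + trues + " active rows, got " + rs.getInt(1));
            }
        }
    }
}
```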

See the link below for the Enterprise Services transaction technique:

http://weblogs.asp.net/rosherove/articles/DbUnitTesting.aspx

As part of our continuous integration, we run a nightly 'build' of the database queries. This involves a suite of DB calls that are updated regularly from the real calls in the code, as well as any expected ad-hoc queries.

These calls are timed to make sure that:

1/ They don't take too long.

2/ They don't differ wildly (in a bad way) from the previous night.

In this way, we catch errant queries or DB changes quickly.
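
A small sketch of that timing check, with assumed names throughout: the baseline would really be loaded from the previous night's results, and the 1.5x regression threshold is arbitrary.

```java
import java.sql.*;
import java.util.Map;

public class QueryTimingCheck {
    public static void main(String[] args) throws SQLException {
        // Last night's timings in ms; in reality these would come from a results store.
        Map<String, Long> baseline = Map.of("top_products", 120L);
        Map<String, String> suite = Map.of(
                "top_products", "SELECT id FROM products ORDER BY score LIMIT 10");

        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE products(id INT, score INT)"); // stand-in schema

            for (Map.Entry<String, String> entry : suite.entrySet()) {
                long start = System.nanoTime();
                try (ResultSet rs = st.executeQuery(entry.getValue())) {
                    while (rs.next()) { /* drain so timing covers the full fetch */ }
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;

                // Flag queries that regress badly against the previous night.
                Long previous = baseline.get(entry.getKey());
                if (previous != null && elapsedMs > previous * 1.5) {
                    System.out.printf("WARN: %s took %dms vs %dms last night%n",
                            entry.getKey(), elapsedMs, previous);
                }
            }
        }
    }
}
```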

The query planner is your friend, especially in this situation. It is good practice to check that indexes are used when you expect them to be, and that the query doesn't require more work than it needs to do. Even if you have stress tests included in your suite, it's still a good idea to catch expensive queries before your application starts grinding to a halt.
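
As an illustration of checking the planner from a test, the sketch below runs EXPLAIN against an in-memory H2 database and warns if the expected index is missing from the plan. The schema is invented, and plan text and index naming vary by database engine, so the string check would need adapting to your DB.

```java
import java.sql.*;

public class PlanCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE products(id INT PRIMARY KEY, score INT)");
            st.execute("CREATE INDEX idx_score ON products(score)");

            // Ask the engine how it intends to execute the query.
            StringBuilder plan = new StringBuilder();
            try (ResultSet rs = st.executeQuery(
                    "EXPLAIN SELECT id FROM products WHERE score > 5")) {
                while (rs.next()) plan.append(rs.getString(1));
            }

            // H2 uppercases unquoted identifiers, so the index shows up as IDX_SCORE.
            if (!plan.toString().contains("IDX_SCORE")) {
                System.out.println("WARN: expected idx_score in plan:\n" + plan);
            }
        }
    }
}
```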