What I have done on many occasions when testing database calls is set up a database, open a transaction and roll it back at the end. I have even used an in-memory SQLite db which I create and destroy around each test. This works and is relatively quick.
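A minimal sketch of that second approach in Python, assuming a hypothetical `users` table: each test gets a fresh in-memory SQLite database in `setUp` and discards it in `tearDown`:

```python
import sqlite3
import unittest

class UserStoreTest(unittest.TestCase):
    """Each test method gets its own throwaway in-memory database."""

    def setUp(self):
        # ":memory:" creates a private database that exists only for this connection.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    def tearDown(self):
        # Closing the connection destroys the in-memory database entirely.
        self.conn.close()

    def test_insert_and_fetch(self):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
        row = self.conn.execute("SELECT name FROM users").fetchone()
        self.assertEqual(row[0], "alice")
```

Run it with `python -m unittest`. Because nothing persists between tests, there is no cleanup logic to get wrong.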

My real question is: should I mock the database calls, should I use the technique above, or should I use both (one for unit tests, one for integration tests) — which, in my experience at least, seems like double work?

However, if you use your technique of setting up a database, opening transactions and rolling them back, your unit tests will depend on the database service, connections, transactions, the network and so on. If you mock this out, there is no dependency on other pieces of code in your application and no external factors influencing your unit-test results.
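For illustration, here is a minimal sketch of mocking the database out entirely (the `get_user_name` function and its DB-API usage are hypothetical): a `unittest.mock.MagicMock` stands in for the connection, so the test touches no service, transaction or network:

```python
from unittest.mock import MagicMock

def get_user_name(conn, user_id):
    """Hypothetical code under test: looks up a name via a DB-API connection."""
    cur = conn.cursor()
    cur.execute("SELECT name FROM users WHERE id = ?", (user_id,))
    row = cur.fetchone()
    return row[0] if row else None

# The mock replaces the whole database layer; nothing external is involved.
conn = MagicMock()
conn.cursor.return_value.fetchone.return_value = ("alice",)

assert get_user_name(conn, 1) == "alice"

# We can also verify exactly which SQL was issued:
conn.cursor.return_value.execute.assert_called_once_with(
    "SELECT name FROM users WHERE id = ?", (1,)
)
```

If this test fails, it can only be because of the logic in `get_user_name` itself, not because a database was down or a schema drifted.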

The aim of a unit test is to test the smallest testable piece of code without involving other application logic. That can't be achieved when using your technique, IMO.

Making your code testable by abstracting your data layer is good practice. It will make your code more robust and easier to maintain. If you implement a repository pattern, mocking your database calls is fairly easy.
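A sketch of what that repository pattern might look like (all names here are hypothetical): the business logic depends only on an abstract repository, and the unit test swaps in a trivial in-memory fake, so no database call is ever made:

```python
from abc import ABC, abstractmethod
from typing import Optional

class UserRepository(ABC):
    """Abstraction over the data layer; the real implementation would hit the DB."""
    @abstractmethod
    def find_name(self, user_id: int) -> Optional[str]: ...

class GreetingService:
    """Business logic that depends only on the repository interface."""
    def __init__(self, repo: UserRepository):
        self.repo = repo

    def greet(self, user_id: int) -> str:
        name = self.repo.find_name(user_id)
        return f"Hello, {name}!" if name else "Hello, stranger!"

class FakeUserRepository(UserRepository):
    """In-memory stand-in used by unit tests; no database involved."""
    def __init__(self, users):
        self.users = users

    def find_name(self, user_id):
        return self.users.get(user_id)

service = GreetingService(FakeUserRepository({1: "alice"}))
assert service.greet(1) == "Hello, alice!"
assert service.greet(99) == "Hello, stranger!"
```

The real repository (using SQLite, an ORM, whatever) is then exercised separately by your integration tests.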

Also, unit tests and integration tests serve different needs. Unit tests exist to prove that a piece of code is technically working, and to catch corner cases. Integration tests verify the interfaces between components against a software design. Unit tests alone cannot verify the functionality of a piece of software.

HTH

All I have to add to @Stephane's answer is: it depends on how you fit unit testing into your own development practices. If you have end-to-end integration tests involving a real database that you create and clean up as needed — and provided you've covered all the different paths through your code and the various situations that could occur with your users posting data, etc. — you are covered from the point of view of your tests telling you whether your system is working, which is probably the main reason for having tests.

I'd guess, though, that having all your tests run through every layer of your system makes test-driven development very hard. Requiring every layer to be in place and working for a test to pass pretty much rules out spending a few minutes writing a test, a few minutes making it pass, and repeating. It means your tests can't guide you in how individual components behave and interact; your tests won't force you to make things loosely coupled, for example. Also, say you add a new feature and something breaks elsewhere: fine-grained tests that run against components in isolation make tracking down what went wrong much easier.

So I'd say it's worth the "double work" of creating and maintaining both integration and unit tests, with your DAL mocked or stubbed in the latter.