One important property of unit tests, and of tests in general, is that you can trust them. If that nice little bar shines green, your logic should be fine. If you cannot trust your tests, they're pretty useless. But what does trust actually mean? When can I trust my tests?
The following things come to my mind when trying to answer this question:
Do test-first development
Now that's one of the most difficult parts unit testing beginners have to face. You will catch yourself, again and again, writing the implementation first. That happens to me too, but in such a case I immediately stop writing that logic, comment it out and watch the test fail.
Seeing the test fail is essential for being able to trust it: if the test succeeds even though you commented out the code it is supposed to cover, you're in trouble. Such a test isn't good for anything.
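To make the workflow concrete, here is a minimal sketch of the idea (the `DeviceNameFormatter` class and its behaviour are purely hypothetical, not from the post): the test is written first, the implementation body starts out commented out so the test fails, and only then is the logic filled in.

```java
// Hypothetical test-first example: the assertion below is written BEFORE
// the implementation. With format()'s body commented out (returning null),
// the check fails -- which is exactly what we want to see first.
public class DeviceNameFormatterTest {

    static class DeviceNameFormatter {
        String format(String rawName) {
            // Implementation written only AFTER the test was seen failing:
            return rawName.trim().toUpperCase();
            // return null; // <- the initial, commented-out state
        }
    }

    public static void main(String[] args) {
        DeviceNameFormatter formatter = new DeviceNameFormatter();
        String result = formatter.format("  headset ");
        if (!"HEADSET".equals(result)) {
            throw new AssertionError("expected HEADSET but was " + result);
        }
        System.out.println("test passed");
    }
}
```

Swapping the two return statements back reproduces the failing state, which is the point: you have now seen both the red and the green bar for this test.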
Analyzing your coverage is important to strengthen that trust. Having tests in place that cover only 20% of your code doesn't really mean you haven't broken anything, even if all of those tests succeed. Sounds reasonable, doesn't it?
It is therefore important to also have some statistics about your test code coverage in place. Since I'm currently working on my thesis project, my example uses Java and Eclipse, but Peter assured me that VS2010 already has something similar integrated (I have to check it out one of these days).
In Eclipse there is a nice plugin called EclEmma which does this job for you. Let's look at an example.
I have a repository object, basically an extremely simplified version of Fowler's repository pattern (maybe I shouldn't even mention it, it's that simplistic, but the idea is the same). The repository acts as the connector between my logic and the DB persistence layer: it keeps an in-memory cache and otherwise queries the DB accordingly. I wrote tests to make sure everything works as expected. Running them with EclEmma, my tests succeed and, in addition, my code gets highlighted depending on whether a given piece of logic is covered by some test case or not:
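The cache-in-front-of-a-DAO idea could be sketched roughly like this. `IBluetoothDeviceDao` is the interface name mentioned in this post; every other name here is an illustrative assumption, not the actual thesis code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the repository described above: an in-memory cache in front
// of a DAO, so the DB is only queried on a cache miss.
public class RepositorySketch {

    interface IBluetoothDeviceDao {
        String findNameById(int id); // stand-in for the real DAO query
    }

    static class BluetoothDeviceRepository {
        private final IBluetoothDeviceDao dao;
        private final Map<Integer, String> cache = new HashMap<>();

        BluetoothDeviceRepository(IBluetoothDeviceDao dao) {
            this.dao = dao;
        }

        String getName(int id) {
            // Serve from the cache if possible, otherwise delegate to
            // the DAO and remember the result.
            return cache.computeIfAbsent(id, dao::findNameById);
        }
    }

    public static void main(String[] args) {
        final int[] daoCalls = {0};
        IBluetoothDeviceDao dao = id -> {
            daoCalls[0]++;
            return "device-" + id;
        };
        BluetoothDeviceRepository repo = new BluetoothDeviceRepository(dao);
        System.out.println(repo.getName(1));              // DAO hit
        System.out.println(repo.getName(1));              // cache hit
        System.out.println("dao calls: " + daoCalls[0]);  // dao calls: 1
    }
}
```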
Oops, note the red part: no test covers this logic. Apparently I forgot to test my repository with an attached DAO object. After creating a mock implementation of the IBluetoothDeviceDao interface and a corresponding test case, running the tests with EclEmma again gives me the following:
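Such a mock-based test might look like this hand-rolled sketch (again, everything beyond the `IBluetoothDeviceDao` name is an assumption): the mock records its calls so the test can verify that the repository really delegated to the attached DAO.

```java
// Hand-rolled mock of the DAO: it returns canned answers instead of
// touching a real DB, and counts calls so the test can verify delegation.
public class RepositoryDaoTest {

    interface IBluetoothDeviceDao {
        String findNameById(int id);
    }

    static class MockBluetoothDeviceDao implements IBluetoothDeviceDao {
        int calls = 0;

        @Override
        public String findNameById(int id) {
            calls++;
            return "mock-device-" + id; // canned answer, no real DB
        }
    }

    static class BluetoothDeviceRepository {
        private final IBluetoothDeviceDao dao;

        BluetoothDeviceRepository(IBluetoothDeviceDao dao) {
            this.dao = dao;
        }

        String getName(int id) {
            return dao.findNameById(id); // caching omitted for brevity
        }
    }

    public static void main(String[] args) {
        MockBluetoothDeviceDao mock = new MockBluetoothDeviceDao();
        BluetoothDeviceRepository repo = new BluetoothDeviceRepository(mock);
        String name = repo.getName(7);
        if (!"mock-device-7".equals(name) || mock.calls != 1) {
            throw new AssertionError("repository did not delegate to the DAO");
        }
        System.out.println("delegation verified");
    }
}
```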
You immediately see the difference: the code coverage increased. The remaining red part is there because I didn't yet explicitly test the exception-handling logic.
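Covering that last red branch means writing a test that forces the error path. A sketch under the same assumptions as before: a DAO stub that always throws drives the repository into its catch block, which is exactly the code EclEmma would still mark red.

```java
// Sketch of explicitly testing an exception path: the DAO stub always
// fails, forcing the repository's catch branch to execute.
public class ExceptionPathTest {

    interface IBluetoothDeviceDao {
        String findNameById(int id) throws Exception;
    }

    static class BluetoothDeviceRepository {
        private final IBluetoothDeviceDao dao;

        BluetoothDeviceRepository(IBluetoothDeviceDao dao) {
            this.dao = dao;
        }

        String getName(int id) {
            try {
                return dao.findNameById(id);
            } catch (Exception e) {
                // The branch that stays red until a test drives it:
                return "unknown";
            }
        }
    }

    public static void main(String[] args) {
        // Stub that simulates a DB failure on every call.
        IBluetoothDeviceDao failingDao = id -> {
            throw new Exception("DB down");
        };
        BluetoothDeviceRepository repo = new BluetoothDeviceRepository(failingDao);
        String result = repo.getName(42);
        if (!"unknown".equals(result)) {
            throw new AssertionError("expected fallback value, got " + result);
        }
        System.out.println("exception path covered");
    }
}
```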
This visualization is extremely nice because you can verify the coverage immediately after each test run.

References

http://www.eclemma.org/
Questions? Thoughts? Hit me up on Twitter