Wednesday, June 17, 2009

Driving unit test take-up with code coverage

For anyone who has not fully gotten organised with unit testing and code coverage, this is for you! :) My project involved a large inherited codebase which has good black-box testing, but little unit testing and no coverage metrics. Tackling unit testing always seemed impossible to me -- how do you take 96,000 lines of Python and 1,000,000 lines of C, and build unit tests? (NB: the lines-of-C count seems high, but it's what 'wc' said.)

The general advice is not to try -- but to get traction by just writing one unit test for the next bit of new code you write. Then the next bit, and so on. Eventually, you will have figured out unit testing, and will be able to make an appropriate judgment about what to do with the body of untested code.

I have typically found this to be quite a large hill to climb. I work on only a subset of the code, which practically requires me to invoke most of the application just to get to my part. Most of my methods require so much setup that it seemed infeasible to tackle unit testing without first thinking harder about how to do it in a sane way. Setting up and tearing down my application was just not going to be workable if I were going to put a lot of unit tests in place -- I reckon the setup would have cost between 1 and 7 minutes per test!

This got relegated to the too-hard basket, and set aside until now. Here's how I found a way to get traction.

What turns out to be pretty straightforward is integrating coverage measurement. You more-or-less just switch it on, and it will record to a data file across multiple program invocations. That file can be used to accumulate coverage across test scripts, user testing, development-mode use, or indeed operations.
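
For concreteness, here's a minimal sketch of how that accumulation can work, assuming a reasonably recent coverage.py (the run_application() call is a hypothetical stand-in for your program's entry point). With auto_data=True, coverage.py loads the existing .coverage data file when measurement starts and merges new results into it when saved, so repeated runs pile up into one set of numbers:

    import coverage

    # auto_data=True makes coverage.py read the existing .coverage data
    # file (if any) at start and merge new results into it on save, so
    # coverage accumulates across separate program invocations.
    cov = coverage.Coverage(auto_data=True)
    cov.start()

    run_application()  # hypothetical entry point of the program under test

    cov.stop()
    cov.save()

The command line gets you the same effect without touching the code: run each invocation under 'coverage run --append myprog.py', then 'coverage report -m' prints the accumulated percentages along with the lines still missed.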

I ran this through about half my black-box tests, and found I was sitting at around 62% code coverage. That's not too shabby, I reckon! I know for a fact that there are quite large chunks of code which contain functionality that is not part of any operational code path, but is preserved for possible future needs. I estimate 25% of our code falls into that category; counting it as accounted for, that means 87% of the sub-area of code I work on is either exercised or deliberately dormant. Now I've got numbers that look like a challenge, rather than an insoluble problem!
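
If you want the headline number to reflect that deliberately dormant code, coverage.py can be told to ignore it explicitly. A minimal sketch, assuming coverage.py's pragma mechanism (the function names here are hypothetical); a pragma comment on the line that opens a block excludes the whole block from the report:

    def legacy_export_format(record):  # pragma: no cover
        # Preserved for possible future needs; not reached by any
        # operational code path, so kept out of the coverage report.
        return transform_to_old_format(record)

The same exclusions can be managed centrally through the exclude_lines setting in a .coveragerc file, which keeps the dormant code visible in the repository but out of the denominator.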

I think that's the key... make lifting code metrics an achievable challenge, and it will seem more attractive. It's probably important not to target a particular number. I know '100% or bust' may be what some advocates would put forward, but for anyone new to the concept, I personally feel that simply measuring the current coverage, then understanding where that number comes from and what it means, is the more important achievement.

What is clear is that I'm not going to easily lift my coverage metrics beyond a certain point through tactical black-box testing alone. I'm going to have to write tests which exercise the code paths that aren't part of the operational configuration. I'm going to have to write tests with very specific setup conditions in order to reach the lines of code which are designed to handle very specific conditions. All of a sudden, I've got achievable and well-defined goals for unit testing.
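
As a minimal sketch of the shape such a test takes, using Python's unittest -- the module and names (mypackage.parser, parse_record, TruncatedHeader) are hypothetical stand-ins for a branch that only fires under one narrow condition:

    import unittest

    # Hypothetical names, for illustration only: a parser branch that
    # runs only when a record arrives with a truncated header.
    from mypackage.parser import parse_record, TruncatedHeader

    class TruncatedHeaderTest(unittest.TestCase):
        def test_truncated_header_is_rejected(self):
            # Arrange exactly the condition this code path handles --
            # no application start-up, no minutes-long setup.
            data = b"\x01\x02"  # shorter than the full record header
            with self.assertRaises(TruncatedHeader):
                parse_record(data)

    if __name__ == "__main__":
        unittest.main()

Each test like this pins down one hard-to-reach branch, and the coverage report tells you exactly which ones remain.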

I call that a win!

Cheers,
-T