Friday, July 3, 2009

HELP: How do you do Test Driven Design and Prototyping?

Here's where I fall over with TDD. Let's imagine a standard day in my life...

I have some programming problem. I need to build a Thingy to do Stuff. I don't already have anything that does something similar, so I sit down and think about the problem. Along the way, I figure out some approaches to the problem. I don't really believe in BDUF, so mostly I'll just start coding. This kind of exploration is what helps me think, and so I'll build 2 or 3 partial programs before I start to converge on something approaching a solution. Let's dot-point the process so far:

* Problem. Solution?
* Analyse
* Maybe scrawl out a flowchart
* Write a program that for some simple input, generates something like the right output
* Gather up more input data sets, and pump them through the program, extending and fixing as I go
* Reach workable solution

Okay, now a few background points. This isn't how I'd approach a big, team project. But it's how I approach anything I have to solve by myself. I can't just navel-gaze and come up with a great program design, and if we're being honest, I'll bet you can't either. To reach a decent design, I basically need to build 2 or 3 mediocre attempts first.

Now, as far as I understand it, TDD goes hand in hand with unit testing, which is all about small, well-tested, re-usable components. Well, that's great if your fundamental starting point as a designer / developer is the component. But really, it's not. Your starting point is the problem, and the process is one of decomposition and analysis.

Some problems lend themselves to an easy decomposition. A problem which lends itself to a decomposition will immediately make you think "hey, I know how to solve this. If only I had a sorter, a comparison algorithm, some kind of message generator and an input parser this would be a cakewalk!". That kind of problem isn't so hard, and is made out of nice, well-defined objects whose role is well-understood.
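To make that first category concrete, here's a toy sketch in Python -- the parser / sorter / message-generator names are invented for illustration, not from the post:

```python
# A toy "easy decomposition" problem: turn "3,1,2" into a sorted report.
# Each hypothetical component has one small, well-understood role.

def parse(raw):
    """Input parser: comma-separated text -> list of ints."""
    return [int(x) for x in raw.split(",")]

def sort_items(items):
    """Sorter: plain ascending sort."""
    return sorted(items)

def make_message(items):
    """Message generator: list of ints -> human-readable line."""
    return "sorted: " + ", ".join(str(x) for x in items)

def pipeline(raw):
    """The whole solution is just the composition of the pieces."""
    return make_message(sort_items(parse(raw)))

# pipeline("3,1,2") -> "sorted: 1, 2, 3"
```

When the problem decomposes this cleanly, each piece is also trivially unit-testable on its own.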

Other problems make you think "uh-oh. This one's going to take some coffee, a whiteboard and a fair bit of muttering." Some part of me thinks that the better and more experienced you get, the more new problems should tend to fall into the first category, but in fact I just tend to get given harder and harder problems (or so I think!).

So this is a question to TDD experts. What is the design process that should be followed when confronted with a new problem?



  1. I'm no TDD master, but here are approaches I've used successfully:

    0. Analyze problem, wished-for output based on input, etc.
    1. Select a realistic input-and-output example that would show that your application works
    2. Write "big-box" test using that input and output in your testing framework (this isn't technically a unit test since it'll end up testing a collection of objects, but it is still pure TDD).
    3. Divide large functionality into smaller pieces, think about the API you want to access those pieces of functionality
    4. Pick a piece of functionality, write a test for it.
    5. Go into TDD loop until piece is finished
    6. Go to 3.
    7. Stop when original test is passing
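    Steps 1-2 might look like this -- a hypothetical log-summarizing "Thingy" (the example and names are mine, not from the comment), using Python's unittest:

```python
import unittest

# Hypothetical "Thingy": summarize log lines into a count per level.
# The big-box test drives the whole pipeline through its public entry
# point before any internal structure exists.

def summarize(lines):
    counts = {}
    for line in lines:
        level = line.split(":", 1)[0]
        counts[level] = counts.get(level, 0) + 1
    return counts

class BigBoxTest(unittest.TestCase):
    def test_realistic_input_gives_expected_report(self):
        lines = ["ERROR: disk full", "INFO: started", "ERROR: timeout"]
        self.assertEqual(summarize(lines), {"ERROR": 2, "INFO": 1})
```

    Run with `python -m unittest`; the smaller pieces from step 3 onward then each get their own narrower tests.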

    When I'm coding this way I sometimes realize my original acceptance test is wrong, so I'll have to modify it halfway through.

    Another technique is to start with acceptance tests, code up the solution all at the top level of abstraction (one giant function or object) and then do test driven refactoring: identify a set of functionality that should go together, write a test for it, move functionality. Sometimes it's easier to see groupings for abstraction after it's written.
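    That test-driven refactoring might be sketched like this (invented names, continuing the hedged log example): the behaviour starts in one giant function, the test for the piece you want to extract comes first, and only then does the code move.

```python
import unittest

# Before: one giant function holding everything (abridged).
def report(lines):
    counts = {}
    for line in lines:
        level = line.split(":", 1)[0]   # <- candidate for extraction
        counts[level] = counts.get(level, 0) + 1
    return counts

# Refactor step 1: write the test for the piece first.
class ParseLevelTest(unittest.TestCase):
    def test_extracts_level_prefix(self):
        self.assertEqual(parse_level("WARN: low memory"), "WARN")

# Refactor step 2: move the functionality out to make the test pass;
# report() can then be rewritten to call parse_level().
def parse_level(line):
    return line.split(":", 1)[0]
```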

  2. See, I personally think that in order to do TDD you essentially have to have done the design of the software in your head.

    The problem for me is that the design isn't the result of A union B therefore C type of thinking, but rather is the result of system-wide awareness that is akin to a dream-like state of no-concentration yet heightened awareness. I've come up with outstanding systems that way, systems that would have taken many more years to be developed through iterative, analytical progression.
    If you follow sci-fi, it's close to the mentat practice in Herbert's Dune. That's the closest analogy I can think of.

    When I get on one of those mental trips, I can't explain what happens, but I can code the final result in free-flow, and get really close to the mark.

    I agree it's rather unconventional, but from reading developer blogs, I feel it's relatively common.


  3. Thanks for the comments. Ryan's approach is about the closest that I can think of to actual test-driven design / prototyping, but even that rests on the assumption that when you attempt to modularise something, the first thing you know is what you want it to do -- exactly.

    Maybe it's possible to design by massaging the tests -- building up the expected output, imagining whether it will do what you were aiming at.

    Christopher's reply is probably closest to what I think is required for greenfields design. To break new ground, I think you just need to achieve some insight into the problem, rather than anything else. However, I think that you need to do something to harness that insight. "Imagination is cheap without the details" is a quote which comes to mind. I think that you really need to be sure that you're always grounded in a good understanding of the problem at hand. That said, I've definitely had situations where only a key piece of insight has made an entire project tractable -- turned it from a real problem into "just work".

    I still feel like there is more I can learn from TDD though, especially when it comes to design. I'd love to hear more from the blogosphere!

  4. I do struggle with this, as I tend to follow the same approach of prototyping first.

    However, once I have a prototype, even one that I know is severely flawed, I can start to write tests that describe its functionality. As these tests evolve, they tend to expose the flaws in an explicit manner, and point me in the direction of a more elegant solution.

    In essence, my first prototype primarily exists to provide structure for my tests.
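    One way to picture that -- a sketch with a made-up prototype, not the commenter's actual code -- is a set of characterization tests that pin down what the flawed prototype really does, flaws included:

```python
import unittest

# A flawed prototype: naive whitespace normalizer.
def normalize(name):
    return name.strip().lower().replace("  ", " ")

class NormalizeCharacterization(unittest.TestCase):
    def test_describes_the_happy_path(self):
        self.assertEqual(normalize("  Alice "), "alice")

    def test_exposes_a_flaw(self):
        # Three spaces collapse to two, not one -- the test makes the
        # prototype's limitation explicit, pointing at a better design.
        self.assertEqual(normalize("a   b"), "a  b")
```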

  5. Imagine your ideal API for the functionality, assuming someone waved a magic wand and created it for you.

    Then think of the most ridiculously simple example use - preferably something like adding zero to a number or processing an empty list.

    Write a test case and make it pass, even if you have to fake it. Then refactor to clean.

    If you only know what the highest-level API would be, use that. If you only know a tiny low-level piece, start with that. Just start with what you know, and keep moving. Ideas come more readily when you're working. ;-)
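    A sketch of that loop in Python, using the suggested ridiculously-simple starting case (the `add` example is mine):

```python
import unittest

# Round 1: the test for the degenerate case comes first.
class AddZeroTest(unittest.TestCase):
    def test_adding_zero(self):
        self.assertEqual(add(0, 0), 0)

# The first passing implementation could legitimately be `return 0`
# (faking it). A second test, add(2, 3) == 5, forces the real version:
def add(a, b):
    return a + b

class AddNonzeroTest(unittest.TestCase):
    def test_adding_nonzero(self):
        self.assertEqual(add(2, 3), 5)
```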

  6. I guess I can imagine a divide-and-conquer approach along those lines. Start with a single input case and a single output case.

    Step 1: Build something to produce the latter from the former, add the test, make it pass.

    Step 2: Come up with a modular design. Add unit tests for module classes / functions.

    Step 3: Add another input case. Iterate.

    Maybe that could be okay?
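    Those three steps might play out like this, assuming a hypothetical `reverse_words` problem (my example, purely for illustration):

```python
import unittest

# Step 1: one input case, one output case, and the simplest code
# that maps the former to the latter.
def reverse_words(sentence):
    return " ".join(reversed(sentence.split()))

class Step1(unittest.TestCase):
    def test_single_case(self):
        self.assertEqual(reverse_words("hello world"), "world hello")

# Step 3: add another input case and iterate. (Step 2, the modular
# design, would add unit tests for any pieces split out along the way.)
class Step3(unittest.TestCase):
    def test_empty_input(self):
        self.assertEqual(reverse_words(""), "")
```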