
99 Bottles – Mindful TDD Practice

In the last two meetings of XP Man, we’ve been mobbing to write a sudoku solver, as part of an exercise to explore the limits of TDD. As we discussed in our retro at the end of the second session, we’d struggled to make real progress. The reasons suggested for this were many and varied: possibly the format of such a large group, unfamiliar with mobbing together, combined with a relatively short time-frame, led to the code being pulled in many different directions.

My personal thoughts were that while the next step to take in TDD can lead to some disagreement and discussion (which test to write next, the simplest way to make it pass, when and what to refactor), our disagreement seemed to be more fundamental than that, and could even be interpreted as us having different views on what the process of TDD itself is.

Earlier that week, in a separate group (our Book Club meeting) we had just finished reading Sandi Metz and Katrina Owen’s 99 Bottles of OOP, and had chosen to try a kata to mindfully practise what we had learnt. This struck me as being a very interesting contrast to our experience at XP Man. After pondering this for a while, I’ve decided to write this blog to capture our practice session. Moreover, I humbly hope to use it to illustrate how the XP Man session differed from TDD as I understand it.

Anyway, on with the illustration. We chose a classic kata for our book club experiment, the Roman Numerals Kata, in which we transform a string containing a value in Roman numerals into an integer. Even though this is a well known exercise, most of the members of the group hadn’t done it before, so our attempt was largely uninfluenced by past experience.

I’m not going to include the tests here to keep the amount of code under control. As you can imagine, they’re pretty straightforward, passing in the Roman value and expecting the corresponding integer.
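For anyone who wants to picture them, here is a minimal sketch of how the early tests might have looked (JUnit 4 here, with the roman method sitting alongside its tests in the same class; both of those details are my assumptions rather than a record of what we actually had):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class RomanNumeralsTest {

        @Test
        public void convertsItoOne() {
            assertEquals(1, roman("I"));
        }

        @Test
        public void convertsIItoTwo() {
            // This is the 'red' test that drives the next change to the method below.
            assertEquals(2, roman("II"));
        }

        // The roman(String) method grown step by step through the rest of this
        // post sits alongside its tests; this is its first incarnation.
        private int roman(String roman) {
            return 1;
        }
    }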

Our first test converted I to 1, and we made the simplest possible implementation:

    private int roman(String roman) {
        return 1;
    }

When we added our test for II to 2, one of our group spotted that we could just return the length of the string…

    private int roman(String roman) {
        return roman.length();
    }

and the tests passed, including the next test for III being 3 (I’ll come back to this shortly).

Next we added a test for IV being 4, and made it pass with an explicit condition:

    private int roman(String roman) {
        if (roman.equals("IV")) {
            return 4;
        }
        return roman.length();
    }

And then we extended it with the case for V.

    private int roman(String roman) {
        if (roman.equals("IV")) {
            return 4;
        } else if (roman.equals("V")) {
            return 5;
        } else if (roman.equals("I")) {
            return 1;
        }
        return roman.length();
    }

At this point, we made our first connection with the book, and decided to refactor the roman.length() for the I, II and III cases into explicit conditions.

    private int roman(String roman) {
        if (roman.equals("IV")) {
            return 4;
        } else if (roman.equals("V")) {
            return 5;
        } else if (roman.equals("I")) {
            return 1;
        } else if (roman.equals("II")) {
            return 2;
        } else if (roman.equals("III")) {
            return 3;
        }
        return roman.length();
    }

The book tells us to make our code more similar, to reveal the patterns therein and be able to spot the deeper abstractions. The two different ways of writing code (using length() and using explicit conditions) made it much harder to see that these were actually similar cases. Trying to introduce an abstraction too soon had actually made it harder to see what we were doing. Sandi Metz also mentions this in her blog The Wrong Abstraction where she says ‘Prefer duplication to the wrong abstraction’. This did leave us with the awkward default case of returning the length() that we knew was now not being used by the tests, but we thought we’d make a mental note of the fact and press on to see if the problem would resolve itself.

The book club mob didn’t feel quite ready to start finding abstractions just yet, so we decided to add a test for VI and add another condition.

    private int roman(String roman) {
        if (roman.equals("IV")) {
            return 4;
        } else if (roman.equals("V")) {
            return 5;
        } else if (roman.equals("I")) {
            return 1;
        } else if (roman.equals("II")) {
            return 2;
        } else if (roman.equals("III")) {
            return 3;
        } else if (roman.equals("VI")) {
            return 6;
        }
        return roman.length();
    }

With a bit of refactoring to remove the unnecessary else keywords, we were able to rearrange the code in numeric order.

    private int roman(String roman) {
        if (roman.equals("I")) {
            return 1;
        }
        if (roman.equals("II")) {
            return 2;
        }
        if (roman.equals("III")) {
            return 3;
        }
        if (roman.equals("IV")) {
            return 4;
        }
        if (roman.equals("V")) {
            return 5;
        }
        if (roman.equals("VI")) {
            return 6;
        }
        return roman.length();
    }

While we didn’t have tests for all cases, the book-clubbers recognised this as another idea from 99 Bottles. We felt that this was what the book refers to as a Shameless Green solution. By the book’s definition Shameless Green is ‘the solution which quickly reaches green while prioritizing understandability over changeability’ and ‘patiently accumulates concrete examples while awaiting insight into underlying abstractions’. A shameless green solution makes a good starting point for looking for abstractions.

This brings me to the first point of contrast with what I remember of the XP Man mob. Other people in the group may disagree, but to my mind we never arrived at a shameless green solution, one where we’d just tried to get the tests green without adding any particular design. At most we had two or three cases in the code, and I remember that at times we’d prematurely started to look for abstractions. I think it would have been useful to have a few more cases before we started looking for them. To my mind, not having a shameless green solution with a reasonable number of cases made it hard to spot the patterns to base abstractions around.

So, back to the Roman numerals. From shameless green, we started looking for patterns. The I being 1 case seemed pretty atomic, but we spotted that the II being 2 was really more like II being 1 + 1, so we made it so.

    private int roman(String roman) {
        if (roman.equals("I")) {
            return 1;
        }
        if (roman.equals("II")) {
            return 1 + 1;
        }
        if (roman.equals("III")) {
            return 3;
        }
        if (roman.equals("IV")) {
            return 4;
        }
        if (roman.equals("V")) {
            return 5;
        }
        if (roman.equals("VI")) {
            return 6;
        }
        return roman.length();
    }

and then we generalised to all the cases:

    private int roman(String roman) {
        if (roman.equals("I")) {
            return 1;
        }
        if (roman.equals("II")) {
            return 1 + 1;
        }
        if (roman.equals("III")) {
            return 1 + 1 + 1;
        }
        if (roman.equals("IV")) {
            return -1 + 5;
        }
        if (roman.equals("V")) {
            return 5;
        }
        if (roman.equals("VI")) {
            return 5 + 1;
        }
        return roman.length();
    }

Another of the book-clubbers then suggested that each of the ones in 1 + 1 could be generated by the recursive calls, which the mob agreed with, and so produced:

    private int roman(String roman) {
        if (roman.equals("I")) {
            return 1;
        }
        if (roman.equals("II")) {
            return roman("I") + roman("I");
        }
        if (roman.equals("III")) {
            return 1 + 1 + 1;
        }
        if (roman.equals("IV")) {
            return -1 + 5;
        }
        if (roman.equals("V")) {
            return 5;
        }
        if (roman.equals("VI")) {
            return 5 + 1;
        }
        return roman.length();
    }

and that this could be done for all the multiple-numeral cases:

    private int roman(String roman) {
        if (roman.equals("I")) {
            return 1;
        }
        if (roman.equals("II")) {
            return roman("I") + roman("I");
        }
        if (roman.equals("III")) {
            return roman("I") + roman("I") + roman("I");
        }
        if (roman.equals("IV")) {
            return - roman("I") + roman("V");
        }
        if (roman.equals("V")) {
            return 5;
        }
        if (roman.equals("VI")) {
            return roman("V") + roman("I");
        }
        return roman.length();
    }

We then spotted from the code that the I and V cases were similar, and so moved them to the start of the function to make them stand apart from the multiple-numeral cases.

    private int roman(String roman) {
        if (roman.equals("I")) {
            return 1;
        }
        if (roman.equals("V")) {
            return 5;
        }

        if (roman.equals("II")) {
            return roman("I") + roman("I");
        }
        if (roman.equals("III")) {
            return roman("I") + roman("I") + roman("I");
        }
        if (roman.equals("IV")) {
            return - roman("I") + roman("V");
        }
        if (roman.equals("VI")) {
            return roman("V") + roman("I");
        }
        return roman.length();
    }

The next step was spotted by one of the junior members of the group, and involved making the code more similar again. The case for III was now made of 3 recursive calls, but could be changed to recursive calls for I and for II. This made the cases of II, III and VI more similar again, as could be seen from the similarity of the if statements. This was something else the book had told us to look out for as it hinted that there was an abstraction to be discovered.

    private int roman(String roman) {
        if (roman.equals("I")) {
            return 1;
        }
        if (roman.equals("V")) {
            return 5;
        }

        if (roman.equals("II")) {
            return roman("I") + roman("I");
        }
        if (roman.equals("III")) {
            return roman("I") + roman("II");
        }
        if (roman.equals("VI")) {
            return roman("V") + roman("I");
        }

        if (roman.equals("IV")) {
            return - roman("I") + roman("V");
        }
        return roman.length();
    }

We could see that the II, III and VI cases were similar, and could be solved by recursion, so we moved the still outstanding special case for IV higher up to leave the interesting cases last.

    private int roman(String roman) {
        if (roman.equals("I")) {
            return 1;
        }
        if (roman.equals("V")) {
            return 5;
        }

        if (roman.equals("IV")) {
            return - roman("I") + roman("V");
        }

        if (roman.equals("II")) {
            return roman("I") + roman("I");
        }
        if (roman.equals("III")) {
            return roman("I") + roman("II");
        }
        if (roman.equals("VI")) {
            return roman("V") + roman("I");
        }

        return roman.length();
    }

We could then generalise these cases to being the roman value of the first roman numeral added to the value of the remaining numerals. Note that this step also resolved the dangling case of the length() that we were worried about.

    private int roman(String roman) {
        if (roman.equals("I")) {
            return 1;
        }
        if (roman.equals("V")) {
            return 5;
        }

        if (roman.equals("IV")) {
            return - roman("I") + roman("V");
        }

        return roman(roman.substring(0, 1)) + roman(roman.substring(1));
    }

Again, this was something we remembered from the book. We were reducing duplication, but not just by finding identical characters in the code; we were finding duplicated concepts. Here the first part of each return statement, roman("I")/roman("I")/roman("V"), represents the same concept, the value of the first Roman numeral, even though they are not the exact same text.

However, while looking back at this for this blog, I think that maybe we made a slightly bigger leap in that last step than we should have. Maybe the excitement of discovering the abstraction made us rush ahead a little. I think we could have made a couple of smaller steps of making the code more similar before finally pulling out the abstraction. As it happens, the bigger step worked and the tests passed, so all was OK this time.
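To illustrate (this is my own after-the-fact reconstruction, not a step we actually took in the session), one such smaller step could have been to rewrite each multiple-numeral case in terms of substrings first, leaving three textually identical branches:

    private int roman(String roman) {
        if (roman.equals("I")) {
            return 1;
        }
        if (roman.equals("V")) {
            return 5;
        }

        if (roman.equals("IV")) {
            return - roman("I") + roman("V");
        }

        // Each multiple-numeral case expressed as "first numeral plus the rest",
        // still spelled out per case; the three branches are now textually identical.
        if (roman.equals("II")) {
            return roman(roman.substring(0, 1)) + roman(roman.substring(1));
        }
        if (roman.equals("III")) {
            return roman(roman.substring(0, 1)) + roman(roman.substring(1));
        }
        if (roman.equals("VI")) {
            return roman(roman.substring(0, 1)) + roman(roman.substring(1));
        }

        return roman.length();
    }

From there, collapsing the three identical branches (and the now-redundant length() fallback) into the single return statement feels like a much smaller, safer step.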

At this point the book club mob had run out of time and so had to stop. We’d had an hour-long session, but had lost 20 minutes at the start trying to set up the AV in a new room. What you see here had been achieved by a group of 5 in only 40 minutes. While we had not completed the kata, the book-clubbers felt like they had practised well and applied their learning, and that was valuable.

One thing worth noting is that since arriving at our Shameless Green solution, the book-clubbers did nothing other than refactor. That refactoring took the form of very small changes to the existing code, one at a time, where the tests always passed. This for me is the essence of Test Driven Design, i.e. the design of the code should emerge from the test cases alone. We did not go looking for a design to solve this problem; the design found us from the examples, just by us paying attention to the small details (bike-shedding, as Kevin put it).

Another observation to be made is about the cadence of TDD. If we naively follow the TDD mantra of Red/Green/Refactor, it seems that these 3 stages should equate to roughly equal amounts of work, repeated in that order. In reality, getting to Shameless Green is mainly a process of Red/Green with only a little Refactoring. Once enough test cases are simply expressed in the code, a more lengthy period of Refactoring often ensues. This is borne out by the book. Chapter 1 introduces Shameless Green, and then chapter 2 circles back and explains how to drive it from the tests. The book has 4 further chapters that are almost all refactoring (apart from the final section of the final chapter, which implements a new feature that the refactorings have made easy).

This is the final point of contrast I’d like to make with the XP Man experience. On more than one occasion, the XP Man mob started writing speculative code. For me this did not feel like TDD. I was expecting code to only be written to make a failing test pass, and other than that for code to change via baby-step refactorings.

As an exercise in discovering the limits of TDD, the sudoku solver still interests me and I’d like to see a solution generated via TDD (if possible!). Maybe we’ll undertake it with the book club mob.

One final message to the XP Man folks in particular. Hopefully this illustrates my understanding of TDD. If anything is unclear I’d be happy to answer questions. Moreover, if anybody has a different view on the process of TDD, then I’d be very happy to hear them set out in blog form or otherwise.

Growing Legacy Code, Guided by Tests

In our weekly practice session at work, I recently ran a hands-on Baby Steps session that I had first experienced at the London Software Craftsmanship Community.  In a nutshell, the exercise is to work in really small (2 minute) pomodori, in which you pair to write a failing test and then try to make it pass.  If the test is green within the 2 minutes you can commit; if it is still red, you have to roll back.  The idea behind the exercise is to improve TDD skills by learning to work in smaller steps, encouraging the smallest amount of change needed to progress the implementation.  We chose to use the Tennis Kata as our problem to solve, which involves writing some code to score a single game of tennis.

The experience of the pairs started out much as you would expect.  They struggled to do something small enough for the first few pomodori, but after a while started to get into the right rhythm.  Having said that, the pairs all embraced the roll-backs with good humour and persevered to reduce the scope of what they were attempting until they were able to start making regular progress.

We discussed afterwards why the earlier pomodori resulted in so many roll-backs, and a couple of the symptoms were:

  1. Lack of focus on the name of the test and consequent lack of direction in the 2 minute period.

  2. Writing too much infrastructure for the first test and its implementation.  Lots of setup, fields and supporting structure in your classes are not needed.  The first test is difficult in that you have to start from nothing.  You really need to focus on getting it written and passing, foregoing all other niceties; there isn’t time for anything else.  This chimes with what Keith Braithwaite is getting at in his TDD as if you meant it, and gives the lowest barrier to getting the first test passing.  This is also a good illustration of lean principles (but maybe that’s for a future blog).
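As a rough sketch of what ‘just enough’ for a first Tennis Kata test might look like (the class and method names here are my own invention, not what any pair actually wrote), something this small fits comfortably inside a two minute pomodoro:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class TennisGameTest {

        @Test
        public void newGameScoresLoveAll() {
            // No player names, no enums, no scoring tables yet - just the
            // smallest possible statement about a brand new game.
            assertEquals("Love-All", new TennisGame().score());
        }
    }

    // The matching two-minute implementation: nothing more than the literal.
    class TennisGame {
        public String score() {
            return "Love-All";
        }
    }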

These were all good lessons learned and, as expected, the number of roll-backs decreased as the session continued.

However, after a good number of commits, something else started to happen…

Though it was not mentioned at the start, the pairs took the occasional pomodoro to refactor. This was interesting as it indicated that tech debt had managed to build up in their implementations in a matter of minutes. It also made a clear illustration of how tech debt prevented progress on delivering the next piece of working functionality. No failing test had been written and made to go green within that 2 minute period. Clearly this is evidence as to why we should keep our production code as well factored as possible.

However, even with a few refactoring pomodori, the pairs started to get bogged down at around the point where the tennis game goes to Deuce.  Insufficient refactoring had meant that it had become difficult to make progress and so the pomodori started to return to roll-backs rather than commits.

At this point it might be worth commenting that I was expecting something of this ilk, as it was rather like my experience when I originally attempted the exercise.  Though the process was intended to encourage the participants to work in smaller steps, it probably also encouraged a less than ideal TDD style.  It’s all too easy to take the 2 minute pomodori as encouragement to work quickly, and committing on green took the focus away from the refactoring step of the TDD cycle: Red, Green, Refactor.

This was not, however, an entirely negative activity.  When the pairs had encountered the problem for long enough we stopped and discussed our experience.  It was clear that something was wrong and the pairs were feeling the pain.  It was good to recognise this and to identify that the code was giving feedback: it was pleading to be refactored.  Hopefully the experience will allow the pairs to be more sensitive to this feedback when working on production code.

In my experience, refactoring is often the neglected step in TDD.  When I first experimented with TDD I would often produce ‘legacy solutions’ that just added more and more conditional statements and state variables as the implementation progressed.  For me, this feels like Growing Legacy Code, Guided By Tests.
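To make that concrete, here is an invented illustration (not code from any of our sessions) of the kind of score() method that tends to emerge when every new test is made green by bolting on another condition or flag and nothing is ever refactored:

    // An invented illustration of 'grown legacy code': one special case per
    // test, with extra state accumulating alongside the conditionals.
    class GrownLegacyTennisGame {
        private int player1Points;
        private int player2Points;
        private boolean advantagePlayer1;
        private boolean player2ScoredLast;

        public String score() {
            if (player1Points == 0 && player2Points == 0) {
                return "Love-All";
            }
            if (player1Points == player2Points && player1Points >= 3) {
                return "Deuce";
            }
            if (player1Points >= 4 && player1Points - player2Points >= 2) {
                return "Win for player1";
            }
            if (advantagePlayer1 && !player2ScoredLast) {
                return "Advantage player1";
            }
            // ... and so on, another branch and another flag for every new test ...
            return "";
        }
    }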

Traditional legacy code often comes about by the code being modified at different times (often by different people) and as time passes its quality decays and it becomes brittle and resistant to change.  This process of adding one layer then another and another, without ensuring that all the layers sit well together, is also known as code rot.  However, using TDD badly, we seem to be able to make code rot in a matter of minutes!  The accelerated decay process seems to work by rapidly applying this layering effect and ends up with a similar result.  For the inexperienced TDD practitioner, it’s all too easy to skip the refactoring step; the ‘high’ of seeing the green bar can make you think all is OK, when actually your work is not yet complete.

It takes great discipline and sensitivity to the code being written to ensure it is as well-factored as possible at all stages.  Though I am aware of this, I still make mistakes and leave refactoring later than I should; I have been trying for some time to improve my TDD skills in this area.  I have found this exercise very helpful in clarifying some of this, and in allowing me to share that knowledge with others.

As for the team, we are now re-implementing the Tennis Kata without the 2-minute pomodoro constraint (but still working in baby steps), with the focus on refactoring and ensuring the implementation does not get out of hand.  The code we are writing is much cleaner and simpler, and stands in stark contrast to the code written the first time around.  We are already much happier with the results.

Running an Intro To TDD session for Chester Devs

I thought I’d jot down my experiences of planning and presenting an Introduction to Test Driven Development recently, as when I was looking on the web for inspiration for the session there didn’t seem to be much out there.  There’s plenty on TDD, but not on how to introduce it, so here’s my experience for anybody who’s interested.

All this started when Fran Hoey announced that he wanted to set up the Chester Devs group.  I got in touch and offered to do a session based around getting hands on with some form of exercise.  After a little constructive discussion, it became clear that really the first session should be An Introduction to Test Driven Development, as all the exercises I was considering would need some experience of TDD, and we were unsure how familiar the group would be with the technique.  The only problem being that I didn’t have anything of that sort prepared!

So, what to do…?  I started off by randomly putting down thoughts about what I wanted to cover (if I were being really pretentious, I would consider this driving the session by selecting tests with which to judge whether the session succeeded, thus generating the Introduction to TDD by TDD… and while it would be a nice self-referential tick-in-the-box, I don’t really think this happened).  What I really wanted was a hands-on session, as I’d experienced several such meetings with the London Software Craftsmanship Community and found these to be an excellent way to learn; nothing beats getting your hands dirty and working with the techniques yourself.  We also had a time limit of about an hour and a half for the session, so it had to be reasonably compact.

Wanting to get wider input for the session, I contacted Sandro Mancuso of the London Software Craftsmanship Community looking for ideas, and he in turn put me onto Alastair Smith who had recently done a similar session for the Cambridge Software Craftsmanship Community.  They both gave valuable input, firstly on really focusing the session on TDD, and also being prepared for developers who had not previously seen TDD struggling to take it on board in such a short session.  All good advice!  I squirrelled it away…

The other major part of my preparation was to re-read Kent Beck’s Test Driven Development book in detail for the nth time.  This formed the basis for the presentation and the ideas to put across.  I found that even though I’d read the book several times before, there were still very valuable nuggets of information in there that I’d either forgotten or overlooked.  I really don’t think the presentation would be half of what it was without this source of good quality information.  I found it interesting that when you’re reading books to then use them as a basis to tell others about their content, you digest them a lot more carefully.  I also re-read the introduction to Growing Object-Oriented Software Guided by Tests by Steve Freeman and Nat Pryce.  Again this is a great source of inspiration, but it did go a little further than I really needed for simply an introduction to TDD.  There’s no way I wanted to try to include fleshing out object designs and mocking in this introductory session.

I also needed an exercise to undertake during the session, and it was Fran who actually came up with the best suggestion for this.  I was a little unsure at first, but we ended up investigating the Greed Kata and it turned out to be pretty good for the job.  In its simplest form, it’s a collection of 8 rules for scoring a single roll of 5 dice (the full dice game would not be implemented).  The rules were pretty elementary and so would be easy to understand in the session; I didn’t want to have to go into long and detailed explanations of the rules during the session, and this fitted the bill nicely.  It was also simple enough that the participants could focus on the TDD techniques and wouldn’t be distracted by the puzzle.  I practised the kata several times before the session to make sure I understood it well enough, but I did always have a nagging doubt at the back of my mind that opening it up to a number of other keen, sharp developers would uncover some corner of the kata that I hadn’t explored.

With the bulk of the research in place, I now needed to plan the structure of the session.  I was painfully aware of the hour and a half time limit, that TDD is a huge topic, and that if we went over time we would probably lose the devs to tiredness after a long day.  For this reason, I decided to take a chance on giving the shortest possible introduction to TDD so we could get the devs working on the Greed Kata and then discuss it at the end.  I opted for giving out some simple rules for TDD, and these formed the first 3 slides of the presentation:

  1. Don’t design a solution up front, instead write the next test
  2. Red/Green/Refactor
  3. Goals for the session being to experience TDD and not necessarily to complete the exercise.

I would ask the room to indulge me in doing things this way, and by doing this we would then have an experience of TDD which we could discuss.  Furthermore, it would avoid overloading the attendees with information that they would probably still be trying to digest while doing the exercise, which might confuse matters.  I was also aware that this might backfire, and the group might just not have a clue what to do.  For this reason I prepared 3 things:

  1. I was willing to talk the group through the first test
  2. I was prepared to do the first test on my laptop using the projector (or possibly even the full Greed Kata)
  3. I had prepared Kent Beck’s Fibonacci example as an alternative to show, so that the group might pick up the idea of what to do with the Greed Kata

Ideally I was hoping that after the group had had some time with the Kata we would just discuss it, but just in case the group didn’t have too much to say, I prepared some slides that tried to explain why I had asked for the particular steps to be followed.  For example, in asking people not to design a solution up front, I was asking them to place working code before design; and in writing the simplest failing test, they would be using a divide and conquer technique to solve the problem.  This made the presentation nice and circular and tied the rules I would ask people to accept to an explanation.  It would also allow some deeper thinking about TDD should it not fall out from the discussion.

On the day itself, we turned up at Chester Uni (our hosts) and found an excellent room set aside for us with some delicious sandwiches provided.  Definitely a good start.

After Fran had introduced the session, I was given the floor and fired up the slides.  I started with asking about the experience the group had with TDD, and it turned out that we had a roughly 50/50 split of TDDers and newbies, with a couple of people who had some intermediate knowledge (knew about tests, but hadn’t used them for full TDD).  This was good as we would be able to pair people up neatly for the exercise.  I explained my background: 3+ years’ experience with TDD, though I didn’t claim to be any kind of expert and said that I was still learning.  I also explained the format for the session in that I would be asking people to take on board some simple rules, and to trust me in doing so.  This would allow us to experience TDD and then we could discuss it as a group.  This was part of a little Expectation Management just in case some people in the room were too shocked by the paradigm shift from traditional design-first development to TDD; I wasn’t there to sell TDD, just to encourage us to experience it and make up our own minds.  As it turned out, either it worked pretty well, or we didn’t have anybody who was too upset by the idea.  At least it made me feel a little easier that I wasn’t cornering people and forcing them into TDD.

We went through the first slides and as far as I could tell everybody understood, so we discussed the exercise and paired people up.  We had a few problems with people not having a complete dev environment, missing things like test frameworks, but either by managing the pairing or by other means we got everybody on a working machine and got them started.

By just milling around between the pairs, it was clear that not everybody knew where to start.  So after a few minutes I paused the exercise and we had a two minute discussion about what the first test would be – one of the things I had planned for.  Some people were trying to handle error cases first and throw exceptions and the like, and while these may be good tests to start with in the real world, I gently suggested that we didn’t need to do these for the exercise.  There were also pairs who were just working their way through the sheet top to bottom.  This would have made the first test ‘rolling a single 1 scores 100’, but with a bit of discussion within the group we came round to a slightly simpler initial test: ‘a bad hand scores nothing’.
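To give a feel for those two candidate first tests (the GreedScorer class, its score method and the dice values here are all my own invention, not what any pair actually wrote), they might look something like this:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class GreedScorerTest {

        @Test
        public void aBadHandScoresNothing() {
            // No 1s, no 5s and no triples, so nothing scores.
            assertEquals(0, new GreedScorer().score(new int[] {2, 3, 4, 6, 2}));
        }

        @Test
        public void aSingleOneScores100() {
            assertEquals(100, new GreedScorer().score(new int[] {1, 2, 3, 4, 6}));
        }
    }

    // Hypothetical class under test; the pairs' real code will have differed.
    class GreedScorer {
        public int score(int[] dice) {
            return 0; // enough for the first test; the single-1 test is the next 'red'
        }
    }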

As the pairs got on with the exercise, it seemed to me that the pairing of a TDDer with a newbie was helping, and the pairs were turning into mini mentoring sessions.  I guess we were lucky with that and it certainly made the session run more smoothly.  There were a few hiccups in the implementations I saw, but nothing unexpected for a group that had been given a rapid brain dump followed by an exercise they’d never seen before.  It was clear that choosing the next simplest test was not easy and pairs would often miss steps; for example, a couple of pairs had tested that the code could score 100 for the first die being 1, but not for any subsequent dice, or had not tested for two single 1s before moving on to triples.  There were some other silly points; for example, more than one pair were managing to roll zeros with their dice, which, when pointed out, they rapidly put right.  All in all, I think the Greed Kata worked out fine on this occasion.  It didn’t have any real gotchas and the pairs were able to get up and running with it quite rapidly.  I think I’d use it again for a similar session.
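Continuing the hypothetical GreedScorerTest sketched above, the kinds of easily-skipped tests might look like this (again, the names and dice values are my own, and these are illustrations rather than anything a pair actually wrote):

        @Test
        public void aSingleOneScores100WhereverItAppears() {
            // Easy to miss if the only single-1 test always puts the 1 first.
            assertEquals(100, new GreedScorer().score(new int[] {2, 3, 1, 4, 6}));
        }

        @Test
        public void twoSingleOnesScore200() {
            // Another step that was often skipped before jumping to triples.
            assertEquals(200, new GreedScorer().score(new int[] {1, 2, 1, 4, 6}));
        }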

I lost track of time slightly but brought the exercise to a close after about 40 minutes or so.  Most pairs had made some progress with the kata and were around the point of dealing with the requirements for scoring triples.  We had a brief discussion of how people felt about it, which I wanted the attendees to drive so I simply asked for comments (I didn’t want to lead the discussion down any particular route).  I should have taken notes as we probably had about half a dozen comments, but the ones I can remember were:

    • It’s hard – I guess this is quite a standard reaction. I know I found it difficult when first trying to follow the true TDD way.
    • Can you write bad code with TDD? – My response was: Yes, and the skill is in refactoring; it’s a tricky area and one in which I feel I’m still learning.
    • It’s difficult to know what the next simplest failing test is. – Again, a classic experience for newbies, and I think I recommended starting with small examples like the one we were working on, as it takes time to hone your skills.

Though the discussion was going well, and the group were even batting ideas back and forth between themselves rather than going via me, after a while the ideas slowed down, so I returned to the slides I had prepared.  I think these were useful in that they allowed a little more explanation of what we had been doing to come out, and put the whole exercise in context.  I’m not sure the group noticed, but I didn’t relate the explanations I was giving to the Greed Kata very much; this was partly because the Greed Kata is only a very simple exercise, and partly because the explanations were really for those initial general TDD rules.  So instead of using the Greed Kata, when I needed an example, I would speak about a theoretical real-world project.  An example of this would be one of the explanations of ‘not coming up with a design first’:

‘Many people have been on a project where you set off with some grand plan to implement some architecture, and you do so, and when you then begin to test it you find a case that you didn’t account for, so you put in a hack and damage your architecture and move on, but then you find another case, and this really doesn’t work so you hack your architecture even more, this continues until the architecture is completely compromised – the TDD alternative of making it work first by making the tests pass, and then adding the architecture in afterwards (by refactoring) always ensures that the architecture is fit for purpose.’

I’m not sure that it really matters, but I have a nagging feeling that it would be better if the example could better illustrate the points I was trying to make.  The fact that this was only an introduction to TDD probably allows me to get away with it.  Anyway, I’ll be on the look-out for inspiration in this area and may include it should I do this session again.

My final slides encouraged people to explore TDD in a wider way.  I stressed that the session we’d just covered was only an introduction.  I referred people to other katas online and to various blogs and books that I considered worthwhile reading.  I also gave a link to a solution to the Greed Kata I had done, stressing that this wasn’t any kind of model solution but that it was just there for reference and comparison with other people’s efforts.  I even mentioned that I thought I’d made a ‘deliberate mistake’ in there…  There’s certainly something in there that I’m still debating with myself, as to whether I kept as close as possible to the spirit of TDD.  I’ll leave it to the interested reader to investigate and find out.

That’s about it really.  From my perspective I think the session went pretty well, though I think it’s really the attendees that should have the final word.  I’d certainly run the session again and would recommend this approach to anybody else thinking of doing something similar – and by this I mean please feel free to use any of this material in running your own sessions.  If anybody does have any thoughts or experiences to share, then I’d be glad to hear them.