Agile Bridge

I'm writing this on a WikiWikiWeb because I hate writing papers and articles. Papers and articles can't be changed, and I keep changing my mind, which is sometimes easier to change than hair-colour. -- CohanCarlos



An Agile Bridge for Testing, by CohanCarlos

Introduction

I'd like to tell you about a solution we found to a small problem related to AgileProcess teams. And hopefully, someone else will try it out and tell me about what we did wrong and what we did right, and whether we ought to have done it at all or done it very differently.

The problem was as follows. There was a separate Test Team working with developers who were comparatively agile. Also, the nature of the product made programmatic tests largely ineffective, so most of the testing had to be done at the GUI layer. How were the developers going to make optimal use of the test team while remaining at least a little agile?

One of the really nice things about ExtremeProgramming is that it is not essential for developers to produce truckloads of documentation or conduct smoke signal rallies to tell each other about their work. It is just not needed when people do PairProgramming. Besides, documenting work is not that effective as a means of communication.

Now, bring in an island floating on the high seas with a blindfolded supporting team upon it. How do we coordinate what we do? 'A ship will be sent to your port at 11 am the coming Monday bearing parcels of paper for your enlightenment. Please be so good as to initial every sheet and return them to us the week after. Ship us the parcels containing the documentation for your test cases the week after we get your initials. We shall initial every sheet of your documentation of your test cases as a sign of our approval and return them yet another week later,' does not seem to be an acceptable answer from the point of view of agility, though it may work for a traditional process.

We have found in small experiments that we could dispense with all the documentation if we borrowed a few ideas from lean production and from existing ExtremeProgramming tricks of the trade.


Piers of the Bridge

We used DeveloperStories to coordinate the two teams.

A DeveloperStory is like a UserStory, but written by developers to describe any small piece of functionality they put into the program. It's just three or four lines flagging a change. For example, suppose a developer adds a button. The button does nothing yet; when you click it, a null pointer exception occurs. The developer writes a story on the button and gives it to the tester, who immediately gets the build and tests the button.

The button is what we call a FeatureLet. It's a piece of a feature that has not yet been completed.
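A DeveloperStory need not be more than a structured note. As a sketch of the idea (every name and field here is invented for illustration, not part of the process described above), a minimal record might look like:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class DeveloperStory:
    """A three-or-four-line flag that a FeatureLet has landed in the build."""
    title: str        # e.g. "Added Save button to toolbar"
    description: str  # what the FeatureLet does (and does not do) yet
    build: str        # which integrated build the tester should pick up
    written: datetime = field(default_factory=datetime.now)


# The developer writes the story and hands it straight to the tester:
story = DeveloperStory(
    title="Added Save button to toolbar",
    description="Button is visible; clicking it currently raises a "
                "null pointer exception. Persistence not wired up yet.",
    build="build-1423",
)
print(story.title)  # -> Added Save button to toolbar
```

The point of the sketch is only that a story is small and self-contained: a title, a couple of sentences, and a pointer to the build.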

You see, this way, the code gets tested immediately. The way it is usually done, it gets tested weeks later, when the barge with the release makes a stop at the testers' island. Since it is tested immediately, in fact as fast as the developers can integrate and build it, they get feedback immediately. Otherwise, they end up grimly facing onslaughts of bugs way too late to be able to do much about them, and feeling rightly upset about the whole deal.

Now one other thing we did was measure ourselves by TurnaroundTime. This is the time it takes for a defect that has entered the code to be discovered. Everyone tries really hard to minimize this. We want to find the defects fast! No-one has a problem with that, because it does not suggest the developer has been sloppy, unlike some other popular measures we use.
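TurnaroundTime is trivial to compute once you record when a defect entered the code and when it was discovered. A toy sketch (the function name and the timestamps are invented for illustration):

```python
from datetime import datetime


def turnaround_time(introduced: datetime, discovered: datetime) -> float:
    """Hours between a defect entering the code and its discovery."""
    return (discovered - introduced).total_seconds() / 3600


# A defect integrated Monday morning and caught by the tester that afternoon:
tt = turnaround_time(datetime(2005, 1, 10, 9, 0),
                     datetime(2005, 1, 10, 15, 30))
print(tt)  # -> 6.5
```

With FeatureLets tested as fast as they are integrated, this number is measured in hours; with the barge-to-the-island style of testing, it is measured in weeks.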

This works well for agile teams because

  1. ContinuousIntegration is fully exploited.
  2. Feedback is timely.
  3. Elaborate documentation is not required.


Bridges Everywhere

This is one little experiment, but it suffices to show where bridges are needed. Look around. Separate test teams exist in places you would least expect to find them.

For instance, documentation teams. There are sometimes separate documentation and translation teams performing a rather specialised task with their own set of sometimes expensive tools, and they will expect smoke signal sessions too.

If you peer closely at the XP customer, you will see a human with "test team" written all over his/her face. The customer's test team has to run the acceptance tests!

Then, in some firms, management needs a bridge. TestDrivenManagement? has been proposed as a way of bridging this gap. I know too little about it to tell you more.

Your management test could be: the turnaround time should drop from X to Y by the end of this iteration/project. To read more about TestDrivenManagement?, check http://www.sdmagazine.com/documents/s=7839/sdm0303a/sdm0303a.htm?temp=TaZO8N4STw

-- SomikRaha


Two Principles

In fact, it seems likely that agile processes could work in any environment provided the bridging system makes use of two simple principles.

  1. It coordinates between the islands using a demand-driven system instead of a schedule-driven system.
  2. It provides for work to be requested in the smallest chunks possible.

These principles are observed by the agile testing process described above.


Lean Production

AgileBridges borrow many concepts from Japanese LeanProduction?, which is philosophically very close to XP.

The three concepts of LeanProduction? are JustInTime, Kanban and Kaizen, and each is valuable in AgileBridges.

In this AgileBridge,

Lean Software Development is an interesting area, leading to EvolutionaryDesign. Tom and Mary Poppendieck have been working in this area; check their book at http://www.poppendieck.com/ld.htm. They have studied the Japanese manufacturing industry and brought in useful ideas.

-- SomikRaha


An aside: while writing this, I stumbled upon JustInTimeProgramming, which is closely related.

Okay, back to the article.

Conjecture

Well, I think the TwoPrinciples are reflected in the three LeanProduction? concepts mentioned above. JustInTime and Kanban map to the first principle, that of doing things in a demand-driven fashion. Kaizen is about minimizing the size of the requests (the tightness of the controls in the pull system); actually, Kaizen is more than that, but that is one of the things it improves, and this is the second principle.

Why is a PullSystem? so critical?

I don't know. All I know is, TraditionalTesting? tends to be schedule-driven, like the normal batch-driven ManufacturingProcesses? that JapaneseLeanProduction? departed from.

Okay WikiGnomes, I know this format ain't really WikiWiki, but this is a thought dump. Shall refactor over the next few days.


Pair Programming

PairProgramming took place in a very interesting way that we did not anticipate. Developers often began working with one tester for prolonged periods of time, inviting them to design discussions and to weekend "let's catch up" runs.

One reason for this could be that much of the information between tester and developer passes verbally during regular meetings, so the stories serve more as flags marking newly prospected territory than as maps to consult. Thus, context becomes important, and context is better preserved in a dialog than in isolated messages.

But that does not explain why developers started asking testers to join them on weekends when they were trying to catch up. I would think that the tester had begun to shoulder some of the responsibility for ad hoc testing that would otherwise have been performed by the developer (who was not doing any test-first design).

(Okay, before someone gets mad about our not doing TDD, let me say I wish we could, but honestly, that product was quite difficult to test without special tools that only the separate test team had.)

This would also seem to reveal another use of FeatureLets. Since they are extremely small pieces of functionality, the feedback cycle perhaps becomes tight enough to allow developers to rely on the test team more than they usually do.


Facts about the Experiment

The experiment was conducted with a group of four testers and about six developers. However, in the end we were only able to observe progress on one project, with two developers and one tester. The reason was that the other developers never managed to get down to integrating continuously.

We are still collecting data and metrics, and hope to compare the performance of the project with others in terms of defect rates. If you read this and want to try it out and supply us with data points, we welcome volunteers. :)

Related Links:

If the readers have links that I would be interested in embedding in this article, please leave them here for me to pick up. Feel free to comment anywhere using TwoSingleQuotes so I can spot the comment easily.


Acknowledgements

Thanks to all the developers and testers who volunteered, and to those who allowed us to try this out.

They are:

Thanks to the WikiGnome who refactored the pages to eliminate a WikiNamePluralProblem here :).


See also DeveloperStory, TurnaroundTime, FeatureLet, TwoPrinciples


CategoryAgileMethodology


EditText of this page (last edited January 11, 2005) or FindPage with title or text search