The Information Repository and Reporting System has been in production since early 1998. I joined the project in August 2001, just as Version 2.0 (actually the 5th release - there had already been 1.0, 1.1, 1.2, and 1.5) was being rolled out to production. Everyone was scurrying about in the usual panic to fix last-minute problems in order to get the release out the door. The system did get shipped, owing to the heroics of a few - yet another day in the trenches.
For this project, the development and client teams were co-located - this is a good thing! Also, the client was already in charge of creating and managing the requirements - another good thing. A few weeks after I arrived, the development team leader moved on to another project and was replaced, purely by chance, by an old friend of mine.
I didn't find out until later that the departing team leader had been a bit of a control freak, insisting on doing all the analysis himself and simply spewing specs to the developers. To his credit, the design of the system was actually pretty good. However, there was a terrible communications gap between the developers and the client, even though they were sitting together. The client never saw the system until it went into user testing after a 6-8 month development cycle.
I pitched the idea of using XP to the new team leader and to our project manager. They immediately bought into the high levels of communication that XP calls for, and were all for trying it out. "It couldn't be any worse," was one of the comments.
We sat down with the client, explained the XP approach, and they were quite enthusiastic. We set about getting the requirements (stories) together for the next release, and providing high-level estimates for the time required. In this particular case, we had an absolute date that certain stories had to be in production. We worked back from that date, accounting for deployment time, etc., and split the stories into iterations.
I'll never forget the meeting in which we set the completion date for the first iteration - I told the client that the date was set in stone, and that we would adjust the scope if necessary. They looked skeptical, to say the least.
We then went about breaking the stories into tasks and getting to work on them. It's worth noting that there were only 3 developers on this project, so we obviously couldn't use PairProgramming at all times. We also didn't adopt the standard unit testing frameworks, because retrofitting a lot of existing code was deemed too risky. We did, however, implement a test utility that let us quickly test the code most affected by the changes in this release. Testing that took 10-15 minutes in previous versions of the system now took 5 seconds!
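To give a feel for the idea (the real utility isn't shown here, and the system's language isn't mentioned, so everything below is a hypothetical Python sketch): the driver simply fed known inputs to the routines most affected by the release and compared the results against expected outputs, replacing a manual click-through pass.

    # Minimal sketch of a home-grown test driver. All names are
    # hypothetical; each case is (description, function, args, expected).

    def summarize(records):
        """Stand-in for one of the routines most affected by the release."""
        return sum(r["amount"] for r in records)

    CASES = [
        ("empty input", summarize, ([],), 0),
        ("two records", summarize, ([{"amount": 2}, {"amount": 3}],), 5),
    ]

    def run_cases(cases):
        failures = 0
        for description, func, args, expected in cases:
            actual = func(*args)
            if actual != expected:
                failures += 1
                print("FAIL: %s (expected %r, got %r)"
                      % (description, expected, actual))
        print("%d of %d cases passed" % (len(cases) - failures, len(cases)))

    if __name__ == "__main__":
        run_cases(CASES)  # runs in seconds, versus a 10-15 minute manual pass

Crude as it is, a driver like this is what turned a 10-15 minute manual pass into a 5-second check we could run after every change.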
While we were developing, we called the client in numerous times to have a look at what we were doing. After all, they were only a few feet away! In previous versions, they weren't even allowed to get screen shots, since "they might change".
We met the target date for the first iteration with no difficulty, and gave the client a working version of the system for their testing. A few minor problems surfaced during their acceptance testing, which we fixed right away.
We started the second iteration, and were actually ahead of schedule (due mainly to the faster testing turnaround) when the client called from a meeting with one of their outside contractors in Toronto. We were told that a significant chunk of what we were working on was going to have to change. I had checked in the code for the area most affected about half an hour earlier! The client faxed some of the changes from Toronto, then met with us a day or two later when they returned. We quickly adjusted the stories and tasks to accommodate the changes and restarted development. We completed the iteration on time, despite the changes, and again the client performed their acceptance testing.
Iteration 3 was also completed ahead of schedule, so we were able to add about 10 small stories that had been bumped from the original ReleasePlan. The client tested; we fixed any problems.
The client then performed a final, across-the-board set of acceptance tests for which they had scheduled 4 weeks. They were done in 2.5 weeks, including Windows 2000 testing that hadn't originally been in the test plan.
We packaged the system, and shipped it on the day it had been promised 6 months earlier. In terms of the number of additions/changes implemented, the release was the single biggest since the original 1.0 release in 1998. It was the only release ever to be shipped on the proposed release date.
Throughout the process, the stress level was very low and team spirits were quite high - we actually had fun! I also noticed something very interesting about the team's attitude by the time we released the system. There was one minor problem while we were rolling the system out across the country: the rollout plan had us migrating the regional databases one at a time, so that each region's users would face only a 1-day outage. However, we neglected to 'turn on' a process that populated a reporting database from the OLTP data until the very end of the rollout. We were able to recover quickly enough, but everyone was quite annoyed with themselves that we had 'blown' the 'perfect' release! There was a real pride of ownership that hadn't existed in previous versions. I have to admit I was annoyed myself, but I asked everyone in a status meeting if they had ever seen a perfect release! ;)
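In hindsight, the obvious fix is to make that reporting feed part of the scripted, region-by-region rollout rather than a separate item to remember at the end. A minimal sketch of the idea - purely hypothetical names, and it assumes the feed could have been enabled per region as each database was migrated:

    # Hypothetical sketch of the staged rollout; region names and
    # function bodies are illustrative, not the project's actual scripts.

    REGIONS = ["east", "central", "west"]

    def migrate_database(region):
        # Move the region's data to the new release (a 1-day outage
        # for that region's users).
        print("migrating %s" % region)

    def enable_reporting_feed(region):
        # The process that populates the reporting database from the
        # OLTP data - the step we neglected in the real rollout.
        print("enabling OLTP-to-reporting feed for %s" % region)

    for region in REGIONS:
        migrate_database(region)
        enable_reporting_feed(region)  # scripted, so it can't be forgotten

Putting the step in the script itself means the checklist is executed, not just remembered.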