What about planning? How did the ChryslerComprehensiveCompensation people decide what was the next thing to do and how much effort to spend doing it?
The first step was a complete, low-resolution pass through the system as use cases (we called them "stories" to emphasize the shift in political power: technical people making technical decisions, business people making business decisions). This resulted in 135 stories on 4x6 cards. You might want to look at some UserStoryExamples, or read UserStory for a little more info.
While the user was doing this (it took six weeks), the developers were building prototypes of all the parts of the system they thought would be hard.
We then spent a day turning the stories and the experiences into a schedule.
First the user sorted the stories by priority into three piles.
Then the developers sorted them by risk, again into three piles.
Can someone explain why developers can only move a card to a less risky pile? Thanks.
Because risk is directly associated with knowledge. 'I don't know how to do that' means it is high risk, so I put it in the high risk pile. Someone who has done it before sees it and says 'I know how to do that' and moves it to a low risk pile. A third developer comes along and says 'I don't know how to do that, but since it's in a low risk pile then someone else does' and leaves it alone.
One guess is that a particular developer might be more experienced than the developer who put a card in the high risk pile. He or she could therefore remove some uncertainty and move the card to a less risky pile. Moving a card in the opposite direction would require detailed knowledge of every other developer's experience, which is pretty bold... This guess assumes that the group's collective experience is taken into account when placing the cards. Time estimation also becomes individual.
Another guess is that all cards are initially placed on the high risk pile. This way you cannot accidentally miss a risky card.
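A minimal sketch of the rule both guesses describe (my illustration, not C3 code; StoryCard and the story title are made up): every card starts in the highest-risk pile, and any developer who knows how to do the work can lower it. The pile then records the most knowledgeable assessment in the room, so moves only ever go toward lower risk.

  # Hypothetical model of the "move only to a less risky pile" rule.
  # Risk levels: 2 = high ("no clue"), 1 = medium, 0 = low ("done it before").
  class StoryCard:
      def __init__(self, title):
          self.title = title
          self.risk = 2  # every card starts in the high-risk pile

      def assess(self, developer_risk):
          # A developer who knows how to do it lowers the risk; a developer
          # who doesn't leaves the card alone, trusting whoever placed it.
          self.risk = min(self.risk, developer_risk)

  card = StoryCard("Calculate overtime pay")
  card.assess(2)  # "I don't know how to do that" -- no change
  card.assess(0)  # "I've done that before" -- moves to the low-risk pile
  card.assess(1)  # stays low: a card never moves back up
  print(card.risk)  # 0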
These sorting activities were a key to the whole rest of development. It was in seeing all the cards laid out on a great big table that everyone understood the scope of the system. It was in discussing what the cards meant that lots of alignment began, developer to customer and developer to developer.
Next we estimated the schedule. We wrote how many weeks of ideal developer effort each one would take. Without the six weeks of undirected exploration, these numbers would have been pulled out of people's butts. With the exploration, people felt pretty confident in their estimates. Then we added up the numbers, divided by the number of developers, and multiplied by 2, a load factor derived from observations during exploration. The schedule was too long (of course), so the user fitted the tasks to the available time by setting aside low priority cards.
TimMackinnon - I find the above paragraph confusing. If you divide by the number of developers, this seems to imply that your LoadFactor must be greater than 2 (in fact probably in the region of 4 or 5), because your developers are working in pairs. Most of the talk about the LoadFactor gives a value of 2.5 - which seems more in line with dividing by the number of pairs. E.g. if we have tasks that will take 6 days of ideal time, and two programmers, 6/2*2 = 6 (which seems not to account for any interruptions). If we divide by pairs: 6/1*2 = 12 (you were interrupted half of the time). It seems like a small point - but it's easy to follow the above formula and get caught out. It would be nice if we could clear this up.
According to the XP folk, if two developers can each do n units of work alone in a particular amount of time, then by PairProgramming for the same amount of time they can together do more than 2n units of work. In other words: PairProgramming reduces, rather than increases, the LoadFactor. -- BrettNeumeier
My observed behavior does not match what Brett says. Firstly - we divided by the number of developers and we were out by 1/2. We then started measuring by pairs and using our velocity - this has been much more successful and correlates better with my initial question - even a year later. I still don't find that you can do more than 2n units of work in pairs. Using Java, our experience has been that the load factor for a single pair is more like 3.x, which would equate to 6.x using individual load factors. I'm a huge fan of pair programming - you get better quality, and code that is easily changeable - but in my experience it's not an outright measure of faster performance - it's probably the same. The improvements are something you observe over the longer term, and I don't think you can measure them quite the same way. -- TimMackinnon
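To make the two conventions concrete, here is the arithmetic as a small sketch (the numbers are invented for illustration, not from C3). The gap between the two results is exactly Tim's point: the formulas only agree if the load factor is calibrated to match whatever you divide by.

  # Commitment-schedule arithmetic under both conventions discussed above.
  ideal_weeks = [3, 2, 5, 1, 4]       # estimated ideal effort per story
  total_ideal = sum(ideal_weeks)      # 15 ideal weeks

  # Kent's description: divide by developers, multiply by a load factor of 2.
  developers = 6
  per_dev = total_ideal / developers * 2      # 5.0 calendar weeks

  # Tim's variant: divide by pairs, with a larger per-pair load factor.
  pairs = developers // 2
  per_pair = total_ideal / pairs * 3.5        # 17.5 calendar weeks

  # Mixing the conventions is how you get "caught out": a per-pair load
  # factor of 3.x corresponds to roughly 6.x per individual developer.
  print(per_dev, per_pair)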
Then we re-sorted the remaining cards by priority and risk, with priority having more weight in the sort. Then we dealt the cards into ten roughly equal piles, each representing three weeks of calendar time. I deliberately made the first couple light, so we could have a ManufacturedSuccess. We also tried to group like cards together where that didn't make the pile too big. We then assigned a theme to each iteration (in practice, everyone just used the iteration numbers).
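A rough sketch of that re-sort and deal (plan_iterations is a made-up helper, and I'm assuming each card carries numeric priority and risk ranks, 1 = must-have, 3 = riskiest; putting riskier stories earlier within a priority band is my assumption, on the worst-things-first principle):

  # Sort with priority as the primary key, then deal the ordered cards
  # into roughly equal consecutive piles, one per three-week iteration.
  def plan_iterations(cards, iterations=10):
      ordered = sorted(cards, key=lambda c: (c["priority"], -c["risk"]))
      size = -(-len(ordered) // iterations)  # ceiling division
      return [ordered[i:i + size] for i in range(0, len(ordered), size)]

  cards = [{"title": "story %d" % n, "priority": 1 + n % 3, "risk": 1 + n % 2}
           for n in range(20)]
  for number, pile in enumerate(plan_iterations(cards, iterations=5), 1):
      print("iteration %d:" % number, [c["title"] for c in pile])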
That was our commitment schedule. We presented it that afternoon to the big wig managers of both development and the customer. That was one of the most fun presentations I've ever done. I felt such total commitment and support coming from the team. This was THEIR schedule. THEY had done it. THEY were going to stick to it. THEY believed it. I explained our planning process and our conclusions. We got the usual "what can you do to pull in the schedule?" question and answered "nothing".
Next up, the PlanningGame.
-- KentBeck
This is lovely stuff, and imho much superior to the iteration planning in EvoFusion. The developer buy-in Kent describes is a LaoTse pattern called UnknownRulers.
-- PeterMerel
I notice that none of the ExtremePlanning related pages talk about refactoring (at least, I couldn't find any). So far, every project I've been on has needed some consolidation and refactoring at various times in its lifecycle. Where does this fit in? Do you do it? -- AnthonyLander
Extreme planning relies on a strict separation of responsibility. Business makes business decisions, Development makes technical decisions. Extreme planning is a structured way for Business to make business decisions, but with the appropriate constraints provided by development (the estimates and the correspondence of the estimate numbers to the calendar).
Refactoring is a purely technical decision. Whether it is faster to clean up and then do something, or do something and then clean up, or just do something, or clean up on both sides, is something only Development can decide, so it doesn't come into the planning process in any way.
There are times when CreepingCrud? catches up with you. Development should periodically schedule time (right after a release, when production demands are variable, works well) to learn, experiment, and clean up bigger stuff. But this should be no more than 10% of development time, or even 5%.
I've visited teams where they tried to tell the customer that they needed 3 months or 6 months to clean up before they could make any more progress. Following the rest of the XpCredo? prevents that from ever occurring. If you have the tests, then you can always nibble away at the refactoring while still making progress (OpportunisticRefactoring). -- KentBeck
See, for example, ExtremeProgrammingRoadmap and RefactorMercilessly. We try to keep the system clean all the time. First C3 phase we panicked and let refactoring slide, and it cost us an iteration. This phase we did better, but still took a week to do some things that weren't planned: most of these were not refactoring, but some were. -- RonJeffries
OK, I certainly agree that telling business you need 3 months to refactor is an exercise in frustration and futility. But software does bloat up despite your best (merciless?) efforts. It's analogous to CreepingObesity. But I'm still a bit confused: where does this 5% or 10% fit into the estimation? On a one year project, we're talking about roughly 3-6 weeks of development effort. Does this just get subsumed into the load factor? Did I just answer my own question? Ron: It takes a while for everyone to get the hang of things, doesn't it? -- AnthonyLander
When we're doing what we know we're supposed to do, we refactor as we go. We believe it makes us go net faster over the course of the project than if we didn't. Refactoring a little when things are a little off is much simpler than recovering when things are way off. But whether it makes us go net faster or not, yes, it's covered by the load factor. -- RonJeffries
That's also an idea that is in the Tao Te Ching: http://www.chinapage.org/gnl.html#63
While reading ExtremeHour, I had a question that is better asked here: how much of the "messing around" was driven by the stories the user was in the process of developing? Would the answer be different for a team that had less experience with the problem domain than the C3 team? -- RandyCoulman
I have the same question. Was the development team getting some "feed-forward" from the business team to help them direct their exploration during the six week period? -- WayneConrad
All of the messing around was driven by the stories. In particular, there's lots of stuff you can't estimate until you have implemented it, at least a little. All the stories had to be estimated before the CommitmentSchedule could be created, so that's what drove the prototypes. (When you have to implement 20 similar stories, sometimes it's fine to just actively estimate the first one and extrapolate from there.) -- KentBeck
Thanks
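For the extrapolation Kent mentions, the arithmetic is simple enough to write down (illustrative numbers only, not from C3):

  # Estimating a group of 20 similar stories by extrapolation:
  # estimate the first one for real (via a prototype), then scale up.
  similar_stories = 20
  first_estimate = 1.5  # ideal weeks, from prototyping the first story
  print(first_estimate * similar_stories)  # 30.0 ideal weeks for the group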
There are some tools that support agile project planning, such as TargetProcess (free) and VersionOne.