Software development when the participants are geographically separated most of the time. Some think this is key to being a competitive organization; others think that this can quickly turn into an AntiPattern if you're not careful.
Not to be confused with DistributedProgramming, where the final home of the application(s) is the issue rather than how it is developed. Sometimes called "Dispersed Working" to make the distinction clear.
Distributed Teams vs Outsourcing vs TeleCommuting
These three situations are different and need to be approached with different patterns.
Opposition View
Separation of developers from users often leads to poor developer understanding of system needs. It is usually best to have developers and users colocated.
I agree.
This is a functional, not geographic, problem! The geographic location of users and developers is not the main problem; the main problem in this area is communication and collaboration, which can take place over great distances as well as across the table, and with the emergence of virtual conferencing there will be little difference. There are time-zone differences to be worked out in scheduling such virtual face-to-face conferences if the parties are widely separated. But there are even advantages in this respect: in a properly configured arrangement, development can take place 24 hours a day with no one having to work through the night.
Actually it is a geographic problem. The effort to communicate increases with geographic separation (and time). There is no more powerful means of communicating the needs of the end users to the developers than to have the developers sit alongside the users. Site visits can have a similar effect, but the effect fades over time. Far too much information is filtered out when it is reduced to written or verbal language and the sense of importance of issues can be completely lost.
An interesting offshoot of this line of thinking is that if the users are geographically dispersed, colocating developers and users may require that the developers be dispersed as well. This may lead to developer conflict as local work preferences are identified and need to be resolved.
Whether this is a geographic or a functional problem depends on the rest of the development model as well. Well-modularized projects are easier (but not trivial) to distribute, even across geographical and temporal boundaries. Most important, though, is the presence of a strong product vision and "flag bearers" who can champion it at each site. In my experience the single biggest hindrance in cross-site development is the lack of such a strong guiding vision. Note that this might seem at odds with point 1 below, but only if one makes the mistake of equating architecture with product.
What it takes to make DistributedSoftwareDevelopment work is:
(1) A development process that fosters independent thinking and doesn't "sweat the small stuff". This includes letting developers be their own designers -- it won't work if an architect or manager wants to control the whole process at a micro-level.
(2) A basic design that is truly modular and extensible (i.e. proper use of encapsulation, well-structured inheritance hierarchies -- usually this means being chock-full of DesignPatterns).
(3) An EXCELLENT code-management system (ENVY from OTI/IBM comes to mind)
I can think of three projects I'm familiar with, ranging from small to pretty large, that have made this work.
(1) The biggest is probably the development of IBM Smalltalk and VisualAge. There are at least three groups that I know of. There is the group in RTP that actually builds the VisualAge classes, there is the base IBM Smalltalk group in Ottawa, and then there is another group in Raleigh that does extensions to the base image. They have well-defined roles, use ENVY to keep their code in sync, and seem to do pretty well.
(2) I was on a geographically distributed project with KSC under contract to Cold Spring Harbor Labs and Genomica. We would meet about once a month for design meetings, and the rest of the time we would just trade design information by email and code through ENVY. The keys to this one working were that the parts of the system we were working on were more or less independent, and that we enforced class ownership.
(3) KSC and Texas Instruments did a geographically distributed project on part of the ControlWORKS system last year. Again, the keys to the success were what has been mentioned above.
(4) See quovix.com. They are building software applications for businesses using a network of 1,500 developers around the world. The whole thing is coordinated by a small group of Quovix engineers...
My guess is that if you violate any of the above three assumptions, then this does become an anti-solution.
Most OpenSource projects are DistributedSoftwareDevelopment efforts, so those experiences might be instructive. See OpenSourceProjectOrganization for more.
As for other tools besides ENVY that are up to the task of geographically distributed code management: CVS (the ConcurrentVersionsSystem) is commonly used. The makers of ClearCase have an add-on called MultiSite which is specifically for geographically distributed development. Perforce has an underlying client-server model which, combined with its blinding speed, is also well suited for this particular task.
Code repository management solutions pretty much come in two varieties: a centralized repository accessed by remote clients over a network, and mirrored (replicated) repositories, where each site has its own replica of the master repository (sometimes there can be more than one master) and one repository "syncs up" with another at planned times and intervals.
The centralized model assumes less risk, but is often slower for doing checkouts and checkins over the internet, and there can be reliability and availability issues if the central site goes down (none of the remote sites can do any checkins or checkouts during that time).
The mirrored model may be done with a strict master-slave relationship between repositories. Also, some notion of site-mastership has to be applied either to various directory trees, or else to various branches of the system-wide version tree (or both).
For directories, what this means is that write access to a given directory and its contents (possibly including subdirectories) is granted exclusively to one site. That site is the "master" for that subtree. Other sites have read-only access to elements not "mastered" by their site. It's not quite CodeOwnership, more like CodeSiteOwnership?.
Alternatively (or perhaps additionally) certain branches of the version tree may need to be "site-mastered" (something MultiSite refers to as "branch-mastership") such that versions on a given branch or codeline may only be created at the site that is the master for that branch/codeline.
Site-mastering of elements and directories helps ensure that a given file is added or removed at only one site. If both sites added a file of the same name in the same directory during the interval between "sync-ups", one of them would be a doppelganger and would probably need to be renamed.
Similarly, site-mastering of branches/codelines helps ensure that a given version on a given branch is created (or removed) at only one site. This allows parallel development between sites on the same file if desired, so long as each uses a separate codeline from the other, mastered to the local site.
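To make the two site-mastering strategies concrete, here is a minimal, purely illustrative sketch of the kind of check a replicated repository might make before accepting a change at a site. The site names, paths, mastership tables, and function names are all hypothetical -- this is not the API or behaviour of ENVY, ClearCase MultiSite, or any other real tool.

```python
# Hypothetical sketch only: a toy model of directory- and branch-mastership
# checks in a replicated repository.  All names and tables are invented.

# Each subtree is writable at exactly one site (directory site-mastership).
DIRECTORY_MASTERS = {
    "src/ui":     "raleigh",
    "src/kernel": "ottawa",
}

# Each branch/codeline may only grow new versions at its mastering site.
BRANCH_MASTERS = {
    "main":          "ottawa",
    "raleigh-fixes": "raleigh",
}

def directory_master(path):
    """Return the site that masters the deepest subtree containing 'path'."""
    best = None
    for subtree, site in DIRECTORY_MASTERS.items():
        if path == subtree or path.startswith(subtree + "/"):
            if best is None or len(subtree) > len(best[0]):
                best = (subtree, site)
    return best[1] if best else None

def may_commit(site, path, branch):
    """A site may commit only if it masters both the element's subtree and the branch."""
    return (directory_master(path) == site
            and BRANCH_MASTERS.get(branch) == site)

# Ottawa may change kernel code on 'main', but not UI code mastered by Raleigh.
assert may_commit("ottawa", "src/kernel/scheduler.c", "main")
assert not may_commit("ottawa", "src/ui/window.c", "main")

# Raleigh may work on the same UI file in parallel, but only on its own codeline.
assert may_commit("raleigh", "src/ui/window.c", "raleigh-fixes")
assert not may_commit("raleigh", "src/ui/window.c", "main")
```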
Both of these site-mastering strategies are very common among distributed teams that have a genuine need for parallel development on a common set of elements. The main advantages of the mirroring approach are that access for checkin/checkout tends to be faster since it is always from the local repository, and other sites can continue working when one site has a network glitch that might otherwise stop work for a few hours, or a few days.
The main issues are finding the right frequency and time to sync up repositories; deciding whether one-way or two-way mirroring is needed; enforcing the site-mastering strategies (MultiSite supports this directly); and coping with the fact that the sites in question may never all have exactly the same versions of the source at exactly the same time, only versions within a tolerably short time-difference of each other (which turns out not to be as bad as it sounds, so long as only one site is used for cutting and configuring the official release version for any given release).
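In the same hypothetical vein, here is a tiny sketch of the "doppelganger" problem that directory site-mastering is meant to prevent: if two sites each add a file with the same name between sync-ups, the sync-up has to flag it so one copy can be renamed. Again, the function and data are made up for illustration only.

```python
# Hypothetical sketch only: flagging "doppelganger" files during a two-way
# sync-up.  Each site reports the set of paths it has added since the last
# sync; a path added independently at both sites needs a rename.

def sync_up(added_at_a, added_at_b):
    """Return (cleanly merged additions, doppelganger paths needing a rename)."""
    doppelgangers = added_at_a & added_at_b
    merged = (added_at_a | added_at_b) - doppelgangers
    return merged, doppelgangers

site_a = {"src/ui/dialog.c", "src/ui/menu.c"}
site_b = {"src/ui/dialog.c", "src/kernel/timer.c"}   # both sites added dialog.c

merged, conflicts = sync_up(site_a, site_b)
print(sorted(merged))     # ['src/kernel/timer.c', 'src/ui/menu.c']
print(sorted(conflicts))  # ['src/ui/dialog.c']
```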
These repository patterns are just a few of the many SCM "patlets" we mined in the "SCM Hot Topic" at ChiliPLOP97. Some of them are written up under my ACME pages (see TopicalPatternLanguageWebSites).
-- BradAppleton
One important factor is people-related, rather than process-related:
However, our VP never made the trip--a fact that didn't go unnoticed by the Portland folks, and which was often mentioned when discussing morale. -- DaveSmith
One other factor is the fear of outsourcing. When my employer started doing distributed development with our Indian subsidiary, many here viewed it with suspicion, thinking that the collaboration was a training exercise that would eventually result in us being laid off and the team in India doing all the work. Fortunately, that hasn't come to pass. Five years later (with our development team being spread across four different nations), those fears seem to have subsided quite a bit - each site gets interesting work to do, and we've learned to collaborate more effectively. At any rate, effective collaboration cannot occur if one site views the other as competition for their jobs.
This page contains a lot of insights that carry over to distributed development of any objects or processes in general, so I have DiiGo'd this page and added it to the WikiTrail ExtremeOpenBusiness for further refinement. -- FridemarPache [http://trailfire.com/fridemar/marks/224279]
One assumption made in the discussion so far is that all the developers actually belong to the same organisation (or are subcontractors thereof).
Several of my major projects were research consortia, where the software (proof-of-concept or demonstrator, not production code, thank goodness) was developed by several teams (typically a mix of government, industry and academia). There is no hard contractual relationship between the organisations, so you can't hold anybody to anything much. There is no central code repository. There isn't a single development methodology (though each organisation has its own internal processes it must obey). I'm still unsure how _any_ development methodology can cope with this sort of situation, though it is quite common in certain areas.
Waterfall development is largely impossible since the requirements are unclear (it's research, after all). Time-boxed iterations don't really work either, since you can spend an entire iteration sitting around because you have unfulfilled dependencies on another partner. This means you either waste developer-hours, or you have to 'pause' and reschedule so that everyone is just working a few hours a week keeping up with email, meetings, etc., waiting until they can get some real work done.
One of the projects of this style actually went very well, another less so. It seems to largely come down to the culture of the teams and the individuals concerned. If everybody involved has an Agile mindset, then it all seems to 'just work'. If the mindsets are mixed, and include bureaucratic traditional managers, then it is very painful.
"OrganiSation" (popular UK) WTF?, Blech!... -vs- "OrganiZation" (Traditional Oxford English, USA & Canada).
Well, pardon me for being English. Did you have any comments on the substance rather than the spelling?
See also: PairProgrammingAtHome, VirtualPairProgramming, ProgrammingInsideTheHome, TeleCommuting, DistributedTeams, OpenSourceProjectOrganization, ExtremeOpenBusiness