Organizations which design systems are constrained to produce systems which are copies of the communication structures of these organizations. -- M.E. Conway
ConwaysLaw appears in MythicalManMonth, citing an earlier article by Conway from Datamation. Also appears in Yourdon, E. N., and Constantine, L. L. Structured Design (Prentice Hall, 1978), p. 400.
Over the years I've heard a number of creative derivations of ConwaysLaw, including the ones collected below. -- DaveSmith
There is an interesting article by AlistairCockburn that builds on ConwaysLaw. It consists of a set of patterns that take social structure into account during design. The article is in CommunicationsOfTheAcm?, October 1996. Actually, he has it online too! http://members.aol.com/acockburn/papers/softorg.htm
If a group of N programmers implements a COBOL compiler, there will be N - 1 passes. Someone in the group has to be the manager. -- MattRickard
If a MicrosoftCorporation software project is assigned to N engineers, it will be implemented in (at least) N COM components.
If n developers (or pairs) work on a GUI, there will be n ways to perform any given function. -- MartinPool
If there are n developers, there will be n logging/tracing mechanisms. -- KrisJohnson
Would OnceAndOnlyOnce come in here? -- More like TheresOnlyOneWayToDoIt?, which enjoins you to avoid having different mechanisms that provide the same facility.
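A rough sketch of what that consolidation might look like in Java (all names here are invented for illustration): rather than n hand-rolled tracing classes, the team shares a single logging abstraction and one agreed implementation.

 // One shared logging abstraction instead of n per-developer mechanisms.
 // All names are illustrative, not taken from any particular project.
 interface Log {
     void info(String message);
     void error(String message, Throwable cause);
 }

 // The single implementation the whole team agrees to share.
 final class ConsoleLog implements Log {
     public void info(String message) {
         System.out.println("[INFO] " + message);
     }
     public void error(String message, Throwable cause) {
         System.err.println("[ERROR] " + message);
         cause.printStackTrace();
     }
 }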
Please predict the architecture of systems built using ExtremeProgramming.
''This one is simple. A system made with XP will consist of pairs of classes. :-) -- MichaelFeathers (true, eh?)''
What I've observed is that a well-run XP project tends to be highly coherent _inside_ the system boundary. The team tends to be very protective of the boundary, however, producing a high barrier to entry for developers outside of the team. It is interesting to me that this seems to be a natural outgrowth of the team's own cohesion.
Thus, the XP team will create interfaces to code outside the system; these interfaces will illustrate ConwaysLaw. Communication inside the team is close and constant, resulting in closely integrated code. Refactoring is constantly needed to prevent this from metastasizing into tightly coupled, out-of-control dependencies. (As an aside, breakdowns in the code's cohesion are an early indicator of personal conflict within the team.)
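One way to picture that boundary, as a rough Java sketch with invented names: developers outside the team see only a deliberately narrow interface, while the implementation behind it stays package-private and free to be refactored.

 // The narrow contract is all that developers outside the team ever see.
 interface AccountQuery {
     long balanceInCents(String accountId);
 }

 // Behind the boundary the implementation is package-private and is
 // refactored freely; its internals never leak through the interface.
 class InMemoryAccountQuery implements AccountQuery {
     private final java.util.Map<String, Long> balances = new java.util.HashMap<>();

     public long balanceInCents(String accountId) {
         return balances.getOrDefault(accountId, 0L);
     }
 }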
I've experienced a project where the development process was a reflection of the hiring process. The first to be hired on for the new project were Hardware Engineers, followed by the Systems Engineers, followed by Software Engineers. Naturally, the product was designed hardware-first, then defined by Systems such that everything not conquered by hardware was delegated to software.
For years to follow, this pattern held. It even went so far that in a later product concept discussion, the Systems Engineer was talking about software compensating for hardware before the hardware was even defined.
I can understand the need for cost tradeoffs in system design, but this seemed like little more than laziness. <sigh> Guess I was just being YoungAndIdealistic?.
--ChrisFay
The issue with the hardware-systems-software hiring-driven structure is not so much hiring but power and status. As a rule, high-status employees are given high-status lifecycle roles, like overall architecture, irrespective of their suitability for them.
Note that easier does not mean easier to do well, but rather easier to do at all. Doing a good architecture/design is much harder work than doing good code, but doing a bad architecture/design is rather easier than having to code it.
So if hardware designers have more status and power than software engineers, they will tend to be given responsibility for the architecture, and then they will tend to define an architecture in which the difficult and tiresome issues just happen to be in the software, and vice versa.
Usually longer-serving employees have more status and power than more recently hired ones, which means that the easier and more prestigious parts of the lifecycle are reserved for them, and it is the more junior employees who tend to be charged with the more difficult and mundane tasks.
Why reduce your chances for a bonus or promotion? When status and power drive architecture, architects tend to be rather self-serving, and more senior employees did not survive several rounds of downsizing by being stupid...
--Blissex
I work in a company whose product footprint has grown rapidly in the last few years. However, most of these additions have been inorganic, that is, by acquiring other companies. The net result is that our current architecture is neatly divided at the code level into the original company's module, Foocompany's module, Barcompany's module, etc.! Yet it is a single product.
I guess that would be an illustration of ConwaysLaw.
I have tried to convince people that modules should be separated on technical basis and not personnel basis. Am I wrong?
What confuses me about this law is whether it is an AntiPattern of some sort. Does it need to be fixed? Or are things fine the way they are?
There is a closely related AntiPattern: StovepipeSystem, with its own page here: StovepipeAntiPattern.
Thanks. Let me elaborate to explain the situation. Assume there is a company, A, that writes banking software. There are two main modules - Customer and Account. So far so good. Now they acquire another company, B, that offers stock trading accounts. The technical architecture of the new system should still have two main modules - Customer (merged) and Account (maybe with a new subclass?). Instead, usually, ConwaysLaw comes into effect - the product is now divided into two modules - A and B, with team A working on A and team B on B. They might unify it at the UI layer, but the architecture is still messy because it reflects the organization structure (the presence of two teams) instead of the technical reality. I am not sure if I am effectively communicating my point here.
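To make the contrast concrete, here is a rough Java sketch (every name is hypothetical): the technical split keeps Customer and Account as the modules, with the stock trading account as just another Account variant, while the Conway split keeps each company's code as a parallel module joined only at the UI.

 // Technical split: one merged Customer, one Account hierarchy.
 // Every name here is hypothetical, purely to illustrate the contrast.
 class Customer { String id; String name; }

 abstract class Account { Customer owner; long balanceInCents; }

 class CheckingAccount extends Account { }      // original product (company A)

 class StockTradingAccount extends Account {    // acquired product (company B)
     String brokerageRef;
 }

 // The Conway split, by contrast: a companyA package with its own Customer
 // and Account, a companyB package with another Customer and another Account,
 // stitched together only at the UI layer.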
OpenSource is more easily understood by ConwaysLaw. The term OpenSource only looks at the product of the process (the source code), but it doesn't understand the process itself. The reason why developers feel compelled to release the source code is that they have to co-operate with developers they will never meet or know about. Or in other words, developers need the source code to the programs they are working on (that's a duh). When developing software for the Internet, this problem is very acute. It's impossible to develop a NetworkStandard such as SMTP without having some sort of open source reference implementation. However, once viewed in this frame, open source is no longer a zero or one proposition. One can release source code only to people one wants to include on the project, which is where the concept of community source arises. Finally, within an open source project, one can often tell the actual power structure by the degree of modularization; in particular, one can tell the places of greatest interest, or why the groups of people have come together, by seeing what parts have been most fully developed and made most flexible and with the most backwards compatibility. -- SunirShah
Should we infer that improving the corporate culture, especially communication, prior to the design of a new system can improve its success as much as or more than devoting more resources to the design and implementation team?
--Bruce Hamilton