What I am trying to get at, I think, is that there are still parts of building a system that XP does not address. In particular, the overall architecture, not just the software/logical architecture (aka SystemMetaphor), needs to be considered at some point in the lifecycle of the project. For example, I would hate to buy a $5.5 million Cray to do ray-tracing when $150,000 of Intel boxes running Linux could do the job. At the same time, I would hate to commit to Linux when that does not fit into the enterprise-wide OS scheme. Somewhere in the lifecycle of a system these architecturally significant issues must be addressed, and I don't see XP doing that.
Somewhere, if XP claims to be a full-lifecycle process, XP will need to address how and why those decisions are made. If not, it needs to declare these types of decisions out of scope for XP.
-- HankRoark
I can't speak for XP, but I do this by gathering the non-functional requirements for each UserStory - how much, how fast, etc. - and using them to project the flow rates and storage implications for the initial architectural spike. If it seems like these are going to be unusual - terabytes of data being searched within microseconds, with many miles from here to there - then I turn these projections into high-priority user stories in their own right, and spike them up front. This may indeed lead to the purchase of new tools and the establishment of new project standards. --PeterMerel
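To make that projection step concrete, here is a rough sketch in Python - the stories, rates, and thresholds are all invented for illustration, not taken from any real project - showing how per-story non-functional requirements might roll up into flow-rate and storage projections that flag candidates for an up-front spike:

  # Back-of-envelope projection from per-story non-functional
  # requirements (all figures invented for illustration).
  stories = [
      # (name, requests/sec, bytes per request, bytes retained per request)
      ("browse catalog",  50,  2_000,     0),
      ("place order",      5,  4_000, 4_000),
      ("search archive", 200, 10_000,     0),
  ]

  total_rps = sum(rps for _, rps, _, _ in stories)
  peak_bytes_per_sec = sum(rps * req for _, rps, req, _ in stories)
  bytes_per_day = sum(rps * keep * 86_400 for _, rps, _, keep in stories)

  print(f"aggregate load: {total_rps} req/s, "
        f"{peak_bytes_per_sec / 1e6:.1f} MB/s")
  print(f"storage growth: {bytes_per_day / 1e9:.2f} GB/day")

  # Anything that dwarfs commodity hardware becomes a high-priority
  # story in its own right, to be spiked before money is committed.
  if peak_bytes_per_sec > 100e6 or bytes_per_day > 100e9:
      print("unusual -> spike this up front")

The point is not the arithmetic, which is back-of-envelope, but that the projections fall out of the same stories the rest of the planning uses.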
What I am trying to get at is that XP does not address these issues explicitly, from what I can see on Wiki. Deciding that terabytes of data need to be searched in microseconds seems to mean that certain hardware and networking are going to be committed to in order to make a SpikeSolution. One of the tenets of XP is that changing software no longer takes exponentially longer as the size of the system grows. The same does not apply to other parts of the system (e.g., hardware, networking). These cost lots of money, typically up front. As such, I would hate to spend huge amounts of money on them if that size is not needed. Also, I would hate to change them out if I underestimated the application's needs. Further, architecture needs to take into account whether personnel can be found to use the technology that is being committed to, and whether that technology will work, both technically and politically, inside an enterprise. It seems that these steps are missing from XP. I dare say that XpDoesNotAddressAllArchitecturalIssues?. I would like to be proven wrong. --HankRoark
I think you're optimizing too early. A SpikeSolution doesn't have to run with the same speed as the final deployed system -- it just needs to work. The "architecture" you're talking about is the deployment environment -- which, IMHO, only matters if you don't control it. -- RobCrawford
I think it is important to learn early whether your system is going to work. At some point, testing needs to be done to make sure a system meets certain performance, scalability, availability, and security requirements, on top of the functional requirements. You will need to choose hardware and networking to test these requirements, which can require a large capital expense. When and how does XP make these decisions? On top of that, you don't have full control over the deployment architecture, so you have limits on what you can use. How does XP take that into account? Also, for very large data sets or computational problems, the hardware required to work on those data sets may not exist or may be prohibitively expensive. At some point in the process, a decision must be made as to whether the project is feasible. How does XP make that level of decision? Somewhere in all these discussions is a listing of the size of the C3 database; I am talking about systems that use data sets orders of magnitude larger. I don't see XP addressing these types of architectural decisions. --HankRoark
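For what it's worth, one methodology-neutral way to pin such a requirement down early is to write it as an executable test and run it on candidate hardware. A hypothetical sketch in Python - the data set, the 50-microsecond target, and the search function are all invented stand-ins:

  # Hypothetical sketch: a performance requirement written as a
  # FunctionalTest, so it can run against a SpikeSolution on
  # candidate hardware before a large capital commitment.
  import time

  def search(dataset, key):              # stand-in for the real query path
      return dataset.get(key)

  def test_search_latency():
      dataset = {i: str(i) for i in range(1_000_000)}  # scaled-down data set
      queries = range(0, 1_000_000, 1_000)             # 1,000 sample lookups
      start = time.perf_counter()
      for key in queries:
          search(dataset, key)
      per_query_us = (time.perf_counter() - start) / len(queries) * 1e6
      # Invented requirement: each lookup completes in under 50 microseconds.
      assert per_query_us < 50, f"too slow: {per_query_us:.1f} us/query"

  test_search_latency()

If the test cannot be made to pass on hardware you can afford, you have learned the feasibility answer early, which is the point of the exercise.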
I think we can agree that these decisions are about scaling deployment. Whether they're architectural or not is irrelevant - see why on ArchitectureAnalysisDesignBullshit.
So I see nothing wrong with using the initial XP Spike to validate your deployment requirements, nor with prioritizing and controlling those requirements just as you would the logical ones. Moreover I see nothing wrong with acquiring adequate hardware to use to Spike your solution. This doesn't mean going for a full deployment at all; this means FunctionalTest(s) for your deployment requirements. Spike a Linux solution. Spike a GemStone solution. Spike a DNA solution. See what works, and when you're satisfied that you've Spiked enough, then put software meat on these bones via the usual XP process. Where's the difficulty in that? --PeterMerel
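As a sketch of what "Spike enough, then compare" could look like in practice - the candidates, pass/fail results, and costs below are made-up placeholders, not measurements:

  # Made-up figures: run the same deployment FunctionalTests against
  # each candidate platform, then choose on pass/fail and cost rather
  # than arguing in the abstract.
  candidates = {
      # name: (passed the deployment FunctionalTests?, hardware cost in $)
      "Linux cluster": (True,  150_000),
      "GemStone":      (True,  220_000),
      "DNA":           (False, 180_000),
  }

  viable = {name: cost for name, (passed, cost) in candidates.items() if passed}
  choice = min(viable, key=viable.get)
  print(f"spike winner: {choice} at ${viable[choice]:,}")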
Earth to Hank, I'm sorry, I can't figure out what you want. Please describe here how some other methodologies address your questions, for comparison. I am aware of no methodology that addresses FDDI vs Token Ring. Thanks. --RonJeffries
Ron, it has been a while, but the last time I reviewed the RationalUnifiedProcess (what a name!) it took the deployment environment/architecture into account. I don't see how XP addresses this. See the top of HowXpPlansDeployment for what I think is the real question. --hr (p.s. sorry for the confusing message; I often use specific examples in hopes of getting concrete answers I can then generalize from)
I still don't get it. How does RUP decide what kind of computer to buy? What do you mean when you say it "takes it into account"? --rj
It doesn't specify which computer to buy, but it does say the deployment plan is part of the overall system architecture that needs to be considered. --hr