Fusion Methodology

Fusion is a popular OOAD methodology within HP, though a little obscure outside it. The latest variety of the method is based on the UnifiedModelingLanguage, and so may be a little more accessible than its predecessor, ClassicFusion. An iterative intermediary is EvoFusion.

-- PeterMerel

See http://fog.hpl.external.hp.com:80/fusion/

The ClassicFusion book: Object-Oriented Development: The Fusion Method

ISBN 0133388239


The account of the 1997 OOPSLA session on Fusion found on the web mentions the publication of a book on Team Fusion:

"... Derek Coleman is the co-author of Team Fusion: Object-Oriented Development using UML, and two previous books on Fusion ..."

I can't find any references to this book - is the title correct? Has it been published yet?


Am I right in thinking this methodology is just another development strategy based on BigDesignUpFront? On page 7 of his article, Todd Cotton quotes Ould: "The success of the incremental delivery approach rests on the ability of the designer to create--from the start--an architecture that can support the full functionality of the system so that there is not a point during the sequence of deliveries where the addition of the next increment of functionality requires a massive re-engineering of the system at the architectural level."

Doesn't this imply that the users understand what functionality they will want to implement years down the road?

The FusionMethodology incorporates user feedback at multiple places, which is a good idea. But how does the methodology cope when new user functionality comes in that is not supported by our 'Object Model'?

Where does maintenance of the code come into play with this methodology? MaintenanceAndMethodologies

What happens if I have a need to refactor the code at some point in the future? Do we need to do a BigBangTesting effort?

--FrancisTownsend

As far as "choosing the right architecture up front" goes, what do the other methodologies do? What happens if XPers pick an inappropriate SystemMetaphor?

Potential for automated testing: The analysis phase of ClassicFusion does deliver a grammar of "allowed events". Substrings in this grammar specify individual "system operations". Each system operation has pre- and post-conditions and a summary of the expected state changes. I think this has some potential for an automated testing tool. (See VerifyOutputWithGrammar for an example of one of these grammars.)
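
Here is a minimal sketch, in Python, of what such a tool might do with a single system operation schema. The Library class, the check_in operation and the run_operation_test helper are hypothetical names invented for illustration, not part of ClassicFusion or any Fusion tool; the point is only that a schema's pre- and post-conditions translate directly into executable assertions around the operation.

 # Hypothetical sketch: turning a Fusion system operation schema into a test.
 # Library and run_operation_test are made-up names used only to illustrate the idea.

 class Library:
     """Toy system state: the set of book ids currently checked out."""
     def __init__(self):
         self.checked_out = set()

     def check_out(self, book_id):
         self.checked_out.add(book_id)

     def check_in(self, book_id):
         self.checked_out.discard(book_id)


 def run_operation_test(state, operation, precondition, postcondition):
     """Apply one system operation, asserting its schema's pre- and post-conditions."""
     assert precondition(state), "precondition does not hold before the operation"
     operation(state)
     assert postcondition(state), "postcondition does not hold after the operation"


 if __name__ == "__main__":
     lib = Library()
     lib.check_out("B42")

     # Schema for the system operation check_in(B42):
     #   pre:  B42 is currently checked out
     #   post: B42 is no longer checked out
     run_operation_test(
         lib,
         operation=lambda s: s.check_in("B42"),
         precondition=lambda s: "B42" in s.checked_out,
         postcondition=lambda s: "B42" not in s.checked_out,
     )
     print("check_in schema holds")

The "allowed events" grammar could then drive legal sequences of such operations, checking each schema along the way.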

Weaknesses: The main weakness of ClassicFusion is the lack of mechanisms for describing concurrency. Newer versions of the methodology (TeamFusion, or whatever it's called today) introduced stages for explicit consideration of software architecture, and optional sub-stages for database and GUI design.

Simplicity of ClassicFusion: This is the sort of methodology I'd use to teach beginners OOA/OOD, because the method is straightforward and the book is small. There aren't a lot of potentially confusing options or process patterns in ClassicFusion - that will help beginners.

What, no discussion? There doesn't seem to be much discussion of Fusion on the web or mailing lists these days. Where have all the Fusion people gone? (Perhaps there's a much better Fusion website on the HP intranet.)


I worked on a couple of projects using a somewhat modified EvoFusion. One was a medium-sized training project, a Resources Control System for the lab we were working for (about 4 techs in a 30+ group of archaeologists). We didn't actually code anything (except some skeletons for the Communications Cycle). My impression of it (I also took the role of 'secretary') is that the amount of documentation it requires is too much. It took me almost half a day, every day, to keep the System Dictionary, Class Diagram, Use Cases, schemata, etc. up to date with the new entities that appeared during Analysis, Design, and the start of low-level analysis of the first cycle.

Unitary cycles were about the size of XP ones, but complete, ready-for-delivery cycles took about 2 or 3 months, which now looks way too long (not enough feedback) but then (a couple of years ago) looked much better than the up-front "classical" way of doing things.

The other project was bigger: a Cultural Heritage Control System for our State (well, the Spanish equivalent). Feedback from users was important; we had a kind of mini-magazine with news, progress reports and tests, questions to be answered, etc., about weekly. Cycles were 3 weeks but, again, delivery was longer, about 7 or 9 weeks (if I remember well). There was no pair programming, and some customers (especially managers) were pretty angry because our Requirements Capture phase was too disruptive to their work... so I guess asking for an expert to be on-site, as XP does, would have been hard. I don't know how it ended, as I moved on just after the architecture was pretty much wrapped up and before the first coding cycle started.

All in all, EvoFusion seems much more realistic and concerned with customers' needs than other methodologies, but it still has a somewhat static nature... changes that fit the existing design can easily be inserted between cycles, but if you have to change something important... I don't think it's that flexible...

And that's right, system operations really could be used for automated testing, but as our coach discarded the schemata (documentation takes too long), we lost the motivation for very low-level testing...

Hope this helps -- DavidDeLis


CategoryMethodology

