On comparing tools, languages, and paradigms. These three will be collectively referred to as "tool" for short.
I've had to describe this often enough that I decided to create a topic for it rather than reinvent the wheel each time.
This usually becomes an issue when somebody comes along and says, "Y tool is better than your X tool." Below is the approach I generally recommend for demonstrating such a claim.
The steps are roughly as follows:
- Describe your tool's/language's/paradigm's benefits as clearly as possible so that both parties know the scope of the claim and can weed out assumptions up front.
- Select a few semi-realistic scenarios under which you plan to demonstrate the claim. (More scenarios may be added later.) Run the scenarios by the other party to confirm that both parties agree they are sufficiently representative of the domain; otherwise you may be accused of picking scenarios optimized for a different domain, and being "good" for one domain is not necessarily good for another. Change scenarios are often a good test because maintenance tends to be the sore spot of "software engineering". For example, "I want to add a new column to table Foo." (A sketch of one such scenario appears after this list.)
- Create coded examples of the selected scenarios. If the language itself is not the primary thing being demonstrated, try to stick to a C-style derivative (C, C++, JavaScript, Java, PHP, etc.). C-style is the lingua franca of programming, for better or worse, and thus reduces both the time spent reading the code and questions about the language selected. Pseudo-code is fine as long as it's either clear or you are willing to spend the time explaining any new "inventions" in it.
- Describe what you will measure to judge "better". The best metrics involve numbers and counts, such as lines of code (or number of tokens, etc.); for example, the number of lines, functions, and/or blocks that need to be changed per change-scenario. Poor metrics are those that are hard to describe objectively, such as "elegance" and "clean"; elegance may be in the eye of the beholder. The important thing is to keep the metrics as objective and relevant as possible: how does the tool save time and prevent errors, and how are you measuring that? Non-numeric metrics may be okay, but you accept more risk of disagreement with them (see LaynesLaw). Needless to say, metric selection is probably the hardest part of all this.
- Present your scoring sheet to the other party and allow them to review it. (One possible scoring-sheet layout is sketched after this list.)
- The other party may present metrics, or at least descriptive problems, of their own based on the examples. For example, maybe the number of ways in which errors can be produced is reduced, but at the expense of more code. It's best to anticipate these in advance rather than let the other party find them; otherwise you may appear to be "obsessing" over very specific metrics.
- The other party may present other scenarios that shed light on aspects of the tool your initial scenarios didn't address well. If so, it's back to step one.
- Ultimately, LetTheReaderDecide. You may not convince the other party, but at least you will have documented your claims of improvement for others, in an approach that tries to be as objective and methodical as possible.
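To make the change scenario above concrete, here is a minimal sketch (in Java, per the C-style advice above) of what one side of the comparison might look like. The table Foo, the new column "status", and the FooRecord/FooDao classes are hypothetical names invented purely for illustration; the point is only that every spot that must be touched to add the column is marked, so it can be counted.

    // Hypothetical change scenario: add a "status" column to table Foo.
    // Every spot that must be touched is marked CHANGED so it can be counted.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class FooRecord {
        int id;
        String name;
        String status;                          // CHANGED: field for the new column

        static FooRecord fromRow(ResultSet rs) throws SQLException {
            FooRecord r = new FooRecord();
            r.id = rs.getInt("id");
            r.name = rs.getString("name");
            r.status = rs.getString("status");  // CHANGED: map the new column
            return r;
        }
    }

    class FooDao {
        private final Connection conn;

        FooDao(Connection conn) { this.conn = conn; }

        FooRecord findById(int id) throws SQLException {
            // CHANGED: the column list in the query must name the new column
            String sql = "SELECT id, name, status FROM Foo WHERE id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? FooRecord.fromRow(rs) : null;
                }
            }
        }
    }

In this hypothetical version, the change touches three lines spread across two classes; the competing tool's rendition of the same scenario gets written out and counted the same way, and those counts go on the scoring sheet.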
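Below is one possible layout for the scoring sheet mentioned above. The rows are only placeholders taken from the examples on this page; the actual rows should come from the scenarios both parties agreed on, and the cells are left blank to be filled in from the coded examples.

    Metric (per agreed scenario)                    Tool X   Tool Y
    ----------------------------------------------  ------   ------
    Add column to Foo: lines changed                  __       __
    Add column to Foo: functions/blocks touched       __       __
    Add column to Foo: files touched                  __       __
    (next agreed scenario: same metrics)               __       __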
See Also: CodeChangeImpactAnalysis, HowToSellGoldenHammers
CategoryMetrics