Twelve Tough Questions

from GilbMeasurabilityPrinciple:

See the "Twelve Tough Questions" paper at...

The questions:
  1. NUMBERS - Why isn't the improvement quantified?
  2. RISK - What is the risk or uncertainty and why?
  3. DOUBT - Are you sure? If not, in what way are you not sure?
  4. SOURCE - Where did you get that information? How can we verify it?
  5. IMPACT - How does your idea affect my goals?
  6. ALL CRITICAL FACTORS - Did we forget anything critical?
  7. EVIDENCE - How do you know it works that way?
  8. ENOUGH - Have we got a complete solution?
  9. PROFITABILITY FIRST - Are we going to the profitable things first?
  10. COMMITMENT - Who is responsible?
  11. PROOF - How can we be sure the plan is working?
  12. NO CURE? - Is it no cure, no pay?

A simple question for TomGilb.

In "Twelve Tough Questions" (KaiGilb is one of Tom's four sons, not all of whom still work with him), the first question is, "Why isn't the improvement quantified?" However, the paper itself does not quantify any improvements that would result from asking the questions in it. That's what I'd love to see, so that I could believe Question 1 could ever have a useful answer.

I'm not sure it can. It's a rhetorical question! For Gilb, there is no excuse for the improvement not to be quantified. That's why it is the first of the tough questions.

As the man himself says ("Twelve Tough Questions", Question 1):

"We are quick to quantify those precious resources “money” and “time”. But competitiveness is not reducing cost or time alone. It is only useful if the “right” product or service is delivered. The “right stuff” is the critical question. The right stuff can be classified as “qualities” and ”benefits”, or more generally as “advantages”.

"Practically every advantage can be ultimately evaluated in terms of money. But, most of them need direct measures of their own, so that we can control and envisage them before the payoff.

"How often do you read the words: “improved”, “better”, “enhanced”. They should be forbidden in serious management planning. They need to be replaced by two numeric points on a scale of measure, your current level of performance, and your planned level of performance in the future.

"So, for example, the phrase “leading to a substantial increase in product reliability” should be replaced by “reaching 99.9% uptime during customer use, by next year, as opposed to less than 85% this year.”

"I have found that “intangibles” are quantifiable. I have found that qualitative ideas can be quantified (hang on!) almost without exception. The concept that management must quantify to get control is not new. But most managers today still have a large number of concepts, important to their daily work, which they do not view as quantifiable. Nor do their immediate surroundings set a quantification example. This is a combination of lack of leadership and training: their boss should have insisted on quantification and shown the way. The facts on how to quantify things should be made available."
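Gilb's "two numeric points on a scale of measure" advice can be sketched as a tiny data structure. This is my own illustration, not Gilb's notation; the class and field names are invented, and the example values are the ones from his reliability sentence above:

```python
from dataclasses import dataclass

@dataclass
class QualityRequirement:
    """A quantified quality attribute in the spirit of Gilb's advice:
    two numeric points on a scale of measure, instead of words like
    'improved' or 'better'."""
    name: str       # e.g. "reliability"
    scale: str      # the unit of measure, e.g. "% uptime during customer use"
    current: float  # measured level today
    planned: float  # target level by the deadline
    deadline: str   # e.g. "next year"

    def gap(self) -> float:
        """How far we still have to move along the scale."""
        return self.planned - self.current

# Gilb's own example from the text above, expressed as data:
reliability = QualityRequirement(
    name="reliability",
    scale="% uptime during customer use",
    current=85.0,
    planned=99.9,
    deadline="next year",
)
print(f"{reliability.name}: {reliability.current} -> {reliability.planned} "
      f"({reliability.scale}) by {reliability.deadline}, "
      f"gap {reliability.gap():.1f}")
```

The point of the structure is that "leading to a substantial increase in reliability" simply cannot be written down in it; you are forced to supply both numbers and the scale.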

Once, I had the chance to talk with TomGilb about a similar issue. In short, my statement was (much simplified), "I have problems with your web site; it doesn't suit my needs. Wasn't it built with your own method? Where are the requirements, and what do the measurements look like?" The answer was simple, and along the lines of, "No, we haven't used the method. If we had, it would have become an excellent example of how the method improves things." :-) I interpreted it as, "We are lazy guys, but that doesn't mean you can afford to be lazy in your professional life".

The new website http://www.Gilb.com is a considerable improvement over the old. Check it out if you don't believe me!

Regarding your concerns with Question 1: there are useful answers, and they point the finger at management, not developers. Why isn't the improvement quantified?

The questions, IMHO, are meant to point out problems in an organization, not in a particular development process. TomGilb has other sayings on the latter :-) -- ThomasWeidenfeller

See also "The Measurability Principle" (Section 9.1) in PrinciplesOfSoftwareEngineeringManagement: "All attributes can and should be made measurable in practice." It sounds like a BeancountersWetDream, but in practice I can't even guess when the particular piece of code I am working on at the moment will be ready; and if I don't know that, how will anyone be able to measure it?

There are a few reasons for this (no offense intended): I haven't seen any case studies on making project attributes measurable, or on the benefits that can be derived from doing so. I would love some ideas on how to make better estimates, even for my personal process.

You are mixing two things up here: effort estimates and quality requirements (attributes). The latter define what you want to achieve (performance, robustness, usability, etc.); the former says how much effort you or others think it will take to fulfill the requirements. These are two steps: setting goals, and guesstimating how much it will take to reach them. Your initial guesstimates might be totally wrong, but by continuously reevaluating them you make them more precise. This means you measure (or, more precisely, track and correct) two things: how well you have achieved your goals, and how much effort you have spent. -- ThomasWeidenfeller
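The track-and-correct loop described above can be sketched in a few lines. This is my own minimal illustration of the idea, not Gilb's Evo tooling; the class and its fields are invented for the example. After each iteration you log progress toward the quantified goal and the effort spent, then re-derive the total-effort estimate from the observed rate:

```python
class TrackedGoal:
    """Track two things per iteration: progress toward a quantified
    goal, and effort spent, correcting the effort estimate as we go."""

    def __init__(self, target: float, initial_estimate_days: float):
        self.target = target                         # goal level to reach
        self.achieved = 0.0                          # progress so far
        self.estimate_days = initial_estimate_days   # current total-effort guess
        self.spent_days = 0.0                        # effort logged so far

    def record_iteration(self, progress: float, days: float) -> None:
        """Log one iteration's progress and effort, then correct the
        estimate using the burn rate observed so far."""
        self.achieved += progress
        self.spent_days += days
        if self.achieved > 0:
            rate = self.spent_days / self.achieved   # days per unit of progress
            remaining = max(self.target - self.achieved, 0.0)
            self.estimate_days = self.spent_days + rate * remaining

goal = TrackedGoal(target=100.0, initial_estimate_days=10.0)
goal.record_iteration(progress=20.0, days=4.0)  # slower than first guessed
print(goal.estimate_days)                       # estimate corrected upward
```

The initial guesstimate of 10 days was wrong; one iteration of real data is enough to revise it, which is exactly the "continuously reevaluating" step.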

The best technique I have at the moment is to guess how long it would take in IdealProgrammingTime, and multiply that by 3. -- JohnFarrell

You should multiply by pi: it sounds better, and it lets you take a few afternoons off ;-)


I think the "Twelve Tough Questions" are good, for improving the clarity and rationality of decision making.

But it concerns me that if applied carelessly, they can be un-Agile. There's something very plan-driven about the whole thing. Sometimes it may be quicker and less expensive to just try something, and see what happens, than to gather and analyze reams of information. Try something. If it doesn't work out, undo it. -- JeffGrigg

Anything can be un-Agile if applied carelessly. TomGilb is one of the most agile operators around and a staunch proponent of iterative development as a subset of continuous process improvement ("Evo", as he terms it: "In Evo, 'Act' means reviewing the Evo plan, determining the gap priorities, finding alternative steps, and deciding on the next step from the various alternatives. If the results of the last step are not satisfactory, a course of action to correct it might be decided upon."). To quote from the introduction to this paper:

"We can always break a large plan down into a series of smaller deliverable results. We should do the most useful things first. We should be dealing with high short-term payback plans even for our long term objectives. If we cannot master the short term, we probably shouldn’t be trusted to master the long term. If things fail, the losses should be limited by design. Everything will change in the middle of the project, so you need to be able to change as rapidly."

I'm guessing his response to the "try something" mantra would be "try what... and why that... and what do you mean by try?" In the end, doesn't the "try" seek to respond to the 12 Tough Questions? We choose to try something RISKy, or we put PROFITABILITY FIRST to obtain PROOF of the IMPACT on ALL CRITICAL FACTORS... What we choose to try, and what we seek to achieve thereby, are as much in need of evaluation against the questions as any other endeavor in SystemsEngineering.


Last edited October 19, 2005.