Simulation Of The Future

Most of my software design decisions are essentially shaped by a SimulationOfTheFuture. This includes weighing costs times probabilities (see DecisionMathAndYagni). The alternatives are rule-based techniques such as "always do X", but any rule that starts with "always" is usually suspect. Something is needed to choose among multiple conflicting rules, and perhaps we can borrow some techniques from the field of financial investments, for it's their job not only to make good choices, but also to document the reasons for those choices. (And there is a range of simple-to-fancy tools for doing so.) --top

Don't such ruminations lead to a form of YagNi?

No, because YagNi says very little about probability. YagNi says to wait until an actual need arrives. But what if we are 80% sure a need will arise? Or if there is a 50% chance of the need, but adding it later is very expensive if we ignore it now? YagNi offers no tool to decide among all these possibilities.
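
For illustration, here is a minimal sketch of the probability-times-cost comparison being described; every figure is made up for the example:

   # Hypothetical figures for illustration only.
   p_need     = 0.80   # estimated probability the feature will be needed
   cost_now   = 5.0    # effort (person-days) to build the feature now
   cost_later = 12.0   # effort to retrofit it later if we skip it now

   build_now = cost_now             # paid whether or not the need arises
   wait      = p_need * cost_later  # expected cost of deferring

   print("build now:", build_now, "wait:", round(wait, 1))
   # -> build now: 5.0 wait: 9.6; at these numbers building now wins,
   #    while at p_need = 0.30 waiting would win (0.30 * 12 = 3.6).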

If you're 80% sure you need it, then YouAreGoingToNeedIt. YagNi isn't meant to offer a tool. Judgement and common sense are the tool.

What about 70%, 60%, 50%, etc.? Further, what if it's 80% likely but very costly? We don't want to install an expensive feature unless we are close to 100% sure we'll need it. And "judgement and common sense" differ among people, as one can see from the "style sheet" fights. CommonSenseIsAnIllusion. SimulationOfTheFuture offers a common model beyond vague notions (although the values plugged in may differ).

Use your judgement.

Otherwise, the alternative is a set of agreed-upon metrics with agreed-upon rules. What rules might those be? And what metrics?

We are never going to agree on everything, but agreeing to a common decision model, such as a SimulationOfTheFuture, can improve communication and narrow down the areas of disagreement.

What "common decision model" are you referring to? I see no model presented on this page, just a vague description.

Why is it vague? I don't see vagueness. DecisionMathAndYagni shows an example. These kinds of calculations are common in finance.

DecisionMathAndYagni is just a roundabout way of saying "make an educated guess", and there's no specific process, model, empirically or theoretically tested formula, or validated plan on this page.

An educated guess doesn't communicate anything to other parties. Here you put down on paper (or a spreadsheet) the options being considered and plug in probability and effort estimates for the components. If you mean there's no empirical evidence that SOTF is better than the alternatives for producing successful projects, that may be true (although I haven't checked the financial literature for the financial equivalent). However, there are not many "write-able" alternatives, and none with a good case given for them. I invite you to list the alternatives and make a case for them.

Note that within a given organization, those whose estimates (probabilities and effort) prove most accurate will be given more weight in the future.

It trivially maps numbers to guesses. It offers no insight; it's no different from saying "we have a 90% chance of succeeding" when you mean "I think we are likely to succeed." This sort of pseudo-numerical estimation is, in practice, of no value. At best, it's a waste of time, at worst it gives an appearance of rigour and accuracy where there is none.

A potentially insightful process might begin by determining the bounds of error for each estimate -- e.g., when we say 90%, do we really mean 90% because we know it's 90%, or do we really mean somewhere between 20% and 95% but we're being optimistic -- and factoring that into the process.

Another potentially insightful process might be to use Bayesian inference to determine the probability of events or outcomes, rather than classical or Frequentist approaches. See (for example) https://www.maths.nottingham.ac.uk/personal/tk/files/talks/nott_radiology_01_11.pdf
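
For instance, here is a minimal sketch of such Bayesian updating, assuming a Beta prior over a success rate and a handful of hypothetical past outcomes:

   # Minimal sketch of Bayesian updating; the prior and data are assumed.
   a, b = 1.0, 1.0          # Beta(1,1): a uniform prior on the success rate
   outcomes = [1, 1, 0, 1]  # past projects: 1 = succeeded, 0 = failed

   successes = sum(outcomes)
   failures  = len(outcomes) - successes
   a, b = a + successes, b + failures

   posterior_mean = a / (a + b)     # point estimate of the success rate
   print(round(posterior_mean, 2))  # -> 0.67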

You are right in that there are "fancier" ways to deal with probabilities and risk ranges. However, I think you are missing the point. The model puts our "guesses" into a system, in large part to communicate what we believe and why we believe it on a larger scale. For example, if there are 5 variables in the estimate, and party A and party B approximately agree on 3 of the 5, then parties A and B don't have to debate/research those three variables anymore (or less so). The scope of disagreement has been narrowed to two variables. Without such a model, neither side would know where the disagreements are, and they'd waste time arguing over things they already agree on, or over overly-vague statements. As for "appearance of rigor": it's a tool. If you don't know the limitations of the tool, then you probably shouldn't be involved. I'll take clear fuzziness over fuzzy fuzziness any day.
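
A minimal sketch of that narrowing step; the variable names, the numbers, and the 15% tolerance below are all invented for illustration:

   # Flag only the variables where two parties' estimates differ materially.
   party_a = {"demand": 0.80, "staff_cost": 20, "delay_risk": 0.3,
              "rework": 0.5, "training": 10}
   party_b = {"demand": 0.75, "staff_cost": 22, "delay_risk": 0.3,
              "rework": 0.2, "training": 25}

   for var in party_a:
       a, b = party_a[var], party_b[var]
       if abs(a - b) > 0.15 * max(abs(a), abs(b)):  # >15% relative gap
           print("still disputed:", var, a, "vs", b)
   # Three of the five variables pass; only "rework" and "training"
   # need further debate or research.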

Are you disputing that it helps narrow down the areas of disagreement, or that narrowing them down is of no use?

I dispute that it helps narrow down disagreements. As I wrote above, it's no different from saying "we have a 90% chance of succeeding" when you mean "I think we are likely to succeed." This sort of pseudo-numerical estimation is, in practice, of no value. At best, it's a waste of time, at worst it gives an appearance of rigour and accuracy where there is none. In fact, it only increases the amount of disagreement, as parties who might otherwise agree that "we are likely to succeed" now can't agree whether it's an 83% likelihood or an 87% likelihood.

Creating accuracy out of nothing -- and when you use something as fine-grained as a percentage, you're implicitly assuming there are at least two digits' worth of accuracy -- is a recipe for quibbling. More useful might be a coarse scale of Likely, Unlikely, and Unknown. Or, if you can really be that certain: Highly Likely, Likely, Unknown, Unlikely, and Highly Unlikely. Any finer grain than that is highly likely (!) to descend into quibbling.

I have to disagree, for reasons already stated. A codified guess is usually better than an un-codified guess. A semi-fuzzy head model is better than a very fuzzy head model, which is what English alone usually produces. The phrases you describe have to be translated into numbers anyhow to run computations on. If some stupid people misuse the model, so be it. You can't go around being afraid of idiots who abuse or piss on models. Lock the damned door against the idiots when you are working with it.

And it's not just the numbers, but also the model itself. If the other party disagrees with the model, they are welcome to present a better model. For example, if they feel that the certainty range is fuzzy and that such makes a significant difference, they can introduce a model that factors in certainty of an estimate and not just the value. It's often best to start with a fairly simple model and only "fancify" the areas of dispute rather than fancify the entire model.

Every software engineer has a reason why they do things (I would hope). This is only an attempt to extract the reasoning steps/logic/formulas one takes in one's head. If SimulationOfTheFuture is not part of the reasoning steps you use, please present a decent alternative and describe why it's better. If you are unable to articulate the steps, then why bother arguing if you don't have an articulate point? "Your model sucks but I have no replacement model" is not very useful, unless you want to argue that anecdotal evidence is "good enough". Just state, "I feel X is better but have no clue how to articulate why I feel this way", and move on.

If you wish to generate useful estimates of future outcomes, there are existing mechanisms. As I wrote above:

"A potentially insightful process might begin by determining a bounds of error for each estimation -- e.g., when we say 90%, do we really mean 90% because we know it's 90% or do we really mean somewhere between 20% and 95% but we're being optimistic -- and factoring that into the process. Another potentially insightful process might be to use Bayesian inference to determine the probability of events or outcomes, rather than classical or Frequentist approaches. See (for example) https://www.maths.nottingham.ac.uk/personal/tk/files/talks/nott_radiology_01_11.pdf"

Otherwise, you have offered nothing more than a way of making vagueness -- e.g., "I think we'll succeed" -- look like non-vagueness -- e.g., "We have an 87% chance of success" when there is absolutely nothing upon which to base the 87% except the original "I think we'll succeed." Vagueness masquerading as accuracy is a shortcut to error.

For one, I don't think that is necessary up front. Wait until the range of uncertainty becomes an issue of dispute before it's added to the model. And one can run the model using the low end of the estimate (bottom 5%), the middle, and the upper end (top 5%) to usually get a good feel for the impact differences. The human head generally doesn't "run" the models you describe, and thus they are not necessarily a good reflection of one's decision-making process in a typical environment. Remember, we are modeling a thought process, not necessarily reality, at first.
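
A minimal sketch of that low/middle/high re-run; the "model" here is just the expected retrofit cost from the earlier example, and the range endpoints are assumed:

   # Run the same hypothetical model at three points of an uncertain estimate.
   cost_later = 12.0
   for label, p_need in [("low", 0.60), ("middle", 0.80), ("high", 0.95)]:
       print(label, "expected retrofit cost:", round(p_need * cost_later, 1))
   # -> 7.2, 9.6, and 11.4. If all three scenarios point to the same
   # decision, the uncertainty doesn't matter; if they diverge, refine
   # that estimate further.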

Why 5%? Why not 7%? Or 12%? Or 5.3%?

And vagueness only "masquerades" in one's head. If it's doing that, then one's head is viewing it wrong and needs to be corrected. Again, you can't be afraid of idiots misreading the model. Repeat it over and over until it sinks in: "It's only a thought exercise, not a formal estimate". Or put up a big WARNING sign:

   WARNING: This exercise is only a thought experiment
   to aid in communication. It is NOT intended to be a
   formal project outcome or scheduling forecast. Using
   it as a formal forecasting tool can result in blindness,
   dismemberment, sexual dysfunction, confiscation of your
   Star Trek figurines, and/or being demoted to the mail
   room.

It's a pointless thought exercise. If you want accurate estimates, there are better ways, as mentioned. If you don't want accurate estimates, it's a bad idea to dress them up as if they were accurate. 93.5% of scientists agree with me.

The primary point is NOT accurate estimates, at least not up front. If you don't understand this, I don't know what else to say at this point; I don't know a better way to describe what I'm trying to get at.

If you don't need accurate estimates, why are you playing with make-believe "accurate" estimates? Just use rough estimates. As suggested above, use Highly Likely, Likely, Unknown, Unlikely, and Highly Unlikely.

Rough estimates don't explain themselves. And I already explained my objection to such phrases and have nothing new to add.

What do you mean by "rough estimates don't explain themselves?" How do equally rough percentages "explain themselves" any better?

But I'm willing to compromise and offer a "wordy" scale where the numbers are used "under the hood", such that the participants don't have to see numbers. The draft scale has two parts -- a set of probability phrases, and a certainty level qualifying each phrase -- sketched below.
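
Every phrase-to-number mapping and the blending rule in this sketch are assumptions offered for discussion, not standards:

   # Words on the surface, numbers "under the hood".
   probability_scale = {
       "highly likely":   0.90,
       "likely":          0.70,
       "unknown":         0.50,
       "unlikely":        0.30,
       "highly unlikely": 0.10,
   }
   certainty_scale = {  # how confident the estimator is in the phrase itself
       "firm":  0.9,
       "soft":  0.6,
       "guess": 0.3,
   }

   def effective_probability(phrase, certainty):
       # Blend toward 0.5 (maximum uncertainty) when certainty is low.
       p, c = probability_scale[phrase], certainty_scale[certainty]
       return c * p + (1 - c) * 0.5

   print(round(effective_probability("likely", "guess"), 2))  # -> 0.56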

You claimed a "codified guess is better than a non-codified guess", without justification.

One is codified for external analysis/scrutiny; the other is not. "I feel X is bad" communicates NOTHING about the mental steps taken to arrive at it. X-raying a third of the pipes is usually more useful for diagnosis than X-raying none of them and only knowing there is a clog.

What do you mean by "[o]ne is codified for external analysis/scrutiny, one's not"? At least "fairly likely" says something about both the estimate and its relative accuracy. "76%" lies about its accuracy and says no more about "the mental steps" than "fairly likely".

I don't understand. Please elaborate. I wasn't talking about words versus values there.

"Fairly likely" makes no claim to accuracy other than "fairly", which is at best "somewhat". "76%" implies two significant digits of accuracy, on a scale of 0-100. Can you honestly claim that every use of a percentage in your system has 100 points of granularity and two significant digits of accuracy?

If your mental decision models truly process options at that accuracy, then you are an oddball. The human brain is not that accurate, at least not without specific task practice. You are focusing on little things. We want to look at the big picture first, and THEN narrow differences down to little issues like the number of significant digits. StepwiseRefinement, in a rough sense. If I give 72% and you give 76% for a given parameter, and that 4% difference makes a big difference in model results, THEN we refine that area of dispute. I strongly suspect most disagreements will NOT be about significant digits or decimal places. You are being fastidious about things that usually don't matter in practice.

You misunderstand. My point is precisely that no one's "mental decision models truly process options at that accuracy." In other words, no one actually uses percentages for ad-hoc probability, so it is meaningless to try to use them as if people do use them for ad-hoc probability.

I use rough approximations of percentages in my head, but I cannot speak for everybody. At least it's a common language idiom.

Only as a form of (perhaps subconscious) white lie, a minor deceit, to imply more accuracy and precision -- and, therefore, believability -- than actually exists.

I'd have to see evidence that such is an issue in practice when using such models. I believe you are being fastidious about the wrong things.

Have you tried your approach with a group of people?

Kind of, but most don't seem to use probability-based models, instead feeling that their "general impressions" are sufficient because they are "more observant than normal" thanks to their "powerful absorbent brain", or that ArgumentFromAuthority has sufficient validity, or "shut up because I outrank you". Introspection is unfortunately relatively rare in IT and the work world. I'm hoping WikiZens are more introspective about the reasons why they believe what they do, instead of relying on the more common "social evidence". So far you are disappointing me, leaning toward ArgumentFromAuthority and ArgumentByElegance. -t

Indeed, most people resist probabilities or percentages unless they can be accurately determined. Intuitively, they know they're meaningless unless they're actually known, and people don't like to be wrong. However, most people happily use "very likely, very unlikely, likely, unlikely", etc. That's why polls and questionnaires typically use Likert scales instead of probabilities or percentages.

I'm not against using such words if the other side is comfortable with them. I never said I was against them.

And rough estimations are NOT "meaningless" just because they are rough. That's a fallacy.

Rough estimations that are obviously rough, like "very unlikely", are fine. Rough estimations like "34%", when you actually mean "very unlikely", are meaningless.

That's a false statement.

No, it's true. What relates precisely 34 out of 100 points to "very unlikely"?

I didn't map "very unlikely" to 34%. Where did you get that value?

That's how likely the project is to succeed. That's my estimate of probability.

But I would take your statement "very unlikely" and plug something closer to 10% into our draft model, and you'd look at the values I plugged in and say, "Wait! 10% is too low. I meant something closer to 30%. Please change it." See, it's already improving communication between individuals! QED. -t

I'm not sure how debating over arbitrary percentages -- either of which could mean "very unlikely" -- accomplishes anything. Would a sensible manager really make an important decision with only two off-the-cuff percentages and a vague notion? I wouldn't.

They are NOT arbitrary. Estimates are not arbitrary just because they are estimates. Repeating that falsehood does not make it true.

Every manager or team-mate is different. What's the alternative? "I feeeeeeeeel that X is better"? That sucks moldy donkey wankers. One's head must go through some kind of algorithm or model to make a choice. Either we live with ArgumentFromAuthority, or we find ways to communicate our decision process among ourselves.

Are you claiming ArgumentFromAuthority is better than an imperfect estimation and probability model? If so, why even bother debating design choices? Just let the person with the biggest gun/bank/lawyer tell us all what to do. Fuck science, eh.

{Arbitrary: Based on random choice or personal whim, rather than any reason or system. Yep, those numbers are definitely arbitrary. Your faked percentages are dishonest and make a mockery of science. (And there is no ArgumentFromAuthority either way, so I don't know why you're bringing it up.)}

It's not "random choice or personal whim". Thus, your def doesn't disqualify it. Further, your def presents a false dichotomy. Further for our work we MUST make estimates about the future based on experience and organizational knowledge. It is required of us as part of our paid duties. This topic's suggestions are just attempts to better organize and document and communicate such estimates. Perfect-or-nothing is just stupid boneheaded stubbornness found in the likes of Bin Laden and Grammar Vandal.

{You've said explicitly that the accuracy of your numbers isn't important. The situation under discussion is when you don't have enough information to make an accurate (to two decimal places) estimate. Since you don't have that information, the precision being provided is indeed just the personal whim of the one giving the numbers. That makes it arbitrary. (And a definition cannot be a FalseDichotomy, false dichotomies have to be arguments, and definitions are not arguments.)}

Decisions MUST be made whether we have perfect info about the future or poor info about the future. We can't unplug the Earth and sleep forever just because we don't have perfect info (although the most fastidious want just that). I do not believe that accuracy "does not matter" in the absolute sense; I think you misunderstood me. My design choices are determined by a model. I can document the model and write it down to show anybody who questions my decisions. They are not just "I say so because I say so and I vote myself smarter than you". If there are significant disagreements over specific parameters (estimate points) of the model, then we can discuss those further to try to narrow down why each party has such a different estimate. If the model lacks specifics or other decision paths, we can add them. It's a kind of StepwiseRefinement of estimates.

I know it's not perfect. I am aware that it's not perfect. I agree it's not perfect. But it's relatively simple and BETTER THAN "because I say so".

{It's "because I said so" masquerading as something more reliable. That makes it worse.}

Indeed. It's deceptive. That makes it dishonest.

It's only "dishonest" if it's INTENTIONALLY deceptive. Naivety is not "dishonest". "Dishonest" requires intent. It's only "deceptive" if the one(s) using the tool is naive or stupid. ArgumentFromAuthority can also be "deceptive". I'll pick the lessor of two evils, Thank You. If you choose not to use it, fine. Bugger off and celebrate dark-age ArgumentFromAuthority. (I'm not really against the fancier Beynesian etc. approaches, but they likely will not be accepted by other members of an org. If they do accept them, good.)

It is intentionally deceptive. It deliberately attempts to make vague estimates appear to be more accurate than they are. It's actually a form of ArgumentFromAuthority. True authority -- in an academic sense, at least -- generally implies knowledge and experience. The pretend "authority" here is you (or whoever uses this approach), because it implies that your estimate is more believable -- i.e., has more authority -- than a vague estimate ("obviously, you must have done some sort of calculation to wind up with numbers!") when in fact you have no more knowledge and experience upon which to base your estimate than a person who honestly and accurately claims "very likely", or "unlikely", or "I don't know."

I'm sorry, but I find that a really silly complaint. The risk/cost of potential precision confusion is far smaller than that of using NO technique. Is it a problem? YES, I agree it's a problem. Is it a greater net problem than the alternative? NO! You are making a mountain out of a molehill, and ignoring the problems with the alternative. But I'm done arguing about it for now. Time to LetTheReaderDecide. See ParableOfTheBruises.


My smart-phone has a "percentage estimate" for the battery level. It's very approximate. Should everybody sue Steve Jobs et al., or Mr. Android Google, or Mr. Samsung? Go ahead and sue; it could be very entertaining. -t

{I do find such displays deceptive unless they actually have two significant digits of accuracy.}

The extra digits may indicate small general trends, I would note, which would be useful in checking current usage rates (as long as the inaccuracies are reasonably consistent for at least a few hours). But that's not really applicable to this topic.


Certainty and predictability: surely we can depend on the future having these two characteristics? ... I don't think so. In fact, my past experience has led me to believe the opposite: uncertainty and fuzziness, with unpredictable and chance happenings, are the more likely characteristics. Not simply "randomness", but things neither "reasonable" nor "understandable". The real shakers of the future are rooted in "human nature" and in "mother nature". While progress in understanding the two has been made, until we become powerful enough, with a "lever long enough" and an understanding of "what really makes things work", all the math, estimation, probability, science, and psychology we can throw into a model huge enough to account for "all the significant matters and operations" the future may have in store for us still leaves us with guessing, or following our humble hunches. -- DonaldNoyes.LookingForward?.20130425

But if we can convince someone that we should try, and the money is available, this is an area with "job security". Who can furnish proof today about how right you will be about a future 10, 50, 100, or even thousands of years away? By the time they find out the model and its output are wrong, the money is spent! Buyers, BEWARE!


SimulationOfTheFuture is an age-old technique. For example, if an animal is deciding to go into cave A or cave B to hunt for food, it is essentially running a SimulationOfTheFuture in its mind in terms of estimated yield, effort, and risk based on experience of past good and bad outcomes of each cave and the severity of the experiences. It is using patterns and frequencies of the past to estimate the future.

It's true that one often does not remember each individual event, but the brain tends to summarize multiple experiences into a kind of general "feeling" with a weight behind it (a type of feeling and a weight for that feeling). Thus, our remembered experiences are "lossy" summaries/abstractions, but they're what we have to work with.

For example, suppose that in the past the animal found its favorite kind of frog 3 times in cave A, but found an adequate frog species 7 times in cave B. The memories would roughly break even, because the weighting behind the memories of the adequate frogs is stronger than that of the yummy frogs: that "node" or lobe of the mind was triggered more often, despite lighter "pressure" due to less taste. This is an approximation of the value-times-probability (and cost-times-probability) atom/pattern often used for SOTF.

There are two general factors being considered in this example: taste and hunger satisfaction. Cave A has a higher score for taste, and cave B has a higher score for hunger satisfaction. If the animal had once encountered an angry bear in one of the caves, then a fear weighting would also likely be applied. (Considering factors such as taste, hunger, and fear is instinctive for most animals.)
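
As a minimal sketch, the cave example reduces to arithmetic like the following; the visit counts come from the example above, while the taste scores are invented so that the totals break even, as in the story:

   # Value-times-frequency weighting for the two caves.
   yummy_taste, adequate_taste = 7.0, 3.0  # assumed relative taste values
   cave_a = 3 * yummy_taste                # 3 favorite-frog memories
   cave_b = 7 * adequate_taste             # 7 adequate-frog memories
   print(cave_a, cave_b)                   # -> 21.0 21.0, a rough tie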

This looks like something you made up. Do you have any evidence?

{And how many frog-hunting species use specific percentages to quantify vague feelings?}

What exactly do you mean by "specific percentages"? This is meant as an illustrative example, not a National Geographic documentary, by the way. Neural net weights are an "approximation of the value-times-probability...", as described (emphasis added). I didn't make up neural nets. I should point out that there are different ways to achieve the same thing in NN's. In general, the more a kind of event is experienced, the stronger the memory or "feeling" about it. The intensity of the event has a similar effect. These may be manifested in the NN by a stronger weighting at a specific node, or more connections to areas related to the "kind" of experiences, for example. Either way, it's an approximation of the "multiplication" we talk about.

{So you're now claiming SimulationOfTheFuture is a NeuralNetwork? Wat? Above, you appear to advocate the use of specific percentages. Is that not the case?}

Where did you get "is-a"?

{"I didn't make up neural nets", you wrote. Apparently (and I'm not clear how, but you wrote it) they're related to this.}

Neural nets can and do approximately emulate cost/value-times-probability-based models. Most will agree that their memories of a given aspect of office life are stronger if they encounter that aspect more often, and/or if the experience was intense (such as getting yelled at by your boss for something gone wrong).
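
A toy sketch of that accumulation, with an invented update rule, just to show frequency and intensity feeding the same weight:

   # Repeated or intense experiences accumulate into one weight,
   # approximating frequency times intensity. All numbers are invented.
   weight, learning_rate = 0.0, 0.1
   experiences = [2.0, 2.0, 2.0]  # three mild events of intensity 2
   for intensity in experiences:
       weight += learning_rate * intensity
   print(round(weight, 2))        # -> 0.6, same as one event of intensity 6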

{And do they make up percentages to stand in for vague notions?}

They are NOT "made up". They are approximations. You are doing it again. I refuse to get pulled in this time and reinvent that argument here to violate wiki OnceAndOnlyOnce. We've been over this already above.

{Ugh. You are wrong. You are coated in wrong. Your breath smells of wrong. You wallow in wrong and wrong soaks you. You are covered in a thick film of wrong. Your pants and shirt are wrong and your underwear is wrong. You step in wrong and it sticks to your shoes. Your parents were wrong for having you, and their parents were wrong for having them. Your cat and dog are wrong. If you had a hamster, it would be wrong. You live in three-storey wrong with a double-car wrong. You drive to work in wrong, where you do wrong all day. In the dictionary under "wrong", there is a picture of a wrong thing. It's not a picture of you, because that would be wrong.}

And it's wrong to try to have a discussion with an idiot like you. "Oh oh, decimal points will confuse executives and kill puppies, oh oh oh oh! You are using the wrong brand of toilet paper, oh oh oh!"

Oh, and go eat a toad!

{What kind of toad?}

A nasty, foul-tasting one. I am merciful in that it's not poisonous, but don't try my patience!

It appears that no evidence is forthcoming. I will continue to assume that the original claim is garbage until some comes.

Where's your alternative behavior model? A brilliant know-it-all like you surely has one.

I don't have one since I've never been that interested in modeling animal behavior.

Obviously.


I've decided that a lot of the bickering above borders on useless. In practice, an org has to buy into whatever estimation technique is being proposed, and it's up to the decision makers in a given org whether they want no probability analysis, rough/approximate probability analysis, or advanced/fancy techniques. I doubt most orgs will adopt the high-end techniques regardless of the merit of doing so, such that whether the high-end is the "best" way or not is moot. Most will likely go for the rougher approximation for (what they see as) simplicity and to avoid training time. I'll LetTheReaderDecide which level their org is likely to accept. -t

When there's significant money at stake over the outcome of a decision, you can bet organisations will use the techniques most likely to generate accurate results. The approach described above would be appropriate for a budget up to and including the price of a cheap lunch. It's unlikely to be used for anything more costly than that. What you call "advanced/fancy techniques" -- by which I presume you mean Bayes' Theorem and the like -- can be calculated on paper in seconds.

That's fine. LetTheReaderDecide if their org will accept that. I'm just a writer, not God.


See also: FuzzyLogic, BayesianUncertainty?, GameTheory


CategoryMetrics, CategoryDecisionMaking, CategoryEconomics


AprilThirteen

