# Simulation Of The Future

Most of my software design decisions are essentially shaped by a SimulationOfTheFuture. This includes weighing costs times probabilities (see DecisionMathAndYagni). The alternatives are rule-based techniques such as "always do X"; however, any rule that starts with "always" is usually suspect. Something is needed to choose among multiple conflicting rules, and perhaps we can borrow some techniques from the field of financial investments, for it's their job not only to make good choices, but also to document the reasons for their choices. (And there is a range of simple-to-fancy tools for such.) --top

Don't such ruminations lead to a form of YagNi?

No, because YagNi says very little about probability. YagNi says to wait until an actual need arrives. But what if we are 80% sure a need will arise? Or what if there is a 50% chance of the need, but adding the feature later would be very expensive if we ignore it now? YagNi offers no tool to decide among all these possibilities.

If you're 80% sure you need it, then YouAreGoingToNeedIt. YagNi isn't meant to offer a tool. Judgement and common sense are the tools.

What about 70%, 60%, 50%, etc.? Further, what if it's 80% likely but very costly? We don't want to install an expensive feature unless we are close to 100% sure we need it. And "judgement and common sense" differ among people, as one can see from the "style sheet" fights. CommonSenseIsAnIllusion. SimulationOfTheFuture offers a common model beyond vague notions (although the values plugged in may differ).
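
To make the kind of arithmetic meant here concrete, a minimal sketch (all numbers are hypothetical, chosen only for illustration): compare the certain cost of building a feature now against the probability-weighted cost of retrofitting it later.

```python
# Hypothetical numbers: decide between "build now" and "wait, retrofit if needed".
def expected_costs(p_need, cost_now, cost_later):
    """Return (cost of building now, expected cost of waiting)."""
    build_now = cost_now          # paid regardless of whether the need arises
    wait = p_need * cost_later    # retrofit cost, paid only if the need arises
    return build_now, wait

# 80% sure the need will arise; retrofitting later costs 3x building now.
now, wait = expected_costs(p_need=0.80, cost_now=10, cost_later=30)
print(now, wait)   # waiting has the higher expected cost in this scenario
```

With these particular numbers the model says build now; drop the probability to 30% and the verdict flips, which is exactly the kind of threshold plain YagNi can't express.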

Otherwise, the alternative is a set of agreed-upon metrics with agreed-upon rules. What rules might those be? And what metrics?

We are never going to agree on everything, but agreeing to a common decision model, such as a SimulationOfTheFuture, can improve communication and narrow down the areas of disagreement.

What "common decision model" are you referring to? I see no model presented on this page, just a vague description.

Why is it vague? I don't see vagueness. DecisionMathAndYagni shows an example. These kinds of calculations are common in finance.

DecisionMathAndYagni is just a roundabout way of saying "make an educated guess", and there's no specific process, model, empirically or theoretically tested formula, or validated plan on this page.

An educated guess doesn't communicate anything to other parties. Here you put down on paper (or in a spreadsheet) the options being considered and plug in probability and effort estimates for the components. If you mean there's no empirical evidence that SOTF is better than the alternatives for producing successful projects, that may be true (although I haven't checked the financial literature for the financial equivalent). However, there are not many "write-able" alternatives, and none with a good case made for them. I invite you to list the alternatives and make a case for them.

Note that within a given organization, those with the best track record of estimates (probabilities and effort) can be given more weight in the future.

It trivially maps numbers to guesses. It offers no insight; it's no different from saying "we have a 90% chance of succeeding" when you mean "I think we are likely to succeed." This sort of pseudo-numerical estimation is, in practice, of no value. At best, it's a waste of time, at worst it gives an appearance of rigour and accuracy where there is none.

A potentially insightful process might begin by determining a bounds of error for each estimation -- e.g., when we say 90%, do we really mean 90% because we know it's 90% or do we really mean somewhere between 20% and 95% but we're being optimistic -- and factoring that into the process.

Another potentially insightful process might be to use Bayesian inference to determine the probability of events or outcomes, rather than classical or Frequentist approaches. See (for example) https://www.maths.nottingham.ac.uk/personal/tk/files/talks/nott_radiology_01_11.pdf
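
As a toy illustration of the Bayesian suggestion (all numbers hypothetical): start from a prior belief that a feature will be needed, then update it when evidence such as a customer inquiry arrives.

```python
# Toy Bayesian update: P(need | evidence), with hypothetical numbers.
def bayes_update(prior, p_evidence_given_need, p_evidence_given_no_need):
    """Posterior probability of 'need' after observing the evidence."""
    p_evidence = (prior * p_evidence_given_need
                  + (1 - prior) * p_evidence_given_no_need)
    return prior * p_evidence_given_need / p_evidence

# Prior: 50% chance the feature is needed. A customer asks about it,
# an event assumed 4x more likely when the need is real.
posterior = bayes_update(0.5, p_evidence_given_need=0.8, p_evidence_given_no_need=0.2)
print(posterior)   # belief rises from 0.5 to 0.8
```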

You are right in that there are "fancier" ways to deal with probabilities and risk ranges. However, I think you are missing the point. SimulationOfTheFuture puts our "guesses" into a system in large part to communicate, on a larger scale, what we believe and why we believe it. For example, if there are 5 variables in the estimate, and party A and party B approximately agree on 3 of the 5, then parties A and B don't have to debate/research those three variables anymore (or less so). The scope of disagreement has been narrowed to two variables. Without such a model, neither side would know where the disagreements are, and they'd waste time arguing over things they already agree on or over overly-vague statements. As far as "appearance of rigor" goes, it's a tool. If you don't know the limitations of the tool, then you probably shouldn't be involved. I'll take clear fuzziness over fuzzy fuzziness any day.
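
The narrowing described above can be made mechanical. A sketch (variable names and estimates are hypothetical): list each party's estimates side by side and flag only the variables that differ substantially, so debate concentrates there.

```python
# Two parties' hypothetical estimates for the same five model variables.
party_a = {"p_need": 0.8, "cost_now": 10, "cost_later": 30, "p_late": 0.05, "risk": 0.2}
party_b = {"p_need": 0.8, "cost_now": 10, "cost_later": 60, "p_late": 0.30, "risk": 0.2}

def disagreements(a, b, tolerance=0.25):
    """Variables whose estimates differ by more than `tolerance` (relative)."""
    return sorted(name for name in a
                  if abs(a[name] - b[name]) > tolerance * max(abs(a[name]), abs(b[name])))

print(disagreements(party_a, party_b))   # only these two need further debate
```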

Are you disputing that it helps narrow down the areas of disagreements, or that narrowing them down is of no use?

I dispute that it helps narrow down disagreements. As I wrote above, it's no different from saying "we have a 90% chance of succeeding" when you mean "I think we are likely to succeed." This sort of pseudo-numerical estimation is, in practice, of no value. At best, it's a waste of time, at worst it gives an appearance of rigour and accuracy where there is none. In fact, it only increases the amount of disagreement, as parties who might otherwise agree that "we are likely to succeed" now can't agree whether it's an 83% likelihood or an 87% likelihood.

Creating accuracy out of nothing -- and when you use something as fine-grained as a percentage, you're implicitly assuming there's at least two digits' worth of accuracy -- is a recipe for quibbling. More useful might be a coarse scale of Likely, Unlikely, and Unknown. Or, if you can be really certain, Highly Likely, Likely, Unknown, Unlikely, and Highly Unlikely. Any finer grain than that is highly likely (!) to descend into quibbling.

I have to disagree for reasons already stated. A codified guess is usually better than an un-codified guess. A semi-fuzzy head model is better than a very fuzzy head model, which is what English alone usually produces. The phrases you describe have to be translated into numbers anyhow to run computations off of. If some stupid people misuse the model, so be it. You can't go around being afraid of idiots who abuse or piss on models. Lock the damned door against the idiots when you are working with it.

And it's not just the numbers, but also the model itself. If the other party disagrees with the model, they are welcome to present a better model. For example, if they feel that the certainty range is fuzzy and that such makes a significant difference, they can introduce a model that factors in certainty of an estimate and not just the value. It's often best to start with a fairly simple model and only "fancify" the areas of dispute rather than fancify the entire model.

Every software engineer has a reason why they do things (I would hope). This is only an attempt to extract the reasoning steps/logic/formulas one takes in one's head. If SimulationOfTheFuture is not part of the reasoning steps you use, please present a decent alternative and describe why it's better. If you are unable to articulate the steps, then why bother arguing if you don't have an articulate point? "Your model sucks but I have no replacement model" is not very useful, unless you want to argue that anecdotal evidence is "good enough". Just state, "I feel X is better but have no clue how to articulate why I feel this way", and move on.

If you wish to generate useful estimates of future outcomes, there are existing mechanisms. As I wrote above:

"A potentially insightful process might begin by determining a bounds of error for each estimation -- e.g., when we say 90%, do we really mean 90% because we know it's 90% or do we really mean somewhere between 20% and 95% but we're being optimistic -- and factoring that into the process. Another potentially insightful process might be to use Bayesian inference to determine the probability of events or outcomes, rather than classical or Frequentist approaches. See (for example) https://www.maths.nottingham.ac.uk/personal/tk/files/talks/nott_radiology_01_11.pdf"

Otherwise, you have offered nothing more than a way of making vagueness -- e.g., "I think we'll succeed" -- look like non-vagueness -- e.g., "We have an 87% chance of success" when there is absolutely nothing upon which to base the 87% except the original "I think we'll succeed." Vagueness masquerading as accuracy is a shortcut to error.

For one, I don't think that is necessary up front. Wait until the range of uncertainty becomes an issue of dispute before it's added to the model. And one can run the model using the low end of the estimate (bottom 5%), the middle, and the upper end (top 5%) to usually get a good feel for the impact differences. The human head generally doesn't "run" the models you describe, and thus they are not necessarily a good reflection of one's decision-making process in a typical environment. Remember, we are modeling a thought process (or processes), not necessarily reality, at first.
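
A sketch of that three-point run (model and figures hypothetical): evaluate the same expected-cost model at the low (5th percentile), middle, and high (95th percentile) estimates of the uncertain input, and check whether the verdict flips.

```python
# Hypothetical: re-run the model at pessimistic/central/optimistic estimates
# of the probability that the feature will actually be needed.
COST_NOW = 10   # cost of building the feature up front

def expected_retrofit_cost(p_need, cost_later=30):
    """Expected cost of waiting and retrofitting only if the need arises."""
    return p_need * cost_later

for label, p_need in [("5th pct", 0.2), ("median", 0.5), ("95th pct", 0.9)]:
    cost = expected_retrofit_cost(p_need)
    verdict = "build now" if cost > COST_NOW else "wait"
    print(f"{label}: expected retrofit cost {cost:.1f} -> {verdict}")
```

Here the verdict flips between the low and middle estimates, which signals that the uncertainty range itself matters and deserves refinement; if all three points agreed, arguing over the exact percentage would be moot.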

Why 5%? Why not 7%? Or 12%? Or 5.3%?

• I just thought those were reasonable cut-off points if we want to sample the bell-ish curve in 3 places. There are "standard" statistical tests that use the 5% or 95% points, so I kept that convention here. If there is still disagreement, then finer slices can be dug into. But in my experience it's usually not such subtleties that cause the disagreements; rather, it's extremely different estimates of a given value within the model, such as the probability of a given user making Mistake X.

• What "standard" statistical tests would those be?

• Standard Deviation, based on 95%

• Ah, so you mean within two standard deviations of the mean, assuming a normal distribution? If so, why do you assume the distribution is normal? Why not assume within three standard deviations? Or one? Or do you mean a typical probability threshold of 5%? If so, why not an equally-typical threshold of 1%?

• Should we assume by default they are not normal? What would that leave us? Again, you are being fastidious about the wrong things.

And vagueness only "masquerades" in one's head. The masquerade operation happens only in the head, and if it's happening, then the head is processing the model wrong and needs to be corrected. Again, you can't be afraid of idiots misreading the model. Repeat it over and over until it sinks in: "It's only a thought exercise, not a formal estimate". Or put up a big WARNING sign.

```
WARNING: This exercise is only a thought experiment
to aid in communication. It is NOT intended to be a
formal project outcome or scheduling forecast. Using
it as a formal forecasting tool can result in blindness,
dismemberment, sexual dysfunction, confiscation of your
Star Trek figurines, and/or being demoted to the mail
room.
```
It's a pointless thought exercise. If you want accurate estimates, there are better ways, as mentioned. If you don't want accurate estimates, it's a bad idea to dress them up as if they were accurate. 93.5% of scientists agree with me.

The primary point is NOT accurate estimates, at least not up front. If you don't understand this, I don't know what else to say at this point; I don't know a better way to describe what I'm trying to get at.

If you don't need accurate estimates, why are you playing with make-believe "accurate" estimates? Just use rough estimates. As suggested above, use Highly Likely, Likely, Unknown, Unlikely, and Highly Unlikely.

Rough estimates don't explain themselves. And I already explained my objection to such phrases and have nothing new to add.

What do you mean by "rough estimates don't explain themselves?" How do equally rough percentages "explain themselves" any better?

• The granularity of dispute is smaller. We generally have 3 possibilities:

• The model is wrong/disputed
• The values (likelihoods) are wrong/disputed
• The uncertainty range makes a difference (special case of "wrong model")

• It narrows down where the dispute or difference of opinion lies. "Your front door is ugly" is more useful info than "your house is ugly" because it's a narrower problem and thus easier to fix, or at least easier to discuss because it's a smaller/simpler part.

• All three are the case.

• Where? Offer an alternative model if you don't like SOTF.

• I did. Above. Twice. Again: "A potentially insightful process might begin by determining a bounds of error for each estimation -- e.g., when we say 90%, do we really mean 90% because we know it's 90% or do we really mean somewhere between 20% and 95% but we're being optimistic -- and factoring that into the process. Another potentially insightful process might be to use Bayesian inference to determine the probability of events or outcomes, rather than classical or Frequentist approaches. See (for example) https://www.maths.nottingham.ac.uk/personal/tk/files/talks/nott_radiology_01_11.pdf"

• Well, per below, that's GoldPlating a "notion explorer". I suggest exploring the simpler linear model first and THEN step up if it seems to have a lot of certainty-range problems with it.

• Hardly. They're simple and useful tools, unlike the chocolate hammer you've presented here.

• It's not easy to see the propagation of changes and where the results come from along the different "steps".

• Have you tried them? Both approaches are very simple.

• No, I have not. Both sides have to agree to use it. And I remain skeptical that they will provide sufficient insight over the classical approach.

• So having tried neither, your skepticism is based not on evidence or experience, but on obstinance? Note that Nate Silver used Bayesian techniques (amongst others) to successfully predict the outcome in all 50 states in the 2012 US presidential election. If you genuinely wish to "simulate the future" rather than randomly attach vaguely-related numeric labels to guesses, then I suggest you consider approaches that are considerably less flawed than your proposal on this page.

• Elections and software design are apples and oranges. Besides, almost nobody will take the time to participate in such an exercise. I'm not necessarily against the idea, but rather I realize it takes acceptance from both sides, which probably won't happen in practice. Plus, it doesn't hurt to use a simpler model as a more X-ray-able counter-check. If you can convince them to use the other model, that's fine; good for you. But many of us don't have that persuasion ability. If we did, we'd perhaps be in marketing, not IT.

• I doubt the answers would be significantly different in most cases between the simple model and the fancier ones, such that for smaller projects or decisions the fancier model is not worth the time. One area that may cause problems is differences in uncertainty between 2+ paths. However, even those can be added to the simpler model, such as by adding a cost path for late projects. For example, if the project is late, it might cause bankruptcy. Path A may have a lateness probability estimate of 3% while path B has it at 10%, because path A uses more familiar technologies. (The risk acceptance level varies widely between organizations and managers. One has to know the expectations and risk tolerance of the org or owners to come up with a realistic cost of events such as bankruptcy. If you are not sure, then use the estimated market value of the organization as the cost of bankruptcy. If the project is really that important, then using the fancier models may indeed be warranted, but again, likely nobody will care to bother with them. I'm just the messenger.)
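
That path comparison can be written out as a sketch (all figures hypothetical, including the bankruptcy cost): add each path's probability-weighted lateness cost to its base cost.

```python
# Hypothetical figures for the two-path lateness comparison described above.
BANKRUPTCY_COST = 1_000_000   # e.g., estimated market value of the organization

def risk_adjusted_cost(base_cost, p_late, late_cost=BANKRUPTCY_COST):
    """Base cost plus the probability-weighted cost of finishing late."""
    return base_cost + p_late * late_cost

path_a = risk_adjusted_cost(base_cost=120_000, p_late=0.03)  # familiar tech
path_b = risk_adjusted_cost(base_cost=80_000,  p_late=0.10)  # cheaper, riskier
print(path_a, path_b)   # the nominally cheaper path has the higher expected cost
```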

But, I'm willing to compromise and offer a "wordy" scale where the numbers are used "under the hood" such that the participant doesn't have to see numbers. Draft:

Probabilities:

• Very likely (95%)
• Fairly likely (75%)
• Roughly even (50%)
• Fairly unlikely (25%)
• Very unlikely (5%)

Certainty Level:

• High certainty
• Medium certainty
• Low certainty
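
A sketch of how the draft scale might work "under the hood" (the numeric mapping is the draft above; the code is only illustrative): participants pick phrases, and the model translates them to numbers only when it computes.

```python
# The model keeps the draft word-to-number mapping internal.
PROBABILITY = {
    "very likely":     0.95,
    "fairly likely":   0.75,
    "roughly even":    0.50,
    "fairly unlikely": 0.25,
    "very unlikely":   0.05,
}

def weighted_cost(phrase, cost_if_it_happens):
    """Translate the phrase to a number only for the computation."""
    return PROBABILITY[phrase] * cost_if_it_happens

print(weighted_cost("fairly likely", 40))   # 30.0
```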

You claimed a "codified guess is better than a non-codified guess", without justification.

One is codified for external analysis/scrutiny; one's not. "I feel X is bad" communicates NOTHING about the mental steps taken to arrive at such. X-raying a third of the pipes is usually more useful for diagnosis than X-raying zero percent and only knowing there is a clog.

What do you mean by "[o]ne is codified for external analysis/scrutiny, one's not"? At least "fairly likely" says something about both the estimate and its relative accuracy. "76%" lies about its accuracy and says no more about "the mental steps" than "fairly likely".

I don't understand. Please elaborate. I wasn't talking about words versus values there.

"Fairly likely" makes no claim to accuracy other than "fairly", which is at best "somewhat". "76%" implies two significant digits of accuracy, on a scale of 0-100. Can you honestly claim that every use of a percentage in your system has 100 points of granularity and two significant digits of accuracy?

If your mental decision models truly process options at that accuracy, then you are an oddball. The human brain is not that accurate, at least not without specific task practice. You are focusing on little things. We want to look at the big picture first, and THEN narrow differences down to little issues like the number of significant digits. StepwiseRefinement in a rough sense. If I give 72% and you give 76% for a given parameter and that 4% difference makes a big difference in model results, THEN we refine that area of dispute. I strongly suspect most disagreements will NOT be related to the significance of decimal places. You are being fastidious about things that usually don't matter in practice.

You misunderstand. My point is precisely that no one's "mental decision models truly process options at that accuracy." In other words, no one actually uses percentages for ad-hoc probability, so it is meaningless to try to use them as if people do use them for ad-hoc probability.

I use rough approximations of percentages in my head, but I cannot speak for everybody. At least it's a common language idiom.

Only as a form of (perhaps subconscious) white lie, a minor deceit, to imply more accuracy and precision -- and, therefore, believability -- than actually exists.

I'd have to see that such is an issue in practice when using such models. I believe you are being fastidious about the wrong things.

Have you tried your approach with a group of people?

Kind of, but most don't seem to use probability-based models, but rather feel that their "general impressions" are sufficient because they are "more observant than normal" thanks to their "powerful absorbent brain", or that ArgumentFromAuthority has sufficient validity, or "shut up because I outrank you". Introspection is unfortunately relatively rare in IT and the work world. I'm hoping WikiZens are more introspective about the reasons why they believe what they do, instead of relying on the more common "social evidence". So far you are disappointing me and leaning toward ArgumentFromAuthority and ArgumentByElegance. -t

Indeed, most people resist probabilities or percentages unless they can be accurately determined. Intuitively, they know they're meaningless unless they're actually known, and people don't like to be wrong. However, most people happily use "very likely, very unlikely, likely, unlikely", etc. That's why polls and questionnaires typically use Likert scales instead of probabilities or percentages.

I'm not against using such words if the other side is comfortable with them. I never said I was against them.

And rough estimations are NOT "meaningless" just because they are rough. That's a fallacy.

Rough estimations that are obviously rough, like "very unlikely", are fine. Rough estimations like "34%", when you actually mean "very unlikely", are meaningless.

That's a false statement.

No, it's true. What relates precisely 34 out of 100 points to "very unlikely"?

I didn't map "very unlikely" to 34%. Where did you get that value?

That's how likely the project is to succeed. That's my estimate of probability.

But I would take your statement "very unlikely" and plug something closer to 10% into our draft model, and you'd look at the values I plugged in and say, "Wait! 10% is too low. I meant something closer to 30%. Please change it." See it's already improving communication between individuals! QED. -t

I'm not sure how debating over arbitrary percentages -- either of which could mean "very unlikely" -- accomplishes anything. Would a sensible manager really make an important decision with only two off-the-cuff percentages and a vague notion? I wouldn't.

They are NOT arbitrary. Estimates are not arbitrary just because they are estimates. Repeating that falsehood does not make a truth.

Every manager or team-mate is different. What's the alternative? "I feeeeeeeeel that X is better"? That sucks moldy donkey wankers. One's head must go through some kind of algorithm or model to make a choice. Either we live with ArgumentFromAuthority, or we find ways to communicate our decision process among ourselves.

Are you claiming ArgumentFromAuthority is better than an imperfect estimation and probability model? If so, why even bother debating design choices? Just let the person with the biggest gun/bank/lawyer tell us all what to do. Fuck science, eh.

{Arbitrary: Based on random choice or personal whim, rather than any reason or system. Yep, those numbers are definitely arbitrary. Your faked percentages are dishonest and make a mockery of science. (And there is no ArgumentFromAuthority either way, so I don't know why you're bringing it up.)}

It's not "random choice or personal whim"; thus, your definition doesn't disqualify it. Further, your definition presents a false dichotomy. Further, for our work we MUST make estimates about the future based on experience and organizational knowledge. It is required of us as part of our paid duties. This topic's suggestions are just attempts to better organize, document, and communicate such estimates. Perfect-or-nothing is just stupid boneheaded stubbornness found in the likes of Bin Laden and Grammar Vandal.

{You've said explicitly that the accuracy of your numbers isn't important. The situation under discussion is when you don't have enough information to make an accurate (to two decimal places) estimate. Since you don't have that information, the precision being provided is indeed just the personal whim of the one giving the numbers. That makes it arbitrary. (And a definition cannot be a FalseDichotomy, false dichotomies have to be arguments, and definitions are not arguments.)}

Decisions MUST be made whether we have perfect info about the future or poor info about the future. We can't unplug the Earth and sleep forever just because we don't have perfect info (although the most fastidious want just that). I do not believe that accuracy "does not matter" in the absolute sense. I think you misunderstood me. My design choices are determined by a model. I can document the model and write it down to show anybody who questions my decisions. They are not just "I say so because I say so and I vote myself smarter than you". If there are significant disagreements over the specific parameters (estimate points) of the model, then we can discuss those further to try to narrow down why each party has such a different estimate. If it lacks specifics or other decision paths, we can add those to the model. It's kind of a form of StepwiseRefinement of estimates.

I know it's not perfect. I am aware that it's not perfect. I agree it's not perfect. But it's relatively simple and BETTER THAN "because I say so".

{It's "because I said so" masquerading as something more reliable. That makes it worse.}

• The "masquerading" is an imaginary monster in your head and your head only. Get a Prozac prescription if it bothers you so much. There is no objective person, place, or thing observed masquerading as anything whatsoever here. Repetition of the claim of masquerading happening is useless to the reader without some objective observation or confirmation of it taking place. Repetition does not make truth.

• {My head only, eh? You've had multiple people call it that here, so it isn't just my head. As for objective evidence, it's been presented on this page already. The scenario is when you don't have an estimate with two digits of accuracy. You suggest reporting it as if it had two digits of accuracy. That is (objectively) a thing masquerading as something else.}

• There is NO objective evidence that "precision confusion" creates common and real problems greater than those of the alternatives. And you have no objective masquerade-a-tron measuring device. Masquerading happens in the head, not in a test tube. The objective universe outside of our heads doesn't give a flying shit about "masquerading" whatsoever. Pseudo-rigor.

• { You didn't ask for objective evidence that "precision confusion" (whatever that is) creates common and real problems, you asked for objective evidence of a masquerade. The evidence for that is your words on this page. Evidence of how people can be misled by excess precision includes http://forum.johnson.cornell.edu/faculty/mthomas/PricePrecisionEffect.pdf. }

• That's not evidence of intent, only evidence of confusion. "Dishonest" REQUIRES intent. You solved for the wrong variable.

• { You didn't ask for evidence of dishonesty either. }

• Sorry, I meant "masquerading". It also requires intent.

• { "A false outward show" (the definition of masquerade used here) requires intent? }

• http://en.wiktionary.org/wiki/masquerade : "3. Acting or living under false pretenses; concealment of something by a false or unreal show; pretentious show; disguise." (1 & 2 don't really apply here)

• { And where's "intent" in that definition? }

• Pretense: "2. Intention or purpose not real but professed." (Also Wiktionary)

• { I see you ignored the other definitions of "pretense" including the primary one. }

• The first one didn't appear to apply. I generally interpret "masquerading" to involve intent to deceive or hide (and suspect most others do also) as opposed to accidental confusion. If you want it to exclude intent, I suggest you use a different word or phrase.

• { Why wouldn't "a false or hypocritical profession" apply? In any case, I gave you the definition I intended, and "intent" isn't part of it. }

• That's the problem with English (and most spoken languages): there are multiple definition paths, and we each tend to emphasize different paths in our heads. I personally would avoid the word "masquerade" if intent to deceive was not the intended implication, but your head reads it differently, apparently. I would just say "it risks confusion", which does not imply the source of the confusion (intentional or accidental). I would say "masquerade" leans heavily toward the "intent" side of things, and most definitions seem to carry that implication over the accidental kind. Therefore I'd choose to avoid it unless I explicitly intended to emphasize intent to deceive (NPI); or apologize for picking the wrong word, fix it, and move on.

• { I see no reason to apologize for using a word that accurately describes the situation. You had to ignore several definitions en route to finding one that included intent, including the one I told you I was using. I find it unlikely that any synonym would have met with your approval. }

• I already gave you an acceptable synonym/replacement: "it risks confusion". I believe that if one studied most definitions and usage patterns in written works they would conclude that "intent" is usually implied for masquerade. My path selection is thus not the rarer path; yours is. But this is not a pivotal issue and I don't want to debate word trivia at this time anymore. If you disagree with my assessment of the word, I refuse to care anymore. -t

• { "it risks confusion" isn't a synonym. }

• You don't need a synonym, you need a word that says what you mean.

• { I used a word that said what I meant. }

• Sigh. Back to the more general issue. The techniques I suggest are not intended for casual participants (such as a non-technical "side" manager). The participants should be properly informed that the estimates are very approximate. Sure, sometimes stupid people will confuse things despite the warnings and screw up anyhow, but we cannot roll up into fetal positions and hide from the world just because stupid people exist.

Indeed. It's deceptive. That makes it dishonest.

It's only "dishonest" if it's INTENTIONALLY deceptive. Naivety is not "dishonest"; "dishonest" requires intent. It's only "deceptive" if the one(s) using the tool is naive or stupid. ArgumentFromAuthority can also be "deceptive". I'll pick the lesser of two evils, thank you. If you choose not to use it, fine. Bugger off and celebrate dark-age ArgumentFromAuthority. (I'm not really against the fancier Bayesian etc. approaches, but they likely will not be accepted by other members of an org. If they do accept them, good.)

• {Considering how much you've railed against including intention in definitions, I find it ironic that you would come running to intent as a defense. That said, unless you consider deception good, it's clearly the greater of two evils. It's no less an ArgumentFromAuthority, and it's deceptive to boot.}
• If the existing definition is inherently tied to "intent", there is nothing we can do about it. I recommend against CREATING definitions tied to it, but society already committed that sin and created the word "dishonest" before we were born, so we are stuck with it; and you made it worse by using such words, bringing in the problems with them, such as the objective measurement of "intent". THIS is a good demonstration of why one should avoid them if they want sound arguments.

It is intentionally deceptive. It deliberately attempts to make vague estimates appear to be more accurate than they are, by:
• (a) using percentages, which many people automatically assume to be more accurate than vague language because they're numeric. Remember the old Ivory soap ads which claimed it was "99 and 44/100 percent pure"? It's like that.
• (b) implying fine-grained precision -- two decimal points -- where no such accuracy exists.
It's actually a form of ArgumentFromAuthority. True authority -- in an academic sense, at least -- generally implies knowledge and experience. The pretend "authority" here is you (or whoever uses this approach), because it implies that your estimate is more believable -- i.e., has more authority -- than a vague estimate ("obviously, you must have done some sort of calculation to wind up with numbers!") when in fact you have no more knowledge and experience upon which to base your estimate than a person who honestly and accurately claims "very likely", or "unlikely", or "I don't know."

I'm sorry, but I find that a really silly complaint. The risk/cost of potential precision confusion is far smaller than that of using NO technique. Is it a problem? YES, I agree it's a problem. Is it a greater net problem than the alternative? NO! You are making a mountain out of a molehill, and ignoring the problems with the alternative. But I'm done arguing about it for now. Time to LetTheReaderDecide. See ParableOfTheBruises.

My smart-phone has a "percentage estimate" for the battery level. It's very approximate. Should everybody sue Steve Jobs et al., or Mr. Android Google, or Mr. Samsung? Go ahead and sue; it could be very entertaining. -t

{I do find such displays deceptive unless they actually have two significant digits of accuracy.}

The extra digits may indicate general small trends, I would note, which would be useful in checking current usage rates (as long as the inaccuracies are generally consistent for at least a few hours.) But, that's not really applicable to this topic.

Certainty and predictability -- surely we can depend on the future as having these two characteristics? I don't think so. In fact, my past experience has led me to believe the opposite: uncertainty and fuzziness, with unpredictable and chance happenings, are the more likely characteristics. Not simply "randomness", nor things "reasonable or understandable". The real shakers of the future are rooted in "human nature" and in "mother nature". While progress in understanding the two has been made, until we become powerful enough, have a "lever long enough", and understand "what really makes things work", all the math, estimation, probability, science, and psychology we can throw into a model huge enough to account for "all the significant matters and operations" the future may have in store for us still leaves us with the option of guessing, or following our humble hunches. -- DonaldNoyes.LookingForward?.20130425

But if we can convince someone that we should try, and the money is available, this is an area with "Job Security". Who can furnish proof today about how right you will be about a future 10, 50, 100, or even thousands of years away? By the time they find out the model and the output are wrong, the "money is spent"! Buyers, BEWARE!

SimulationOfTheFuture is an age-old technique. For example, if an animal is deciding whether to go into cave A or cave B to hunt for food, it is essentially running a SimulationOfTheFuture in its mind in terms of estimated yield, effort, and risk, based on its experience of past good and bad outcomes in each cave and the severity of those experiences. It is using patterns and frequencies of the past to estimate the future.

It's true that one often does not remember each individual event, but the brain tends to summarize multiple experiences into kind of a general "feeling" with a weight behind it (type of feeling and weight of the feeling). Thus, our remembered experiences are "lossy" summaries/abstractions, but it's what we have to work with.

For example, suppose in the past the animal found its favorite frog species 3 times in cave A, but found an adequate frog species 7 times in cave B. The memories would roughly break even, because the weighting behind the memories of the adequate frogs is stronger than that of the yummy frogs: that "node" or lobe of the mind was triggered more often, despite lighter "pressure" due to less taste. This is an approximation of the value-times-probability (and cost-times-probability) atom/pattern often used for SOTF.

There are two general factors being considered in this example: taste and hunger satisfaction. Cave A has a higher score for taste and cave B has a higher score for hunger satisfaction. If the animal once encountered an angry bear in one of the caves, then a fear weighting would also likely be applied. (Considering factors such as taste, hunger, and fear is instinctive for most animals.)
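The cave example can be sketched as a tiny expected-value calculation. All numbers below are hypothetical, and the "value" units are arbitrary; the point is only the value-times-probability (and cost-times-probability) multiplication:

```python
# Each cave's appeal is approximated as the sum of value * probability
# over remembered outcome types -- the SOTF "multiplication" pattern.
# All numbers here are made up for illustration.

def expected_value(outcomes):
    """outcomes: list of (value, probability) pairs; a cost is a negative value."""
    return sum(value * prob for value, prob in outcomes)

cave_a = [(10, 0.3)]   # favorite frog: high value, found 3 of 10 visits
cave_b = [(4, 0.7)]    # adequate frog: lower value, found 7 of 10 visits

# A remembered bear encounter adds a cost-times-probability term:
cave_b_bear = cave_b + [(-50, 0.05)]

print(expected_value(cave_a))       # roughly 3.0 -- the caves break about even
print(expected_value(cave_b))       # roughly 2.8
print(expected_value(cave_b_bear))  # roughly 0.3 -- fear tips the choice to A
```

The negative bear term shows how a single intense bad memory can outweigh many mild good ones.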

This looks like something you made up. Do you have any evidence?

{And how many frog-hunting species use specific percentages to quantify vague feelings?}

What exactly do you mean by "specific percentages"? This is meant as an illustrative example, not a National Geographic documentary, by the way. Neural net weights are an "approximation of the value-times-probability...", as described (emphasis added). I didn't make up neural nets. I should point out that there are different ways to achieve the same thing in NN's. In general, the more a kind of event is experienced, the stronger the memory or "feeling" about it. The intensity of the event has a similar effect. These may be manifested in the NN by a stronger weighting at a specific node, or more connections to areas related to the "kind" of experiences, for example. Either way, it's an approximation of the "multiplication" we talk about.

{So you're now claiming SimulationOfTheFuture is a NeuralNetwork? Wat? Above, you appear to advocate the use of specific percentages. Is that not the case?}

Where did you get "is-a"?

{"I didn't make up neural nets", you wrote. Apparently (and I'm not clear how, but you wrote it) they're related to this.}

Neural nets can and do approximately emulate cost/value-times-probability-based models. Most will agree that their memories of a given aspect of office life are stronger if they encounter the aspect more often, and/or if the experience was intense (such as getting yelled at by your boss for something that went wrong).
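As a toy illustration of that point (not a claim about real neurobiology), frequency and intensity could both feed a stored weight via a simple Hebbian-style increment; the numbers are hypothetical:

```python
# Toy model: each experience strengthens a memory weight in proportion
# to its intensity, so frequent-but-mild and rare-but-intense experiences
# can end up with comparable weights -- roughly value-times-probability.
# Illustrative only; not a model of actual neurons.

def reinforce(weight, intensity, rate=0.1):
    return weight + rate * intensity

w_adequate = 0.0
for _ in range(7):                       # 7 mild experiences
    w_adequate = reinforce(w_adequate, intensity=1.0)

w_yummy = 0.0
for _ in range(3):                       # 3 intense experiences
    w_yummy = reinforce(w_yummy, intensity=2.0)

print(round(w_adequate, 2), round(w_yummy, 2))  # comparable weights
```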

{And do they make up percentages to stand in for vague notions?}

They are NOT "made up". They are approximations. You are doing it again. I refuse to get pulled in this time and repeat that argument here in violation of the wiki's OnceAndOnlyOnce. We've been over this already above.

{Ugh. You are wrong. You are coated in wrong. Your breath smells of wrong. You wallow in wrong and wrong soaks you. You are covered in a thick film of wrong. Your pants and shirt are wrong and your underwear is wrong. You step in wrong and it sticks to your shoes. Your parents were wrong for having you, and their parents were wrong for having them. Your cat and dog are wrong. If you had a hamster, it would be wrong. You live in three-storey wrong with a double-car wrong. You drive to work in wrong, where you do wrong all day. In the dictionary under "wrong", there is a picture of a wrong thing. It's not a picture of you, because that would be wrong.}

And it's wrong to try to have a discussion with an idiot like you. "Oh oh, decimal points will confuse executives and kill puppies, oh oh oh oh! You are using the wrong brand of toilet paper, oh oh oh!"

Oh, and go eat a toad!

A nasty foul-tasting one. I am merciful in that it's not poisonous, but don't tempt my patience!

It appears that no evidence is forthcoming. I will continue to assume that the original claim is garbage until some comes.

Where's your alternative behavior model? A brilliant know-it-all like you surely has one.

I don't have one since I've never been that interested in modeling animal behavior.

Obviously.

I decided that a lot of the bickering above borders on useless. In practice, an org has to buy into whatever estimation technique is being proposed, and it's up to the decision makers in a given org whether they want no probability analysis, rough/approximate probability analysis, or advanced/fancy techniques. I doubt most orgs will go with the high-end techniques regardless of the merit of doing so, such that whether the high-end is the "best" way or not is moot. Most will likely go for the rougher approximation for the sake of (what they see as) simplicity and reduced training time. I'll LetTheReaderDecide which level their org is likely to accept. -t

When there's significant money at stake over the outcome of a decision, you can bet organisations will use the techniques most likely to generate accurate results. The approach described above would be appropriate for a budget up to and including the price of a cheap lunch. It's unlikely to be used for anything more costly than that. What you call "advanced/fancy techniques" -- by which I presume you mean Bayes' Theorem and the like -- can be calculated on paper in seconds.
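For what it's worth, here is what such a seconds-on-paper calculation might look like: a Bayes'-Theorem update of a feature-need estimate. All probabilities are hypothetical:

```python
# P(need | signal) via Bayes' Theorem, with made-up numbers.

def bayes_update(prior, p_signal_if_need, p_signal_if_no_need):
    p_signal = (p_signal_if_need * prior
                + p_signal_if_no_need * (1 - prior))
    return p_signal_if_need * prior / p_signal

# Prior: 50% chance the feature will be needed. A customer asks about
# it; suppose customers ask 90% of the time when the need is real, but
# also 20% of the time when it isn't.
posterior = bayes_update(0.50, 0.90, 0.20)
print(round(posterior, 3))  # roughly 0.818 -- the request raises 50% to ~82%
```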

That's fine. LetTheReaderDecide if their org will accept that. I'm just a writer, not God.