Programmer Math Skills

On the MontyHallProblem page, someone said...

"The length of this page does nothing to restore my faith in the average coders' abilities in introductory mathematics. It would be funny if it weren't so disturbing."

That is undeservedly snide. Back in the 80s, when this problem was at the peak of its popularity and being passed around widely, I saw a number of university math professors give "proofs" of the wrong answer.

One could certainly sneer at those profs as well, but all of that misses the point: the MontyHallProblem is well known precisely because it is counter-intuitive. It's not that it is obviously hard and people's math skills just aren't up to it; it's that it deceptively looks easy, and the most obvious answer, the one most people come up with in a third of a second, just happens to be wrong.
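
If you'd rather check than argue, a minimal Monte Carlo sketch settles it in a few lines (Python; the function name and constants are mine, and it assumes the usual rules: Monty knows where the prize is and always opens a losing door you didn't pick):

  import random

  def monty_hall(switch, trials=100_000):
      """Win rate for the standard game under the given strategy."""
      wins = 0
      for _ in range(trials):
          prize = random.randrange(3)    # door hiding the prize
          choice = random.randrange(3)   # contestant's first pick
          # Monty opens a door that is neither the pick nor the prize.
          opened = random.choice([d for d in range(3) if d not in (choice, prize)])
          if switch:
              choice = next(d for d in range(3) if d not in (choice, opened))
          wins += (choice == prize)
      return wins / trials

  print("stay:  ", monty_hall(switch=False))   # comes out near 1/3
  print("switch:", monty_hall(switch=True))    # comes out near 2/3

Run it a few times and the 2/3 advantage for switching is hard to argue with.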


You should try introductory Algebra:

Some time in 2001, I had an argument with a legacy system programmer; I claimed that I already had enough information to compute some of the values he was "refusing to let me have".

He kept insisting that it was *impossible* to compute the hidden values from the values and formulas I had, while I just quietly did a little junior high school algebra on his white board.

I handed him the formulas, which he tried out with a few numbers and was *amazed* that they worked.

(Needless to say, the payroll calculations in this system were in desperate need of "a little refactoring work." ;-)

This wasn't the first time:

Some time in the late 80's, a really bright programmer (in his own mind ;-) came up with a way to simplify about a dozen formulas our customer gave us down to about six formulas.

"That doesn't matter," I said because...

  1. I could reduce them all down to one formula if I really wanted, and
  2. these formulas don't meet the user's real business needs anyway.
First, we need to fix the process so they have formulas that meet their business needs. Then we can use mathematical techniques to simplify the formulas.


It is time to despair - I've met someone about to leave school who didn't even know what 'algebra' means!


Certain problems are inherently tricky because our hard-wired intuitions are imperfect, and logic isn't always enough to compensate. Another example is the BirthdayProblem. See also InevitableIllusions. The book isn't that great, but the idea is interesting.

These problems are of psychological interest, not mathematical interest. The point here is that the Monty Hall problem is trivially solvable with elementary probability theory (as is the birthday problem), the sort of introductory mathematics I believe most coders should be comfortable with. Evidently, many are not. The lesson is that there are certain types of problems which are easily solvable in the correct framework, but which you will often screw up if you just 'think them through'.
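
For comparison, the birthday problem really does fall to a single product once you take the complement: the chance that n people all have distinct birthdays. A small sketch, assuming 365 equally likely birthdays (the function name is just illustrative):

  def p_shared_birthday(n, days=365):
      """Probability that at least two of n people share a birthday."""
      p_distinct = 1.0
      for k in range(n):
          p_distinct *= (days - k) / days
      return 1.0 - p_distinct

  print(p_shared_birthday(22))   # about 0.476
  print(p_shared_birthday(23))   # about 0.507 -- better than even odds at only 23 people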

It's not trivial if so many people get it wrong. The psychology can dominate the math. Using the MontyHallProblem wouldn't really give a fair assessment of someone's math skills. If you gave them a less psychologically tricky problem with the same level of math, that would give better results.

I have to disagree. As I said above, the problem may be of *psychological* interest, but it is mathematically trivial. Part of someone's math skills is the ability to correctly analyse situations where your intuition may mislead you. This is fundamental - without it, your analytical capabilities are quite weak, regardless of the formal machinery you may be able to use. So your claim above of an 'unfair assessment' is patently false. While I agree that asking *only* psychologically difficult questions would be unfair, the ability to deal with this sort of problem is necessary. It is important to note that we are not talking only about weird corner cases here; while the phenomenon is easily illustrated by problems like "Monty Hall", it is in fact an extremely common effect, perhaps more likely to occur than not. That is the motivation for the original complaint that spawned this page: a large percentage of professional programmers today are severely lacking in elementary mathematical skills. This causes them to make (and often re-make, and re-make) easily avoidable errors. Every time I hear someone say "well, you don't really need to understand math to be a good programmer...", I know there is trouble on the horizon.

I have to disagree with your disagreement. The fulcrum of the MontyHallProblem is not psychological; it's mere trickery. It's not that the victim can't cope with the subtlety; it's that crucial information is simply left out. I don't think it's reasonable to call it a psychologically difficult problem when it's merely ill-formed. [Comments in bold to distinguish them from those of the plain and italic speakers.]

[The Monty Hall problem doesn't rely on tricky wording, but on people's false assumption that when all the (mutually exclusive) possibilities (superficially) appear independent, they really are so, and hence have equal probability. (Well yes, that's the assumption people make, but it's no less reasonable than assuming the opposite. The problem hinges on that ambiguity, because it doesn't state why or how Monty picks which door to show you.) This is a mathematical failing not a psychological one. The only psychological aspect is the way in which one may be distracted from seeing what is relevant. -- vk]

Agree with vk above. If you believe the Monty Hall problem is trickery, you simply do not understand it. In the usual (and correct) statement of the Monty Hall problem, there is no ambiguity. No matter how many times you assert that the problem is ill-formed, the problem is well-formed. Monty opens a door *that does not contain the prize*. Thus a little thought and simple analysis will yield the correct probabilities.

Actually, one needs to know that Monty deliberately avoided disclosing the prize. If he opened a box without knowing if it contained the prize (and it happened to be empty), one has a different situation to analyse. With this clarified, one has the 'intended' Monty Hall problem, which still leads some to confusion and incorrect analysis. Thus, the ambiguity is a minor extra which isn't really relevant on this page. To me, the problem and its solution are immediately understood IF it's clear that Monty knows where the prize is and he deliberately avoids giving it to you. Without that ambiguity, the other logical errors that people tend to make in analyzing this problem seem... fairly obvious and blatant. [And what if it's clear Monty doesn't know, Mark? Try your mental skills on the TwoEnvelopeProblem - without scrolling down or following the link given.]

Back to the same thing then. Is the host offering to switch envelopes because he's trying to help me? Does he know which one is the "good" one? In the simplest case, in which the host always offers to switch, doesn't know what's in the envelopes, and doesn't care one way or the other how much money I get, the mathematical problem is trivial - switching doesn't matter. But with those assumptions, the TwoEnvelopeProblem is isomorphic to the NotTheMontyHallProblem.

You're told the host offers the choice of boxes, so that's a given fact, and so the host's knowledge and motives in relation to that offer are irrelevant. The host does disclose correctly the amount in Box A, and I've amended "tells" to "shows" to clarify that. Does this disclosure help you choose? It's that question which makes this problem different from the NotTheMontyHallProblem.

"You don't need to know Monty's motivation to know that switching when you get shown the empty box is the correct strategy." That's exactly what you need to know. Without that bit of information, you're left to assume that Monty doesn't know any better than you do, and picking the empty box instead of the one with the prize in it was a result of luck, i.e., not a condition of the puzzle. "Let's assume that Monty opening the empty box IS the result of luck... you still have more information now than you did when you made your initial choice and can turn this additional information to your advantage, if you choose to do so." Yes, you have more information. It tells you that the probability that your originally chosen box has the prize has gone up from 1/3 to 1/2. However, the probability that the remaining box has the prize has also gone up from 1/3 to 1/2, so you gain nothing by switching.


The problem is not trivial, because it is psychologically difficult. The math is simple, and the problem is trivially solvable in the right framework. There is a difference. So I have to disagree with your 'fair assessment'; when someone makes a mistake on this (or a very similar problem), it shows that they either didn't think about it carefully, or they cannot think about it with mathematical clarity. If you tell them they made a mistake, they should be able to quickly find the problem by taking more care. If not, they were of the latter type, and they do not understand the math. Being able to apply the same maths in an even simpler situation does not save them here. The problem is trivially solvable with understanding of Bayes' rule, which is elementary probability theory. Psychology can never dominate mathematics, but can dominate people. This is tautological, but perhaps not obvious. If you go into any depth with mathematics, you quickly learn not to trust your instincts. Use your instinct, of course - it's very important. But never trust it completely...
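
For concreteness, here is the Bayes' rule calculation spelled out in exact fractions, under the standard assumptions: you picked door 1, Monty knows where the prize is, never opens your door or the prize door, flips a fair coin when he has a choice, and he opened door 3 to show it empty. This is only a worked illustration:

  from fractions import Fraction as F

  prior = {1: F(1, 3), 2: F(1, 3), 3: F(1, 3)}
  # Likelihood of the observed event "Monty opens door 3" under each hypothesis.
  likelihood = {1: F(1, 2),   # prize behind your door: he opens 2 or 3 at random
                2: F(1, 1),   # prize behind door 2: he is forced to open 3
                3: F(0, 1)}   # he never opens the prize door

  evidence = sum(prior[d] * likelihood[d] for d in prior)
  posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}
  print(posterior)   # door 1 -> 1/3, door 2 -> 2/3, door 3 -> 0

Nothing beyond fractions is involved; the only subtlety is writing down the likelihoods honestly.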

I'd bet that if someone proposed the MontyHallProblem to you (or me), cleverly disguised as a casual question, with no mention of goats or doors, you (or I) would probably get it wrong. Now, if we sat down and did the math, we might realize we were wrong on our first instinctual guess. It's like if we were hard-wired to think 2+2=5. Sure it's obvious, but under normal circumstances, our (incorrect) instincts will seem even more obvious.

You would probably (but not certainly :) ) lose that bet, but I must confess I am somewhat a ringer in this context. My training is as a mathematician, not a programmer; so I tend to think about problems like this in terms of the mathematics first. In other words, by painful experience I have learned not to trust my instincts too much! Use them, but test them as well. Anyways, I wanted to make a point about all this, but it has been lost in the discussion I think. I'll try and come back when I have the time and refactor - but roughly speaking the point is this: All modeling situations have the possibility of 'fooling you' like the Monty Hall problem does. It happens more often that one might think. Therefore, it is imperative that you be able to formally test your first instinct, and catch yourself if you are falling into this sort of psychological trap. I can't even count the number of times that I have found bugs in code that boiled down to somebody making an incorrect assumption because it was 'obvious'. I was also lamenting the number of programmers I run into who are seemingly incapable of performing this type of analysis, even at the simplest levels. I don't think I am asking too much; clearly there are cases where you need a mathematical sophistication that is unreasonable to assume for the average coder - but on the other hand, all coders should have some basic skills in this area.


I solved the MontyHallProblem with a predictive theorem that demonstrates the effect for larger numbers of rooms, complete with a program demonstrating it. Trouble is, I don't think anyone understands what I wrote. So, is it that I am lacking math skills or communication skills? I don't even understand what I wrote years later. -- SunirShah
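
I don't know what form that theorem took, but the usual generalization is easy to state and to check: with n doors and one prize, if Monty opens every other losing door, staying wins with probability 1/n and switching wins with probability (n-1)/n. A quick sketch (function name mine):

  import random

  def n_door_game(n, trials=100_000):
      """(stay, switch) win rates when Monty leaves only one other door closed."""
      stay = switch = 0
      for _ in range(trials):
          prize = random.randrange(n)
          choice = random.randrange(n)
          stay += (choice == prize)
          # The one door Monty leaves closed is the prize door, unless you already
          # hold the prize, in which case it is an arbitrary losing door.
          if choice == prize:
              other = random.choice([d for d in range(n) if d != choice])
          else:
              other = prize
          switch += (other == prize)
      return stay / trials, switch / trials

  print(n_door_game(10))   # roughly (0.10, 0.90); the effect grows with n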

It's the sort of problem where you need to spot exactly where other people are going wrong, and explain the error(s) they're making. Note that the assumptions in the problem as to who knows what do matter (see the foot of the solution page). Probability questions are notorious for how easily they can be stated without sufficient precision.


Let's not forget that most programmers' programming skills aren't all that great either. Due to the immaturity of our industry, we just don't know what the right things are to learn. We agree on what to teach mathematicians, we agree on what to teach engineers, we agree on what to teach business people, but there is very little agreement on what to teach software developers.

I wonder if it is truly "introductory" math skills that are being discussed here. When most college graduates have never been introduced to Bayes' theorem or some of the other concepts referenced, can such concepts really be considered "introductory"? Maybe they are introductory for mathematicians, but most of us are not mathematicians.

I have learned to deliberately avoid using abstract mathematics to solve problems when other types of solutions (simulations, measurements, etc.) are available. Even when I do solve a problem mathematically, I don't really believe it until I test it in the RealWorld. Why? Because of the numerous times that my math-based solutions have been wrong. I often leave out an important factor or make a small mistake in reasoning that leads to a very incorrect answer. This may be due to stupidity on my part, but I think it is smart that I have recognized this flaw in my intellect and taken steps to work around it. I test mathematical solutions with the same techniques I use to test my software, and for the same reasons.
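
For what it's worth, that habit translates directly into code. Here is a made-up example of the kind of spot check I mean: before trusting a pencil-and-paper simplification, hammer it with random inputs the same way you would a refactored function. The two formulas are placeholders of my own, not anything from the systems discussed above:

  import random

  def original(a, b):
      return a * a - b * b

  def simplified(a, b):             # the claimed algebraic simplification
      return (a - b) * (a + b)

  for _ in range(10_000):
      a, b = random.uniform(-1000, 1000), random.uniform(-1000, 1000)
      assert abs(original(a, b) - simplified(a, b)) < 1e-6, (a, b)
  print("simplification survives 10,000 random spot checks")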

Bayes' theorem is elementary probability theory. That some college graduates have not been introduced to it is more a reflection of their respective degree programs than the level of the material. (I assume here you are talking about related degrees in C.S., engineering, etc.). I would be surprised to find that "most" have not seen it somewhere in their degree. I suppose you could ask the question "should a degree program in computer science (or whatever) include elementary probability theory?" to which my answer, at least, would be "certainly". It is important to note that simulations, measurement, etc. *are* mathematical skills, albeit of the applied variety.

"The theorem has led to considerable controversy among statisticians, who argue about the validity of ascribing a probability to a past event."


It is my contention that you go too far. Mathematics is not the real world. I program for the real world. Mathematics is a tool. I do not memorize the names of theorems and the solutions to counter-intuitive probabilistic puzzles. I also do not need a pointy headed ivory-tower mathematician to teach me to avoid depending too much upon my own intuition, or upon "common sense". Hell, just being a competent professional programmer, one learns, more than anything else, that things are not often what they seem.

Do you contend that you would hire or not hire a programmer, based on how well he performed on your little math puzzle thing?

Well, to be specific, I would not hire a programmer who does not display elementary mathematical competence and analytic capabilities. You are of course correct that Mathematics is not the real world. It is, however, often the best *model* of the real world to employ. Often, application of even very simple mathematics in the right place can save days or even weeks compared to flailing about cluelessly. While I wouldn't test an applicant on 'Monty Hall', it is not merely a trick question that should be ignored as having 'no real world consequences'. The real world is full of problems like Monty. So yes, I would expect my programmers to be able to correctly analyse this sort of problem. I'm not talking about ivory tower bullshit here, I'm talking about competence to tackle real world problems. I don't care if you can remember the name of a single theorem. I don't care if you need to go to a book every time you run into something a little beyond the pale; but you need to know what book(s) to look in, how to read them, and how to verify what you have found.

If someone were working for me as a professional programmer, I would expect them to be able to perform these sorts of real-world tasks with little difficulty:

... the list goes on.

These are mathematical skills. I don't care where you got them. It doesn't mean anything to me if you have a Ph.D in maths or picked it up along the way while doing other things. But if you don't have these basic skills (and a lot of programmers I have met don't), you aren't going to last. I couldn't afford that kind of drain on (real world) productivity.


<Stuff moved to MathForProgrammers>


I received a physics Ph.D. before switching to software development, and it's surprising how rarely I've found a chance to use any mathematics beyond arithmetic or basic algebra.

That surprises me too. We must do very different development. I rarely use anything but elementary mathematics, but I do regularly use pretty much all the areas mentioned above.

I have a math minor (computer science major), and find that it's rare to use anything more complex than algebra, while writing business systems. The systems themselves rarely go beyond multiplication, division, and a few percentages.

I actually found that studying OOP helped me with my math. I would even go so far as to say that anything beyond a basic understanding of math is overkill for programming.


And here I was despairing because my skills in such things as the LambdaCalculus, FirstOrderLogic, and other such and sundry are a bit weak...


<Stuff moved to MathForProgrammers>


See also: ProgrammerLiteracy, ProgrammerLiteracyHistory

