Need To Have Certainty

There are things you don't program your way out of. Operating systems, distributed deadlock, mutual exclusion, distributed failure recovery, type correctness, ... . For these problems, you need to sit down and think, prove, imagine, use math, etc. There are problems where you NeedToHaveCertainty you are right before you start typing.... also, there are more problems you can ProgramYourWayOut of. --AlistairCockburn (excerpted from TypeInferenceStory)

I've broken out this section to collect some ideas. Why are these, or some other example problems, ones where you need to have certainty before you start typing? And what are the best ways to get that certainty? So, gang, type in some examples of problems and how they are addressed before you start typing.

ExtremeProgrammingChallengeFourteen shows one.

Not that you can't guess where I might be coming from, but I might be coming from thinking you need certainty when you're done, more than before you start. -- RonJeffries


I think Alistair is wrong, and I am going to tell a story to prove it. This story is about the one time in my life when I used heavy-duty math to solve a distributed deadlock problem.

I was consulting for the Cornell Campus Bookstore in 1981 or 1982. They had a minicomputer that did sales processing, inventory, ordering, accounting, and everything else that a campus bookstore might need. The cash registers were fairly intelligent for that era, and captured all the details of sales transactions and downloaded the data to the minicomputer at night. The cash registers were on some IBM-style network, and the minicomputer was weak on networking, so there was an Apple II that sat between the minicomputer and the network and translated between the two. The cash registers were downloaded by having the Apple II type on a terminal-line for hours.

One of my improvements to the system was to overlap download time and processing time. I changed the software so that instead of downloading all the data and then processing it, one process downloaded each register while another waited for a register's data to finish downloading and then processed it. Since it took about 75% as long to process the data from a register as to download it, this made the whole system 40% faster.
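A rough check on that figure, assuming a download time of D per register and ignoring the time to fill and drain the pipeline:

    sequential:  D + 0.75 D = 1.75 D   per register
    overlapped:  max(D, 0.75 D) = D    per register
    saving:      0.75 D / 1.75 D, or roughly 43%

which is in line with the 40% above.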

The problem was that things were now more complicated, and sometimes the process waiting for data to be downloaded would wait forever. I never could figure out what caused the problem, because often it was in the Apple II, and we had no source code for that program. So, I wrote a third process that acted like a watchdog. If the Apple II had gone more than a minute or two without producing data, it would be restarted.
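In modern terms the arrangement looks roughly like the sketch below: one producer pulling records off the link, one consumer processing them, and a watchdog that restarts the link when the producer stalls. This is a minimal Java sketch of the shape, not the original code; all the names and the hardware stubs are hypothetical.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    class RegisterPipeline {
        private final BlockingQueue<String> downloaded = new LinkedBlockingQueue<String>();
        private volatile long lastDataAt = System.currentTimeMillis();

        // Producer: pulls sales records from the (hypothetical) Apple II link.
        void downloader() throws InterruptedException {
            while (true) {
                String record = readFromLink();          // assumed blocking read
                lastDataAt = System.currentTimeMillis(); // record progress for the watchdog
                downloaded.put(record);
            }
        }

        // Consumer: processes whatever the downloader has produced so far.
        void processor() throws InterruptedException {
            while (true) {
                process(downloaded.take());
            }
        }

        // Watchdog: if no data has arrived for a couple of minutes, restart the link.
        void watchdog() throws InterruptedException {
            while (true) {
                TimeUnit.SECONDS.sleep(10);
                if (System.currentTimeMillis() - lastDataAt > 2 * 60 * 1000) {
                    restartLink();                       // assumed restart hook
                    lastDataAt = System.currentTimeMillis();
                }
            }
        }

        // Stubs standing in for the real hardware interface.
        String readFromLink() throws InterruptedException { Thread.sleep(100); return "sale-record"; }
        void process(String record) { /* inventory, accounting, ... */ }
        void restartLink() { /* re-handshake or power-cycle the link */ }
    }

Each of the three methods would run on its own thread (or, as in the original system, its own process).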

This seemed pretty simple, but about the fourth time it failed I realized that I was not going to get it to work just by programming. This was a difficult synchronization problem and would require some hard thinking. But my studies were in the theory of distributed systems, so I was up to it! I spent 4-6 hours pushing the math and came up with a solution which was nearly, but not quite, like my other solutions. I implemented it in a few minutes and that was the last version I wrote.

The moral of this story is that sometimes hard thinking and math is necessary. Your code won't work until you do it. However, you can retrofit the right answer. Usually problems like this are fairly localized. You can't afford to think hard about all problems, and most of them aren't worthy of that much thought. When you find out that a problem is hard, spend the effort to solve it, but it is best to assume that a problem is shallow, because most of them are.

People often say "you can't retrofit security". I don't believe it, because I have seen security retrofitted. Sure, it required rewriting a quarter of the system, but I didn't say that retrofitting security was easy, just that it was possible! The clients had said at the beginning "don't worry about security, we don't need it", but then changed their minds as clients usually do. There were a few things that, in hindsight, we should have done to make security easier to add later. But we didn't know this at the beginning of the project, so it probably wouldn't have made much difference. As it was, focusing on factoring resulted in a system in which we could retrofit security. Ignoring security meant we could learn enough about the problem to come up with a good design.

Thread-safety, performance, and security are three things that people claim you have to worry about early on, but I've seen all three retrofitted successfully. Of course, I've also seen people fail to retrofit them, but that was mostly caused by the fact that their software was hard to understand and change. If you make your software easy to understand, nearly any change is possible.

-- RalphJohnson


And it is some of these problems that get the AspectOrientedProgramming gang going: the ones which require pervasive work in a system. It is an area that I find fascinating, but I wonder whether half the solutions are more work than the direct approach. You can always bite the bullet and do the work. On the other hand, it would be great to catalog these aspects of systems and just see how hard change will be in a well-factored system if one of the aspects changes. -- MichaelFeathers


"I think Alistair is wrong, and I am going to tell a story to prove it."

"The moral of this story is that sometimes hard thinking and math is necessary. Your code won't work until you do it. However, you can retrofit the right answer."

I find it hard to see this as proving wrong what I read as the underlying message of Alistair's comment; all it disproves is a strong reading of the phrase NeedToHaveCertainty (wiki-ized after it was written).

There are problems that you're not going to solve without stopping and thinking about it. Not all problems are that way, by any means. (Just, for my money, the interesting ones). It would be silly to approach all programming problems as if they were such cases, so it's a good thing I don't see anybody standing up for that approach. On the other hand, it might be equally silly to pretend there are no such problems and never think ahead at all, and while it sometimes looks like some people are advocating that, I don't think they really are.

Any program can be retrofitted to do anything it could have been written to do originally; indeed, any program can be retrofitted into any other. Some changes are easier than others, and a well-factored program will be easier to change than one that isn't, most of the time. But it's also true that some thought up front, together with experience, can give some insight into the directions you are likely to need to go or the problems that are likely to arise. In a distributed system you can expect synchronization issues and are arguably wise to spend a little thought on where they might arise and how you might handle them. This simply means you are using informed judgment in deciding whether a solution, however simple, could possibly work. Which I think was Alistair's point in TypeInferenceStory in the first place. -- JimPerry


Actually, Alistair suggests that you have to think before you start typing, and of course it's hard to find the keyboard without thinking. Ralph's point suggests that even if you don't do "enough" thinking up front, you probably aren't doomed. There's a cost effect, of course.

My point, and I do have one, is that the mistake of thinking and planning too much is more common than the mistake of going ahead with too little thinking. XP's rules are there to balance the forces of evil that take us all off track. Germane to this topic are WorstThingsFirst, which causes up-front experimentation and thinking, and DoTheSimplestThingThatCouldPossiblyWork, which keeps us from putting in every bloody fantasy we have. -- RonJeffries


Alistair is perhaps the best one to say what he intended to suggest, as I seem to read what he said differently than you do.

As to which mistake is more common, I see a lot more evidence of too little thinking than of too much thinking (some, instead of thinking, jump straight to coding; others jump into some other sort of process), but I couldn't really speak to the statistics. I don't disagree with WorstThingsFirst or DoTheSimplestThingThatCouldPossiblyWork, I just think it's misleading to contrast those with design or thinking. To genuinely know what the worst thing is, or whether a thing could possibly work, you must think and design. Even to write unit tests to genuinely tell when it works requires thinking and design. -- JimPerry

Of course those are thinking and design. Couldn't agree more. They're XP's favorite forms of thinking and design, and we emphatically do not contrast them with thinking or design. If there's some place where we seem to have done that, please point it out and I'll fix it. Thanks! -- RonJeffries


I like Ralph's way of demonstrating my point by asserting he is going to demonstrate the opposite - it is a great rhetorical device :-). I also think it odd that the person who started this page cut off the last part of the sentence: "There are problems where you NeedToHaveCertainty you are right before you start typing... also, there are more problems you can ProgramYourWayOut of."

I only left out the second part to elicit thought on the first, where we DO need to have certainty up front, if such exist. I put it back. -- RonJeffries

Much of this page seems to me to be demonstrating agreement. Ralph so nicely says, "I implemented it in a few minutes and that was the last version I wrote... sometimes hard thinking and math is necessary. Your code won't work until you do it. However, you can retrofit the right answer."

My experiences were similar. There was a telltale thought, "No way do I want to try to debug my way through that!" at which point I or we sat down with pencil and paper. There was a loop-unrolled line-drawing algorithm in assembler, an invert-and-multiply division algorithm, a multi-threaded mutual exclusion problem with unlocked relational databases, and a failure recovery in an on-line banking system. Where we did our work, we simply got it right the first time, as Ralph describes, and that was the end of it.

Where the disagreement resides, here, is what constitutes a thinking activity. Ron has broadened the scope of thinking activities to include typing in test cases, so he can safely claim that thinking happens all the time (while also claiming that there is usually too much thinking going on during a project). I vote with Jim that more damage is done from too little thinking than too much, and it seems that Ron votes the other way. Whichever way one votes about what constitutes a thinking activity, it seems to me that we have converged on (1) some algorithms really need careful thought (you cannot ProgramYourWayOut of them), (2) in very many situations you can ProgramYourWayOut, and (3) new, subtle solutions can often be plugged right in.

-- AlistairCockburn


Thanks for sharpening my focus on an apparent (but non-) contradiction in what I'm saying, Alistair! See RecordThinkingInTests.


You're both right. It's a floor cleaner and a dessert topping. No wait, that's something else. Um, give me a second here... Ok, here goes.

There is too much thinking and too little thinking going on. I've seen too much monkey-on-a-typewriter stuff where code was randomly added until something seemed to work. That monkey was not practicing XP. XP does not advocate typing without thinking.

On the other hand, people spend a lot of time debating what the computer might do without ever trying it out. People come to me all the time asking "What does language X do in situation Y?" I now tend to answer such questions by turning to my editor, typing in some code and running it. (Actually, they don't come around so much anymore. Can I presume they learned something? ;-)

It ain't a one-dimensional thing. It's not a question of doing too much or too little thinking in general. Instead, it's a series of little questions: Am I being effective and efficient in my thinking with what I'm doing right now?

If I aslkj aslkjaes weuh kdfio zdlkjl sjlk j, then I'm not being effective. (My fingers aren't too bright.) If I'm spending hours speculating about whether the lines and boxes on my whiteboard will work instead of taking 15 minutes to spike it or even implement and test the real thing, I'm not being efficient.

And note that XP does advocate stepping away from the keyboard to work things out from time to time, the usual practice being with CRC cards. I haven't had the opportunity yet to do PairProgramming, but it seems to me that it offers a very effective and efficient way to think, too.

-- KielHodges


All of which gets me to a problem I currently have, to wit, TeachingSimpleVsComplexSolutions ("duh, Alistair, so don't teach complex solutions!" I hear you cry). -- AlistairCockburn


"What does language X do in situation Y?" -

If you want to know what C++ will do with an expression like "a[i++] = i++;", you should not type it in to see. That would only tell you what your local compiler happens to do; the language leaves the behaviour undefined, so other C++ compilers may do something different.

Maybe Smalltalk has fewer such problems. An analogous situation happens with 3rd party libraries. You rely on some behaviour, and it changes in the next release. This is a FearOfTheMachine? issue that lurks in the back of my mind when I'm reading this branch of XP.

If code is to capture design, it needs to express the ways in which the design is allowed to change. UnitTests, assertions and DesignByContract can help, but they have to be accepted by the people who are changing the code. You can't write your own UnitTests for things that 3rd parties are changing. Or rather, you can, but they can't constrain what the 3rd parties will do if they don't know about them.
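For instance, a pinning test along these lines records the behaviour you rely on, so that an upgrade which changes it fails loudly in the test rather than deep in the system. This is a hypothetical JUnit sketch; ThirdPartyFormatter stands in for whatever library you actually depend on, and of course it only warns you; it doesn't constrain the vendor.

    import junit.framework.TestCase;

    // Pins the third-party behaviour we currently rely on. If a new
    // release of the library changes it, this test fails first.
    public class ThirdPartyBehaviourTest extends TestCase {
        public void testFormatterPadsToTwoDecimalPlaces() {
            // We depend on this rounding/padding behaviour elsewhere.
            assertEquals("3.50", ThirdPartyFormatter.money(3.5));
        }
    }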

-- DaveHarris

Actually, I code a lot of Java, some Perl, and a bit of a lot of things. Java is much more precisely specified (or constrained, depending on POV ;-) than C++. But really, I threw that phrase out there as an easily stated example. In hindsight, I blew it!

"What does language X do in situation Y?" definitely sounds like exploring the nooks and crannies of a language -- the precise places that gotchas lurk.

--KielHodges


"I like Ralph's way of demonstrating my point by asserting he is going to demonstrate the opposite - it is a great rhetorical device :-)."

I think Alistair missed the point Ralph was disagreeing with. I don't think anyone has been suggesting that these problems can be defeated without thinking (at least, no one here). Rather, the point of contention seems to be the "...before you start typing" part. Ralph demonstrated that he had started typing, had made an implementation, then realized he needed to sit back and think. Of course, this could also be that we're understanding that fragment differently than Alistair intended.

Ralph's example seems to me to be a NeedToHaveCertainty before you're finished, not before you begin.

--EdGrimm

OK, there may be another interpretation to the story. My reading was, "I tried just typing in the obvious, but the (simple) solution didn't suit. To get a good solution, I had to have certainty before sitting down to type. So I did the math, etc. and got the solution that I needed." In my reading of the story, the fact that he had tried one solution and found it unsuitable does not alter the point of the story. I rechecked the top of this page and found, "There are things you don't program your way out of. Operating systems, distributed deadlock, mutual exclusion, distributed failure recovery, type correctness, ... . For these problems, you need to sit down and think, prove, imagine, use math, etc. There are problems where you NeedToHaveCertainty you are right before you start typing." Unless Ralph or someone shows me a different interpretation of his story, I feel right about saying his story supports my statement. -- AlistairCockburn

It all depends what the point is. Phrases like "before sitting down to type" have many interpretations. It sounded to me like your point was that there were certain kinds of problems that you had better solve before you start the implementation of your system or you are going to be in trouble.

The point of my story was that you can usually solve those kinds of problems as you come to them. "So, don't worry about tomorrow. Tomorrow will take care of itself. Each day has enough trouble of its own."

In my opinion, high-powered theory should be used only for difficult problems. Most problems are not difficult. The best strategy is to assume all problems are easy unless you have some evidence to the contrary. Perhaps your evidence is hearsay (everyone says that you should know a lot about compiler theory before you build an optimizing compiler) or personal experience, or perhaps you find it out in the heat of development. Unless you know otherwise, assume the problem can be solved by programming. If it can't, you'll find out.

-- RalphJohnson

Some pals and I wrote the first commercial time-sharing optimizing FORTRAN compiler. We didn't know it was hard, so we just programmed it. ;-> --RonJeffries


Thanks, now I get two points from your story (aren't stories wonderful?!)... one that a person can often retrofit a sophisticated solution into a system containing a simple solution, and another that there are times when you need to sit down and think, prove, use math, etc. I hadn't realized the ambiguity in the phrases, "before sitting down to type", and "you can't program your way out of." -- AlistairCockburn

