Situation: You have a push-down stack for all your goals. When you hit an obstacle, you push "remove the obstacle" onto the stack. Then, when the obstacle is cleared, you pop the stack, and you are back at your original problem.
Problem: For some reason, the problems you push onto the stack keep getting bigger and bigger, each one dwarfing the one before it and all of them dwarfing the original problem.
Forces that seem to make this problem necessary or inevitable:
I usually keep a push-down stack of all my goals. When I hit an obstacle, I push "remove the obstacle" onto the stack, then when that is solved, I pop the stack and I'm back at the original problem. I think lots of people do this, although few identify it explicitly in such terms.
Sometimes, though, it seems like the obstacles are always bigger than the original problem.
For example: I'll get it in my head that I should write a short story. Well, in order to do that, I have to have a general method of writing stories. And in order to get that, I have to have some kind of literary theory.
See? Developing a literary theory is a much higher-order thing than just writing a single short story. Although it is true that, if I have a literary theory worked out, and a general method, then writing the story ought to be easy.
I've been trying to write stories since 1988. In 1990 or so (I was in high school) I decided I needed a literary theory. Now I have one! It's good, it will work! But... I hope you understand that I am exhausted, that I have tried too many methods and theories, and they have all failed, so I am of two minds -- this time it will work, but all precedent suggests the opposite. I have little energy for writing fiction lately, even though AFAIK my problems are basically solved.
A suggestion - DoTheSimplestThingThatCouldPossiblyWork, and write the story. As for a literary theory, YouArentGoingToNeedIt. Seriously, write first, edit, then worry about theory. The best writers have managed without theory more often than not. --PeteHardie
I agree with Pete. Before I was a software engineer, I wrote and orchestrated music for television. Because of the consistently outrageous deadlines in that business, you simply don't have the luxury of making overly grand plans before you actually begin writing something. You just can't always wait for the perfect idea to come to you fully developed so that all you have to do is notate it. You must develop iteratively so you have something to show the client when he calls two days after you've begun! That lesson (and many others) about process I brought to my software development, and I have never regretted taking that stance. --JohnPerkins
Another example: I'll get it in my head that I should write a Doom editor. I want my Doom editor to be programmable, so I should throw in a Scheme implementation. A Scheme implementation has to have a lexical analyzer so that it can read in Scheme data. I could just write switch statements inside cases inside switch statements, but that will be too hard to edit, and I don't want to use "lex" or "flex" for various reasons, so maybe I should write a lexical analyzer generator. But implementing all those algorithms is too much work; maybe a lexical analyzer editor would be a better idea. I should write it in Scheme (using an existing implementation) so that it's easy to debug. But it will need a file format which is compatible with C++, in which I want to write my Scheme implementation. I can't very well use Scheme data as my file format, because that creates a chicken-and-egg problem, so I'll have to use an easy-to-read binary file format. But Scheme doesn't directly support binary I/O. So, I'll have to write a binary file I/O library for Scheme so that the lexical analyzer editor I intend to write in Scheme can save a file that C++ can load, so that I can make a Scheme expression analyzer and then C++ can lexically analyze Scheme expressions so that I can write a Scheme implementation for my Doom editor.
It would be funny if it weren't so tragic. Talk about MyMindKeepsWandering.
What's scarier is that I seem to be working through these things and actually making progress. I should be done around 2009, when Volume 5 of Knuth comes out...
Help!!!
-- EdwardKiser (the deranged. It's 3 AM as I write this, so, grain of salt and all that.)
Edward, stop working alone. Get a partner, for goodness sake.
Oh come on; YouArentGoingToNeedIt!
Oh, I am doing something. I'm not stuck in AnalysisParalysis; the problem is that my code is so far removed from my original goals. I'm working on that binary file stuff for Scheme. It already works! I can also load these images from disk files, and save them to disk files. Not bad progress for two nights, but there's so much still to do! That's what's bothering me.
Wow. I thought *I* was bad.
Did you consider writing the lexer and parser in Scheme, and writing a Scheme program that dumps the parse tree as C++ code representing it as pre-initialised data (i.e., already loaded into your interpreter), so your Scheme implementation could run the lexer and parser itself?
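Just to make the idea concrete, here is a very rough sketch of what such dumped, pre-initialised data might look like on the C++ side. The names (NodeType, Node, kPreParsed) and the layout are invented for illustration; the point is only that the tree arrives already built, so nothing has to be lexed or parsed at startup:

 // Hypothetical output of a Scheme script that walks its parse tree
 // and prints it as a C++ static table. Indices stand in for pointers.
 enum NodeType { SYMBOL, NUMBER, PAIR };

 struct Node {
     NodeType    type;
     const char* symbol;   // meaningful when type == SYMBOL
     double      number;   // meaningful when type == NUMBER
     int         car, cdr; // table indices when type == PAIR; -1 means nil
 };

 // The expression (+ 1 2), already "loaded" into the interpreter.
 static const Node kPreParsed[] = {
     { PAIR,   0,   0.0,  1,  4 },  // 0: (+ . (1 . (2 . nil)))
     { SYMBOL, "+", 0.0, -1, -1 },  // 1: +
     { NUMBER, 0,   1.0, -1, -1 },  // 2: 1
     { NUMBER, 0,   2.0, -1, -1 },  // 3: 2
     { PAIR,   0,   0.0,  2,  5 },  // 4: (1 . (2 . nil))
     { PAIR,   0,   0.0,  3, -1 },  // 5: (2 . nil)
 };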
Oh, yes you are suffering from AnalysisParalysis. It is because you haven't actually stated your true goals. Your goals aren't "write a short story" and "write a Doom editor". They are "write the perfect short story" and "write the ultimate Doom editor". Neither of those is actually an attainable goal.
If you simply wrote a story or created a Doom editor, you would accomplish your stated goals. But you would fail to achieve what appear to be your real goals. Therefore, you're coming up with all this other fluff work you can use to occupy yourself.
If you want to write a story, write a story. If you want to write a Doom editor, write a Doom editor. Creating a literary theory or a Scheme implementation is nonsense.
I'm not EdwardKiser, so I can't speak for him, but his experience does sound familiar, and I think there's another possible goal you're missing: "start working on a Doom editor, see what comes of it, and maybe have some fun in the process". If he were being paid to produce a working Doom editor for a client, he would of course have to remain tightly focused on that goal. As it is, though, I see no reason for him not to shift his attention to other interesting goals that come up in the process of building an editor. Besides which, if all you want is a working Doom editor, it's easy enough to download one.
You make a good point, one that I would believe except that 1) he repeats the same process with his writing and 2) he seems bothered by it. If he were having fun, doing whatever interested him, why would he have created this "help me" kind of page in the first place?
And your last sentence supports my point, I think. If you only want a Doom editor, download one. If it absolutely _must_ be programmable in Scheme, use someone else's Scheme implementation. There's only a jillion of them on the net. Even if for some unfathomable reason he just _had_ to write his own Scheme, why in heaven's name does he have to re-write lex to do it? Don't like lex? There's dozens of other parser generators out there.
He's pursuing his stated goals in IMHO non-sensical, self-defeating ways. It sounds to me like he has a raging case of NotInventedHere-itis.
Anyway, if those are his real goals, he should drop all this extraneous claptrap and just write a story or a Doom editor. If he's really just having fun, he should stop whining about the long, hard road he's chosen for himself.
I don't know how he can tolerate whatever C++ compiler he's using; maybe that should be rewritten too. And that operating system -- whatever it is, you know it's a cobbled together set of inappropriate features; let's rewrite that too. And who could live with the inappropriate compromises made in the computer's hardware design or CPU; let's start over there too! ...and I never really liked CMOS; I'm sure I could come up with something better. etc... ;->
And don't forget that any half-decent Doom editor needs to be fully distributed and scalable. But all this COM, CORBA, RMI stuff - who could live with such hooey? I just know he's going to need a transaction monitor, and a database system too. But should it be an object database or a relational one? Hmm, he might have to solve that object-to-relational mapping problem once and for all.
Also, the editor really needs to be able to run on top of any architecture. And I don't mean that over-hyped WriteOnceAndPray Java stuff, either. Running on Linux and Windows is fine, but there's probably a large market of people out there who want to collaboratively edit Doom levels on their WAP phones. Don't want to overlook that capability.
You have a point about the C++ compiler but, really, I don't see why he's willing to settle for C++ at all. Everybody knows it is full of inconsistencies and weirdness. Might as well go ahead and design the ultimate programming language while he's at it.
He's going to re-invent all of computer science just to write a Doom editor.
Hey, just use MicrosoftDotNet!
[Ummm, this is developing into a kind of self-perpetuating one-sided flame. First you put words in his mouth, then you flame him as if he'd actually said them. How long will this go on?]
I don't intend it as a flame, but rather a bit of satire. He says with a straight face that he intends to write a Doom editor, but that requires him to write a Scheme implementation (why?), which requires him to write a lex replacement (why?), which requires him to create some special binary file format (why?), which requires him to write a binary i/o library for Scheme (why?). Two wrongs don't make a right, but four "why's" do make for a case of AnalysisParalysis.
There are two ways to look at it. It's easy to move from "if you can download a good editor, why write one?" to "if you're writing a Doom editor, doesn't it have to be better than what you can download?" So suddenly you need lots of special, never-before-seen features to justify writing it in the first place.
There's really only one cure for analysis paralysis of the kind that is holding up our would-be Doom-editor programmer. When Linus Torvalds created Linux, he didn't create a complete operating system. Actually his first effort was a sort of minimal functional kernel, and he played a game of StoneSoup, and persuaded other people how nice it would be if there was a real shell for his kernel, and maybe a nice filesystem, and a few other tools, and pretty soon his plan for WorldDomination was in full swing. -- StevenNewton
Maybe the PushDownGoalStack is itself a mistake. The ExtremeProgramming solution, if I understand it correctly, would be based on a PriorityGoalQueue instead:
 q = new priority queue, sorted by "importance-units per complexity-unit"
     so the greatest is removed first;
 insert initial goal into q;
 while (q is not empty) {
     remove goal r from q;
     if r is not worth doing, break out of the while loop;
     if r is simple {
         do r;
     } else {  // r is complex
         write a simple, bare-bones, end-to-end version of the solution
             to goal r which omits many features;
         insert all omitted features back into q;
     }
 }
 stop;

How's this?
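And, to make it concrete, here is roughly what that loop might look like in C++ using std::priority_queue. The Goal struct, the scores, and the thresholds are all invented for illustration; a real version would need some honest way of estimating importance and complexity:

 #include <iostream>
 #include <queue>
 #include <string>
 #include <vector>

 struct Goal {
     std::string description;
     double importance;   // how much the goal is worth to you
     double complexity;   // how much work it looks like
     double ratio() const { return importance / complexity; }
 };

 // Order the queue so the best importance-per-complexity comes out first.
 struct ByRatio {
     bool operator()(const Goal& a, const Goal& b) const {
         return a.ratio() < b.ratio();
     }
 };

 int main() {
     std::priority_queue<Goal, std::vector<Goal>, ByRatio> q;
     q.push({"write a Doom editor", 10.0, 8.0});

     while (!q.empty()) {
         Goal r = q.top();
         q.pop();
         if (r.ratio() < 1.0) break;      // not worth doing any more
         if (r.complexity <= 2.0) {
             std::cout << "do: " << r.description << "\n";
         } else {
             // Do a bare-bones, end-to-end version of r now...
             std::cout << "do a bare-bones " << r.description << "\n";
             // ...and put the omitted features back into the queue as smaller goals.
             q.push({"make the editor programmable", 3.0, 2.0});
         }
     }
 }

The interesting part is the break: the stack never gets to ask whether a sub-goal is still worth doing, while the queue asks every time around the loop.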
Edward, you need some bridging objects. A GoalStack should be treated as a goal pool, to avoid these issues or at least to identify them quickly.
An alternative to this is SoftlySoftlyCatcheeMonkey. Nibble away at the problem from the edges. From whichever part of the edge is easiest right now, and has nothing to push onto the goal stack. As you go, you will find ways to nibble at most of the other edges without introducing new goals (due to discovering new approaches, applying knowledge "accidentally" gained over time, etc). Finally, you may be left with only one or two small things to push onto the goal stack ... and much to your surprise the job is soon done!
I wrote a context switcher. It operated on a list (a thread of contexts). One of the attributes of the list elements was the amount of time to spend in that thread. The list was extensible, so you could wind up with impractical amounts of (total) time. By adding a priority attribute and sorting the list accordingly I was able to implement a crude time-slicing method.
Eventually I had to add some math to predict saturation (accounting for context switching overheads) and this led to a modal task that allowed the manual reassignment of times and priorities for -- and the elimination of -- tasks.
This became necessary when it became evident that some tasks would never accomplish any useful work and that, to allow the system to get anything done, relative goal importances for the tasks had to be re-evaluated.
This was for a 286 system running a data collection server and database maintenance server in addition to ad-hoc queries and some reports, all running on DOS 3.x, so the stick-shift approach was what we needed.
We could not have met all the processing goals using the stack approach, because some of the processing was never "done" so waiting for "all the data" before updating the database or allowing queries would have failed.
Likewise, I believe that accomplishing business, project, or life goals depends on scheduling time slices for the things that must be done. The stack model has limited utility in the long term or in a broad scope. It's useful in confined situations, but in larger contexts you can achieve "stack overflow" failure.
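For what it's worth, in modern terms that priority-and-slice list might be sketched something like this; Task, its fields, and the saturation check are invented stand-ins for whatever the real DOS code looked like:

 #include <algorithm>
 #include <string>
 #include <vector>

 // Invented stand-in for one entry in the context switcher's list.
 struct Task {
     std::string name;
     int slice_ms;   // time to spend in this context on each pass
     int priority;   // higher-priority tasks run earlier in the pass
 };

 // Would the next pass exceed its time budget once per-switch overhead
 // is counted? If so, some task's slice (or the task itself) has to go.
 bool saturated(const std::vector<Task>& tasks, int budget_ms, int overhead_ms) {
     int total = 0;
     for (const Task& t : tasks) total += t.slice_ms + overhead_ms;
     return total > budget_ms;
 }

 // One pass of crude time-slicing: highest priority first, each task
 // getting its allotted slice rather than running to completion.
 void run_one_pass(std::vector<Task>& tasks) {
     std::sort(tasks.begin(), tasks.end(),
               [](const Task& a, const Task& b) { return a.priority > b.priority; });
     for (const Task& t : tasks) {
         (void)t;   // run_for(t.name, t.slice_ms) would go here in the real thing
     }
 }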
When I read the title, I assumed it was an AntiPattern referring to poor management. (As above, it's actually more like a priority queue.) When being managed by non-technical clients in the past, I've had them give orders in the form of frequently changing "priority lists" with no technical linearity, based on their non-technical examination of the system. Among other things, this meant I never had time to finish or thoroughly test any part of the system until my boss stepped in and said I needed time to clean things up. --JesseMillikan