Zero One Infinity Rule

"Allow none of foo, one of foo, or any number of foo. ... The logic behind this rule is that there are often situations where it makes clear sense to allow one of something instead of none. However, if one decides to go further and allow N (for N > 1), then why not N+1? And if N+1, then why not N+2, and so on? Once above 1, there's no excuse not to allow any N; hence, infinity." -- From the JargonFile, at <http://www.catb.org/jargon/html/Z/Zero-One-Infinity-Rule.html>

This rule is due to early Dutch computer pioneer Professor WillemLouisVanDerPoel (designed early computers Testudo 1949, Zebra 1952); I don't know why the jargon file doesn't reflect this. -- DougMerritt

Is Professor WillemLouisVanDerPoel's work older than Wang's paradox? -- EpistrophyAdebiyi?

"The only reasonable numbers are zero, one and infinity." -- BruceMacLennan, Principles of Programming Languages (1987) (ISBN 0030617111 or ISBN 0195105834 )

See also <http://www.cs.utk.edu/~mclennan/Classes/365/principles.html>.


Zero-one-infinity is a design rule-of-thumb that is often ignored - usually to the detriment of the project.

Simply stated: You will either need zero of a thing, one of a thing, or an arbitrary number of the thing.

Programmers and architects ignore this at their own peril. Arbitrary fixed limits are a CodeSmell.

{{Saying you need zero, one, or an arbitrary number of a thing is misleading to me. The statement misleads people into thinking of "one" as the number of items within a collection, so they conclude the rule doesn't apply: for example, "do you think that because a bicycle has two wheels, I must be able to handle a vehicle with any number of wheels?"}}

{{Instead, one should look at the meaning of Zero, One and Infinity in this context as the logic for handling a condition: you either don't have to deal with it at all, you have to deal with exactly one case, or you must be able to handle any number of cases. (I really meant to explain more than this, but English is not my native language.)}}


Should two be included?

People keep forgetting poor old 2. Usually to do with 2 different things that need to work together (2 peers in peer-to-peer, 2 modems etc.)

In this context, peer-to-peer is one (per connection). A system can communicate with none, one, or many.

There are well-defined cardinalities other than zero, one, and arbitrary. There is a reason why all Indo-European languages have a concept of pairs or couples.

I would counter that these relationships are *not* arbitrary, and therefore need not invoke the rule.

Excellent rule of thumb. A slight variation, "one, two, many", addresses a slightly different kind of CodeSmell, I think. Arbitrary limits are a problem on the high end. But I think the "two" special case often stays at "two", while any other counting number usually means "some number": if not now, then sometime later we discover another case, another need, another customer, or another way to slice the problem.

The "two" special case shows up a lot in human language and in coding: "This" and "Not this" or "This" and "The other", or "One way" and "The other way.". The logic to deal with binary choices is pretty simple. And a lot of things stay at binary choices. We have the handler for an authorized user, and one for an unauthorized user. "Not this" and "This" is an instance of "zero or one occurrences of this"; "authorised user" and "unauthorised user" soon becomes "unauthorised user", "authorised user", "superuser", "administrator"...

Thinking in terms of dealing with a collection is different from thinking in terms of dealing with two. Even the simple truth logic changes. With exactly two things, "not this" means "the other." With more than two things "not this" doesn't tell you for sure which other (assuming a closed system). Knowing "not this" you actually have less actionable knowledge. When you have more than two of something, often you want to handle it differently from the two case - the implementation changes.

There's a whole lot of code in the world that started as binary choices, and shouldn't have. In that sense "two" is "arbitrary" and "ZeroOneInfinity" holds. On the other hand, a lot of stuff that is "two" will always stay "two" and doesn't deserve the work (or the overhead) to deal with "arbitrary."

I think it's worth calling a lot of attention to the "two" to "we're really not sure how many" switch, so you don't get stacks of nested "if" statements, for example. Sort of a flag for: "You probably want to handle this differently." -- JamesBullock

I think the cases you put forward for "two" are best solved with binary 0/1 logic instead of really using 1/2. But that's just my 2c. --AnonymousDonor

"one category / two categories" => "no distinction / one distinction"

See also TwoIsAnImpossibleNumber.

I think the discussion in this section provides a few good indications that "Zero One Infinity" should instead be "Zero One All". In many cases, 2 *is* all. In many cases, though, people have a flawed notion of what All may be. Consider opening a file: naively, there are 2 cases - the open-file operation returns null, or it returns an open file handle. In reality, there is a perfectly possible third alternative - that the function call never returns at all. Maybe the program received a signal during the call, or perhaps a memory allocation error occurred, or perhaps the file is on a network volume and the network's down, and the operation is hung indefinitely. So perhaps the true "All" is covered by the 2 cases of the if statement, plus a separate timeout mechanism, plus 3 or 4 different exception handlers, giving precisely 7 alternatives which truly do cover all possible outcomes.

Hence the Unix idiom of returning 0 for successful completion, and a nonzero integer for an error state. Without ZeroOneInfinity, the convention might well have been "0 = failure, 1 = success". Fortunately the "return value" was interpreted as an "error code": 0 came to mean "no error", leaving the other integers available for distinguishing the "infinite" different ways a given program can fail.
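A minimal sketch of the idiom in C-flavoured C++ (the file name and the specific failure code are illustrative, not any program's actual convention):

  #include <cstdio>
  int main() {
      std::FILE *f = std::fopen("input.txt", "r"); // hypothetical input file
      if (f == nullptr) {
          std::perror("input.txt");
          return 2;  // one of arbitrarily many distinct failure codes
      }
      // ... do the work ...
      std::fclose(f);
      return 0;      // exactly one way to say "no error"
  }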


There are numerous times when programming calls for a specific number, and none of 0, 1, or infinity will work. That means someone has to decide what that number is. At one end of the spectrum is the programmer, via a symbolic constant. At the other end of the spectrum, the end-user specifies it in a friendly UserInterface. In between, you have install-time settings, ConfigurationFile? settings, etc.

The point isn't that you should never use N != {0,1,oo}. Of course arbitrary limits exist in practical programs. The point is that every time you do use some other integer, you should have a good reason -- you should never accept such a limit without thinking about it.

Of course, you can't support an infinite number of everything, etc. Every time I want to count something, I have to make tradeoffs. If I allow the counter to sit in some machine-specified register size, I can gain speed by losing flexibility. Perhaps these are the right decisions, perhaps not. The question is: are you aware you just made a tradeoff? Did you think about the design implications of that tradeoff down the road? If you are making an arbitrary fixed size decision, have you isolated it? Can you globally change the fixed size easily? At all? (Example: a collection of people indexed by an integer (members of a social networking site, say). Is that a 32-bit integer key? No, this collection would never contain the entire population of the planet...)

So if you ever find yourself saying 'well, 15 ought to be enough', smack yourself upside the head, then sit down and do a proper analysis of the problem. Anytime your system imposes an arbitrary limit, it is a design flaw. Non-arbitrary limits are part of the real world. :) That is why the above says '...ignore at their peril', and not '...don't ever do it...'

How good is a good reason? Let's analyze this case: I make a module that contains a dynamically self-allocating vector class. I don't want it to reallocate its buffer every time a value is added, but to reallocate itself, say, once per 30 allocations (that's the problem: why exactly 30?). The number 30 shouldn't be in a ConfigurationFile?, because it's a low-level implementation detail. It should be obvious that this is an arbitrary constant, but you'll still have to use many such things in your programs. Smacking your head won't help. Therefore, the rule of ZeroOneInfinity is very nice theoretically, but practically it has only a little use, and only in obvious cases, where the number should disappear or be moved to a configuration file. -- AmirLivne

In this example the choice of '30' doesn't affect the interface to the vector class (although it does have other problems; see below). The ZeroOneInfinityRule applies to both implementations and interfaces, but it is arguably more important for interfaces. -- DavidSarahHopwood

If you submitted code to me with some test on alloc_count < 30 and no comment, it would immediately be rejected. If you are going to use the number 30, you have to justify *why*. In your code, there must be justification, including empirical measurement, analysis, etc., for all of the MagicNumbers. It is as simple as that. If you aren't using one of {0,1,inf}, you must justify the number you are using. The justification could be memory size, runtime efficiency, file system constraints, whatever. But it should never, never, be "seems to work for me". Also, as the above comment suggests, hard-coded constants should be replaced with macros, or better yet, const values, with suggestive names.

The magic number 30 here is an odor given off by an algorithmic problem. A dynamically self-reallocating vector class which reallocates after every 30 allocations will perform O(n^2) element-copy operations as the vector grows to size n. Hence the usual DoubleAfterFull algorithm which results in O(n) element-copies. You can write doubling without any MagicNumbers, or use 2. Is 2 better than 30 because it is smaller?
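A sketch of DoubleAfterFull in C++ (simplified to a push-only vector; the names are mine). The only constant left is the growth factor, and arguing 2 versus 1.5 doesn't reintroduce a 30:

  #include <cstddef>
  #include <cstring>
  // n pushes cost O(n) element-copies in total with geometric growth,
  // versus O(n^2) for "grow by a fixed chunk of 30".
  class IntVector {
      int *data = nullptr;
      std::size_t count = 0, capacity = 0;
  public:
      void push(int value) {
          if (count == capacity) {
              std::size_t newCap = (capacity == 0) ? 1 : capacity * 2;
              int *bigger = new int[newCap];
              if (count != 0) std::memcpy(bigger, data, count * sizeof(int));
              delete[] data;
              data = bigger;
              capacity = newCap;
          }
          data[count++] = value;
      }
      ~IntVector() { delete[] data; }
  };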

I think we should make difference between numbers that put real constraints on how the system works and numbers that affect practical measures such as performance. For constraints, {0,1,oo} is a good rule. However, many performance-related numbers are defined by "not too low but not too high". For these, it is IMO acceptable to just put some number in (and give it an appropriate name). The optimal value of many of these depends on the environment and often it is overkill to write code to dynamically adjust these numbers based on the environment. -- PanuKalliokoski

Similar argument from an AnonymousDonor: For tuned implementations, N may be applicable. Going from 1 to 2 is the most profitable unrolling. Most networks don't deal well with infinite-sized packets, and 1-sized is almost as bad. (Consider wiki-ing at a byte-per-packet. Even that has an N: bit-per-packet would avoid the grouping by 8.)

Many people solve this paradox by giving symbolic names to MagicNumbers, i.e. by using NamedConstants. So, instead of "if (allocation_count < 30)", consider "if (allocation_count < MAX_ALLOCATIONS_BEFORE_REALLOCATION)". No big deal.
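Combined with the justification requirement above, that might look like this (the profiling note is, of course, hypothetical):

  // Hypothetical justification: profiling the nightly batch showed chunks
  // larger than 30 gave no measurable speedup; smaller ones thrashed the allocator.
  const int MAX_ALLOCATIONS_BEFORE_REALLOCATION = 30;
  // ... later, in the growth path:
  // if (allocation_count < MAX_ALLOCATIONS_BEFORE_REALLOCATION) { ... }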

I understand that symbolic names are useful, and I use them myself. But it is still a "magic number". No matter how you disguise it, it still has a hard-coded value, whose only justification is that it WorksForMe, maybe even without checking if this value is better than other possible values. Therefore it is bad according to the ZeroOneInfinityRule, but you certainly have some arbitrary hard-coded values in many programs, and the ZeroOneInfinityRule cannot (and doesn't) work. -- AmirLivne

See above for why you are wrong about it not working. Zero one infinity works well for us, because there are no MagicNumbers in our code. If there is a hard constant in our code, it is justified. Period. The rule that makes this work is the zero, one, infinity rule. Why are you having trouble understanding this application of the rule?


I wonder how much NumberPollution we actually endure in our programs and what percentage of it is unjustifiable? -- WardCunningham

Allow 12% of unjustified numbers or MAX_UNJUSTIFIED_NUMBERS ;-)


It is an interesting question. In my current work, I do a lot of numerics. So there are all sorts of boundary conditions etc. which just happen to be the integral of something-or-other, epsilons, deltas etc. to deal with rounding error, and other things of this sort. In a previous incarnation as a systems programmer there were many system-related constants (inode sizes, buffer sizes, address spaces etc.).

Arbitrary limitations in OperatingSystems (file sizes, number of FileDescriptor?s, threads, etc.), are definitely violations of the ZeroOneInfinityRule; the fact that they occur in an OS doesn't excuse them. -- DavidSarahHopwood

In this work I have seen various codebases, ranging from the very maintainable (magic numbers wrapped in macros, and discussed where they are defined) to the oh-my-good-lord-I-have-to-fix-that. Thinking back on this, I see that my experience spans two areas that are quite different from the average programming task (insomuch as that makes sense), so I am not sure how this relates to other work.

This doesn't change the rules, it's just that this page is talking about magic numbers in regard to resources in particular. One cannot change the immutable fact that "2" is the only even PrimeNumber, so certain kinds of code might conceivably use the named constant EVEN_PRIME quite a lot. One can, however, always avoid allocating exactly/no more than 2 of a resource. That's what ZeroOneInfinity is all about.


Language support

How about a different twist on this discussion? We don't like MagicNumbers, because as the code evolves, the code breaks when limits are exceeded. But this phenomenon is not limited to numeric literals. What about when we code a local variable:

  int i;

Isn't this a hard-coded numeric literal in disguise? We are saying that 2**31-1 (for a typical 32-bit signed int) is as large as i will ever need to be. That's a magic number. But we routinely do this in most cases because we know that that magic number is sufficiently large that the risk of exceeding it is not worth the cost of using an infinite-precision datatype. We don't run any tests to support this in many cases, nor do we "justify" it with a comment.

What you are talking about here is a design limitation of the C and C++ languages, which is not the same thing at all. Properly written C code should verifiably never overflow your i. In many cases this is obvious, and no comment is needed. However (and poorly written C code often has this problem), if the boundary cases are not obvious, yes you should describe why a signed int is a valid assumption, and yes you should test this.
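One way to make "verifiably never overflow" concrete is to check the boundary explicitly; a sketch in C++ (the function name is invented):

  #include <limits>
  #include <stdexcept>
  // If the no-overflow argument isn't obvious, make it checkable.
  int checked_increment(int i) {
      if (i == std::numeric_limits<int>::max())
          throw std::overflow_error("counter at INT_MAX; revisit the int assumption");
      return i + 1;
  }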

{{Why do we still suffer such languages to exist? The above are all examples of the type 'integer'. As in, mathematical integer. You know, 0,1,2... infinity. Many languages have native support for such a type (though no database that I know of). Why aren't we moving to make life easier on ourselves? -- AnonymousDonor}}

Because generalized arithmetic is far too slow for many things.

Is this really a good argument for typical uses of limited-precision integers, or are most such uses instances of PrematureOptimization?

At some point you have to address the actual constraints of the machine you are using, or you have to accept the performance hit from avoiding this. Lisp does this in a sensible manner; many languages don't. Even in this case, the programmer must be aware of what is going on, unless it is acceptable in that particular application to have performance numbers fall off a cliff sometimes :)

Why do we still suffer languages that have no usable support for ArbitraryPrecisionIntegers? to exist, though? I can see the rationale for supporting both limited-precision and arbitrary-precision types, but there are many languages that don't provide adequate support for arbitrary precision at all. Library-only support such as BigInteger in Java doesn't count: no BigInteger literals, can't use BigIntegers to index an array, no OperatorOverloading (still in Java 1.5!), etc. etc. -- DavidSarahHopwood

PythonLanguage and RubyLanguage both have automatic BigInts.


A nice idea, but outside of the SmalltalkLanguage, one is often forced to put some limit on things.

Dynamic allocation is certainly used outside of the SmalltalkLanguage. COBOL programmers working on mainframes have to work with static allocations, and estimate how much space is meaningful in different constructs (since dynamic allocation is black magic in that world), but they'll never allow 32k lines in a GL transaction, since that means they'll allocate enough memory for 32k lines for every GL transaction, which is unreasonable. Thus they try to get closer to a realistic maximum, and in some cases real life exceeds their wildest dreams, and things break. I can't think of any modern programming language that lacks convenient DynamicAllocation? and the ability to work with meaningful AbstractDataTypes. C++ std::vector, Python list, etc. will make it a moot point how many lines there might be in a GL transaction. For dates and time spans, I'd suggest using types or classes written for the specific needs of such entities, rather than worrying about byte counts. A UserInterface designer will obviously lay out his windows so that they are optimal for "reasonable" data, but neither user interface nor program logic need break when things go out of ordinary ranges. -- MagnusLyckaa
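A sketch of that point, with hypothetical GlLine/GlTransaction types:

  #include <string>
  #include <vector>
  struct GlLine { std::string account; long long cents; };
  // No "32k lines max" anywhere: the container grows as needed,
  // so the maximum line count is simply a moot point.
  struct GlTransaction {
      std::vector<GlLine> lines;
      void addLine(const GlLine &line) { lines.push_back(line); }
  };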

See also FixedQuantityOverflowBug.


Ultimately, it is a software engineering issue. Tradeoffs are made which take into account the cost of implementing with various limit values (including infinity) as well as likelihood of future changes making your guess wrong. Also incorporated is the cost of testing alternatives.

For instance, if the cost of validating a magic number choice is very high, but the cost of choosing a wildly large limit value is relatively low, then do so and move on. -- JohnVriezen

This is implicit in the original discussion. Nobody claimed that your imposed limits need to be optimal. They just need to be correct. If you can easily prove that a value will never be higher than N, fine. Use N. You may have an intuition that it is actually N/2, but you are not sure how to show it. If it is taking too much time to prove that out, forget it. Just use N. If that use becomes an issue later, revisit it. It is all very simple really. The only problem is that so many times you will see code where the value isn't N/2, it isn't N, it is some randomly picked M that was pulled out of the air because the programmer (who hopefully doesn't work with you anymore) thought it looked good.

In many cases, the cost of infinite is too high, the cost of a suitably large value is cheap enough, and we needn't give it another thought.

Many people seem to have trouble with the apparently clear discussion beginning this page. Obviously nobody is claiming you ought to support 'infinite' amounts of everything, nor does that even make sense. I agree that using a 'suitably large value' is often appropriate, but wonder about the "needn't give it another thought" comment. The point is that you should never, *never*, put an arbitrary limit in your system. The key here is *arbitrary*, not *limit*. In your case above, using integer values in C is often well justified for two reasons: efficiency and the fact that it is much more difficult to work with arbitrary integers in C. However, it is incompetent for a programmer to not be aware of cases where an int counter may overflow, and handle this appropriately. Similarly, 'suitably large' means this value makes sense because of such-and-such (analysis, testing, system limits, whatever). It does *not* mean "I guess this is big enough, and it seems to work on the simple cases I have run". This isn't rocket science; it is a basic competence issue for programmers.

{{Another case where a hidden magic number comes into play (albeit a little more bizarre) is a programmer's boundary for how large a method should be before it ought to be refactored into two or more methods. This is an arbitrary value (fuzzy as well), but just like more typical magic values, its choice will impact the likelihood of errors surfacing in the code base. Magic numbers lead to undiscovered bugs; excessively long methods lead to undiscovered bugs. Both are based on arbitrary limits a programmer imposed - in the first case usually too small, in the second case too large.}}


I find that if you graph the quantity usages in a histogram, it often looks something like:

  1st: ***********************************************************************
  2nd: ***************************
  3rd: ***
  4th: *
  5th:

In other words, having just 2 "slots" may be sufficient for most cases. If hard-wiring just two slots covers most cases, and if there are other ways to deal with the exceptions (such as a large note section), then why bother creating all the CrudScreens and related UI clutter associated with open-ended quantities?

This resembles Benford's Law: http://en.wikipedia.org/wiki/Benford%27s_law


I worked on some medical software that dealt with breast cancer. It had a hardwired limit that no breast could have more than four lesions. This was for a screening/monitoring programme, and if more than four were detected then things were clearly systemically wrong and well out of the software's scope.

If it keeps the software cheaper, I can see a justification, as long as it can detect and clearly warn the user when the limit is reached, rather than miscount or go berserk. The message can suggest that the practitioner send the patient to a specialist or order a custom exam. In other words, "fail right".


  Night then day then night
  ZeroOneInfinity
  The stages of life
....

 if (person.daysAlive > threshold) {person.purpose=WORM_FOOD;}


A cheat: To prevent people from criticizing your magic numbers, make them deep magic numbers instead. Using the nearest power of 2 (1,2,4,8,16,32,64,128...) as your magical number makes people think there is some deeper magic going on, and they'll be less likely to criticize. *innocent look*

As an added bonus, they'll probably spend hours trying to figure out what that deeper magic is. :-)

That is the opposite of a trick that works. The most suspicious magic numbers of all are the round ones, in any base, but especially base 2 and base 10. PrimeNumbers, on the other hand, are essential for many algorithms to work correctly (e.g. many forms of hashing, with a prime table size). So use numbers like 131 and 6983 :-)

[Different poster than above:] I don't know, it seems like allocating things by prime numbers would make it obvious that there's either extremely deep magic going on, or the number really is arbitrary. One might litter the code with Fibonacci or Leonardo numbers, since any whole number can be expressed as a sum of non-repeating Leonardo or Fibonacci numbers, then add some comments to reinforce the illusion that you're doing magical partitioning/allocation for performance reasons. The Leonardo numbers would be especially useful for this purpose, since they're used in SmoothSort because of this very property. Personally, though, I would use powers of two, but rather than let the coders try to work out the deeper meaning on their own, I'd pepper the code with comments like

     long long int identifiers[256]; // Optimized for cache efficiency--any more starts thrashing.

or

     const size_t MAX_KST = 63; // may need to be set to 31 on 32-bit OSs

though the best way to freak people out is probably this...
     template <typename T> inline
     void k_unsafe_r(T arr[], size_t sz);
     // DO NOT USE for sz > 64 as this will cause buffer overflow
     // and would not be performant for large numbers even if a dynamically-allocated
     // array was practical here.
     // Checks for sz > 64 are disabled for non-debug builds.
     // For reasoning and justification see [server]/[Official-sounding, nonexistent spec].txt

FYI (because I had to look it up): Leonardo(n) = Leonardo(n-1) + Leonardo(n-2) + 1 = Fibonacci(n-1) + Fibonacci(n+2) - 1 = 2 Fibonacci(n+1) - 1
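The identity is easy to spot-check (plain C++; Fibonacci indexed so that Fibonacci(1) = Fibonacci(2) = 1):

  #include <cassert>
  long leonardo(int n)  { return n < 2 ? 1 : leonardo(n - 1) + leonardo(n - 2) + 1; }
  long fibonacci(int n) { return n < 3 ? 1 : fibonacci(n - 1) + fibonacci(n - 2); }
  int main() {
      for (int n = 0; n < 20; ++n)
          assert(leonardo(n) == 2 * fibonacci(n + 1) - 1);
  }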


From a strict mathematical viewpoint, infinity is not a number in the sense meant in some of the quotes above, and therefore those quotes don't make real mathematical sense. I'm splitting hairs here obviously, but in practice a lot of people easily confuse 'finite but unbounded' with 'infinite'. They are not the same (although granted that 'zero, one and a finite but unbounded number' doesn't have quite the same ring to it...)

Not to mention that there are different 'infinities'. But as you noted, this pedantry is not useful. The zero-one-infinity rule is a good encapsulation of an idea, and wandering off into a discussion of the nature of numbers would be counter-productive.

[Mention of hyperreal and surreal numbers deleted as off-topic.]


Let me start off by saying that of course the limitations of the technology should never occur in design.

No, the zero-one-infinity "rule" is bad design for the simple reason that software should be designed for common cases and common cases are almost never zero, one or infinity! Common cases are 2, 3, 5, or 23.

Let's say you're designing an improvement over WIMP. One of the improvements you come up with is to have a stack of selections so that you can make a new selection of objects without clobbering the old selection of objects. So far so good. How deep should the stack be? Going by the zero-one-infinity rule, it should be infinitely deep. But that's stupid.

And there are many reasons why it's stupid. Chief among them:

Now you've massively complicated the UI in order to provide one lousy improvement.

In fact, the appropriate size of such a stack should be determined empirically and will probably turn out to be around 3-5. And neither 3, 4 nor 5 are "zero, one, or infinity".

This is hardly the only counterexample. Another counterexample is how many generations of objects in a graph you should display around the center. The correct answer is 2. Not 1, not infinity, and not variable, but precisely two.

Another example is how many objects in one directory should you display on screen at any one time. The answer is not "infinity using scroll bars". The answer is however many will comfortably fit on screen at any one time, and that's it.

If there are more than that many objects in a directory then obviously they were never meant to be viewable by human beings.

Any time the software has to deal with actual human beings, reality grinds the zero-one-infinity rule into the dust.

You are completely missing the point of this RuleOfThumb. It simply means to not put an arbitrary limit on your collections.

Indeed. Whilst your UI will want to limit the number of items shown (and forget the old ones), this limitation should exist in the UI code only, and the list must be able to hold any number of items. And as for your idea of getting rid of scroll bars - have you ever used a general-purpose computer, or do you only view mastered, pre-chewed content?-)

This is indeed a perfect example of ignoring the rule and getting bitten as a result. Okay: So Thou Shalt Not Show more than ten items in a directory. The idea might be fine in 1993, but two years later it's a bug. If you're prescient enough to predict that 3 will be the maximum in 1998, then Thou Shalt Not Show more than three items in a directory. In 1997? Or do you accept what the rule says, ditch the insistence of an arbitrary limit, and show however many will fit?

Besides, what about all those other items in the directory that don't fit? Bad person! No file listing for you!

No, these "counterexamples" are all exhibits of why the rule exists. They're all bad ideas.

Perhaps the OP misunderstood the idea of 'Infinity' to mean "actually infinity" rather than "as close to infinity as we can get." Sometimes it's impractical to support infinity X from the get-go, but then you have to be able to allow arbitrary numbers/amounts of X with ease. Preferably, it should be easy for your customers as well. See ApproximationOfInfinity.


Arbitrary limits will almost invariably be exceeded when real-world conditions change. No matter how much the manager insists your whuzzit app only needs ten slots for dangling rods today, it will need eleven slots tomorrow.

I think confusion is arising here because of failure to distinguish between internal programming limits (such as only providing ten slots instead of an unlimited number) vs. reasonable and desirable user interface limits such as only displaying ten records at a time (from a database of many) via a pagination mechanism because simultaneously displaying all of them would be unmanageable. The former is a programming issue, subject to the ZeroOneInfinityRule. The latter is an interaction design decision, subject (no doubt) to other rules. The ZeroOneInfinityRule says that programming need not and should not be constrained or dictated by the interaction design, and will be more robust, reliable and maintainable if engineered to trivially accommodate changes to interaction design driven by changes to requirements.

In short, the ZeroOneInfinityRule is about good software engineering practice. This may be, and probably is, entirely distinct from good interaction design. The two are complementary, but do not share the same rules. In a sense, this may be like the distinction between designing a building to be ergonomic and pleasing to the eye vs. constructing the building to withstand hurricanes and earthquakes. -- DaveVoorhis

Agreed. To make it really simple, though, it's about resources, and only about resources, not about any old thing. If you put 8 registers on a cpu, you will turn out to need 10 (provably, in this case) for many programs. If you make a chip with a hardware stack that is 1K deep, you will quickly run into programs that need a deeper stack. If you allow for a max of 640K of RAM, you will quickly find that some programs need more.

Same thing in software, of course. If a directory can hold more than 1 file, then there should be no limit to how many it could hold (this was violated by many past operating systems, perhaps most recently by the Atari ST). If a phonebook app allows you to save 2 phone numbers for someone, then there should be no hardwired limit to the total (which, as you and others say above, is a different question than how the UI displays things). A lot of developers might think that the latter is silly, but I've known people with 10 phone numbers, and I needed to store all of them. It's rare, but if you need to do it, you don't care that it's rare.

So the rule of thumb is: as soon as you transition to a plurality of a resource, you should attempt to have no set limit on how many are available. This is not always possible, but it's a decent rule of thumb. -- DougMerritt

The problem with your arguments is this: any distinction between a "resource" and the UI is completely arbitrary, as JayOsako suspects in ApplicationsAreLanguages. So the mere fact of arguing that it's okay to limit UIs is conceding the point that placing arbitrary limitations on resources can improve the design of software. There is no difference between a UI and a resource, since a UI is just a different kind of resource.

The presentation of a resource may be very distinct from its internal representation. This concept is the foundation of ModelViewController. For example, a database table - the Model - may contain 'n' address records, but the View might reasonably only display ten of them. This is a place where ZeroOneInfinityRule is relevant -- initially it may seem reasonable to allow only ten address records at the Model level, but inevitably there will come a need for eleven in the View and (hence) eleven in the Model. Therefore, it makes sense to engineer the Model with as few arbitrary restrictions as possible, even though the View will only (initially) present at most ten. When (inevitably) the View needs to present eleven addresses, the Model is already capable of it and requires no changes. Sweet! Furthermore, the reality of modern software is that it's often easier to implement persistence mechanisms or data structures that do not have fixed limits, rather than use (say) a fixed size array. -- DaveVoorhis
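A sketch of that separation (the types are invented for illustration): the Model imposes no count; the "ten per page" decision lives in the View, and only there:

  #include <algorithm>
  #include <cstddef>
  #include <iostream>
  #include <string>
  #include <vector>
  // Model: holds any number of addresses -- no arbitrary limit here.
  struct AddressBook {
      std::vector<std::string> addresses;
  };
  // View: when the requirement becomes eleven per page, only perPage changes.
  void showPage(const AddressBook &model, std::size_t page, std::size_t perPage = 10) {
      std::size_t begin = page * perPage;
      std::size_t end = std::min(begin + perPage, model.addresses.size());
      for (std::size_t i = begin; i < end; ++i)
          std::cout << model.addresses[i] << '\n';
  }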

For instance, what if you settle on dividing CPU time at the microsecond level? Well, as soon as you do that, someone will have a need to divide it at the tenth of microsecond level. Are you really supposed to divide CPU time at a tenth of a microsecond?

Eh? I suspect this is a strawman, but since I build business applications where "CPU time at a tenth of a microsecond" is never an issue, I shall claim ignorance here and skip a response. -- DV

The other problem is the counterexample running the other way: whatever reasonable assumptions you need to make to design a UI even halfway intelligently, those assumptions will always be "provably" violated. If you only provide a selection stack 3 deep, I can show you someone who needs one 4 deep for perfectly legitimate reasons. You only provide display for 50 objects, I can show you someone who absolutely needs to display 51.

Sorry, I do not follow. Here, it seems you are agreeing with the point made by a number of us, yet you seem to be disagreeing with its generalization, which is the ZeroOneInfinityRule. In fact, what point are you trying to make? That we should stop using (relatively) unlimited data structures and storage mechanisms and switch to fixed-length arrays? Twenty years ago, we wrote a lot of code like that. It was bad. -- DV

The reason for the ZeroOneInfinity rule is that software far outlives its requirements, and its requirements appear to change all the time.

With all due respect, this is clearly a comment made by someone who has not spent any significant time designing and developing software for real users. You are writing about an ideal world that does not exist. I can't even begin to count the number of times I've been presented with an "oh, by the way..." by a user which changes the requirements - and hence the design - despite best efforts to avoid such scenarios. Sometimes this occurs when I am the user, because I am human and sometimes fail to remember or even anticipate my own needs. -- DV

So when you make a compromise for the sake of a well-designed UI, this is backed up by massive amounts of HCI research. But when you make a compromise for the sake of technology, this is backed up only by a programmer's blind prejudices.

Not at all. The ZeroOneInfinityRule is all about not making arbitrary (i.e., "blind" and prejudicial) choices. It does not affect the UI. Let me emphasize that: IT IS NOT ABOUT THE UI! It is only relevant to the software engineer. It should not and does not concern you at all, since it does not impact on your domain. A programmer can obey the ZeroOneInfinityRule and you would never even be aware of it. It exists at a level that you need not see. Impose UI limitations or not as you see fit. Adherence to ZeroOneInfinityRule will mean the programmer can accommodate your requirements, whatever they might be. It makes the programmer's life easier, but has no effect on yours, except to make your life easier by allowing the programmer to spend more time implementing new requirements and less time coding to overcome (or fail gracefully at) fixed limits. All good, yes? -- DV

When you understand why the zero-one-infinity rule exists in the first place, it's obvious that there's bound to be limitless counterexamples to it. It's obvious that the rule is completely useless ... except as a tool to counter human stupidity. -- RK

Utterly baffling. The ZeroOneInfinityRule is so obviously of value -- and something I've seen so frequently causing negative impact due to failure to recognize or apply it -- that I can't believe you're arguing against it. Is this a troll of some kind? -- DaveVoorhis

[No, no, he's not trolling, he is just thinking of issues very different than the issues you and I have in mind here. Richard, I really think there is a misunderstanding involved here, perhaps because what has been said has been said imperfectly; but I promise that if we figured out the right way to say it, it's a certainty that it's just sanity, not something controversial in its roots. That hypothetical sane phrasing would include making clear the areas in which it does not and should not apply, for there are many such. The infamous 640K RAM limit should serve as an example of when the rule does apply, no matter how many places it does not apply. -- DougMerritt]

A general guideline might be that the ZeroOneInfinityRule should prevent you from building in arbitrary restrictions in the program, not taking them out. The presence or absence of a visible selection stack is not an arbitrary decision (presumably), and the (visible) size of the stack is determined by the need to be visible. Ditto the choice of a flat vs. hierarchical presentation, one of the examples used in TheInmatesAreRunningTheAsylum. The 640K limit is completely arbitrary, however, as are things like function size limits in compilers and recursion depth limits in some language runtimes. If these are needed for hardware reasons, they should at least be configurable so that the software can grow as the hardware gets better.

(I disagree with RK about scrollbars, BTW. I think they're an ugly solution, but if you restrict directories to containing only as many objects as will fit on the screen, you're going to have some pissed off users when their latest novel contains one more chapter than will conveniently fit on the screen. And in any case, this doesn't invalidate the ZeroOneInfinityRule, because screens get bigger, and the programmer better not hardcode the number of objects that will fit onto the screen.) -- JonathanTang

Despite DV's utterly ridiculous confusion of interaction design with interface design, it's clearly a matter of design. And what I have to say is this.

Take a look at ThreeStagesOfKnowledge or ThreeLevelsOfAudience and you'll be able to appreciate when I say that I operate at the concept level. I don't go in for rules. But I've come up with lots of heuristics for large systems design, heuristics which work damnit. So given all this, I'm going to judge the zero-one-infinity "rule" on a number of levels.

On the concept level, the "rule" is utterly irrelevant; I am so far beyond it that it's not even a consideration. DV says it's "so obviously of value".

On the heuristics level, which differs from the rules level in the precise sense that a heuristic is a rule that requires interpretation, you have to compare zero-one-infinity with other heuristics of good design. Things like encapsulation, information hiding, orthogonality, et cetera. And compared to these much more valuable heuristics, zero-one-infinity comes off looking all the poorer.

On the rule level, well, I've already given a whole class of counterexamples to the "rule". And I've outlined the conditions under which entire new classes of counterexamples could be discovered. As a rule, it pretty much sucks. And the reason it sucks is that its content is so poor. All it says is to use OrderedCollection and String instead of Array and char*.

 -- RK

I appreciate that you operate at the conceptual level, and find rules inherently objectionable. Fair enough. However, the ZeroOneInfinityRule is not an absolute, nor would any good software engineer treat it as such. It is merely a convenient mnemonic for certain programming practices - which are amply and well (though perhaps not perfectly) described above - that any programmer of a certain competence will recognize and regard as inherently good, or at least preferable to the alternative. It is not idolatry; it is simply shorthand for avoiding a certain category of (amateur?) blunder.

Indeed, I ran across a nearly perfect example of it today when preparing materials for a class - I found a script (the language is immaterial) in which a 6x6 matrix was implemented as six named arrays. In other words, int a[6], int b[6], ... int f[6]. I suspect the original author did not recognize the matrix-ness of the problem, or maybe six and only six was reasonable for his or her original application - I know not. It was not reasonable for my intended example, which demanded a 10x10 matrix. I changed the six arrays to a single data structure in which the dimensions were parameterized, and thus resolved the fixedness of the "matrix" dimensions, improved reusability, and shortened and clarified the code. In other words, I applied the ZeroOneInfinityRule and gained flexibility and elegance.
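In C++ terms, the shape of that change is roughly this (the original script wasn't C++, and the names are mine):

  #include <cstddef>
  #include <vector>
  // Before (in spirit): the 6x6-ness is welded in.
  //   int a[6], b[6], c[6], d[6], e[6], f[6];
  // After: dimensions are parameters, so a 10x10 costs nothing extra.
  class Matrix {
      std::size_t rows, cols;
      std::vector<int> cells;
  public:
      Matrix(std::size_t r, std::size_t c) : rows(r), cols(c), cells(r * c) {}
      int &at(std::size_t r, std::size_t c) { return cells[r * cols + c]; }
  };
  // Usage: Matrix m(10, 10); m.at(9, 9) = 42;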

This code is intended as an example for teaching purposes, and the ZeroOneInfinityRule will provide a handy mnemonic for my students to remember the improvement and a general category of similar improvements and problem avoidance. In short, it will help them remember the concept, and the better students will recognize that it is merely a handle for a concept and not an absolute.

As for my failure to distinguish interaction design from interface design, I plead guilty. I've always assumed that the interaction occurs through the interface, therefore the interface is the physical manifestation of the interaction design. If I've got it wrong, then you have my apologies. I am a mere programmer, not even a very good one, and I am old and growing senile and often make mistakes. If you'd be so kind, would you point me (yet again, perhaps?!) at a definitive reference that distinguishes interaction design from interface design, that I may be enlightened and hopefully not make such an offensive mistake again?

-- DaveVoorhis

A mnemonic for what? "Use the right abstractions", "Make sure to generalize" or more simply "Don't do stupid stuff"?

Your use of the term "concept" would make me laugh if it were the first time I've seen it this badly abused.

What "concept" is it that you're trying to teach them? To generalize? Generalization is a concept but I'm pretty sure it's more general, and much more meaningless, than what you're trying to convey. Because if it were what you're trying to convey then you'd be earning scorn in heaps for defending "zero-one-infinity" as a mnemonic for a concept that can be described by a single word. But to cut it short, there is no "concept" that you're teaching them with that example, just some amorphous idea and a not so good meta-heuristic.

In fact, the only way you can defend teaching zero-one-infinity is precisely as an absolute and inviolable rule.

One that they need to internalize while your students, rule-users all presumably, are still quite young. This isn't bad; it's just the way the world works: rule-users need simple rules to make sense of the world, even when those rules are somewhat wrong. Your, and others', trying to make of the zero-one-infinity rule something more than a stepping stone for raw beginners is what draws my attention.

Now moving onto more interesting matters: most texts about interaction design distinguish it from mere interface design, either explicitly or implicitly. The only reason TheInmatesAreRunningTheAsylum doesn't is because it barely even mentions software design. AboutFace does so explicitly somewhere in the introduction. Not that that's definitive, but certainly the word 'interaction' in the term InteractionDesign was never meant to be restricted to HumanComputerInteraction. And why would it be? "UserInterface design" would have been much less ambiguous. As for definitive: I'm an interaction designer, and I'm telling you that interaction design is not, has never been, and will never be restricted to things' interactions with human beings. That's as definitive as you need to get.

Interaction design applies to all things which interact. The moment you have two or more things in a system, you've got interactions and the system is under the purview of interaction design, at least in a trivial sense. -- RK

Re: "Your, and others', trying to make of the zero-one-infinity rule something more than a stepping stone for raw beginners is what draws my attention." I agree that it should only be such a thing, and that any experienced architect who is unaware of such issues is, not to put too fine a point on it, a retard. The problem is that such retards exist in vast abundance, even when they otherwise exhibit promising signs of being intelligent.

The 640K RAM limit in PCs in the 1980s and early 1990s, under all versions of DOS and all early versions of Windows, is a prime example that I hope you managed to avoid; it was extraordinarily painful, and the contorted "extended memory" hacks (I think there were 4 standard ones that shipped with early Windows, and more from third parties) were just bizarre - and unreliable, and didn't completely work, and had quirks, and I have the battle scars to prove it, so I don't want to hear any DOS or Windows (e.g., 3.1) apologists telling me everything was fine back then - one was lucky if it worked out, but the mechanisms were highly buggy and broke often.

Anyway, the ZeroOneInfinityRule is a very handy way of telling people "don't be a retard" on the subject of putting completely arbitrary limits on resources. That doesn't mean that it has any value for anyone who already knows better, and in particular, there's no reason to assume that it has any value for you, since you try to avoid idiotic design like the plague.

Does that help? -- DougMerritt

At least it gets us off of spurious epistemological arguments and back onto the subject :).


Every counterexample I have seen is ridiculous, simply because the presenter does not understand the rule. Let's look through some.

"Another counterexample is how many generations of objects in a graph you should display around the center. The correct answer is 2. Not 1, not infinity, and not variable, but precisely two." Why is it two? If you can't explain why, then your argument means nothing because it is not precisely two. If you can, then that is precisely why choosing the limit of 2 _is NOT arbitrary, and as such, this rule does not apply_.

"Another example is how many objects in one directory should you display on screen at any one time. The answer is not "infinity using scroll bars" . The answer is however many will comfortably fit screen at any one time and that's it." How many is that? What arbitrary number defines how many fit in the screen? No number defines this, as there are infinitely many possible amounts depending on screen size, font size, the size of the container they are being displayed in, if the user is zooming in, etc, etc, etc, etc. So only allow the number of items that will comfortably fit in YOUR screen is ridiculous. Once again, by the time you explain how to get that number, you've explained why the number you are using _is NOT arbitrary, and as such, this rule does not apply_.

"If there are more than that many objects in a directory then obviously they were never meant to be viewable by human beings." I can only assume this was a joke.

"Any time the software has to deal with actual human beings, reality grinds the zero-one-infinity rule into the dust." Reality never has and never will grind this rule into the dust. The real world imposes many limitations we see as arbitrary, but reality isn't what we are saying should follow this rule of thumb. When the arbitrary limits applied by the real world are forced upon the programmer, it is not the programmer who is not following the rule of thumb, but the real world.

"Well I've already given a whole class of counterexamples to the "rule"." Yet none counter it in any way.

"And I've outlined the conditions under which entire new classes of counterexamples could be discovered." I see no such outline, not even one I disagree with, please show me this outline.

"All it says is to use OrderedCollection and String instead of Array and *char."" This is obviously not all it says. Please read the rule at the top, it means what it says. What you said is simply a way to begin using it.

The reason I viewed this page was that I ran into a problem I thought forced me to impose an arbitrary limit. It turns out the answer is obvious: I shouldn't impose an arbitrary limit, but let the limit be the maximum value that can be held in the variable type I am using. Which variable type to use is a slightly tougher choice, but it doesn't come down to arbitrary choosing either. I won't go all the way up to long double, because the likelihood of going over 1000 is incredibly low. At this point it is a flexibility vs. space trade-off; what I choose is not arbitrary but determined by the amount of flexibility I want to allow.

This is the most difficult example I've come across, but still no arbitrary selection is required.

-- PhyloGenesis?


Perhaps this won't be useful, but it looks like the basis of this rule is that computers deal with scalars (single values), collections (multiple values), or nothing else at all. This rule just wants you to be cognizant of this fact when writing code that deals with storage of values in any way. It's not saying that weeks can't be 7 days long or that wood can't be cut in 2x4 blocks.


If Foo can't exist, that's zero. If Foo can exist, that's one. Beyond that, "How many Foo should we allow?" is the wrong question. A better question would be "How many Foo could there possibly be?" (with the emphasis on "possibly", which in turn makes it necessary to be very clear about what a Foo actually is so that you can accurately count them); if you can't give a firm answer, don't just pull a number out of your butt.

It can be framed from the user's perspective: "How many tables can I have in my database?". If there is an arbitrary limit it should be so frickin' high that you always have enough tables for what you're doing, or you're doing something very wrong.


I propose: ZeroOneConstantVariable?, going from YouArentGonnaNeedIt to YouAreGonnaNeedIt.

Or: NoneFixedVariable?. Fixed: I know exactly how much I need; usually this is one, but for bicycles and gonads it's two, for integers it's 32 or 64 (bits), etc. These are still really "one" because each element in the collection is used differently. Variable: I don't care how much I need--each element in the collection is used the same way.


ZeroOneInfinityRule for communication: Zero is Notepad, One is Instant Messaging, and Infinity is IRC.


One other way of looking at this is that entities in your schema, class diagram, or whatever should either have no line between them (Zero), a 1:1 relation (One), or a 1:* relation (Infinity).

I suppose a *:* relation is the same order of infinity as 1:*, isn't it... In the sense that ℵ₀×ℵ₀ = 1×ℵ₀.

You're an idiot. Multiple inheritance is an extremely useful feature.


RefactorMe: This page and TwoIsAnImpossibleNumber are about the same thing.

See Also: TailWagsDog, YouArentGonnaNeedIt, MagicNumber, NamedConstants, FixedQuantityOverflowBug.


JulyZeroFive (repeated interest; was created much earlier)

CategoryCodingIssues CategoryDesignIssues

