Counter arguments to WhyNumberingShouldStartAtOne
"Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration." -- Stan Kelly-Bootle
Here is EwDijkstra's justification (from 20 years ago):
Given four possible ways of representing the sequence 2, 3, ..., 12:
 a) 2 <= i < 13
 b) 1 < i <= 12
 c) 2 <= i <= 12
 d) 1 < i < 13
Dijkstra rejects b) and d) on the following grounds: "There is a smallest natural number. Exclusion of the lower bound... forces for a subsequence starting at the smallest natural number the lower bound as mentioned into the realm of the unnatural numbers. That is ugly." In other words, the sequence 0, 1, 2 would have to be written as -1 < i ...
Dijkstra further rejects c) for: "inclusion of the upper bound would then force the latter to be unnatural by the time the sequence has shrunk to the empty one." That is, we would have to write 0 <= i <= -1.
Dijkstra further observes that in an environment where all four options were provided to programmers, use of notations other than a) was linked to more bugs.
Now if a) is the accepted form, there are two ways to deal with a sequence of length N:
 e) 1 <= i < N + 1
 f) 0 <= i < N
of which Dijkstra selects f) as the more elegant, stating: "an element's ordinal ... equals the number of elements preceding it in the sequence."
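For concreteness, here are the two forms as loops in a small C++ sketch of my own (this is not from EWD831; the vector and the printing are just placeholders):

 #include <vector>
 #include <iostream>

 // A sketch of the same N-element traversal written in both notations.
 void print_all(const std::vector<int>& a) {
     const int N = static_cast<int>(a.size());
     for (int i = 0; i < N; ++i)         // form f): 0 <= i < N
         std::cout << a[i] << '\n';      // i also equals the number of elements already printed
     for (int i = 1; i < N + 1; ++i)     // form e): 1 <= i < N + 1
         std::cout << a[i - 1] << '\n';  // the - 1 reappears in the subscript
 }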
You can read the whole thing for yourself at http://www.cs.utexas.edu/users/EWD/ewd08xx/EWD831.PDF.
Not totally convincing. In particular, the argument relating to the empty sequence seems weak, and the final 'benefit' of a subscript equalling the number of preceding elements appears no more useful than the subscript equalling the number of elements up to and including it. -- VickiKerr
Yes. Empty sequences are perverse anyway. So the fact that their representation may be "perverse" is only natural. ;->
I reject (a) using an argument similar to his - in that the top element may be forced to be invalid. March has the range of days 1..31. So '1 <= day < 32'? 32 is *not* a valid day of month. '1 <= day <= 31' would be more appropriate.
This argument above seems compelling. When people in "natural language" speak of a range of selections from modular choices (days of the week, days of the month, months in a year) they always use form c). We say "from Tuesday through Friday". So ingrained is this that when a person says "from Tuesday until Friday" it is still understood that they mean Tuesday to Friday inclusive of both Tuesday and Friday. If Dijkstra's form generates fewer bugs in programming (is there actual evidence for this?) then that's great for programming. Writing books that start at "Chapter Zero" sacrifices clarity for obtuse dweeb cuteness points (which possibly has greater value for hardcore f) aficionados).
To answer your question, no there isn't any evidence that Dijkstra's form generates fewer bugs in programming.
However: The days of a month are (arbitrary) ordinals (first, second, third...). Dijkstra's argument does not (and cannot) apply to ordinals. His whole argument is based on counting, not ordering. See OrdinalsAndCardinals
(c) would be a relatively good choice if array indexing began at 1. Then the sequence of valid array indices would be 1 <= i <= 15 for 15 element arrays, 1 <= i <= 0 for 0 element arrays. But, people's intuition breaks with sequences like 13 <= i <= 27: guess how many elements that contains? 15, of course...
If indexing begins at 1, then 0 < i <= 15 (b) is a good choice for the same reasons that (a) is a good choice when indexing begins at 0.
I vote for option d). I don't see that 0 < i < 0 for the empty set is less clear than Dijkstra's option f) of 0 <= i < 0. -- AndyPierce
The problem with choice (d) is that the empty set would then be representable by both 0 < i < 0 and 0 < i < 1. This non-unique representation of a unique entity could be argued to be ugly.
On the other hand, with option (a), the empty set can be represented by '0 <= i < 0' or '1 <= i < 1' or '2 <= i < 2' or even '100 <= i < 3' for that matter.
Which is still true of (d): 0 < i < 0, 1 < i < 1, 100 < i < 3, or whatever. I think the point was that (d) has an *additional* ambiguity over (a). For any fixed lower bound, there are two valid representations of the empty set.
Question for the voter-on-d's - what about the original problem with sequences starting at 0; I have to agree with Dijkstra that -1 < i < N is inelegant (and problematic if you are working with signed values ;)
ItDepends applies here. If you number real objects (and avoid abstractions), numbering begins at 1. Zero Objects is an abstraction. If you number in the abstract, the numbering starts with zero and goes both ways (in a signed numbering system); in an unsigned system numbering starts with zero and counts toward infinity or a limit, using integer or non-integer numeric systems. There are other dependencies too numerous to list here, including starting anywhere and using whatever incrementing, directional, non-directional, combinational or other numbering scheme you desire. (Where do you start with the temperature absolute zero? ItDepends on the scale: F, C or R.) [-273<0<roomtemperature C][0<273<roomtemperature R]
Not, I think, the best example because absolute zero is absolute zero (hence the name), and is a physically derived constant. Any time you actually care about this value it is probably silly to use units other than kelvin, but that is another point. Changing the units you are counting in may *hide* what you are doing, but doesn't actually change the way you are counting. In other words it is F, C, or R, that represent a shift, not the other way around. This will be true of any affine transformation of your space.
If this were so, then tell me what my thermostat should be reading in kelvin for a comfortable temperature? How we number is more often relative, not absolute. Tradition, practice and practicality dictate our daily use of numbers, not the scientific mathematical exactness you describe.
That's not what I said. I said that if you *actually care what zero is*, then you should use K. Otherwise use degrees C, of course :) (so your answer is about 25, or 280 K if for some strange reason you wanted it...)
Some more nice properties of n <= x < m, or [n,m), notation:
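For instance (a sketch of my own; the Range type and helper names are hypothetical, not from any particular library):

 #include <cassert>

 // A half-open integer range [n, m); invariant n <= m.
 struct Range { int n, m; };

 int  size (Range r)             { return r.m - r.n; }       // length needs no +1 correction
 bool empty(Range r)             { return r.n == r.m; }      // one empty form per lower bound
 bool adjacent(Range a, Range b) { return a.m == b.n; }      // [n,k) and [k,m) meet with no gap or overlap
 Range join(Range a, Range b)    { assert(adjacent(a, b)); return Range{a.n, b.m}; } // their union is [n,m)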
Yeah, instead of fence post errors, you make bounds errors.
Numbering should start from 1 and go up to and including the last element of the sequence. It's the only system that makes sense and doesn't cause any kind of confusion.
Nothing stops anyone from writing "from 1 to 0" if you want the empty set but don't want to use unnatural numbers as indices. The pseudo-aesthetic and pseudo-logical arguments on this page are also complete bullshit.
The ideal scheme should be easy to learn and easy to remember. That usually means symmetry. In other words <= and <=, because <= and < is completely unnatural to most people. Using "(1 to: 89) do: aBlock" or "57 timesRepeat: aBlock" is perfectly natural.
This diatribe misses rather spectacularly Stephan's argument just above. People who allegedly will have trouble remembering m <= i < n because of its asymmetry are also likely to have trouble remembering n - m + 1. A perfect example of that is everyday-life situations where many people have trouble remembering that from, say, the 17th to the 24th of August, including both ends, there are exactly 8 days and not 7.
I have yet to write n - m + 1 even a single time. And why should I? My programming language isn't broken so it implements size() for me. On the other hand, I have to deal with (1 to: n) and n timesRepeat: on a frequent basis. I can't be bothered to have to deal with ugly, unnatural indices that people justify based on obsolete, broken languages.
No you don't. If you had empirical evidence that one was better than another, you wouldn't be resorting to pseudo-aesthetics. And I do say pseudo since there's nothing aesthetic about [n,m). Just like your reasoning above about avoiding fence post errors is pseudo rational, and not rational at all.
Since all options can be made to work, the argument can be only empirical, based on the perceived elegance of the notation and on the perceived likelihood that folks are going to make more or fewer errors when choosing one versus the other. There's also a long-established wisdom in mathematical practice that the uglier a notation is, the more likely it is that people will make errors while using it. The errors made by people when counting date intervals closed on both ends are notorious and are so far the only empirical evidence presented.
Yes, so people who aren't entirely stupid will optimize the notation for the overwhelmingly common case of iteration and not the obscure case of reimplementing size. What can possibly be more elegant than writing '4 to: 59' to include both 4 and 59? If you had to write '4 upToButNotIncluding: 60' you would quickly be driven insane. Your psychotic focus on "errors made by people when counting" really doesn't matter since people shouldn't count.
There is a name for "why numbering should start at zero"; optimizing local effects at the expense of global effects. It's one of those things that happen when people are narrow-minded retards that can't see beyond their personal horizon. In this case, they focus psychotically on fence post errors during counting, and they forget that numbering also applies to bounds errors during iteration.
And all this time I thought it was because of PointerArithmetic. -- WillGray
While I agree with the anonymous lo<=x<=hi fan, I first want to make clear that I do NOT endorse the 'idiot, stupid, retard' language used there.
However, starting at 0 (and always, even if reality starts way above 1) is a reminiscence of that macro assembler called C. Nowadays, compilers should be able to cope with any bounds (and they could before; see FORTRAN, BASIC, Pascal, etc.). Hence, to a large extent, the fondness for [0,N) or [0,N-1] stems from the wide use of an outdated (for things other than tweaking the system directly) programming language. A week is [MO-SU] and not really [MO-MO) (some want [SU-SA], OK). The directly understandable way to access the range 3..13 would be FOR I = 3 TO 13 (as it has in fact always been, except in C and its successors). A point is made with containers and cutting ranges, however: for (iter = first; iter <= last; iter++) (that is, [first,last] as the range) seems to me perfectly understandable; also [3,mid][mid+1,13] makes clear that we do NOT index an infinitely thin "cut" between elements, but those elements themselves. -- Michael Vielhaber
But what do "MO" and "SU" mean? If XX means "XX at 00:00 (or 12:00 AM, which causes even more confusion: http://en.wikipedia.org/wiki/12-hour_clock#Confusion_at_noon_and_midnight )", then a "[MO, next MO)" is a week, but "[MO, following SU]" is only 6 days.
Starting at zero is simply more consistent, and you need zero math to demonstrate this.
So let's count with three digits from 001 until 999: while counting like this, please tell me why the first digit for units is different than for hundreds and tens?

Because when you have 001, you have 1 unit, 0 tens, and 0 hundreds. When you have 111, you have 1 unit, 1 ten, and 1 hundred. It's very consistent. With 0-based numbering, 000 represents 1 unit, 0 tens, and 0 hundreds, while 009 represents 0 units, 1 ten, and 0 hundreds -- how does that make sense?
Which digit do you use to number the first hour in the day?

I use "12" for the first hour in a day.

So you start counting at 12? That is even better.
Why is year 1950 in the 20th century? Why are people so confused that the 21st century had not yet begun in January 2000? Because they shot themselves in the foot numbering from year 1. Because enumerating spans is counterintuitive to measuring them. Enumerations traditionally start at one, such as 'the first person in line', but measurements start at zero, such as 'there are 0.23 gallons of milk in my fridge'. The first year of an infant's life ends, and the second year starts, when the baby turns one. Also, the way we use dates now doesn't inherently remind us that '2000' stands for something like 'the 2000th year of our lord', which ends with 'the lord' turning 2000. (Of course, it is now 'Common Era'.) By getting rid of the tradition of starting enumerations at one, you make enumerations and measurements consistent, getting rid of spurious -1 / +1 adjustments in programs. This is the whole point of counting from zero: getting rid of tradition to simplify programs.
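For example, the century arithmetic alluded to above, as a small sketch (the function names are mine):

 // With years enumerated from 1, the Nth century covers years 100*(N-1)+1 .. 100*N,
 // so recovering the century from a year needs both a -1 and a +1:
 int century_from_year_1(int year) { return (year - 1) / 100 + 1; } // 1950 -> 20, 2000 -> 20, 2001 -> 21
 // If years were measured from 0 and centuries numbered 0, 1, 2, ..., it would simply be:
 int century_from_year_0(int year) { return year / 100; }           // 1950 -> 19, 2000 -> 20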
We use ten different digits: please give me the first one and the last one.

Respectively: right index finger, left thumb. Etc.

See Stefan Ram's "Numbering Starts with Zero" web page for more non-nerdy, non-mathematical examples: http://www.purl.org/stefan_ram/pub/zero
Let's look at the different operations that are generally performed on sequences. We have the choice of half-open [a,b) ranges (type 1), half-closed (a,b] ranges (type 2), fully-closed [a,b] ranges (type 3), and fully-open ranges (a,b) (type 4).
Firstly, what are the tightest possible invariants? This is essentially a question about the representation of empty.
 1) a <= b, empty when a = b
 2) a <= b, empty when a = b
 3) a-1 <= b, empty when a-1 = b
 4) a < b, empty when a+1 = b
Half-open and half-closed are clear winners on simplicity here, with no +/- 1 needed in either.
The same pattern appears for determining the size, since it's a superset of determining empty:
 1) b-a
 2) b-a
 3) b-a+1
 4) b-a-1
The simplest mutating operation, adding or removing elements, is the same kind of addition or subtraction for all types, except that removing needs to check the operation is legal, which requires comparing to the size.
What about all of our favourite divide-and-conquer algorithms, though? Non-linear traversal is often important for efficiency. The natural split, <a,b> giving <a,k> and <k,b>, for each works as follows:
 1) No repeats, nothing missing
 2) No repeats, nothing missing
 3) The split element is in both
 4) The split element is in neither
But which one's best?
A top-down merge sort wants to still represent the whole range with no overlap, so types 1 and 2 are clearly the most natural. For instance, a.inplace_merge(i, j, k) will merge the sorted subranges [i,j) and [j,k), giving the sorted subrange [i,k).
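A sketch of that in C++ (std::inplace_merge is the standard algorithm; the merge_sort wrapper and its index convention are my own):

 #include <algorithm>
 #include <cstddef>
 #include <vector>

 // Top-down merge sort of the half-open index range [lo, hi) -- a sketch.
 void merge_sort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
     if (hi - lo < 2) return;               // fewer than two elements: already sorted
     std::size_t mid = lo + (hi - lo) / 2;  // split [lo,hi) into [lo,mid) and [mid,hi)
     merge_sort(a, lo, mid);
     merge_sort(a, mid, hi);
     std::inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
 }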
Binary search seems like it would want fully-open ranges, since if the split element were the correct one you could just return it immediately. If comparisons are cheap, though, it's faster on average to only check for "found" at the very end, so half-open or half-closed ranges are again the right choice. Example: after testing p(k), you want to continue searching in [k, j) if the test passed, meaning k is a valid result, or in [i,k) if it didn't, making it clear that k is not a valid option.
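A sketch of that search shape (the name last_true and the exact predicate convention are mine; p(k) is assumed true on some prefix of [i, j) and false afterwards):

 #include <cstddef>

 // Binary search over the half-open range [i, j): return the last index whose
 // test passes, or j if none does. "Found" is only checked at the very end.
 template <class Pred>
 std::size_t last_true(std::size_t i, std::size_t j, Pred p) {
     std::size_t lo = i, hi = j;
     while (hi - lo > 1) {                   // more than one candidate remains
         std::size_t k = lo + (hi - lo) / 2; // lo < k < hi
         if (p(k)) lo = k;                   // test passed: k stays a valid result, keep [k, hi)
         else      hi = k;                   // test failed: k is not valid, keep [lo, k)
     }
     return (lo < hi && p(lo)) ? lo : j;     // the single deferred "found" check
 }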
A partition on an arbitrary boolean predicate has no overlap and considers all elements, so half-open or half-closed ranges are again best. Example: you partition the range of elements at [i,j), obtaining the partition point k, which tells you that your predicate holds for the elements now at positions [i,k) and does not for those at [k,j).
The inplace partition inside quicksort, however, would be happiest with fully-open ranges. This would split the elements (i,j) into the subsequences at (i,k) and (k,j), neither of which contains the element that was correctly-positioned at k.
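A sketch of that in C++ (this uses a three-way pass built from std::partition rather than the classic in-place Hoare scheme, but the shape of the recursion -- excluding the pivot's final resting places -- is the point):

 #include <algorithm>
 #include <cstddef>
 #include <vector>

 // Quicksort of the half-open index range [lo, hi) -- a sketch.
 void quick_sort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
     if (hi - lo < 2) return;
     const int pivot = a[lo + (hi - lo) / 2];
     auto first = a.begin() + lo, last = a.begin() + hi;
     auto mid1 = std::partition(first, last, [&](int x) { return x <  pivot; });
     auto mid2 = std::partition(mid1,  last, [&](int x) { return x == pivot; });
     // Elements equal to the pivot now sit at [mid1, mid2) and are already in place;
     // recurse on the ranges that exclude them, as in the "fully-open" view above.
     quick_sort(a, lo, static_cast<std::size_t>(mid1 - a.begin()));
     quick_sort(a, static_cast<std::size_t>(mid2 - a.begin()), hi);
 }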
Recursing down a Binary Search Tree can also be thought of as a split. If no duplicates are allowed, then the left and right children of a node with value x have values in (-inf, x) and (x, inf), so fully-open seems most natural. With duplicates allowed it's less nice, but the one type you really don't want to think in is fully-closed ranges, since if the children are [?, x] on one side and [x, ?] on the other, you have no way of knowing which side to traverse in order to find them.
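A sketch of that descent (the Node layout is mine); at every step the node's value splits the remaining open interval of possible keys:

 // Searching a duplicate-free binary search tree -- a sketch.
 struct Node { int value; Node* left; Node* right; };

 const Node* find(const Node* n, int key) {
     while (n != nullptr) {
         if (key < n->value)      n = n->left;   // keep looking in (-inf, n->value)
         else if (n->value < key) n = n->right;  // keep looking in (n->value, +inf)
         else                     return n;      // key found
     }
     return nullptr;                             // not in the tree
 }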
So I'll claim that off-by-one errors are more likely caused by using the wrong kind of range than by any particular range type being less intuitive than another. That said, I can't think of any algorithm that would be most natural with fully-closed ranges.
So, since I've never heard anyone advocate half-closed or fully-open ranges, it seems like half-open ones are the right choice.
Someone said above that "Since all options can be made to work, the argument can be only [...]". That, however, is only true since we're dealing in integers (or naturals). What if, instead, we look at uncountable reals or even countable rationals? Now it's impossible to specify the same set with different range types (since you can't add a "one over infinity" fudge factor).
Consider these 4 different sets:
 1) [a,c)
 2) (a,c]
 3) [a,c]
 4) (a,c)
What if we want to split them at b, where a < b < c:
 1) [a,b) & [b,c) -- ok, that works
 2) (a,b] & (b,c] -- that works too
 3) [a,b] & [b,c] -- the union is correct, but the intersection isn't empty
 4) (a,b) & (b,c) -- oops, that misses the b
The one advantage that type 3 has over the others is that it can represent the degenerate case where the range contains exactly one element. This could, however, also be seen as a liability, as it creates a distinction between zero-size empty ranges and zero-size non-empty ranges.
Consider the Dirac Delta function to explore some consequences of these different kinds of ranges. Its total integral is 1, but its value is 0 everywhere except the origin. (Of course that means it's not a Real function.) One of the properties of integrals is that the integral from a to c should be equal to the sum of the integrals from a to b and from b to c. If we choose a=-1, b=0, and c=1, then this property only holds if the integral used ranges of type 1 or 2. With the overlap from type 3, the sum gives 2, and the hole from type 4 means the sum gives 0.
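Spelling that out (a sketch that treats delta as the unit point mass at the origin, so an integral over a range contributes 1 exactly when the range contains 0):

 % With a = -1, b = 0, c = 1:
 \int_{[-1,0)} \delta + \int_{[0,1)} \delta = 0 + 1 = 1   % type 1: additivity holds
 \int_{(-1,0]} \delta + \int_{(0,1]} \delta = 1 + 0 = 1   % type 2: additivity holds
 \int_{[-1,0]} \delta + \int_{[0,1]} \delta = 1 + 1 = 2   % type 3: the overlap at 0 is counted twice
 \int_{(-1,0)} \delta + \int_{(0,1)} \delta = 0 + 0 = 0   % type 4: the hole at 0 is counted by neither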
Comment on the distinction between OrdinalsAndCardinals moved there.
'Cos GiuseppePeano? was a badass: http://en.m.wikipedia.org/wiki/Peano_axioms -- PissedOffOfTumbridgeWells?