Cardinality Enforced In Code

Cardinality that is enforced in code (for example, via the SingletonPattern) is arguably a modern variant of 'hard coding'. It can be a nasty code breaker because it bakes assumptions about the future into the code. The idea here is to do something like ensure that COM1: and COM2: are each used by only one client apiece, or that client access is serialized through an intermediary. Sadly, there are also COMs 3, 4, 5, 6, ... The most I have seen on an actual device is eight COM ports. In the absence of compelling reasons otherwise, though, I would not enforce any upward limit in code. You might well find that you want to reuse that thing against an unbounded number of virtualized ports across an enterprise network, for instance. Sure, you would likely have code changes anyway if things changed that much, but why burden yourself with an extra book-keeping problem?
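
To make that concrete, here is a minimal sketch in Java (the names SerialPortManager, claim, and release are invented for illustration) of enforcing one-client-per-port without capping the number of ports:

 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;

 // Hypothetical sketch: enforce "one client per port" without
 // hard-coding how many ports exist. The port name is just a map key,
 // so COM1, COM9, or a virtualized //server/port42 all work alike.
 public final class SerialPortManager {
     private static final Map<String, Object> claimed = new ConcurrentHashMap<>();

     private SerialPortManager() {}  // static registry; no instances

     // Returns true if the caller acquired exclusive use of the port.
     public static boolean claim(String portName, Object client) {
         return claimed.putIfAbsent(portName, client) == null;
     }

     // Removes the claim only if this client is the current owner.
     public static void release(String portName, Object client) {
         claimed.remove(portName, client);
     }
 }

Nothing here knows or cares whether there are two ports or two thousand; the one-client-per-port rule is still enforced, but the cardinality lives in the data rather than the code.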

It is reasonable, for the most part, not to waste time coding for 'extras' you anticipate in the future. However, the corollary is to also not waste time coding in 'limits' for the future. The world of code is rife with examples of code breaking as soon as cardinality assumptions are overthrown. How often does this happen in practice? In my experience, more often than not.

Expect cardinality to change.

"But I KNOW an integer is only 32 bits." Hmmmm.

"Prediction is very difficult, especially about the future." -- Niels Bohr (attributed to many others, most notably Yogi Berra)

[GuyWhoReopenedTheSingletonCanOfWorms]


However, there are always exceptions. A Boolean class needs exactly two values, true and false, and may sometimes be most efficiently implemented with only two instances, one carrying each value.
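
A minimal sketch of such a class in Java (the name Bool is hypothetical, chosen to avoid colliding with java.lang.Boolean, which uses the same two-instance trick in its TRUE and FALSE constants):

 // Exactly two instances ever exist, so identity comparison (==) is
 // safe and cheap; the private constructor enforces the cardinality.
 public final class Bool {
     public static final Bool TRUE = new Bool(true);
     public static final Bool FALSE = new Bool(false);

     private final boolean value;

     private Bool(boolean value) { this.value = value; }

     public Bool not() { return value ? FALSE : TRUE; }
     public Bool and(Bool other) { return value ? other : FALSE; }
 }

Here the fixed cardinality is part of the problem's definition, not a guess about the future, which is exactly what distinguishes it from the COM-port case above.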


A bignum library might best be implemented, in some languages, as a bignum class with two subclasses: a finite bignum class and a special-case class, the latter with only three instances: +INF, -INF, and NaN. (These values would be needed to implement the semantics of a "giant double" or other IEEE FP type with an overgrown mantissa.) Making the special-case values instances of a separate subclass lets you use polymorphism instead of funky flags and tests in all the arithmetic methods, and lets you put the (maybe memory-intensive) representation in only the finite class. You might even special-case zero, too, while you're at it.

You can also now supply multiple finite implementations, such as ones using different algorithms at different sizes, without each one having to replicate the handling of the special-case values (which would violate OnceAndOnlyOnce). The special-case values surely warrant global constants. They may warrant limiting to one instance each, so that testing for them can be done with an object-identity operator instead of a method call.

If the language supports multiple dispatch, so much the better, since you can jump to different implementations based on both arguments to an arithmetic function. A sensible compiler boils the calls down to a jump table to the appropriate method body, which may be more efficient than lots of explicit testing -- and replicating lots of explicit testing in every arithmetic method is sure to be error-prone anyway.
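
A minimal sketch of that hierarchy in Java (all names invented, and the arithmetic drastically simplified, just to show the polymorphic dispatch and the identity tests on singleton special cases):

 abstract class BigNum {
     abstract BigNum plus(BigNum other);
     abstract boolean isNaN();
 }

 // The special-case values: one instance apiece, so == identifies them.
 final class SpecialBigNum extends BigNum {
     static final SpecialBigNum POSITIVE_INF = new SpecialBigNum(+1);
     static final SpecialBigNum NEGATIVE_INF = new SpecialBigNum(-1);
     static final SpecialBigNum NAN          = new SpecialBigNum(0);

     private final int sign;  // +1, -1, or 0 for NaN
     private SpecialBigNum(int sign) { this.sign = sign; }

     @Override BigNum plus(BigNum other) {
         if (this == NAN || other.isNaN()) return NAN;
         if (other == POSITIVE_INF || other == NEGATIVE_INF) {
             return other == this ? this : NAN;  // INF + -INF is undefined
         }
         return this;  // INF + finite = INF
     }
     @Override boolean isNaN() { return this == NAN; }
 }

 // Only the finite class carries the (potentially large) representation.
 final class FiniteBigNum extends BigNum {
     private final java.math.BigInteger mantissa;
     private final int exponent;

     FiniteBigNum(java.math.BigInteger mantissa, int exponent) {
         this.mantissa = mantissa;
         this.exponent = exponent;
     }

     @Override BigNum plus(BigNum other) {
         if (!(other instanceof FiniteBigNum)) return other.plus(this);
         FiniteBigNum f = (FiniteBigNum) other;
         // Grossly simplified: assumes equal exponents for brevity.
         return new FiniteBigNum(mantissa.add(f.mantissa), exponent);
     }
     @Override boolean isNaN() { return false; }
 }

Note that none of the arithmetic tests a flag: the finite class hands anything special back to the special class, and object identity is the entire infinity-sign test.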


Does either of the above examples warrant restricting instance creation?


Look at it this way. Restricting instance creation loses you nothing, not even flexibility; you'll never want another infinity value or boolean value unless you're no longer implementing IEEE FP math or a boolean logic, and then you want a different class or group of classes altogether. Restricting instance creation does gain you efficiency: you can test for equality with a low-level pointer comparison or equivalent atomic test and avoid function call overhead, and you avoid cluttering up memory with wasteful duplicates of the special-case values. Sure, your calculation is all b0rked if NaNs propagate through everything, but at least they don't take up any memory, and your failed calculation runs very quickly because pretty soon it isn't doing any bignum math.

And of course the NaN object's class is the perfect place to implement NaN trapping of some sort. If you want exceptions thrown instead of NaNs silently spreading through the system, and NaN is a polymorphic singleton, you can start the system up in a configuration that instantiates an exception-throwing NaN instead of a normal NaN. The calculation then produces an exception-throwing NaN as soon as it produces a NaN at all, and when that NaN is used again, boom! This is in contrast to having a global flag throw_exception_on_nan which, besides being an icky global variable, has to be tested whenever a NaN arises in a calculation. Polymorphism rolls it into the same jump-tabling noted above.

Of course, either way makes the "do NaNs throw exceptions" setting global. Making it localized might be done by making it a bignum constructor argument, inherited from the receiver by the return value of any function that produces more bignums. Like NaNs, the setting would propagate. The operations that can produce a NaN without a preexisting NaN would throw or not depending on it. Or you can use polymorphism again, and have a finite bignum class that throws and another that doesn't...
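
A minimal sketch of that trapping configuration, continuing the hypothetical hierarchy above:

 // A NaN that explodes when touched. Installed once at startup in
 // place of the silent SpecialBigNum.NAN; no arithmetic path ever
 // tests a throw_exception_on_nan flag.
 final class TrappingNaN extends BigNum {
     static final TrappingNaN INSTANCE = new TrappingNaN();
     private TrappingNaN() {}

     @Override BigNum plus(BigNum other) {
         throw new ArithmeticException("NaN produced in bignum calculation");
     }
     @Override boolean isNaN() { return true; }
 }

 // Chosen once at startup; operations that would yield a NaN ask here
 // instead of hard-coding SpecialBigNum.NAN.
 final class NaNPolicy {
     static BigNum nan = SpecialBigNum.NAN;
     static void trapNaNs() { nan = TrappingNaN.INSTANCE; }
 }

Swapping one singleton at startup changes the whole system's NaN behavior, with polymorphic dispatch doing the work that the icky global flag would otherwise do on every operation.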

