"Treat every problem as if it can be solved with ridiculous simplicity. The time you save on the 98% of problems for which this is true will give you ridiculous resources to apply to the other 2%." -- DrPaulMacCready
[Thanks to the JohnBrewersXpFaq for the reference.]
Ah, if only people believed it. I believe it, but few other people in my organization do. They'll always ask, "How do you know it's 98% and 2%?" -- JbRainsberger
- Any advice for those of us who do believe this intellectually, but frequently fail to practice what we preach?
A: Suppose it's 80/20. Say the ridiculously simple answer is 10x simpler, and when you blow it, you have to do it over. OK: .80/10 + .2/10 (to do the hard ones wrong the first time) + .2*1 (to do the hard ones over "the right way") = .08 + .02 + .2 = 0.30. In other words, you'll get the work done in 30% of the time -- over three times as fast. Now how many managers don't believe in the 80/20 rule? ;->
(Also, 2/3 of the remaining work you do is rework, so if only you could learn from your mistakes, it would go faster, wouldn't it? This makes "3 times faster" a pessimistic estimate. ;-)
For those who couldn't follow the math, here's another way of saying the same thing.
Scenario A: Try every problem the easy way (cost = $1.00), hoping you're right. Scenario B: Try every problem the more thorough, complex way (cost = $10), to make sure it's done right the first time.
Scenario A
tryWithEasyCost = $1
everyProblemRate = 100%
doneCost = $0
easySuccessRate = 80%
easyFailRate = 100% - easySuccessRate = 20%
retryWithHardCost = $10
expectedCost = tryWithEasyCost * everyProblemRate + doneCost * easySuccessRate + retryWithHardCost * easyFailRate
= $1 * 100% + $0 * 80% + $10 * 20%
= $3
Scenario B
tryWithHardCost = $10
everyProblemRate = 100%
expectedCost = tryWithHardCost * everyProblemRate
= $10 * 100%
= $10
In general, if we assume the 80/20 rule, then we break even when easyCost = 80% * hardCost. So if the easy way is at least 20% cheaper, then go for it.
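Restating the arithmetic above in general terms, with p standing for the chance that the easy attempt succeeds (p = 80% in the scenarios above):
expectedCost(easy first) = easyCost + (1 - p) * hardCost
expectedCost(hard only) = hardCost
breakEven: easyCost = p * hardCost (with p = 80%, easyCost = 80% * hardCost)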
Without disagreeing with this (I don't), it's nevertheless interesting to note that Scenario B is simpler than Scenario A. Not more efficient, not preferable, not anything... but simpler. --CarlManaster
If the EightyTwentyRule is universally true, then after the first pass the 20% of the problem which didn't get done still has 80% of the original resources left to solve it. Then you can apply the same rule to that remaining part, meaning 80% of the 20% gets solved with 20% of the 80% of the resources. That means you solve another 16% of the original problem with 16% of the original resources. The total work done now is 96% and the total remaining resources are 64%.
Apply the same 80/20 rule again and you can solve 80% of 4% (that's 3.2%) by using 20% of 64% (that's 12.8%), so 99.2% of the work gets solved using 48.8% of the resources. You have just over half the resources left to solve less than 1% of the problem, just by applying the 80/20 rule repeatedly. -- GuillermoSchwarz
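In closed form (assuming the same 80/20 split holds at every pass):
workDone(n) = 1 - 0.2^n
resourcesUsed(n) = 1 - 0.8^n
n = 1: 80% done using 20% of the resources
n = 2: 96% done using 36%
n = 3: 99.2% done using 48.8%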
This is the RISC principle! ReducedInstructionSetComputer
It's also the idea behind ThinLocks and OptimisticLocking. Stretching it slightly, it's similar to medical triage.
Moved from RidiculousSimplicityDemandsRidiculousResources, due to a misunderstanding on the part of SamuelFalvo?
A far, far better example is ForthLanguage. Unlike BF, it's actually a real programming language, being used for real-world applications today (see http://www.forth.com/resources/appNotes/index.html). How is it simpler?
- No lexer -- everything in the input stream is delimited by whitespace. Period. That eliminates a significant amount of complexity.
- That sounds an awful lot like a lexer to me. Tokens and whitespace... and comments.
- A lexer does more than just tokenize; it also classifies. Forth's, err, "lexer" (WORD and PARSE) doesn't classify -- it merely tokenizes. As for comments, the \ word is a word like any other, as is (. The difference is that \ and ( themselves parse the input stream as part of their behavior. For example:
: ( [char] ) parse 2drop ; immediate
: \ source >in ! drop ; immediate
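\ Reading these: the ( definition compiles the ")" character via [char]; at
\ run time PARSE scans the input up to that character and 2DROP discards it.
\ The \ definition fetches the input buffer and its length with SOURCE,
\ stores that length into >IN (moving the parse pointer to end-of-line),
\ and drops the buffer address.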
--SamuelFalvo?
- No parser -- everything is executed in the order that it's seen (see the one-liner just after this list). This presupposes ReversePolishNotation, but more importantly, it makes for an incredibly powerful HomoIconicLanguage? in the right hands. See http://www.falvotech.com/blog/index.php?/archives/314-A-New-Forth-Unit-Testing-DSL.html for an example usage of a very convenient DSL for unit testing in Forth. Links to the DSL's implementation exist on the page too.
- No types -- everything ultimately is expressed in terms of "characters" (more accurately, bytes) and "cells". This places a stronger burden on the programmer to write more modular software, but the lack of a type system produces a significantly smaller language interpreter/compiler. For example, an unembellished Forth compiler can fit in about 10 or so lines of well-structured program source.
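To make the "no parser" point above concrete, here is an interactive one-liner (any standard Forth):
3 4 + .  \ push 3, push 4, add, print 7 -- words run strictly left to right, no grammar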
Without a lexer, parser, or type system, writing a compiler for Forth is ridiculously simple, and consumes ridiculously few resources:
32 constant bl  \ ASCII space, the delimiter passed to WORD (most Forths already define BL)
\ This is a functional Forth compiler, albeit a very simple one, written itself
\ in Forth. This will compile to no more than a kilobyte on most 32-bit
\ implementations of Forth.
\ ] starts the compilation, while [ ends it. This convention follows from how
\ one switches in and out of compile-mode from inside a colon-definition.
\ : seven [ 3 4 + ] literal ;
\ is exactly the same as:
\ : seven 7 ;
: validate ( c-addr -- ) c@ bl xor abort" Word not in dictionary, and not a valid number!" ;
: try-number ( c-addr -- ) 0. rot count >number drop validate drop ['] push-literal compile, , ;
: ] begin bl word find dup 0>
if drop execute ( it's an immediate word )
else if compile, ( it's a normal, every-day word )
else try-number
then then
again ;
: [ r> drop ; immediate
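One assumption in the listing: push-literal, the run-time half of the number-handling code, is not defined above and must be supplied by the host system (try-number won't even load without it). On a classic indirect-threaded Forth it could be sketched roughly as:
: push-literal ( -- x )  \ assumed helper, not part of the original listing
  r> dup cell+ >r @ ;    \ fetch the cell compiled after the call, then resume past it
Native-code and optimizing Forths provide their own literal mechanism, so treat this as an illustration of the idea rather than a portable definition.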
Embellishing it with additional features is similarly easy. Whole industries and books are built on compilers and compiler theory. Without diminishing the value of that research, it stands to reason that if folks followed a ForthValue? system, a lot of people would be without a job (at least, in this domain).
- Until ANSI, no dynamic memory management. Even with ANSI, it's used amazingly rarely. Forth developers have learned over the years to manage their memory using pools of various kinds, and to exploit the dictionary itself to reclaim resources in a stack-like manner (a small sketch of that idiom follows this list). Decades later, folks in the FunctionalLanguage community have rediscovered this trick. MlKit performs static analysis of how your program manages memory, trying to allocate objects in stack-arranged regions, all to reduce the need for garbage collection. That way, entire families of objects are disposed of all at once, simply by popping the region stack. Haskell's monads also show similar memory usage patterns (but I'm not sure if its allocator tries as hard as MlKit's). I've blogged a bit about it here: http://www.falvotech.com/blog/index.php?/archives/34-Musings-on-the-ForthStack-Memory-Management-Model.html
- See also AntiCreation, where I vent about the heap and about creating and freeing everything (a reference-counted heap works okay too).
- No built-in strings (this is implied by no data types). Most parsing work is done via references to the contents of in-memory buffers, so Forth code tends not to have explicit string support. Some Forth environments do ship a string library, and if you really need one, third-party libraries exist. Most folks tend to roll their own, per-project implementation of strings.
- Dual-stack architecture eliminates the need for execution frames. This makes Forth code faster than comparably factored C code, because you don't have to constantly propagate values from an outer context to an inner context. It also gives the programmer opportunities for things like structured returns, something Haskell's GHC compiler rediscovers when handling functions which return "into" a case statement.
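Here is the dictionary-as-stack idiom from the memory-management bullet above, as a minimal sketch using only standard words (the buffer name and size are made up for illustration):
marker scratch-done   \ remembers the current dictionary state
create buf 100 allot  \ carve a 100-byte scratch buffer out of the dictionary
\ ... fill and use buf ...
scratch-done          \ running the marker reclaims buf and everything defined after it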
From all this, you get a functional (as in working, not as in referentially transparent) language compiler that is small enough to run even on deeply embedded platforms, where most applications compile to mere kilobytes of memory. It is interactive, and it is expandable and malleable enough to support more advanced features given sufficient machine resources (e.g., don't expect an entire IDE to run on an ATmega chip, but such Forth IDEs do exist on Windows). Its performance is often compared to C's (anywhere from a factor of two slower to as fast as optimized C, depending on how much cash you spend on the compiler; either way, the range is quite admirable).
Not bad, if I do say so myself.
--SamuelFalvo?
So, I suppose we can therefore conclude that ForthLanguage must not be ridiculously simple. It's just 'appropriately' simple. (NoTrueScotsman.)
- Just what is it you're looking for? If ForthLanguage doesn't represent ridiculous simplicity with tangible benefits (which, if you recall, was what the original poster was looking for), what does? --SamuelFalvo?
- Untyped lambda calculus. Or perhaps the S and K combinators, or the One Instruction Set Machine. I'm fairly certain it can't qualify as 'ridiculously simple' if it isn't too simple for you to easily think of complex ideas in the language.
- Ahhh, you're right -- I had to re-read the OP's question and first response before I caught the "demands ridiculous resources" part. In that case, then, you're correct -- ForthLanguage doesn't qualify. For some reason, I was thinking that this page was RidiculousSimplicityGivesRidiculousResources. --SamuelFalvo?
See DoMoreWithLess, RidiculousSimplicityDemandsRidiculousResources, BetterSorryThanSafe
CategoryQuote