Self Discipline Discussion

(Based on discussion started under ObjectsHaveFailed)

{Appealing to SelfDiscipline and prudence is reasonable once you're stuck with an unsuitable tool, but should be an appeal of last resort. A language should appeal to LazinessImpatienceHubris, even encourage it, supporting programmers in achieving "good" results in a very wide range of use-cases even without SelfDiscipline or wisdom. ExploratoryProgramming, iterating via progressive refactoring from 'get it done quick' solutions by programmers who are initially clueless about which abstractions they need (and, likely, initially clueless about which abstractions are available) should be the 'default' mode for development. The LanguageDesigner, not the developer, is the one who should step up and be 'prudent'.}

No tool can fix problems caused by lack of SelfDiscipline (or management-forced rush-jobs). I will agree some tools offer techniques that help code stand up slightly better under such chaos, but only slightly, and perhaps with other side-effects. Putting up thick walls and lots of border guards often just encourages screwy work-arounds, and slows progress to boot. -t

{You make some erroneous assumptions. First, the absence of SelfDiscipline is not "chaos". The absence of SelfDiscipline is rule by LazinessImpatienceHubris and other 'immediate' short-term incentives and vices - instant gratification rather than wisdom, rules, and foresight. These are predictable forces, and may therefore be shaped and leveraged by tools and rules to achieve order. Second, while raising 'barriers' against 'bad behavior' ("thick walls and border guards") is a viable approach to guiding and shaping vice, it is hardly the only option. One may favor the carrot over the stick, providing short-term incentives for step-wise progression towards doing the RightThing (such as reducing code, better performance, more generic code; consider the story at http://www.vetta.org/2008/05/scipy-the-embarrassing-way-to-code/). The idea isn't to get the RightThing immediately, but rather to maximize incentives to improve over the lifetime of a project, to make regression more difficult than progress (i.e. via compatibility, test failures, etc.), and to minimize barriers for progress. For problems anticipated or encountered in many projects, one may also refactor solutions into language primitives or composable libraries and minimize disincentives against that form of reuse (via resolving security, concurrency, too-much-indirection, and bloat issues). As is, for example, it is typical for C/C++ programmers to avoid use of powerful libraries like BoostLibraries because of concerns unrelated to their quality.}

{You need to kill your preconception that guidance outside SelfDiscipline means painful BondageAndDiscipline. Instead, ask yourself how you might harness LazinessImpatienceHubris to drive program development in the same manner that Capitalism harnesses greed to drive the economy. That harness doesn't need to be a painful one. SelfDiscipline, by comparison, can also be very painful (because resisting temptation is painful), as can be extracting the bullet from one's foot after SelfDiscipline fails.}

But building libraries and conventions to better fit the domain is a form of SelfDiscipline. One could keep doing it the long way forever and ever. I suspect this is going to turn into a LaynesLaw battle over "discipline". The "embarrassing" article shows SelfDiscipline in deciding to learn and practice CollectionOrientedProgramming techniques. You haven't given any examples that are not tied to SelfDiscipline in some way. -t

Also note that "clever" shortcuts are sometimes difficult for other programmers to follow and/or change to fit new requirements. Raw iteration tends to be more flexible than CollectionOrientedProgramming because one can usually diddle data at a finer level, for example. That doesn't mean that I'm against COP by any stretch, but explaining here that the trade-offs are not always simple to ascertain. The EightyTwentyRule is always lurking about ready to bite our higher abstractions in neck. The "embarrassing" article smells a bit in that source-code size is the authors only metric. Readability is not given any mention (as I remember). He hasn't saved total time if the next programmer to read it has to read and learn math matrix API's to figure it out and spend more time than he would have taken to read the 10 original pages. Source code size has to be weighed against other factors, including readability by non-original-authors. -t

{The question of SelfDiscipline vs. LazinessImpatienceHubris is principally one of motivation, not of outcome, though motivation is often a significant factor in outcome. The "embarrassing" article does involve an author who is committed to learning, but it is important to note that the development itself was iterative and motivated at each individual stage by short-term incentives such as eliminating bugs and improving performance. Further, the education itself was also motivated by short-term results; that is important, since if a language and IDE can achieve both iterative progress and iterative education without ever involving SelfDiscipline, then programming can reach a wider audience. I would consider it the IDE's responsibility to help teach new developers what a 'shortcut' is doing, to explore code both interactively and by observation and ProgressiveDisclosure. And, to the contrary of what you say, it is a net benefit if persons reading the code go the slight extra step to learn some matrix library APIs, since now they are educated in a tool they are likely to reuse in the future, as opposed to wasting their time educating themselves in eight pages of application-specific code that will never be reused. (I am assuming that one tends to write code on the same subjects one reads, whether that reading occurs via peer-review or maintenance programming, but I do not believe this is an unreasonable assumption.)}

{Lazy and impatient people don't like "doing it the long way forever and ever", and will naturally seek (or develop) 'shortcuts'. In a poorly designed language, the 'shortcut' variation reached by the lazy fellow is likely a crap-shoot with regards to long-term viability - it is often the case in ExploratoryProgramming that you reach YouCantGetThereFromHere positions that might have been avoided with excellent foresight. To 'leverage' LazinessImpatienceHubris in development means designing the language and IDE such that (a) lazy and impatient programmers tend to avoid YouCantGetThereFromHere 'traps' (i.e. ideally it takes special or obvious effort to trap oneself or paint oneself into a corner, even if not 'foolproof'), and (b) changes and shortcuts tend to be progressive (i.e. new abstractions) and encouraged (i.e. no performance penalty for abstraction) rather than regressive or discouraged.}

{Also, I'm hardly saying be rid of SelfDiscipline (and prudence, foresight) entirely, but rather to marginalize it - to make SelfDiscipline something only needed by the elite 'guru' programmers who develop new protocols and create major new library APIs, and to ensure that even guru programmers can take off their guru hats most of the time and just code away with little thought towards the future. SelfDiscipline may offer some advantages, but should not be required for success. I'm considered a guru programmer among my coworkers, but that's a heavy hat to wear.}

You seem to be confusing SelfDiscipline with BigDesignUpFront. Discipline can be incremental. And finding shortcuts to avoid work could also be a form of discipline. Avoiding work and discipline are not necessarily mutually exclusive. Channeling one emotion or irritation to avoid another can be "discipline". And the person making better and/or domain-specific APIs for other developers is using his/her discipline to perhaps compensate for others' lack of it. Thus, discipline is still happening, just not evenly. Good code without at least a fair level of discipline is just not likely in my opinion.

{SelfDiscipline does not mean BigDesignUpFront, but BigDesignUpFront requires considerable SelfDiscipline. I am not confusing them, but rather have been using SelfDiscipline as shorthand to refer to the sort of methodical 'prudence' you suggested earlier: "comment well but not redundantly, break things up into functions or units where it's prudent, document the interfaces and dependencies well, choose clean design over fads, and try to spend some up-front design time to keep it simple but easy-to-change rather than just code the first design that comes to mind". By avoiding SelfDiscipline, I'm saying do the opposite: "don't bother with comments except to organize your own thoughts, break things up primarily where you feel doing so makes things easier on you, and do the first easy thing that comes to mind that will possibly work - you can always revert or repair it later". These are essentially DoTheSimplestThingThatCouldPossiblyWork where 'simplest' is the greedy 'whatever is easiest for me' interpretation, but without the hardy RefactorMercilessly as SelfDiscipline (instead, refactor when doing so makes things easier on you, too). However, that approach won't work with just any LanguageDesign - the language itself must be designed so this lazy approach to programming can bridge its way out of any corners it paints itself into. Ideally, the IDE ought to also support cross-project refactoring, which can eliminate once-per-app BoilerPlateCode and can allow the occasional refactoring by one individual to improve many independent projects. As an aside (though, as you noted, this whole discussion is an aside relative to the topic), I'm of the somewhat controversial opinion that 'cold comments' and 'dead documentation' are mostly harmful - they're invariably incorrect, out-of-date, and operate as crutches in a manner that prevents development of more powerful utilities for expressing intent and avoiding semantic noise. Despite the controversy of that opinion, there are many who share it. Of course, I'm still not talking about APIs, but rather about application and implementation code. Documentation for APIs and protocols for use by other developers is a different issue.}

{I do agree with your thought that SystemsSoftware developers and LanguageDesigners may be using SelfDiscipline to compensate for the lack thereof on the part of users. However, you should acknowledge the huge leverage factor. By proverb, an ounce of prevention may save a pound of cure. In LanguageDesign, the factor is much greater than a factor of sixteen; a few hours of design with prudent foresight may save several million hours of work-arounds, debugging, BoilerPlateCode, and so on. If a language becomes a popular one, a responsible LanguageDesigner can claim billion-dollar mistakes - and be thought optimistic.}

I'd like to see a specific example.

{A specific claim for a billion-dollar mistake: Tony Hoare claims Null References (http://qconlondon.com/london-2009/presentation/Null+References:+The+Billion+Dollar+Mistake). But, by even a rough, informal mathematical analysis, if your language doesn't have billion-dollar design mistakes it's likely an unpopular language. Popular languages are used by several million programmers, and any language-design error that averages a week out of each programmer's time is easily a billion-dollar mistake. Other than separating 'nullable' references, a few more errors that could have been relatively easily resolved by language design include off-by-one array-indexing errors, the lack of support for domain values via numbers-with-units (i.e. so users can add 12 feet to 2.4 meters and get a sensible outcome), and the lack of memory-safe array access. That's even before considering higher-level design failures - the sort that has programmers writing boiler-plate code or otherwise working around the language rather than with it.}
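As a concrete illustration of the numbers-with-units point, here is a minimal C++ sketch of the kind of support a language or its standard library could provide natively; the Length type and the feet/meters helpers are invented for illustration and are not taken from any existing library:

  // Hypothetical sketch (names invented): a length type that carries its
  // unit, so mixed-unit arithmetic stays meaningful instead of silently wrong.
  #include <iostream>

  struct Length {
      double meters;                                    // canonical internal unit
      Length operator+(Length other) const { return {meters + other.meters}; }
  };

  Length feet(double f)   { return {f * 0.3048}; }      // convert on construction
  Length meters(double m) { return {m}; }

  int main() {
      Length total = feet(12) + meters(2.4);            // 12 ft + 2.4 m
      std::cout << total.meters << " m\n";              // prints roughly 6.06 m
  }

With language-level support, numeric values could carry units by default and the conversion rules would not need to be reinvented, library by library, per application.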

I disagree that those are a "huge leverage factor". Annoyances that occasionally snap back, sure, but not huge. Besides, every language has LanguageGotchas. And usually there's trade-offs for things such as memory-safe array access. And I'm not sure adding tons of features to a language is always a good thing. I know you believe in that, but I don't want to repeat another one of those GodLanguage debates here. Thus, I'm bailing out here.

{If you write something, and therefore a thousand people don't need to write something, that's a 1000:1 leverage factor. That's huge, and yet that's a small fraction of what LanguageDesign has the potential to provide. As far as your "there are usually trade-offs", sure, but there is also progress - that is, this is NOT a ZeroSumGame despite your almost religious faith in 'sacrifice'. The best things one can 'sacrifice' are one's bad habits, and the only real trade-off is the loss of familiarity. The same is true in LanguageDesign.}

I agree that domain-specific libraries can be quite helpful. Language design? Well, we'll have to part ways there. Languages tend to be a subjective thing. Customizing a language to fit a developer's head or group of developers may provide some productivity increase, but nobody knows how to do this effectively yet.

{That line of bullshit? Find some people who can program HTTP servers and browsers productively in BrainfuckLanguage or InterCal, and maybe I'll believe you. It is easy to prove there are objectively 'harmful' ways to design languages, because one can intentionally make the programmer's life harder. The ability to intentionally design a language that forces programmers to jump through extra hoops to write non-trivial programs is a formal proof (by construction) that programming languages are objective things and that LanguageDesign is not a ZeroSumGame. You've seen that proof before. It seems you can't be convinced by such proofs because they are too obvious and appear in your mind 'contrived'... but if you actually grokked logic then you'd appreciate such obvious and simple proofs, and you'd know that being 'contrived' doesn't even slightly hurt proof-by-construction. Since you've seen that formal proof before, but persist in your bullshit line of reasoning anyway, you are a crank - a man who can't be turned. You may continue with faith in your 'subjectivity' theories, Mr. TopMind, but your argument remains unconvincing.}

Can you prove that there's no alien mind that is capable of being productive using BrainfuckLanguage? No you can't. QED. And take your vast rudeness and shove it up your bleeding ass you arrogant bastard!

{If you can't reply with anything more intelligent than HandWaving MentalMasturbation about 'alien' minds and a sputtering tantrum about rudeness, perhaps you should not reply at all.}

{I have no need to concern myself with your MentalMasturbation about 'alien' minds and whatever magical properties you imagine they'll possess. Your "language properties are subjective because it's all about fitting developers' heads" line of thought requires an insane assumption that developers' heads vary so widely that one cannot identify any broad truths about them whatsoever, and that is simply a lie. I say it's a lie because even you know it isn't true, but you resort to that sophistry and HandWaving MentalMasturbation anyway when you are unwilling to grant reasonable points. The human mind of a software developer has objective properties - its basis in pattern-matching and associative memory and the errors that entails, its limited knowledge about how APIs are implemented, its limited knowledge of software requirements at development time, its need to depend on assumptions, its ability to exhaust and err making even simple typographical mistakes, its very limited ability to recall the exact details of code written or read more than a year ago and the implication on long software projects, etc. Languages are not 'subjective' things because, frankly, human minds are not 'subjective' things. Nor, for that matter, is the subject matter - computation - which follows a variety of mathematical and physical laws.}

Aaaah, so human psychology matters now, eh???? If there are "mathematical and physical laws" that prove your GodLanguage is the one and true way, then produce your proofs or shut the hell up about them. Don't tell me, there is no mathematical proof, but rather indirect round-about deductive Rube Goldberg Sherlock reasoning that leads to your grand vision.

{Mathematical and physical laws and recognizing human limits (which is not 'human psychology' except where the two coincide) are very good for telling you what is not a solution, but they are not so useful for proving something is a "one and true way". Further, I've never claimed a 'GodLanguage' or a "one and true way", so halt with your StrawMan sophistry. To recognize languages as objective doesn't require knowing a 'one and true way' language; for proof of objectivity, it is entirely sufficient to identify - or even contrive - objectively 'wrong' languages. And I suspect that any sound or cogent line of reasoning seems to Mr. TopMind like "indirect round-about deductive Rube Goldberg Sherlock reasoning" if he doesn't already agree with the conclusion.}

I've agreed before that "psychology" may not be the best word, but I cannot find a better word at this point. It's a combination of physiology, psychology, and biology. How about WetWare?

{The connotations of 'psychology' are troublesome, but there are places where the sprawling field of psychology overlaps something useful for LanguageDesign. Many limits named above, though, would also be limits in hardware - such as limited or unreliable memory, use of associative memory, and limited advanced knowledge of requirements or implementation of external modules. I think knowing what developers can do, and what they can do easily is important. I suppose 'WetWare' can work.}

And a language with an objective flaw is not necessarily a useless language. The value of a language comes from a lot of different factors, and having a low number of flaws is only part of the load. Almost any non-trivial software these days has at least one flaw.

{I've never said languages with objective flaws are 'useless'. What I have said is that they are 'objective'. Don't mistake the two. A language isn't necessarily 'useless' in an absolute sense, but it certainly may be 'more or less useful' than established languages. I'll note that being less useful than established languages is 'useless' in the practical sense: any new language must compete with the incumbent languages, and should have significant advantages to pay for the effort of establishing the language. The important technical advantages are the ones that feed into that leverage factor a good LanguageDesign should offer. Of course, a fist-full of money from a titan like Sun or Google or Microsoft is another sort of advantage that can pay for establishing the language.}

{Consider writing a simple C/C++ program that can achieve an 'observer' pattern over simple expressions like 'a + b * c'. Such patterns would be useful for high-performance EventDrivenProgramming. The procedural approach is to simply 'poll' the expression - repeatedly re-compute it and report any changes from the last computation. Polling has bad engineering properties (high latency, huge amount of wasted re-computes - wasted energy, wasted time - even when there is no change). Thus programmers are pressured to find better solutions, such as setting up signals to let them know when to re-compute. This itself is troublesome because one must associate signals with the correct expressions... i.e. knowing to hook a, b, and c is not easy and a change in the expression requires changing other code as well. Even SelfDiscipline by skilled programmers is insufficient to avoid errors when changing one expression requires also changing another one, and this is even more of a problem in refactoring. OOP can take some advantage by reifying the concept of observable expressions, and representing 'a + b * c' as an object graph; this would allow the objects themselves to manage the invalidation process and cause re-computes at need. However, at this point the programmer is inventing an entirely new programming language atop the old one - i.e. each 'operator' in the expression needs a new object constructor (usually a new class) unless already expressed as a FunctorObject. Making all this work with other frameworks (such as concurrency, or persistence, or SqLite) is no easy feat - and generally not something SelfDiscipline can help with. When programmers are forced to reinvent one language inside another, they themselves are forced to do the work of LanguageDesigners - a field in which most are (reasonably) uneducated and inexperienced. A naive implementation of the reactive program is likely to have 'glitches' akin to (x*(2+x)) with x going from 3 to 4 and returning 15,18,24 or 15,20,24 (one sub-expression changing before the other) rather than an atomic change from 15 to 24.}
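To make the reification step concrete, here is a rough C++ sketch (class names invented for illustration, not taken from the discussion above) of 'a + b * c' expressed as an observable object graph, with each operator becoming its own node class:

  // Illustrative sketch (class names invented): 'a + b * c' reified as an
  // observable object graph, so changes push through the graph instead of
  // being polled.
  #include <functional>
  #include <iostream>
  #include <vector>

  struct Node {
      std::vector<Node*> dependents;          // nodes that must recompute when we change
      std::function<void(double)> on_change;  // optional external observer
      double value = 0;
      virtual void recompute() {}             // operator nodes override this
      void notify() {
          if (on_change) on_change(value);
          for (Node* d : dependents) { d->recompute(); d->notify(); }
      }
  };

  struct Var : Node {                         // a leaf input such as a, b, or c
      void set(double v) { value = v; notify(); }
  };

  struct Mul : Node {                         // one new class per operator...
      Node *l, *r;
      Mul(Node* a, Node* b) : l(a), r(b) {
          l->dependents.push_back(this); r->dependents.push_back(this);
          recompute();
      }
      void recompute() override { value = l->value * r->value; }
  };

  struct Add : Node {
      Node *l, *r;
      Add(Node* a, Node* b) : l(a), r(b) {
          l->dependents.push_back(this); r->dependents.push_back(this);
          recompute();
      }
      void recompute() override { value = l->value + r->value; }
  };

  int main() {
      Var a, b, c;
      a.set(1); b.set(2); c.set(3);
      Mul bc(&b, &c);
      Add expr(&a, &bc);                      // the graph for 'a + b * c'
      expr.on_change = [](double v) { std::cout << "a+b*c = " << v << "\n"; };
      b.set(5);                               // push-based update: prints 16, no polling
  }

Note that the eager, depth-first propagation in this sketch is exactly the kind of naive scheme that produces the 15,18,24-style glitches mentioned above: when one input feeds two paths of the graph, observers can see an intermediate value unless the update is made atomic.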

{This sort of pattern comes up again, and again, and again in software development, and is not domain-specific. Reactive programming is by no means 'specific' to any particular domain. ReinventingTheDatabaseInApplication is one re-invention you observe because it's an area to ride your HobbyHorse. As a LanguageDesigner, I observe many more problems of this nature (configurable processing pipelines, synchronization and concurrency, self-healing or resilience, GracefulDegradation, DependencyInjection, PolicyInjection, ProcessAccounting, partial-failure and error handling, all those things you derisively associate with a 'GodLanguage'). Providing the necessary features at the language layer, such that the features are supported across frameworks and libraries, and such that fewer frameworks are required, is the best way (of the ways I know) to support them, to avoid their repeated reinvention, to achieve the huge leverage factor that comes from LanguageDesign. More relevantly to this discussion, SelfDiscipline almost never helps with problems of this nature: with SelfDiscipline alone, you're still forced to re-invent everything once per application because you need to reinvent it in a peculiar way that will work with the frameworks and libraries and concurrency model and persistence model and all the similar 'reinventions' that are also specific to your application. You can use a database in combination with procedural code if the API developers were using the same tool, perhaps, but your ODBC isn't going to help much with using a database in combination with reactive programming - not without forcing changes to the API itself.}

{The argument that language properties are 'subjective' - that this reinvention is 'subjective' - is nonsense. One might as well claim that database implementations are subjective, too, and re-invent relations in BrainfuckLanguage on a per-application basis.}

I'm not sure what this is tied to, but it sounds like a misunderstanding. I was probably talking about benefits, not implementation.

{I spake also of benefits. Language support to avoid per-application reinvention and re-implementation of KeyLanguageFeatures was the specific benefit mentioned. To say a language's benefits are subjective is the same as saying all its features and support are subjective - i.e. that there is no objective distinction between a language that natively supports relations, and a language that doesn't even readily allow you to re-use a framework that implements relations from one application to another (BrainfuckLanguage has that problem, because it can't readily abstract anything). Saying "language benefits are subjective" implies nonsense, and therefore must be wrong.}


Do you really need to know context? More importantly, should you need to know context? (page anchor: usage context)

Without knowing more about the context of usage/need of this expression example, I cannot really comment. For occasional small-scale use, polling may be plenty sufficient and thus there's no need to try to invent a generic dynamic expression evaluator for those few cases. -t

You only need to "know more about context" to know whether the flawed solution is still viable. That you need to know the context in which the polling solution is used becomes, in part, what makes it a "flawed" solution, TopMind. If you need to know the context of an implementation, then you also cannot reuse the code in a new context without first knowing its implementation. If you need to know the implementation before you can reuse code, that simply doesn't scale from a productivity standpoint - it would mean that the only libraries you can use are the ones you know inside and out.

It is true that scale is important to the PerformanceRisk? of certain designs, but one should note that, while the polling approach has trouble scaling up, the reactive approach has very little trouble scaling down. This little observation is important, because it means that, if the reactive approach were provided from the beginning, programmers could freely use it from the first moment they see a need rather than slowly 'reinventing' it in the application - a painful process that requires whole-program transforms - as they discover ever-growing need or utility in reactivity and partial-recomputes, i.e. as the expressions get larger or more expensive, as the conditional filters on the expressions grow more complex, as the singular 'a+b*c' grows into a thousand similar instances. If the language avoids gotchas in this particular area - i.e. avoids seemingly 'arbitrary' restrictions on the use of reactivity - then the lazy way to program (use the most readily available tool to do the job) also becomes the wise way to program, and no SelfDiscipline or foresight is necessary (except by the LanguageDeveloper?, of course).

By comparison, you might consider an issue nearer and dearer to your own heart: tables (or relations) vs. arrays. Similar to reactivity, arrays work well enough at a small scale while ignoring those pesky concurrency concerns. Thus, "without knowing more about context of the usage/need", you should - by your own logic - "not comment" that tables/relations might make better language primitives. Of course, I don't buy your logic; I think relations/sets would make fine language primitives - objectively better than arrays. As with reactivity, sets scale down well enough; indeed, one could support fixed-size or pre-allocated sets just as easily as one can support fixed-size or pre-allocated vectors, and the implementation itself can decide the 'intelligent' cut-offs for beginning to add indices, and a compiler could even implement integer-indexed sets/relations atop arrays. The "trade-off"? Well, one exists: one needs a smarter compiler (one supporting a few specialized optimizations) to achieve the same performance as native arrays in the same situations one would use native arrays. But the payoff is huge: programmers could lazily use sets/relations and wouldn't need to "know more about context of the usage/need" to make it a prudent, wise, SelfDisciplined decision. They wouldn't need to re-invent maps and indices atop arrays. The lazy solution would be the correct one in the vast majority of contexts.
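As a rough illustration of "sets that scale down", here is a hypothetical C++ sketch (the ScalableSet name and the cut-off value are invented) of a set that stores small collections in a flat array and only builds an index once it grows past a threshold - the sort of decision a smart compiler or library could make on the programmer's behalf:

  // Hedged sketch (name and cut-off invented): a set that stores small
  // collections in a flat array and only builds a hash index once it grows,
  // so the relational abstraction "scales down" to array-like cost.
  #include <algorithm>
  #include <iostream>
  #include <unordered_set>
  #include <vector>

  class ScalableSet {
      static constexpr size_t kIndexThreshold = 64;  // arbitrary illustrative cut-off
      std::vector<int> items;                        // small case: plain linear storage
      std::unordered_set<int> index;                 // built lazily once we grow
      bool indexed = false;
  public:
      void insert(int v) {
          if (contains(v)) return;                   // sets ignore duplicates
          items.push_back(v);
          if (indexed) index.insert(v);
          else if (items.size() > kIndexThreshold) {
              index.insert(items.begin(), items.end());
              indexed = true;
          }
      }
      bool contains(int v) const {
          if (indexed) return index.count(v) != 0;   // indexed lookup for large sets
          return std::find(items.begin(), items.end(), v) != items.end();
      }
      size_t size() const { return items.size(); }
  };

  int main() {
      ScalableSet s;
      for (int i = 0; i < 1000; ++i) s.insert(i % 100);
      std::cout << s.size() << "\n";                 // prints 100
  }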

Your fundamental argument above amounts to: "If X is not needed in some situations, then there is no need to invent X at the language layer". Your exact words were "for occasional small-scale use, polling may be plenty sufficient and thus there's no need to try to invent a generic dynamic expression evaluator for those few cases", but the context of this discussion is LanguageDesign. That's simply flawed; it will lead you to a TuringTarpit language that has no features whatsoever. Relevantly, you should concern yourself with the applications where X is needed or useful - or at least not harmful. That's where the leverage advantage comes from, and it often only takes a few dozen uses for a language feature to pay for its 'trade-offs'.

Not all trade-offs are acceptable, but in my experience as a LanguageDesigner I'll say with confidence that with enough search and thought and experiment, you can discover a 'trade-off' that is a 'trade-up' by almost every metric... the design-equivalent of sacrificing a bad habit.

For the 'reactivity' case, for example, the trade-off is to enforce a certain form of SeparateIoFromCalculation that avoids SideEffects inside the expressions. This allows arbitrary degrees of (a) caching, (b) data-parallelism, (c) partial-evaluation, and (d) sub-expression abstraction, refactoring, and reuse. Yet this separation doesn't actually 'hurt' anything - it doesn't prevent making the expressions themselves react to external side-effects or introducing side-effects as reactions to changes. LazinessImpatienceHubris will follow the encouraged discipline if only because that's the laziest, fastest way to achieve a working, high-performance program - separation will be easier to code than explicitly weaving side-effects with reactions.
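A small, hypothetical C++ sketch of that SeparateIoFromCalculation shape (names invented): the expression itself is a pure function of its inputs, so it can be cached, skipped, or parallelized freely, while side-effects are confined to the reaction that fires when the result actually changes:

  // Hypothetical sketch (names invented): pure calculation separated from IO.
  #include <functional>
  #include <iostream>
  #include <optional>

  double expr(double a, double b, double c) { return a + b * c; }  // pure: no IO, no hidden state

  struct Reactor {
      std::optional<double> last;              // cache of the previous result
      std::function<void(double)> on_change;   // the only place side-effects happen
      void update(double a, double b, double c) {
          double v = expr(a, b, c);            // safe to recompute (or skip) at will
          if (!last || *last != v) { last = v; if (on_change) on_change(v); }
      }
  };

  int main() {
      Reactor r;
      r.on_change = [](double v) { std::cout << "changed: " << v << "\n"; };
      r.update(1, 2, 3);   // prints changed: 7
      r.update(1, 2, 3);   // same inputs, same result: no side-effect fires
      r.update(1, 5, 3);   // prints changed: 16
  }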

This is exactly the sort of trade-off that is ultimately a 'trade-up' for productivity: the resulting code, which eschews interweaving of SideEffects, will be reusable in more contexts, will require less knowledge of the implementation or usage context, and will have good engineering properties (scalable performance, suitable for automated safety analysis, suitable for refactoring and abstraction, etc.) - and all of this comes from being 'lazy' in a context where laziness is supported and rewarded, and the laziness itself supports productivity further since programmers don't need to know usage context, don't need to stop and think ahead or be especially 'prudent'.

Only the LanguageDesigner bears the massive burden of 'foresight', but even that 'burden' can be eliminated: LanguageDesign itself can be turned into an iterative process, where one mines frameworks and SoftwareDesignPatterns from established languages to know which features people 'need' based on the simple fact that they have needed it often in the past. Once the need for foresight is eliminated, the LanguageDesigner only needs SelfDiscipline and rigor to figure out how to achieve the features with acceptable trade-offs.


SelfDiscipline + Unix is GoodEnough for TopMind (page-anchor: unix context)

Your solution(s) is a TechniqueWithManyPrerequisites, resembling a GodLanguage. Mine is incremental: use unix-style interfaces to tie existing languages to various tools and services. You need to demonstrate your technique in production for roughly a decade before it's worth considering. The unix philosophy of tool/service interconnection has been around for almost half a century. (A dedicated TOP language would be nice, but one can get decent TOP without it.) -t

First, incremental approaches are over-rated unless you can control for CodeOwnership issues. In particular, they leave behind a long history of BackwardsCompatibility problems. Effectively, all the problems with older approaches are additive, but the benefits of older approaches are marginal.

Second, the unix PipesAndFilters and EverythingIsa File approach was a partial success, but at the same time a magnificent failure - one whose shortcomings effectively illustrated the need for richer language-like features.

The unix approach suffered difficulties when dealing with global resources (such as shared configuration files), multi-instance concurrency, the various forms of synchronization, and interactive elements such as GUIs.

Often it proves easier to re-implement whatever features the other process was supposed to provide, turning applications into ever-larger monoliths. Worse, this is a self-perpetuating cycle, since no performance-conscious programmer wants to invoke a monolith from a small application just to get a small subset of its features. The above forces have pressured libraries, as opposed to InterProcessCommunication, into the position of the most prominent and popular form of program extension.

You're imagining that the unix style, plus a heavy helping of SelfDiscipline, is GoodEnough. This makes me think you haven't noticed the unix approach failing, haven't recognized the clues identifying failure: the heavy use of DLLs, the fact that socket connections cannot be easily described in shell-scripts, the ever-growing dependence on frameworks.

If the above problems could be solved, the multi-process approach would certainly be viable. However, I think it important to recognize that language-based solutions and new-OS-based-solutions are equivalent by almost every metric (LanguageIsAnOs, or close-enough). No matter what, a TechniqueWithManyPrerequisites will be required.

You only talk in generalized claims and convoluted innuendos. I want to see specific scenarios of failure or problems. Semi-realistic code samples with sample input and the specific line numbers where a specifically described problem occurs if applicable. If you don't wish to provide such, then I am done here and will LetTheReaderDecide. RaceTheDamnedCar or I leave. Unix has proven successful in the marketplace and is more popular with techies than Microsoft. You need more than vague innuendos to dethrone it. -t

Your demand is naive, like asking me to prove why a forest is dying by pointing at a specific piece of bark and refusing to recognize more subtle forces like climate change, pollution, or infestation by non-native parasites. There is no specific code or line number that demonstrates why the old unix glue-able applications approach has fallen out of favor; there are simply a large number of emergent forces and cycles that make programmers reluctant to favor it - despite its advantages - and many of those forces were described above. The proof of failure is in the application libraries, and in the heavy use of dynamic libraries and plugins instead of multi-process designs. You're free to peruse these libraries, if you wish - online resources include Freshmeat, Sourceforge, the Ubuntu app guide, etc.

And this is not "dethroning" Unix, but rather recognizing that the old 'unix' approach to application design fell off the throne a long time ago. I'd honestly like to get glue-able apps back on the throne, but I'm not the sort of delusional moron who pretends that the failure was due entirely to a 'fad'. If you ask 'why' the unix approach failed, and search for an answer, you'll find a number of issues that it failed to address - many of which are named above. You can interview developers, ask why they chose not to use the PipesAndFilters style for their app, or why they used a DLL-plugin style instead, and so on. Having been one such developer, most of my answers would be from the above list.

If you continue to speak in terms of generalities, there will be no real communication between us. If you want practitioners to care about your pet solutions, you must find a way to demonstrate them being better. Can we at least focus on the code creation and maintenance aspect? Can you show a scenario where Unix would require more code and/or more code changes than your pet solution?

I have not been promoting a pet solution. I have been demoting the Unix solution. There is a difference. When I'm attempting to make people care about my specific solution - which would be totally inappropriate for this page - I'll be sure to provide examples.

If you can show noticeable improvement measurements in say 5 to 10 different factors, then your claim may at least deserve to spark further inquiry. Same with your forest example.

And I never said to always use PipesAndFilters for everything. But for language-neutrality, they are hard to beat. Re-inventing every service for every app language is not economical. It is a huge violation of OnceAndOnlyOnce. Related: WhereToImplementPattern.

Yes, PipesAndFilters have lots of nice properties. Now, if only they weren't a failure for reasons conjectured above...

I wish to see a semi-realistic UseCase/scenario of failure. The above is vague and indirect and non-specific. This is not an unrealistic request. Most people want to see real-world problems, not just chalkboard problems.

Sorry, TopMind, but that is an unrealistic request. For four reasons: First, realistic scenarios don't fit into a WikiWiki discussion - only ArgumentByLabToy or simplified example, plus generalization by logic, fits into a debate. Second, you have this strange definition of 'realistic scenario' that excludes things that don't involve report generation and business applications, both of which are outside my expertise. Third, realistic scenarios tend to be complex and messy to the degree that the issues relevant to the debate are obscured by issues irrelevant to the debate, thus attempting to demonstrate anything by the use of realistic scenarios strikes me as unrealistic. Fourth, realistic scenarios are precisely the set of scenarios that are easy to achieve with the tools that exist today, which makes them a remarkably poor vector for demonstrating the weaknesses of a given approach in achieving a desirable set of features... i.e. it is largely an "unrealistic scenario" to use Unix PipesAndFilters to build a GUI because of the weaknesses listed above. If you think a "realistic scenario" will be relevant to defending you in the debate, I invite you to find one and bring it in - find an example of PipesAndFilters used for GUI to demonstrate that the weakness doesn't exist. At the very least, you can learn how unwieldy "realistic examples" happen to be.

I can describe even the most complex problem-area to someone if I simply try hard enough. It is doable; it just takes articulative mind power and patience to pull it off. And while it's true one may not be able to describe an entire application, it is usually possible to describe specific problem scenarios/areas. If it's beyond description, then it's also likely beyond rational mental analysis, and thus is merely an emotion-based feeling in your head which is unconsciously cherry-picking pet factors out of a big sea of factors. -t

You make a serious error if you equate "describing a problem area" to demonstrating a property. The examples you've offered in the past of your oh-so-superior articulation essentially rely upon the audience accepting you at your word as opposed to looking for any possible reason to reject your examples and explanations (i.e. "why do I need tables? I could easily use arrays for all the examples you've listed in the WikiWiki so far... they're so small, no indexing needed!" - strikes me as no less reasonable than your "but we could use polling here... it's just a single case!"). How many people in your audience demand you find a definition before moving on, then demand you find an "algorithm" for your definitions, then use "but it could be done another way" (in some arbitrary context) to reject your simple examples, or reject them as not being "realistic" because they don't involve their favorite problem domain, and so on? If you are oh so magnificent in your articulation, then please explain, with realistic examples, how to deal with an audience that has an agenda in rejecting everything you say, so that I know how to deal with you. Perhaps I should emulate your behavior for a while - which will require that I act stupendously uneducated - so that you can see what it's like to deal with you. Or, perhaps, I should ask you to prove your mettle on comp.object, where (similar to WikiWiki) most everyone thinks you're a troll. Anyhow, the simple examples and lists I present are very subject to mental analysis. Of course, since you brought up Unix methodologies, I do assume a certain level of practitioner's knowledge in Unix - i.e. basic knowledge about how Unix PipesAndFilters hook up under-the-hood, basic knowledge about how sockets work, and so on, and basic knowledge of each of the synchronization types named above. If you don't already know those things, you really shouldn't be discussing the 'merits' of Unix from a developer's standpoint.

These specific examples should be obvious to anyone who understands Unix. For example, if you consider which files are global resources, you could quickly identify anything in the /etc and /usr/share directories, including the password file and a large number of configuration files. Many non-trivial components end up using global resources, often for both input and output. If you also know what 'multi-instance concurrency' means - which follows its denotation very simply: multiple instances of a process (i.e. for multiple users) - and you also grok the various concurrency issues (race conditions, time reversals, etc.), then no more needs to be said.

As far as pipe-based GUIs go, it could be done if a decent language and conventions are created.

Where's your realistic example? And if you want to defend that the Unix approach works, then you need to do it with Unix PipesAndFilters, TopMind. Resorting to "creating a decent language" sounds like something I've said. Yes, with a decent language, we could make GUIs out of PipesAndFilters again. Now you just need to perform and justify a RequirementsAnalysis for this 'decent language'.

So far everyone tries to make app-language-specific GUI engines out of historical habit. People tend to copy what they know already and our GUI habits were born of Parc Place SmallTalk. I may present a draft desktop/crud-friendly markup language one of these days. I've been studying desk-top GUI idioms looking for common patterns such that I can factor them into fewer idioms that are sufficiently close to actual instances. -t

If you look into history, you'll find that some of the older forms - especially alert boxes - have indeed been used as elements of a pipeline. The problem was dealing with style issues and such... i.e. you need to push all the GUI IO back through a central locus in order to consistently apply style and internationalization and accessibility transforms, and to deal with modalities, and so on.

I see nothing wrong with pushing many of those to the server. It's better OnceAndOnlyOnce to upgrade a central server than 3,000 clients with 20 different browser versions/brands. And does it make sense for the client to download 100 language mappings and then just use one?

What exactly are you pushing to the server, and how are you achieving Unix PipesAndFilters while doing it? (This discussion, if you've forgotten, is about why the Unix approach failed, not about the best way to do GUIs.)

As far as GUI's, are you saying that a markup-over-HTTP approach cannot be made to make effective GUI's because of some universal flaw, or merely that it has not been demonstrated as possible so far? -t

I have said nothing about the markup-over-HTTP approach, and I especially haven't said anything about universal flaws (only about Unix flaws, which you'd know if you were paying even the slightest bit of attention...). However, I would say that the design of modern web-servers, and HTTP itself, is a major departure from the original Unix philosophies and design methodologies (such as PipesAndFilters). For example, you cannot take an arbitrary process that needs HTTP input and use a script to hook it to a process that provides an HTTP service. Instead, you need expensive and round-about approaches, usually going through a global DomainNameService?. Even CommonGatewayInterface itself broke the mold of EverythingIsa File, since it needs to treat processes as files (which Unix never did natively, though PlanNine made a reasonable attempt).

Indeed, for GUIs I believe that markup-over-HTTP has already been proven to make effective GUIs (WebApplications?). I think we can do much better, but the fact that we could do well at all is itself an indicator of various flaws in the ApplicationsAndLibraries? approach to software composition.

Programming with the HtmlDomJsCss stack is butt-ugly compared to dedicated desktop GUI tools. So far, improved deployability is why web apps are often favored over desktop apps, not the GUI development experience itself. Most who used VB, Delphi, PowerBuilder, etc. to build custom biz apps will generally agree. I believe the situation could be improved if we dump the e-brochure-based GUI paradigm that HtmlDomJsCss is built around. -t


NovemberZeroNine

CategoryHumanFactors, CategoryUnix

