(Based on discussion started under ObjectsHaveFailed)
{Appealing to SelfDiscipline and prudence is reasonable once you're stuck with an unsuitable tool, but should be an appeal of last resort. A language should appeal to LazinessImpatienceHubris, even encourage it, supporting programmers in achieving "good" results in a very wide range of use-cases even without SelfDiscipline or wisdom. ExploratoryProgramming, iterating via progressive refactoring from 'get it done quick' solutions by programmers who are initially clueless about which abstractions they need (and, likely, initially clueless about which abstractions are available) should be the 'default' mode for development. The LanguageDesigner, not the developer, is the one who should step up and be 'prudent'.}
No tool can fix problems caused by lack of SelfDiscipline (or management-forced rush-jobs). I will agree some offer techniques that help code stand up slightly better under such chaos, but only slightly. And perhaps with other side-effects. Putting up thick walls and lots of border guards often just encourages screwy work-arounds, and slows progress to boot. -t
{You make some erroneous assumptions. First, the absence of SelfDiscipline is not "chaos". The absence of SelfDiscipline is rule by LazinessImpatienceHubris and other 'immediate' short-term incentives and vices - instant gratification rather than wisdom, rules, and foresight. These are predictable forces, and may therefore be shaped and leveraged by tools and rules to achieve order. Second, while raising 'barriers' against 'bad behavior' ("thick walls and border guards") is a viable approach to guiding and shaping vice, it is hardly the only option. One may favor the carrot over the stick, providing short-term incentives for step-wise progression towards doing the RightThing (such as reducing code, better performance, more generic code; consider the story at http://www.vetta.org/2008/05/scipy-the-embarrassing-way-to-code/). The idea isn't to get the RightThing immediately, but rather to maximize incentives to improve over the lifetime of a project, to make regression more difficult than progress (i.e. via compatibility, test failures, etc.), and to minimize barriers for progress. For problems anticipated or encountered in many projects, one may also refactor solutions into language primitives or composable libraries and minimize disincentives against that form of reuse (via resolving security, concurrency, too-much-indirection, and bloat issues). As is, for example, it is typical for C/C++ programmers to avoid use of powerful libraries like BoostLibraries because of concerns unrelated to their quality.}
{You need to kill your preconception that guidance outside SelfDiscipline means painful BondageAndDiscipline. Instead, ask yourself how you might harness LazinessImpatienceHubris to drive program development in the same manner that Capitalism harnesses greed to drive the economy. That harness doesn't need to be a painful one. SelfDiscipline, by comparison, can also be very painful (because resisting temptation is painful), as can be extracting the bullet from one's foot after SelfDiscipline fails.}
But building libraries and conventions to better fit the domain is a form of SelfDiscipline. One could keep doing it the long way forever and ever. I suspect this is going to turn into a LaynesLaw battle over "discipline". The "embarrassing" article shows SelfDiscipline in deciding to learn and practice CollectionOrientedProgramming techniques. You haven't given any examples that are not tied to SelfDiscipline in some way. -t
Also note that "clever" shortcuts are sometimes difficult for other programmers to follow and/or change to fit new requirements. Raw iteration tends to be more flexible than CollectionOrientedProgramming because one can usually diddle data at a finer level, for example. That doesn't mean that I'm against COP by any stretch, but I'm explaining here that the trade-offs are not always simple to ascertain. The EightyTwentyRule is always lurking about, ready to bite our higher abstractions in the neck. The "embarrassing" article smells a bit in that source-code size is the author's only metric. Readability is not given any mention (as I remember). He hasn't saved total time if the next programmer to read it has to read and learn matrix math APIs to figure it out and spend more time than he would have taken to read the 10 original pages. Source code size has to be weighed against other factors, including readability by non-original-authors. -t
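To illustrate the flexibility point, a contrived C++ sketch (the "rules" in the comments are invented for the example): the collection-oriented version is shorter, but the raw loop absorbs ad-hoc per-element requirements without switching idioms.

 // Contrived sketch: both compute a sum, but the raw loop can "diddle
 // data at a finer level" (early exit, per-element special cases).
 #include <cstddef>
 #include <numeric>
 #include <vector>

 // Collection-oriented style: short, declarative, one fixed meaning.
 double copSum(const std::vector<double>& v) {
     return std::accumulate(v.begin(), v.end(), 0.0);
 }

 // Raw iteration: longer, but new ad-hoc requirements slot in easily.
 double rawSum(const std::vector<double>& v) {
     double total = 0.0;
     for (std::size_t i = 0; i < v.size(); ++i) {
         if (v[i] < 0.0) break;                  // invented rule: stop at first negative
         total += (i == 0) ? 2.0 * v[i] : v[i];  // invented rule: weight the first element
     }
     return total;
 }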
{The question of SelfDiscipline vs. LazinessImpatienceHubris is principally one of motivation, not of outcome, though motivation is often a significant factor in outcome. The "embarrassing" article does involve an author who is committed to learning, but it is important to note that the development itself was iterative and motivated at each individual stage by short-term incentives such as eliminating bugs and improving performance. Further, the education itself was also motivated by short-term results; that is important, since if a language and IDE can achieve both iterative progress and iterative education without ever involving SelfDiscipline, then programming can reach a wider audience. I would consider it the IDE's responsibility to help teach new developers what a 'shortcut' is doing, to explore code both interactively and by observation and ProgressiveDisclosure. And, contrary to what you say, it is a net benefit if persons reading the code go the slight extra step to learn some matrix library APIs, since now they are educated in a tool they are likely to reuse in the future, as opposed to wasting their time educating themselves in eight pages of application-specific code that will never be reused. (I am assuming that one tends to write code on the same subjects one reads, whether that reading occurs via peer-review or maintenance programming, but I do not believe this an unreasonable assumption.)}
- While it may perhaps be a good idea for the future reader to learn some matrix math and related APIs (insufficient info, but not challenged here for the sake of argument), one should bet on what is more likely in the future, not what should be. You pick stocks based on what you expect to actually happen, not what a company "should" do. There have been times when I was in a hurry to fix code and, rather than try to figure out something that's convoluted, I deleted it and replaced it with my own code. At least my own code gives me a sense of how long it will take. A known quantity. Solving others' mysteries is an unknown quantity, and bosses don't like fuzzy estimates. Thus, I avoided going into a dark cave and carved out my own path instead. I believe it was a rational decision. (PredictabilityVsPerformance)
- {It is likely that a person who needs to read matrix math code will also be writing matrix math code - not merely 'should' be. Further, use of the matrix library is likely to more directly express what one is doing to someone who understands the domain; the extra eight pages of code would mostly be useful for someone who doesn't understand the math or its purpose. As to deleting and rewriting code, that is reasonable - I have heard from my SoftwareEngineering pals that changing 25% of the lines of an unfamiliar codebase in a significant manner (not refactoring) is (very roughly) as expensive as a rewrite from scratch, due to the cost of figuring out what the code is doing.}
- You gave insufficient info to judge the likelihood. I agree that if one does X often, then it makes sense to invest time in learning and installing X-centric libraries and tools. That should go without saying. But I'm not sure we should pack a language up-front with a bajillion idioms just in case we may use a few. It will likely make it slow and confusing. -t
- {Can you provide some specific examples where supporting more idioms makes the language slow and confusing? Last I looked, BrainfuckLanguage was a lot slower and a lot more difficult to write for even simple problems than was CeeLanguage, despite having far fewer idioms. This certainly seems to be a counter-point to your generalization. Which examples are you generalizing from?}
- Java libraries tend to be convoluted compared to what they could be. For example: JavaIoClassesAreImpossibleToUnderstand. The GUI components are also bureaucratic. They appear to be designed for tool makers more than tool users.
- {I do not see the connection between this example and your earlier statement. First, Java's support for IO is far from FirstClass, so pointing to it as an example of a "language supported idiom" is dubious at best. Second, saying that the quality of the Java standard interfaces is poor (a point that I'll grant) does not imply users would necessarily be any less confused in their absence. Third, you've made no argument whatsoever that these Java IO idioms are slowing the language down. Care to explain how this example serves as evidence for your case?}
{Lazy and impatient people don't like "doing it the long way forever and ever", and will naturally seek (or develop) 'shortcuts'. In a poorly designed language, the 'shortcut' variation reached by the lazy fellow is likely a crap-shoot with regards to long-term viability - it is often the case in ExploratoryProgramming that you reach YouCantGetThereFromHere positions that might have been avoided with excellent foresight. To 'leverage' LazinessImpatienceHubris in development means designing the language and IDE such that (a) lazy and impatient programmers tend to avoid YouCantGetThereFromHere 'traps' (i.e. ideally it takes special or obvious effort to trap oneself or paint oneself into a corner, even if not 'foolproof'), and (b) changes and shortcuts tend to be progressive (i.e. new abstractions) and encouraged (i.e. no performance penalty for abstraction) rather than regressive or discouraged.}
{Also, I'm hardly saying be rid of SelfDiscipline (and prudence, foresight) entirely, but rather to marginalize it - to make SelfDiscipline something only needed by the elite 'guru' programmers who develop new protocols and create major new library APIs, and to ensure that even guru programmers can take off their guru hats most of the time and just code away with little thought towards the future. SelfDiscipline may offer some advantages, but should not be required for success. I'm considered a guru programmer among my coworkers, but that's a heavy hat to wear.}
You seem to be confusing SelfDiscipline with BigDesignUpFront. Discipline can be incremental. And finding shortcuts to avoid work could also be a form of discipline. Avoiding work and discipline are not necessarily mutually exclusive. Channeling one emotion or irritation to avoid another can be "discipline". And the person making better and/or domain-specific APIs for other developers is using his/her discipline to perhaps compensate for others' lack of it. Thus, discipline is still happening, just not evenly. Good code without at least a fair level of discipline is just not likely, in my opinion.
{SelfDiscipline does not mean BigDesignUpFront, but BigDesignUpFront requires considerable SelfDiscipline. I am not confusing them, but rather have been using SelfDiscipline as shorthand to refer to the sort of methodical 'prudence' you suggested earlier: "comment well but not redundantly, break things up into functions or units where it's prudent, document the interfaces and dependencies well, choose clean design over fads, and try to spend some up-front design time to keep it simple but easy-to-change rather than just code the first design that comes to mind". By avoiding SelfDiscipline, I'm saying do the opposite: "don't bother with comments except to organize your own thoughts, break things up primarily where you feel doing so makes things easier on you, and do the first easy thing that comes to mind that will possibly work - you can always revert or repair it later". These are essentially DoTheSimplestThingThatCouldPossiblyWork where 'simplest' is the greedy 'whatever is easiest for me' interpretation, but without the hardy RefactorMercilessly as SelfDiscipline (instead, refactor when doing so makes things easier on you, too). However, that approach won't work with just any LanguageDesign - the language itself must be designed so this lazy approach to programming can bridge its way out of any corners it paints itself into. Ideally, the IDE ought to also support cross-project refactoring, which can eliminate once-per-app BoilerPlateCode and can allow the occasional refactoring by one individual to improve many independent projects. As an aside (though, as you noted, this whole discussion is an aside relative to the topic), I'm of the somewhat controversial opinion that 'cold comments' and 'dead documentation' are mostly harmful - they're invariably incorrect, out-of-date, and operate as crutches in a manner that prevent development of more powerful utilities for expressing intent and avoiding semantic noise. Despite the controversy of that opinion, there are many who share it. Of course, I'm still not talking about APIs, but rather about application and implementation code. Documentation for APIs and protocols for use by other developers is a different issue.}
{I do agree with your thought that SystemsSoftware developers and LanguageDesigners may be using SelfDiscipline to compensate for the lack thereof on the part of users. However, you should acknowledge the huge leverage factor. By proverb, an ounce of prevention may save a pound of cure. In LanguageDesign, the factor is much greater than a factor of sixteen; a few hours of design with prudent foresight may save several million hours of work-arounds, debugging, BoilerPlateCode, and so on. If a language becomes a popular one, a responsible LanguageDesigner can claim billion-dollar mistakes - and be thought optimistic.}
I'd like to see a specific example.
{A specific claim for a billion-dollar mistake: Tony Hoare claims Null References (http://qconlondon.com/london-2009/presentation/Null+References:+The+Billion+Dollar+Mistake). But, by even a rough, informal mathematical analysis, if your language doesn't have billion-dollar design mistakes it's likely an unpopular language. Popular languages are used by several million programmers, and any language-design error that averages a week out of each programmer's time is easily a billion-dollar mistake. Other than separating 'nullable' references, a few more errors that could have been relatively easily resolved by language design include off-by-one array-indexing errors, better supporting domain-values via numbers-with-units (i.e. so users can add 12 feet to 2.4 meters and have a sensible outcome), and memory-safe array access. That's even before considering higher-level design failures - the sort that has programmers writing boiler-plate code or otherwise working around the language rather than with it.}
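{As an illustration of the numbers-with-units point, a minimal sketch of what library- or language-level support could look like - 'Length', 'feet', and 'meters' are invented names here, not a real API:}

 #include <iostream>

 // Hypothetical unit-carrying value type; the conversion is fixed in one
 // place instead of being re-derived (and botched) in application code.
 struct Length { double m; };  // canonical unit: meters

 Length feet(double f)   { return Length{f * 0.3048}; }
 Length meters(double x) { return Length{x}; }
 Length operator+(Length a, Length b) { return Length{a.m + b.m}; }

 int main() {
     Length span = feet(12) + meters(2.4);
     std::cout << span.m << " meters\n";   // ~6.06 - a sensible outcome
 }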
I disagree that those are a "huge leverage factor". Annoyances that occasionally snap back, sure, but not huge. Besides, every language has LanguageGotchas. And usually there's trade-offs for things such as memory-safe array access. And I'm not sure adding tons of features to a language is always a good thing. I know you believe in that, but I don't want to repeat another one of those GodLanguage debates here. Thus, I'm bailing out here.
{If you write something, and therefore a thousand people don't need to write something, that's a 1000:1 leverage factor. That's huge, and yet that's a small fraction of what LanguageDesign has the potential to provide. As far as your "there are usually trade-offs", sure, but there is also progress - that is, this is NOT a ZeroSumGame despite your almost religious faith in 'sacrifice'. The best things one can 'sacrifice' are one's bad habits, and the only real trade-off is the loss of familiarity. The same is true in LanguageDesign.}
I agree that domain-specific libraries can be quite helpful. Language design? Well, we'll have to part ways there. Languages tend to be a subjective thing. Customizing a language to fit a developer's head or group of developers may provide some productivity increase, but nobody knows how to do this effectively yet.
{That line of bullshit? Find some people who can program HTTP servers and browsers productively in BrainfuckLanguage or InterCal, and maybe I'll believe you. It is easy to prove there are objectively 'harmful' ways to design languages, because one can intentionally make the programmer's life harder. The ability to intentionally design a language that forces programmers to jump through extra hoops to write non-trivial programs is a formal proof (by construction) that programming languages are objective things and that LanguageDesign is not a ZeroSumGame. You've seen that proof before. It seems you can't be convinced by such proofs because they are too obvious and appear in your mind 'contrived'... but if you actually grokked logic then you'd appreciate such obvious and simple proofs, and you'd know that being 'contrived' doesn't even slightly hurt proof-by-construction. Since you've seen that formal proof before, but persist in your bullshit line of reasoning anyway, you are a crank - a man who can't be turned. You may continue with faith in your 'subjectivity' theories, Mr. TopMind, but your argument remains unconvincing.}
Can you prove that there's no alien mind that is capable of being productive using BrainfuckLanguage? No you can't. QED. And take your vast rudeness and shove it up your bleeding ass you arrogant bastard!
{If you can't reply with anything more intelligent than HandWaving MentalMasturbation about 'alien' minds and a sputtering tantrum about rudeness, perhaps you should not reply at all.}
- You started the rudeness, delusional asshole! If rudeness is a reason not to talk, then shut the hell up! And I am not waving my hand, but rather my middle finger, you blind arrogant bastard!
- {If you cannot contribute something useful and germane, that is reason to keep your mouth shut. Whether your delivery be rude, polite, arrogant, or anything else, at least have something intelligent to deliver.}
- If you followed the same rule you'd exclude your rude, useless, repetitious, and vague insults from your text. You are a hypocrite.
- {That is untrue. Note the 'That line of bullshit' paragraph includes more than insults; it further contains discussion of exactly why languages are objective things, and why 'trade-offs' doesn't mean 'trading down'.}
- So if I mix in retaliatory rudeness among technical content, it's then "okay"?
- {Yes. Neither the initial rudeness nor the retaliation is 'nice', of course, but doing at least that much ensures the discussion remains on-topic and prevents it from devolving into a pointless meta-discussion on who was 'rude' first. If we're going to be mean to one another, we should at least do so productively.}
- If you don't want that, then simply Don't Be Rude First. KISS problem-solving. At least try.
{I have no need to concern myself with your MentalMasturbation about 'alien' minds and whatever magical properties you imagine they'll possess. Your "language properties are subjective because it's all about fitting developers' heads" line of thought requires an insane assumption that developers' heads vary so widely that one cannot identify any broad truths about them whatsoever, and that is simply a lie. I say it's a lie because even you know it isn't true, but you resort to that sophistry and HandWaving MentalMasturbation anyway when you are unwilling to grant reasonable points. The human mind of a software developer has objective properties - its basis in pattern-matching and associative memory and the errors that entails, its limited knowledge about how APIs are implemented, its limited knowledge of software requirements at development time, its need to depend on assumptions, its ability to exhaust and err making even simple typographical mistakes, its very limited ability to recall the exact details of code written or read more than a year ago and the implication on long software projects, etc. Languages are not 'subjective' things because, frankly, human minds are not 'subjective' things. Nor, for that matter, is the subject matter - computation - which follows a variety of mathematical and physical laws.}
Aaaah, so human psychology matters now, eh???? If there are "mathematical and physical laws" that prove your GodLanguage is the one and true way, then produce your proofs or shut the hell up about them. Don't tell me, there is no mathematical proof, but rather indirect round-about deductive Rube Goldberg Sherlock reasoning that leads to your grand vision.
{Mathematical and physical laws and recognizing human limits (which is not 'human psychology' except where the two coincide) are very good for telling you what is not a solution, but they are not so useful for proving something is a "one and true way". Further, I've never claimed a 'GodLanguage' or a "one and true way", so halt with your StrawMan sophistry. To recognize languages as objective doesn't require knowing a 'one and true way' language; for proof of objectivity, it is entirely sufficient to identify - or even contrive - objectively 'wrong' languages. And I suspect that any sound or cogent line of reasoning seems to Mr. TopMind like "indirect round-about deductive Rube Goldberg Sherlock reasoning" if he doesn't already agree with the conclusion.}
I've agreed before that "psychology" may not be the best word, but I cannot find a better word at this point. It's a combination of physiology, psychology, and biology. How about WetWare?
{The connotations of 'psychology' are troublesome, but there are places where the sprawling field of psychology overlaps something useful for LanguageDesign. Many limits named above, though, would also be limits in hardware - such as limited or unreliable memory, use of associative memory, and limited advanced knowledge of requirements or implementation of external modules. I think knowing what developers can do, and what they can do easily is important. I suppose 'WetWare' can work.}
And a language with an objective flaw is not necessarily a useless language. The value of a language comes from a lot of different factors, and having a low number of flaws is only part of the load. Almost any non-trivial software these days has at least one flaw.
{I've never said languages with objective flaws are 'useless'. What I have said is that they are 'objective'. Don't mistake the two. A language isn't necessarily 'useless' in an absolute sense, but it certainly may be 'more or less useful' than established languages. I'll note that being less useful than established languages is 'useless' in the practical sense: any new language must compete with the incumbent languages, and should have significant advantages to pay for the effort of establishing the language. The important technical advantages are the ones that feed into that leverage factor a good LanguageDesign should offer. Of course, a fist-full of money from a titan like Sun or Google or Microsoft is another sort of advantage that can pay for establishing the language.}
- We seem to be talking past each other again. Can you agree to this?: Objective flaws in a language can impact the productivity and usefulness of the language on a continuum between very small and very large impact. Our disagreements are probably around the magnitude of the impact per given flaw. -t
- {I'll agree to that, except that I expect our disagreements are more heavily around the alleged trade-offs for eliminating flaws. You seem to be under the impression that every trade-off is also a long-term, technical cost, but I believe that many trade-offs come in the form of paying up-front - at the point of LanguageDesign and implementation. We also seem to disagree on how much an advantage a language needs to overcome the incumbent, as evidenced in GuiMachineLanguage arguments. (Or perhaps you just don't recognize that any language change whatsoever is already a "TechniqueWithManyPrerequisites" and subject to MindOverhaulEconomics and FutureDiscounting and all that jazz.)}
{Consider writing a simple C/C++ program that can achieve an 'observer' pattern over simple expressions like 'a + b * c'. Such patterns would be useful for high-performance EventDrivenProgramming. The procedural approach is to simply 'poll' the expression - repeatedly re-compute it and report any changes from the last computation. Polling has bad engineering properties (high latency, huge amount of wasted re-computes - wasted energy, wasted time - even when there is no change). Thus programmers are pressured to find better solutions, such as setting up signals to let them know when to re-compute. This itself is troublesome because one must associate signals with the correct expressions... i.e. knowing to hook a, b, and c is not easy, and a change in the expression requires changing other code as well. Even SelfDiscipline by skilled programmers is insufficient to avoid errors when changing one expression requires also changing another one, and this is even more of a problem in refactoring. OOP can take some advantage by reifying the concept of observable expressions, and representing 'a + b * c' as an object graph; this would allow the objects themselves to manage the invalidation process and cause re-computes at need. However, at this point the programmer is inventing an entirely new programming language atop the old one - i.e. each 'operator' in the expression needs a new object constructor (usually a new class) unless already expressed as a FunctorObject. Making all this work with other frameworks (such as concurrency, or persistence, or SqLite) is no easy feat - and generally not something SelfDiscipline can help with. When programmers are forced to reinvent one language inside another, they themselves are forced to do the work of LanguageDesigners - a field in which most are (reasonably) uneducated and inexperienced. A naive implementation of the reactive program is likely to have 'glitches' akin to (x*(2+x)) with x going from 3 to 4 and returning 15,18,24 or 15,20,24 (one sub-expression changing before the other) rather than an atomic change from 15 to 24.}
- Without knowing more about the context of usage/need of this expression example, I cannot really comment. For occasional small-scale use, polling may be plenty sufficient and thus there's no need to try to invent a generic dynamic expression evaluator for those few cases. -t
- {Discussion continued at (page anchor: usage context)}
{This sort of pattern comes up again, and again, and again in software development, and is not domain-specific. Reactive programming is by no means 'specific' to any particular domain. ReinventingTheDatabaseInApplication is one re-invention you observe because it's an area to ride your HobbyHorse. As a LanguageDesigner, I observe many more problems of this nature (configurable processing pipelines, synchronization and concurrency, self-healing or resilience, GracefulDegradation, DependencyInjection, PolicyInjection, ProcessAccounting, partial-failure and error handling, all those things you derisively associate with a 'GodLanguage'). Providing the necessary features at the language layer, such that the features are supported across frameworks and libraries, and such that fewer frameworks are required, is the best way (of the ways I know) to support them, to avoid their repeated reinvention, to achieve the huge leverage factor that comes from LanguageDesign. More relevantly to this discussion, SelfDiscipline almost never helps with problems of this nature: with SelfDiscipline alone, you're still forced to re-invent everything once per application, because you need to reinvent it in a peculiar way that will work with the frameworks and libraries and concurrency model and persistence model and all the similar 'reinventions' that are also specific to your application. You can use a database in combination with procedural code if the API developers were using the same tool, perhaps, but your ODBC isn't going to much help with using a database in combination with reactive programming - not without forcing changes to the API itself.}
- Your solution(s) is a TechniqueWithManyPrerequisites, resembling a GodLanguage. Mine is incremental: use unix-style interfaces to tie existing languages to various tools and services. You need to demonstrate your technique in production for roughly a decade before it's worth considering. The unix philosophy of tool/service interconnection has been around for almost half a century. (A dedicated TOP language would be nice, but one can get decent TOP without it.) -t
- {This is incorrect. A language solution (even if implemented monolithically) can still be adopted incrementally, not much different than adding ScriptingLanguage plugins to existing applications in order to support AlternateHardAndSoftLayers. Further, existing tools and services can be supported as SecondClass citizens within the language, as is common to any ForeignFunctionInterface. Finally, your approach is simply not a solution; as noted above, the "problem" with simply tying multiple independent frameworks/protocols/etc. together is that tools often cannot work with one another; i.e. each and every one of them assumes 'context' about usage and to some degree these contexts will be contradictory or contra-indicated. For unix-style multi-process comms in particular, I continue the discussion at (page-anchor: unix context).}
{The argument that language properties are 'subjective' - that this reinvention is 'subjective' - is nonsense. Might as well claim that database implementations are subjective, too, so you might as well re-invent relations in BrainfuckLanguage on a per-application basis.}
I'm not sure what this is tied to, but it sounds like a misunderstanding. I was probably talking about benefits, not implementation.
{I spake also of benefits. Language support to avoid per-application reinvention and re-implementation of KeyLanguageFeatures was the specific benefit mentioned. To say a language's benefits are subjective is the same as saying all its features and support are subjective - i.e. that there is no objective distinction between a language that natively supports relations, and a language that doesn't even readily allow you to re-use a framework that implements relations from one application to another (BrainfuckLanguage has that problem, because it can't readily abstract anything). Saying "language benefits are subjective" implies nonsense, and therefore must be wrong.}
Do you really need to know context? More importantly, should you need to know context? (page anchor: usage context)
Without knowing more about the context of usage/need of this expression example, I cannot really comment. For occasional small-scale use, polling may be plenty sufficient and thus there's no need to try to invent a generic dynamic expression evaluator for those few cases. -t
You only need to "know more about context" to know whether the flawed solution is still viable. That you need to know the context in which the polling solution is used becomes, in part, what makes it a "flawed" solution, TopMind. If you need to know the context of an implementation, then you also cannot reuse the code in a new context without first knowing its implementation. If you need to know the implementation before you can reuse code, that simply doesn't scale from a productivity standpoint - it would mean that the only libraries you can use are the ones you know inside and out.
- Who said anything about making the expression evaluator for "reuse"? I generally wait until I see or know about a repeated need for it before I start considering how to generic-ize something. I generally agree with YagNi, although I do believe in planning for likely future expansion. You seem eager to add GoldPlating and/or BigDesignUpFront. -t
- I apologize for not making the relevant connection. The ability to reuse code has further implications than simple reuse of code. In particular: (1) if you can reuse code in a different context, then you don't need to change the code when the context changes; that is, the code will be more resilient under change patterns. (2) if you can reuse code in a wider range of contexts, then you don't need to know or grok the context while initially writing the code.
- By comparison, TopMind, your approach forces each programmer to have more knowledge, perform more analysis up front, to "know more about the context of usage/need", and ultimately produces inferior results - less reusable, therefore less resilient against change patterns. Does my approach require some design up front? Yes, but only by the LanguageDesigner, who must have the minimal intelligence to see an uber-common ObserverPattern and recognize a need for it as a standard feature via standard library or primitive (MissingFeatureSmell, DesignPatternsAreMissingLanguageFeatures). That isn't GoldPlating or GodLanguage or any other ridiculous exaggeration, TopMind; that's simply recognizing a need and filling it. The result is simplified support for reactive code that allows your typical lazy, impatient programmer to DoTheSimplestThingThatCouldPossiblyWork and have that also be the RightThing with no requirement to "know more about the context of usage/need" or any of that BigDesignUpFront. To me, you are the one who is eagerly promoting BigDesignUpFront. You even said so: "try to spend some up-front design time to keep it simple but easy-to-change", said you. You want every programmer to do BDUF, just so the LanguageDesigner can be a blind moron. I think you're being hypocritical because in GuiMachineLanguage or when it comes to your HobbyHorse, you're invariably singing a different song. Even pointing at unix-style is a contradiction, since unix-style (PipesAndFilters, Sockets) was also the result of BDUF by unix designers in order to make things easier for users. I'd favor reducing the need for up-front design as much as possible, even if it means the LanguageDesigner needs the rather minimal intellect to mine frameworks and SoftwareDesignPatterns for MissingLanguageFeatures.
- There are too many generalities here. I don't disagree with a lot of what you said. The problem is the application. There are some specific questions that need to be answered per case, such as those found in WhereToImplementPattern. -t
It is true that scale is important to the PerformanceRisk of certain designs, but one should note that, while the polling approach has trouble scaling up, the reactive approach has very little trouble scaling down. This little observation is important, because it means that, if the reactive approach were provided from the beginning, programmers could freely use it from the first moment they see a need rather than slowly 'reinventing' it in application - a painful process that requires whole-program transforms - as they discover ever growing need or utility in reactivity and partial-recomputes, i.e. as the expressions get larger or more expensive, as the conditional filters on the expressions grow more complex, as the singular 'a+b*c' grows into a thousand similar instances. If the language avoids gotchas in this particular area - i.e. avoids seemingly 'arbitrary' restrictions on the use of reactivity - then the lazy way to program (use the most readily available tool to do the job) also becomes the wise way to program, and no SelfDiscipline or foresight is necessary (except by the LanguageDeveloper, of course).
By comparison, you might consider an issue nearer and dearer to your own heart: tables (or relations) vs. arrays. Similar to reactivity, arrays work well enough at a small-scale while ignoring those pesky concurrency concerns. Thus, "without knowing more about context of the usage/need", you should - by your own logic - "not comment" that tables/relations might make better language primitives. Of course, I don't buy your logic; I think relations/sets would make fine language primitives - objectively better than arrays. As with reactivity, sets scale down well enough; indeed, one could support fixed-sized or pre-allocated sets just as easily as one can support fixed-sized or pre-allocated vectors, the implementation itself can decide the 'intelligent' cut-offs for beginning to add indices, and a compiler could even implement integer-indexed relations atop arrays. The "trade-off"? Well, one exists: one needs a smarter compiler (one supporting a few specialized optimizations) to achieve the same performance as native arrays in the same situations one would use native arrays. But the payoff is huge: programmers could lazily use sets/relations and wouldn't need to "know more about context of the usage/need" to make it a prudent, wise, SelfDisciplined decision. They wouldn't need to re-invent maps and indices atop arrays. The lazy solution would be the correct one in the vast majority of contexts.
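A rough sketch of that 'scales down' claim, with invented names and an arbitrary cutoff: a set type that stores its items in a plain array and only starts paying for an index once it grows past an implementation-chosen threshold. A compiler or standard library could do the same invisibly.

 #include <cstddef>
 #include <unordered_map>
 #include <vector>

 class IntSet {
     std::vector<int> items;                      // array-like storage when small
     std::unordered_map<int, std::size_t> index;  // built only once "big"
     static const std::size_t kCutoff = 64;       // implementation-chosen, not user-visible
 public:
     bool contains(int v) const {
         if (!index.empty()) return index.count(v) != 0;  // indexed lookup
         for (int x : items) if (x == v) return true;     // linear scan is fine when small
         return false;
     }
     void insert(int v) {
         if (contains(v)) return;                 // set semantics: no duplicates
         items.push_back(v);
         if (index.empty() && items.size() > kCutoff) {
             for (std::size_t i = 0; i < items.size(); ++i)
                 index[items[i]] = i;             // one-time index build at the cutoff
         } else if (!index.empty()) {
             index[v] = items.size() - 1;
         }
     }
 };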
Your fundamental argument above amounts to: "If X is not needed in some situations, then there is no need to invent X at the language layer". Your exact words were "for occasional small-scale use, polling may be plenty sufficient and thus there's no need to try to invent a generic dynamic expression evaluator for those few cases", but the context of this discussion is LanguageDesign. That's simply flawed; it will lead you to a TuringTarpit language that has no features whatsoever. Relevantly, you should concern yourself with the applications where X is needed or useful - or at least not harmful. That's where the leverage advantage comes from, and it often only takes a few dozen uses for a language feature to pay for its 'trade-offs'.
- You didn't say the expression evaluator was about language design. You said it was a C/C++ application, an existing language. But why add something to the language unless we know there's a common need?
- I apologize for the lack of clarity there, but that paragraph transitioned into LanguageDesign as of the sentence: "However, at this point the programmer is inventing an entirely new programming language atop the old one." That is, the FrameworkIsLanguage or ApiIsLanguage angle.
- And as far as "Why add something to the language unless we know there's a common need?" I totally agree. But in context, your argument essentially amounts to "If I don't need X for occasional small-scale use, then there isn't a common need", which is simply untrue - and illogical. I don't need a car to walk to my grocery store, but that doesn't mean I don't need a car for other common destinations, and it doesn't mean I don't benefit from driving a car to the grocery store. I don't need reactive programming for "occasional, small-scale use", but that doesn't mean I don't need reactive programming for other common uses, and it doesn't mean I don't benefit from using reactive programming for "occasional, small-scale use". It doesn't matter that you can name a few situations where one could get by with a less suitable tool; to be logical, your complaint needs to be that the more powerful tool is very rarely beneficial. The common need for reactive programming is justified by its common use in programming and software architecture - the common use of ObserverPattern, PublishSubscribeModel, DataflowProgramming (akin to spreadsheets), and 'data-binding' designs (i.e. the bottom-right corner is a function of the top-left corner - very common to GUIs and SceneGraph descriptions).
- Putting the right tools in the right layer in the right hands for the right shop in the right industry is indeed a complex balancing act. The devil's in the details. My suggestion at least makes services available relatively early after the need is identified. "Hard-wired" integration may be a second step and a worthy step, but you seem to want it as a first step. -t
- Not quite. I do not promote the idea of hard-wiring 'new' ideas into a LanguageDesign unless the language itself is intended for LabToy use. Java made that mistake several times (the most outstanding being checked exceptions - see CheckedExceptionsAreOfDubiousValue). However, I fully promote hard-wiring 'proven' ideas into the language standard libs or primitives - the ideas that have repeatedly been reinvented via frameworks and SoftwareDesignPatterns and various language pre-processors (macros, for example)... that is, wherever one sees a lot of BoilerPlateCode or hacks to work around the missing feature. Indeed, failure to provide these simply guarantees they'll be reinvented the hard way. And, despite your doubts, hard-wiring in intelligent support for FunctionalReactiveProgramming and EventDrivenProgramming and PublishSubscribeModel (to access external data and event sources) is a fully proven requirement. Even your applications would benefit from it - unless you can honestly tell me that there is no business demand for 'reports' and forms that stay updated in real-time. Besides, with LanguageDesign, how much do we really need a new language that rehashes all the problems of the old one?
- Please clarify "fully proven requirement". I don't doubt that certain niches make heavy use of them. The issue is whether they should be hardwired into every app language. As far as "updated in real time", a "refresh" button and/or auto-polling has been plenty sufficient the few times I've had to implement such. And when I leave, the new guy can understand it without reading about a convoluted framework. I believe a sound cost/benefit analysis will show me correct. And even MS building Excel may not be enough times to justify adding it to C++, since only say 1/1000 of every C++ app needs it also. For the other 999, it's just a thicker book and a bigger index to learn to visually ignore in the right spots. Fat books are often not a productivity improvement. I'd put the break-even point at very roughly 10%. If we start looking at common idioms, my wish-list based on observed frequency doesn't look anything like yours. -t
- [Horse 'n' buggy's good enough! Don't need those damn, whuzzit, "car" things, 'specially as only say 1/1000 drivers can learn to work those overly complex pedals and steering wheel. Roughly 87% of Top's statistics are made up.]
- A lot of the extra stuff in cars is to simplify the driver's interface, not complicate it. For example, automatic rain-detection windshield wipers: the driver doesn't have to learn and fuss to find the windshield wiper switch. Automatic transmission is probably the poster boy of this concept. Modern cars are far easier to drive than the Model-T. One doesn't even have to use a 2nd foot these days. -t
- [Sorry, Mr. Point. Top was just here a moment ago. He seems to have missed you.]
- Training a horse is not an easy task. A car is far easier to use for a newbie than a horse-drawn cart. And faster. Let alone the "manual garbage collection" all over the driveway. Complaints from Luddites are about learning new things, not effort to use.
- And I'm not "making up statistics". I'm trying to give an estimated sense of proportion to help communicate what we expect to fix versus what we expect to complicate in exchange. If you disagree with my guesstimate, then simply supply your own. If the diff is an issue, then we'll dig into it and explore it more. Your suggestions appear to solve obscure problems to me. I'm trying to understand why or where you think they would apply. If our perceptions of proportion are so widely off, then we've found a possible reason for our difference. I'm offended by the implied accusation of "making up statistics". I'm just trying to discover our differences. That's all, no conspiracy. -t
- ["...1/1000 of every C++ app needs it also.", "...break-even point at very roughly 10%." Sounds like made-up statistics to me.]
- You snipped the context. Should I pull a "you" and assume purposeful malice? Or is it just accidental sloth? "Say" implies it's a rough estimate or example. Removing that changes the meaning. Please be more careful when rudely framing me for shit.
- Sorry, TopMind, but attempting to weasel out of a made-up statistic by asserting that it was expressed as a rough estimate isn't a legit form of argument. Otherwise one might assert that, "say," only 1/1000 apps ever need a database so one really doesn't need a DBMS anyways...
- If you agree that frequency of need is a factor in determining what to build in, then you have to have some kind of estimate in your head to judge whether something is sufficiently common. Because I stated my estimate and you didn't, I'm somehow sinister? I suppose EVERYTHING I do will be viewed through Sinister-Colored Glasses when dealing with biased goons. It's like being fucking married; but a different way of getting fucked.
- You said that frequency is a factor, then you 'invented' a dubious and exaggerated frequency that would support your argument. Malicious or not, that's dishonest. If you are going to make up an estimate, the only honest approach is to be clear and forward about exactly how you made it up.
- I did NOT do anything dishonest, you lying drama queeeen! It's clear to normal people, of which you are NOT, that it's a rough estimate. And you are almost NEVER clear about where your assessments come from, flaming hypocrite!
- Sorry, I do not believe your 1/1000 figure was a disciplined, fair, and honest 'rough estimate'. It's a number you invented on the spot with very little thought to support your position. And you called it a "rough estimate" to give it a ring of legitimacy without inviting too much attention.
- Well, what you "believe" is wrong. If I really wanted to do what you accused me of, I would have left out the word "say". Besides, why not check first BEFORE accusing me of malice. Guilty-Until-Proven-Innocent is from the barbaric days. First ask, "do you mean that figure to be a careful estimate?" THEN kick me about for being evil if your assumption bears out. Tell me, why would somebody include the word "say" if they WANTED to paint it with legitimacy? It makes no sense. You just think so weird both technically and socially. You don't act human. Are you an AI experiment? If so, it needs some serious tuning. It may pass the Turing Mental Institution Test, but not much else. Or perhaps you are inventing crimes merely to yank my chain and watch me explode in frustration, sort of like a behavioral arsonist.
- [If all you did was misuse "say" (it means 'for example') and assert the occasional fictional statistic for rhetorical emphasis, I suppose we wouldn't mind. It's the multitude of intellectual sins that irritate us. The worst is the arrogant obliviousness with which you assert nonsense in absolute ignorance of being ignorant.]
- And all those other accusations are JUST LIKE this one: wrong, exaggerated, and/or personal opinion mistaken for universal truths. You've got nuttin' real on me, Pardner. Stab and kill me with real logic and real science and real substance, or at least something very close to it, not hallucinations produced from your bias against me.
- Cranks, especially of your species (Ferrous Cranus), are well armored against stabbing deaths via science and logic. You don't even recognize when you've been stabbed. As far as having "nothing real on you", that's seriously one of your problems: if you aren't going to present 'real' figures (or at least justifiable figures) you should present no figures at all. If you aren't going to present 'real' arguments (as opposed to dubious and indefensible assertions about your personal WetWare) then you're just wasting our time.
- Projection. If I claim objective superiority, I show semi-realistic scenarios that demonstrate it rather than make excuses as if I'm above science and road-testing. If I say that tables have some nice properties over arrays, I can show several change scenarios where tables need fewer lines of code changed, for example. That's something concrete that one can analyze directly; it's not just brochure-talk. It's accepted as a rational technique by most human beings. Anybody against RaceTheDamnedCar is a crank in my book, or at least will have big problems selling their silver bullets. Anyhow, giving a rough estimate shows my thought process for communication purposes. You gave an example of Excel, but I pointed out that there are a good many non-spreadsheet applications written per spreadsheet and that catering to that small percent is not worth it. General words like "very few" seemed unconvincing to you before. Thus, perhaps if we looked at a specific number, it would help you clarify why you felt "very few" is not sufficient. You are difficult to communicate with, so I'm trying to "tease out" your thought processes by presenting scenarios (ratios). Perhaps I could have approached it differently, but then simply say so with an example alternative and move on rather than outright accuse me of intentional malice. You say: here's what's wrong, here's how you fix it. Simple. No need to accuse me of dastardly deeds. You are too eager to accuse. Point out the specific problem. Focus on the specific problem. Show specifically where and specifically why it's a problem. Then show an example of how to fix it that's as specific as possible (if applicable). Fix, don't accuse. Fix, don't accuse. Fix, don't accuse. Fix, don't accuse. Fix, don't accuse. Fix, don't accuse. Fix, don't accuse. Fix, don't accuse. You could have said, "Where are you getting that number?" or "the frequency of need is not important so the ratio is no issue". -t
- Here's an example of how such numbers can be useful. Note lines 3 and 6.
- 1. A: "Tables have some nice properties over arrays."
- 2. B: "Yes, but tables are slower."
- 3. A: "So, I'll take the nice properties over speed."
- 4. B: "But tables are 1/1000 as fast as arrays."
- 5. A: "What? No, that's not true. I wish for more evidence about that."
- 6. B: "For the sake of argument, if it was 1/1000, would you use arrays instead?"
- 7. A: "It depends on whether speed is the bottleneck for the app or not...."
- In this case, the numerical value helped clarify communication. -t
- If I really wanted to do what you accused me of, I would have left out the word "say".
- If you really did not do what you have been accused, you'd have beat me over the head with a justification for your estimate. Instead, you threw a tantrum.
- Utter bullshit! You suggested it was intentional devious numeric manipulation BEFORE I threw a tantrum. This is YET MORE EVIDENCE that your bias distorts your "reality", right here in black&white. You are either a liar or stupid.
- I never said you threw a tantrum 'before' you were accused. However, it is your habit to throw a tantrum when you've been called on your bullshit. When you have a justification for something you've been called on - which is incredibly rare - you tend to respond reasonably. The reason you threw a tantrum is because you were backed into a corner. The accusation pegged you.
- Why not check first BEFORE accusing me of malice.
- Because guilty people and innocent people say the same things when asked. Why would I believe you?
- Guilty-Until-Proven-Innocent is from the barbaric days.
- Innocent-until-proven-guilty only applies in criminal law. Nobody has accused you of a crime punishable by law. Science, math, debate, and logic still embrace these barbaric ways of giving many negative names to ideas that have not yet proven themselves to be reasonable (or, metaphorically, "innocent").
- Tell me, why would somebody include the word "say" if they WANTED to paint it with legitimacy? It makes no sense.
- Sure, it makes sense. It would weaken your assertion if you were actually honest and said what it really was: "to support my argument, I'm going to pull a number out of my arse and say that 1/1000 C++ apps might benefit from ...". You wanted to elevate this 1/1000 figure to the somewhat more legitimate status of an 'informed, rough estimate'. That said, despite the harm to the argument, honesty would have looked better for you as a person.
- You didn't answer the question. Why is "say" there instead of NOT there?
- Because if you didn't go to that devious extra effort of ensuring it looked like a rough estimate, you'd know you'd have been called on it. Fact is, it isn't an 'estimate' at all.
- And what is "SOMEWHAT legitimate"? Is that like "somewhat official"? "Kinda certified"? Again, you dance in and out of legalistic formality and loosey-goosey bar-talk as your reference point as it serves your argument. You cherry-pick the assumed formality level when complaining. It's a bad habit of yours.
- "Somewhat" expresses a state between two absolutes. Offering the deception that a figure is "somewhat legitimate" is a good step higher on the EvidenceTotemPole than the "obviously illegitimate" that honesty would have required of you.
- Well in my English lobe, "say" knocks it fairly far down on the EvidenceTotemPole. If your interpretation varies for some reason, that's fine, but check first instead of assuming malice as the default. Don't accuse people of malice without getting the full story.
- [The average pet bunny weighs, say, three and a half tons. The average pet bunny weighs three and a half tons. I don't see much difference between the 'say' version and the non-'say' version. Do you?]
- Yes, I do. "Say" is often short-hand for "Let's say for instance...".
- [Then "let's say for instance" is a synonym for "let's pretend", yes? So, what you meant when you wrote "since only say 1/1000 of every C++ app needs it", is "let's pretend 1/1000 of every C++ app needs it"? Isn't that the very essence of a made-up statistic, or even worse, a wholly constructed number used purely for rhetorical purposes?]
- I bet you complained to your Geometry teacher that the givens were "made up". Holy Anal JeZuez?! There's a point where stupidity cannot be covered up with further stupidity.
- [There's a big difference between a geometry example "given" and a made-up, unsubstantiated statistic constructed to look like it was meaningful. I find it interesting that you've defended your use of such "statistics", but not the statistics themselves.]
- Re: "constructed to look like". Ah, Intent. I see. For somebody so gungho about program verification techniques, it's curious why you completely neglect it for intent assumption verification. Your "Integrity" applies to your software but not to your mouth. I now see why you struggle with WhatIsIntent and why your "type definition" is so fucked up.
- [You've confused me with someone else. I have not expressed any opinion on program verification techniques, I've had nothing to do with WhatIsIntent, I have no personal "type definition", and I don't know what an "intent assumption verification" is or why you consider it relevant. The only relevant thing here is that you've still made no attempt to defend your statistics.]
- You just think so weird both technically and socially. You don't act human. Are you an AI experiment? If so, it needs some serious tuning. It may pass the Turing Mental Institution Test, but not much else. Or perhaps you are inventing crimes merely to yank my chain and watch me explode in frustration, sort of like a behavioral arsonist.
- If you're looking for a classification for your frustrating opponent, try FlameWarriors. You have been classified (even before I came to the WikiWiki, though I fully agree with the decision) as Ferrous Cranus: http://redwing.hutman.net/~mreed/warriorshtm/ferouscranus.htm - basically, you're a small-minded, stubborn ignoramus with an agenda. (I first learned of FlameWarriors via ObjectiveEvidenceAgainstTopDiscussion.)
- As usual, that's not specific. I don't think you know what the word "specific" means. You have a reoccurring problem with it. And your agenda is your silly idealistic GodLanguage packed with all kinds of bloated goodies that nobody really needs. In a previous life you sold refrigerators to Eskimos.
- Regarding the 'frequency' assertion - I do not agree. I aim more for an absolute return on investment: a feature with a large up-front cost but low maintenance cost and zero runtime footprint when unused (common to standard libs) may fully pay for itself in just a few large applications. (Of course, it still shouldn't be 'standardized' to the language unless you have historical reason to believe it would be reinvented often.) Relevantly, total efforts are not equally distributed to all applications: I am quite willing to make lots of small applications slightly more complex in order to make a few big problems in the larger-scale system far more simple. One example: I am avoiding the common global 'standard console IO' in order to avoid accidental complexity for multi-user, multi-console, and distributed code in large apps... despite its convenience for toy and educational apps.
It takes a serious WorseIsBetter mentality to say that, simply because your clients are used to pressing 'refresh' buttons, they should be. The demand for real-time features has been around for years now. My own clients want DVR-style rewind capabilities - not a refresh button, but a temporal 'slider' to quickly review histories. But people are used to disappointment, and you seem entirely too pleased and smug about disappointing them.
Please tell me more about this application. How long does it keep a history? How frequently? Does it get data from a database? Does it keep any stale snapshots, or is it constantly querying? How do you ensure it's not flooding the RDBMS?
- My application: user-defined time to keep history (limited by HDD), reactivity requirement is <100ms, data is from several dozen different sources (unmanned systems, different map databases, sensor networks (cameras, radar, motion sensors, GPS units) - all of them 'live'), and there are also hundreds of rules that must be evaluated to determine alerts and display properties (i.e. whether a particular icon is greyed or enabled, whether a particular icon is visible or highlighted, etc.). And ideally there is no network traffic between it and any external DBMS except when a relevant change occurs (polling - as in "constantly querying" - is only for shitty designers). But even typical CBAs would benefit from much the same... i.e. a forum or WikiWiki where one could see changes immediately as people post, and where the checkmarks in the check boxes indicate a live state that you're free to mutate.
- Reply at RefreshNeedDiscussion.
- And I said that auto-polling was also an option, not JUST a refresh button.
- Auto-polling might work for a small number of small-scale apps, but as noted elsewhere: it doesn't scale - not performance-wise, and not complexity-wise. That is: explicitly polling even a few remote resources - and at the same time ensuring reactivity to a change from any of them and the user - starts to get very complex for the developer. If one adds hundreds of evaluation rules to the mix, the problem becomes intractable without support from a "convoluted framework". Auto-polling is not a simpler solution than what I've been proposing - it is, at best, about equally complicated in the most trivial of cases: that of refreshing the entire system.
- Discussion continued at RefreshNeedDiscussion.
- Your 1/1000 stat is nonsense. "Long-lived" applications benefit from easily hooking and receiving up-to-date information about the world. Applications are challenged to discover, identify, and hook the correct remote data sources to ensure the right information is delivered at the right time to the right destination (ideally without grabbing information that isn't needed). Much data in the real-world is "live" - i.e. subject to continuous updates. Applications benefit from the temporal de-coupling and disruption tolerance that comes from a proper design of these features - i.e. avoiding the requirement that all the servers or system components be fired up in the right order, and re-establishing connectivity if the other side resets. Even the short-lived apps benefit from several of these features. Which "niches" benefit: command and control, multi-media and teleconf, any SceneGraph based system (i.e. describing positions of objects as being relative to other objects) including simulations and games, any rules-based systems (i.e. a thermostat app, a security system, a simulation), and live reports (reports that update as you view them, which includes dashboards and GUIs).
- Anyhow, the use of a "convoluted framework" is exactly the sort of thing I'm arguing against. Adding FirstClass language support for a feature eliminates the need for a programmer to explicitly interact with a convoluted framework for that feature, i.e. via shifting said framework into the language implementation similar to GarbageCollection or ResumableException. Your "auto-polling" approach probably requires programmers to deal with a more convoluted framework than anything I'm suggesting, especially once you need to poll dozens of resources (which is not at all unusual).
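To make the polling-complexity point concrete, here is a minimal Python sketch - every name in it is hypothetical, invented purely for illustration - contrasting the two styles: a polling loop that must own the schedule and change-detection for every source, versus a subscription where each source pushes only actual changes:

 import time

 # Polling style: the application owns the loop, the schedule, and the
 # change-detection for EVERY source; cost is paid even when nothing has
 # changed, and reactivity is bounded by the polling interval.
 def poll_loop(sources, on_change, interval=1.0):
     last_seen = {s.name: None for s in sources}
     while True:
         for s in sources:
             v = s.read()                      # a round-trip per source per tick
             if v != last_seen[s.name]:
                 last_seen[s.name] = v
                 on_change(s.name, v)
         time.sleep(interval)

 # Subscription style: register interest once; each source pushes only
 # actual changes, so the explicit loop (and its bookkeeping) disappears.
 class PushSource:
     def __init__(self, name):
         self.name = name
         self._subscribers = []
     def subscribe(self, callback):
         self._subscribers.append(callback)
     def update(self, value):                  # invoked only when a change occurs
         for cb in self._subscribers:
             cb(self.name, value)

 src = PushSource("radar-7")
 src.subscribe(lambda name, value: print(name, "->", value))
 src.update(42)    # traffic and computation only on actual change

Note how each additional polled source adds bookkeeping to the loop, while each additional push source is just another subscribe call.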
Not all trade-offs are acceptable, but in my experience as a LanguageDesigner I'll say with confidence that, with enough search and thought and experiment, you can discover a 'trade-off' that is a 'trade-up' by almost every metric... the design-equivalent of sacrificing a bad habit.
For the 'reactivity' case, for example, the trade-off is to enforce a certain form of SeparateIoFromCalculation that avoids SideEffects inside the expressions. This allows arbitrary degrees of (a) caching, (b) data-parallelism, (c) partial-evaluation, and (d) sub-expression abstraction, refactoring, and reuse. Yet this separation doesn't actually 'hurt' anything - it doesn't prevent making the expressions themselves react to external side-effects or introducing side-effects as reactions to changes. LazinessImpatienceHubris will follow the encouraged discipline if only because that's the laziest, fastest way to achieve a working, high-performance program - separation will be easier to code than explicitly weaving side-effects with reactions.
This is exactly the sort of trade-off that is ultimately a 'trade-up' for productivity: the resulting code that eschews interweaving of SideEffects will be reusable in more contexts, will require less knowledge of the implementation or usage context, and will have good engineering properties (scalable performance, suitable for automated safety analysis, suitable for refactoring and abstraction, etc.) - and all of this comes from being 'lazy' in a context where laziness is supported and rewarded, and the laziness itself supports productivity further since programmers don't need to know usage context, and don't need to stop and think ahead or be especially 'prudent'.
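A minimal sketch of that separation, using Python's functools.lru_cache as a stand-in for the language-supported caching described above (the function names are hypothetical):

 from functools import lru_cache

 # Pure calculation layer: no SideEffects inside the expressions. Because
 # the result depends only on the arguments, the runtime (here, a memoizing
 # decorator) may cache, parallelize, or partially evaluate it freely.
 @lru_cache(maxsize=None)
 def threat_level(distance_m, speed_mps):
     return min(10, int(1000.0 / max(distance_m, 1)) + int(speed_mps / 5))

 # IO/reaction layer: side-effects live only here, as reactions to changes.
 def on_sensor_update(distance_m, speed_mps, display=print):
     display("threat level:", threat_level(distance_m, speed_mps))

 on_sensor_update(120, 15)
 on_sensor_update(120, 15)   # the pure layer is a cache hit this time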
Only the LanguageDesigner bears the massive burden of 'foresight', but even that 'burden' can be eliminated: LanguageDesign itself can be turned into an iterative process, where one mines frameworks and SoftwareDesignPatterns from established languages to know which features people 'need' based on the simple fact that they have needed it often in the past. Once the need for foresight is eliminated, the LanguageDesigner only needs SelfDiscipline and rigor to figure out how to achieve the features with acceptable trade-offs.
SelfDiscipline + Unix is GoodEnough for TopMind (page-anchor: unix context)
Your solution(s) is a TechniqueWithManyPrerequisites, resembling a GodLanguage. Mine is incremental: use unix-style interfaces to tie existing languages to various tools and services. You need to demonstrate your technique in production for roughly a decade before it's worth considering. The unix philosophy of tool/service interconnection has been around for almost half a century. (A dedicated TOP language would be nice, but one can get decent TOP without it.) -t
First, incremental approaches are over-rated unless you can control for CodeOwnership issues. In particular, they leave behind a long history of BackwardsCompatibility problems. Effectively, all the problems with older approaches are additive, but the benefits of older approaches are marginal.
Second, the unix PipesAndFilters and EverythingIsaFile approach was a partial success, but at the same time a magnificent failure - one that effectively illustrated a need for richer language-like features by its failure.
The unix approach suffered difficulties when dealing with:
- multi-instance concurrency, especially problematic when processes access global resources, including a global database, FileSystem, or UI.
- synchronization issues. The pipeline concept supported a basic synchronization primitive based on buffered I/O, but the unix system was forced to continuously introduce richer primitives (named mutexes, file locks) in order to support users, and these were introduced in the global space (thus exacerbating problems with multi-instance concurrency). (A minimal sketch of this global-namespace locking appears after this list.)
- A DBMS is probably the better tool than the file system for such. PickTheRightToolForTheJob.
- Agreed. Indeed, I think FileSystems suck salty chocolate balls. But the relevant point to take away from the above is that the synchronization ends up occurring in a global namespace via ad-hoc mechanisms in a manner that causes more problems for multi-user and multi-instance concurrency.
- You know I'm going to ask for a pseudocoded example for both the "bad way" and the "good way" for a specific scenario. This applies to the rest of the list also. -t
- performance issues. The continuous serialization costs and context switches have a non-trivial overhead, and it is difficult to support a wide range of performance optimizations across the process boundaries (inlining, partial evaluations, removal of redundant code, lazy evaluations).
- interpretation issues. Extracting meaningful content from an octet stream is a non-trivial problem, one that programming languages have done a great deal to mitigate internally via use of 'type systems' and 'data structures' and so on. Dealing with 'names' for laziness, for caching, for callbacks, for communications - is especially problematic. Programmers in the unix model must deal with this on a per-application basis, and must also deal with potential for 'upgrades' to a given output stream. The introduction of XML and other 'extensible' input formats is relatively recent. See also CrossToolTypeAndObjectSharing.
- error-handling issues. Dealing with partial-failure in a single process is hard. Dealing with partial-failure in a graph of processes is harder. Debugging for multi-process applications also suffers.
- UI integration concerns. I.e. each process may need some interaction with the user, but one needs to ensure these interactions are cohesive, have a common style, have common accessibility properties (like language), and so on. This has encouraged alternatives such as 'GUI shells'.
- distribution concerns. Unix faced a big hit when we went from multi-user centralized-processing-and-storage systems to multi-user distributed-processing-and-storage. The 'processes' are not safely/securely distributed to the resources they need. The 'namespace' doesn't readily support distribution, though some distributed FileSystems aim to help out. The InterProcessCommunications needed an overhaul - and the new system (sockets) was never quite as configurable by 'glue' languages like shell scripts as the old PipesAndFilters system.
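As a concrete illustration of the synchronization point above, here is a hedged Python sketch (all paths hypothetical) of the ad-hoc, global-namespace locking that multi-instance Unix programs typically resort to:

 import fcntl, os

 # The lock is identified by a GLOBAL filesystem name: every instance, for
 # every user, contends on the same name, and correctness depends on every
 # other process voluntarily honoring the same path and the same protocol.
 LOCK_PATH = "/tmp/myapp.lock"        # hypothetical; the global name IS the problem

 def update_shared_config(new_line):
     with open(LOCK_PATH, "w") as lock:
         fcntl.flock(lock, fcntl.LOCK_EX)               # ad-hoc mutex in global space
         try:
             with open("/tmp/myapp.conf", "a") as cfg:  # hypothetical global resource
                 cfg.write(new_line + os.linesep)
         finally:
             fcntl.flock(lock, fcntl.LOCK_UN)

 update_shared_config("color=blue")   # Unix-only: fcntl is a POSIX facility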
Often it proves easier to re-implement whatever features the other process was supposed to provide, turning applications into ever-larger monoliths. Worse, this is a self-perpetuating cycle, since no performance-conscious programmer wants to invoke a monolith from a small application just to get a small subset of its features. The above forces have pressured libraries, as opposed to InterProcessCommunication, into the position of the most prominent and popular form of program extension.
You're imagining that the unix style, plus a heavy helping of SelfDiscipline, is GoodEnough. This makes me think you haven't noticed the unix approach failing, haven't recognized the clues identifying failure: the heavy use of DLLs, the fact that socket connections cannot be easily described in shell-scripts, the ever-growing dependence on frameworks.
If the above problems could be solved, the multi-process approach would certainly be viable. However, I think it important to recognize that language-based solutions and new-OS-based-solutions are equivalent by almost every metric (LanguageIsAnOs, or close-enough). No matter what, a TechniqueWithManyPrerequisites will be required.
You only talk in generalized claims and convoluted innuendos. I want to see specific scenarios of failure or problems. Semi-realistic code samples with sample input and the specific line numbers where a specifically described problem occurs if applicable. If you don't wish to provide such, then I am done here and will LetTheReaderDecide. RaceTheDamnedCar or I leave. Unix has proven successful in the marketplace and is more popular with techies than Microsoft. You need more than vague innuendos to dethrone it. -t
Your demand is naive, like asking me to prove why a forest is dying by pointing at a specific piece of bark and refusing to recognize more subtle forces like climate change, pollution, or infestation by non-native parasites. There is no specific code or line number that demonstrates why the old unix glue-able applications approach has fallen out of favor; there are simply a large number of emergent forces and cycles that make programmers reluctant to favor it - despite its advantages - and many of those forces were described above. The proof of failure is in the application libraries, and in the heavy use of dynamic libraries and plugins instead of multi-process designs. You're free to peruse these libraries, if you wish - online resources include Freshmeat, Sourceforge, the Ubuntu app guide, etc.
- I fail to see how your forest analogy applies to Unix. One can provide numeric metrics to back all of those anyhow: bark samples from multiple trees, water samples from multiple streams, soil samples from multiple spots, etc. I didn't ask for just a single specific piece of bark. If you cannot comparatively analyze and present your claims, most will ignore you, dismissing it as personal speculation. What application libraries "failed" and where is the replacement? Nothing specific, just innuendos. You are HandWaving away from science.
- Applications and libraries ARE the "replacement" that proves the failed Unix model. The Unix model is (or was) PipesAndFilters and EverythingIsaFile. Nowadays, *nix is following the Microsoft model - that being applications and libraries.
- What specifically, GUIs?
- GUIs are included. Indeed, UI integration concerns were listed above because, despite working well enough for certain shell apps, PipesAndFilters worked very poorly for developing integrated GUI applications. TK could work a bit, but all the style and internationalization issues and such were very problematic.
- Microsoft is tending toward a GodLanguage model where everything is integrated (which shouldn't surprise anybody). But I'm not sure I agree that everything is moving that way, or even "should" move that way. For example, Microsoft tends to be slow to follow trends because people have to wait for the integration, and often have no control over the guts. Services, on the other hand, can be used almost immediately, without language change. Do you think when a new standard or trend comes along that Python, Ruby, Java, C++, PHP, etc. users want to wait for language integration? Even if you think they suck, they're not going away anytime soon. Companies still have to link them to other tools and systems. And, do you think MS will be friendly to competing services such as Oracle, MySQL, and Flash? MS has carved out a nice niche where if you live within their products and accept GoodEnough for most typical office uses, then everything is well-integrated. Whether everything should fill that niche is another matter. Java tried to do something similar for larger companies, but for the most part failed at that goal. MS throws armies of developers at their products to try to keep up and integrate at the same time, but only just keep up. If 2 GodLanguage suites compete for that niche, then the market share will likely be too small to justify armies of programmers. Further, MS would choke if a decent desktop-like HTTP-friendly GUI standard came along. The desktop GUI is how they hold us by the balls. Software developers could then hit Windows, Mac, Linux, Arm, and Google OS's at the same time with little or no re-coding, and would stop favoring Windows. -t
- PipesAndFilters is not any less a GodLanguage model than is ComplexEventProcessing, DataDistributionService, or the ApplicationsAndLibraries? model. To me, LanguageIsAnOs, and vice versa. That is, an OperatingSystem is a VirtualMachine no different from the JVM or CLR, and presents its own model for concurrency, IO, GUI, etc. I find it utterly baffling that you see some of these as less 'GodLanguage' than others. With PipesAndFilters, there is a language to hook components together; further, individual components must know (or adapt) the necessary output formats and input formats for any information to be processed. With DDS and CEP, one must also know the IO formats, but also name the IO sources and destinations. With the ApplicationAndLibraries? model, one must hook and manipulate a 'shell' published as a library or DocumentObjectModel. These approaches have their strengths and weaknesses, but all of them require things be integrated. Do you think that when a new standard or trend comes along that Python, Ruby, Java, C++, PHP, etc. users actually have a choice about waiting for language integration? They don't. There is always, always, always an integration effort. You delude yourself into believing that certain forms of integration are "special". That said, I don't like the ApplicationsAndLibraries? approach to development... my reasons, of course, involve all that SafetyGoldPlating you jeer at, but I honestly favor the PublishSubscribeModel (CEP, DDS) and PipesAndFilters model. The former are practical and usable today, and very popular in at least the DOD and US Navy. The latter needs to fix the problems listed above for it to take a prominent position... i.e. we need to support the equivalent of creating PipesAndFilters across multiple machines from a local shell-script, and integrating this with UI local to the machine that initiated it.
- I meant that TextAndAttributeBasedMessaging is usually quicker for a given app language to adapt to than waiting for language-specific libraries; or worse: for a root language change. -t
- So your objection is, essentially, to the use of ABIs produced by separate compilation of libraries? Would you prefer that languages compile into RemoteProcedureCall designs (supporting text) and such?
- Excuse me, but what does ABI stand for? Abstract Business Interface?
- [Your inability or unwillingness to use Google is telling, as is your choice of the word "Business" in the given context. ABI = Application Binary Interface.]
- I apologize for not using Google. My bad. I thought ABI was something already mentioned here. Anyhow, it's not so much about "compiling". It's about app-language-specific libraries (which are usually binary as a by-product). For example, almost every desktop GUI engine in common use is app-language-specific or closely based on an app language, making usage with other languages more difficult, such as requiring involved wrappers/adaptors. If we want to create a single GUI engine that is not app-language-specific, then TextAndAttributeBasedMessaging is the way to go in my opinion (a rough sketch of what such messaging might look like appears after this thread). We did it with RDBMS so that each language doesn't have to reinvent RDBMS or large parts of it.
- [A distinct display or UI language is an old idea. See, for example, http://portal.acm.org/citation.cfm?id=563741]
- Something sufficiently relevant to modern GUIs published in 1975? Sounds suspicious. And for members' eyes only to boot.
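For illustration only - this is a made-up wire format, not any existing protocol - a TextAndAttributeBasedMessaging exchange for a GUI might look like the following Python sketch, where a widget travels as plain key=value lines that any language can emit or parse:

 import io

 def emit_widget(out, widget_type, **attrs):
     # One widget per message: attributes as key=value lines, blank line ends it.
     out.write("widget=%s\n" % widget_type)
     for key, value in attrs.items():
         out.write("%s=%s\n" % (key, value))
     out.write("\n")

 def parse_message(lines):
     # Read key=value lines until the blank terminator; return the attributes.
     attrs = {}
     for line in lines:
         line = line.strip()
         if not line:
             break
         key, _, value = line.partition("=")
         attrs[key] = value
     return attrs

 buf = io.StringIO()
 emit_widget(buf, "button", label="Save", x=10, y=20, enabled="true")
 print(parse_message(io.StringIO(buf.getvalue())))
 # {'widget': 'button', 'label': 'Save', 'x': '10', 'y': '20', 'enabled': 'true'}

The point of the sketch is only that nothing in it is bound to any one application language: the "API" is the text format itself.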
And this is not "dethroning" Unix, but rather recognizing that the old 'unix' approach to application design fell off the throne a long time ago. I'd honestly like to get glue-able apps back on the throne, but I'm not the sort of delusional moron who pretends that the failure was due entirely to a 'fad'. If you ask 'why' the unix approach failed, and search for an answer, you'll find a number of issues that it failed to address - many of which are named above. You can interview developers, ask why they chose not to use the PipesAndFilters style for their app, or why they used a DLL-plugin style instead, and so on. Having been one such developer, most of my answers would be from the above list.
If you continue to speak in terms of generalities, there will be no real communication between us. If you want practitioners to care about your pet solutions, you must find a way to demonstrate them being better. Can we at least focus on the code creation and maintenance aspect? Can you show a scenario where Unix would require more code and/or more code changes than your pet solution?
I have not been promoting a pet solution. I have been demoting the Unix solution. There is a difference. When I'm attempting to make people care about my specific solution - which would be totally inappropriate for this page - I'll be sure to provide examples.
If you can show noticeable improvement measurements in say 5 to 10 different factors, then your claim may at least deserve to spark further inquiry. Same with your forest example.
And I never said to always use PipesAndFilters for everything. But for language-neutrality, they are hard to beat. Re-inventing every service for every app language is not economical. It is a huge violation of OnceAndOnlyOnce. Related: WhereToImplementPattern.
Yes, PipesAndFilters have lots of nice properties. Now, if only they weren't a failure for reasons conjectured above...
I wish to see a semi-realistic UseCase/scenario of failure. The above is vague and indirect and non-specific. This is not an unrealistic request. Most people want to see real-world problems, not just chalkboard problems.
Sorry, TopMind, but that is an unrealistic request. For four reasons: First, realistic scenarios don't fit into a WikiWiki discussion - only ArgumentByLabToy or simplified example, plus generalization by logic, fits into a debate. Second, you have this strange definition of 'realistic scenario' that excludes things that don't involve report generation and business applications, both of which are outside my expertise. Third, realistic scenarios tend to be complex and messy to the degree that the issues relevant to the debate are obscured by issues irrelevant to the debate, thus attempting to demonstrate anything by the use of realistic scenarios strikes me as unrealistic. Fourth, realistic scenarios are precisely the set of scenarios that are easy to achieve with the tools that exist today, which makes them a remarkably poor vector for demonstrating the weaknesses of a given approach in achieving a desirable set of features... i.e. it is largely an "unrealistic scenario" to use Unix PipesAndFilters to build a GUI because of the weaknesses listed above. If you think a "realistic scenario" will be relevant to defending your position in the debate, I invite you to find one and bring it in - find an example of PipesAndFilters used for GUI to demonstrate that the weakness doesn't exist. At the very least, you can learn how unwieldy "realistic examples" happen to be.
I can describe even the most complex problem area to someone if I simply try hard enough. It is doable; it just takes articulative mind power and patience to pull it off. And while it's true one may not be able to describe an entire application, it is usually possible to describe specific problem scenarios/areas. If it's beyond description, then it's also likely beyond rational mental analysis, and thus is merely an emotion-based feeling in your head which is cherry-picking pet factors out of a big sea of factors unconsciously. -t
You make a serious error if you equate "describing a problem area" to demonstrating a property. The examples you've offered in the past of your oh-so-superior articulation essentially rely upon the audience accepting you at your word as opposed to looking for any possible reason to reject your examples and explanations (i.e. "why do I need tables? I could easily use arrays for all the examples you've listed in the WikiWiki so far... they're so small, no indexing needed!" - strikes me as no less reasonable than your "but we could use polling here... it's just a single case!"). How many people in your audience demand you find a definition before moving on, then demand you find an "algorithm" for your definitions, then use "but it could be done another way" (in some arbitrary context) to reject your simple examples, or reject them as not being "realistic" because they don't involve their favorite problem domain, and so on? If you are oh so magnificent in your articulation, then please explain, with realistic examples, how to deal with an audience that has an agenda in rejecting everything you say, so that I know how to deal with you. Perhaps I should emulate your behavior for a while - which will require that I act stupendously uneducated - so that you can see what it's like to deal with you. Or, perhaps, I should ask you to prove your mettle on comp.object, where (similar to WikiWiki) most everyone thinks you're a troll. Anyhow, the simple examples and lists I present are very subject to mental analysis. Of course, since you brought up Unix methodologies, I do assume a certain level of practitioner's knowledge in Unix - i.e. basic knowledge about how Unix PipesAndFilters hook up under-the-hood, basic knowledge about how sockets work, basic knowledge of each of the synchronization types named above, and so on. If you don't already know those things, you really shouldn't be discussing the 'merits' of Unix from a developer's standpoint.
- Most of the above is too general to reply to (as usual).
- What a hypocritical criticism. Specifically, what is too general to reply to?
- As far as providing "proof" of my favorite techniques, as I've said before many times, much of the evidence involves personal WetWare. I don't claim universality because every head is different. But I will defend specific issues objectively when they arise, like showing how tables can go from one value to two values per node far more easily, with less code change, than arrays. If I can find 10 common situations where tables do measurably better than arrays but the array fan only finds 2 that show arrays better, then I've made a pretty good case. The reader can then decide which scenarios and/or metrics best represent their particular need or style and plug in their own weights. That's how it should be. It may take several scenarios to get enough angles, but so be it. Some readers may even find your 2 array scenarios better fit their situation. That's fine by me and I expect it for some niches. But they cannot select the scenarios that best fit their situation if you don't give any. You just lack the imagination or will to represent perceived problem areas as semi-realistic little scenarios. I can't help you with that gap. I suspect you lack sufficient real-world experience to have that skill. You've been playing with school toys too long. -t
- As far as "everyone thinks you are a troll", that's not true. When I used to have a public email address, I'd get plenty of "fan mail" encouraging me to fright on. There are a lot of people who've been brow-beat for not accepting OOP as the One True Paradigm, but are too shy to defend their POV. Those who disagree with me are the ones calling me a "troll". Personal attacks are an over-used web substitute for real science and debate. You guys rarely have any real evidence against me, only innuendos, hand-wavy notions, and self-bragging. You wouldn't have to make statements such as "everyone thinks you are a troll" if you were truly comfortable with your evidence. It would be irrelevant. -t
- I'll need to remember: if you praise TopMind even once for anything at all, he'll call it "fan mail" and start thinking in terms of "toppie supporters" and the "anti-toppie conspiracy". TopMind, even if you do not acknowledge it, there must be a reason you are the only person with a whole page on WikiWiki dedicated to bashing you (ObjectiveEvidenceAgainstTopDiscussion) despite the number of other wikizens who don't bow to the sacred cows. Regardless of whichever straw cows you were burning, that ought to put a dent in your arrogant claim that you are oh so great at articulating things in ways that other people will believe make sense. Learn some humility. Every time you start referencing your alleged 'fans', I think you're an arrogant ass - the sort of idiot who rests on the laurels of decades before. You are not impressing me with anything else.
- Those who don't "bow to sacred cows" usually avoid the more heated discussions. It's a relatively small cabal of strong-willed people who repeatedly attack me. After all, is it a coincidence that 90% of my "enemies" don't sign their content???
- [I don't sign my content because I believe in an EgolessWiki; it's got nothing to do with you. The truth is that no one is attacking you. We're defending rationally-obtained knowledge against your ignorance. The sad fact is that although you appear to think you know a lot about everything, you obviously know very little about anything. In fact, you know so little about anything that you aren't even aware you know very little about anything. You are a classic example of UnskilledAndUnawareOfIt, probably because HumansAreLousyAtSelfEvaluation. From the point of view of an educated practitioner, creations like TypesAreSideFlags, or your claim that HTTP is based on PipesAndFilters, are ludicrous. They're as ridiculous as insisting to the medical community that DiseasesAreCornFlakes, or insisting to automotive engineers that cars are based on cheese. If you're going to persist in posting such gaffes, we're going to persist in defending against them. By the way, do you mean "cabal", or "COBOL"? There is no "cabol".]
- I meant that the signing rate of those who call me names is noticeably smaller than c2 wiki in general, suggesting those name-callers are not a representative sample of c2 opinion. As far as HTTP, I meant in a general philosophical/architectural sense, not as actual technical building blocks. Again, you slide back and forth between super-literal or very loosey goosey as it suits your argument. As far as always being wrong, you've had ample opportunity to corner me with actual facts and logic, which should happen often if I was as stupid as you paint me. (And your def of "types" is tied to "intent", which you've failed to clearly define, fuzzhead! You repeatedly mistake personal opinion for objective universal truth.) You've got nothing real on me because you are anti-science without even knowing it. You are afraid of semi-realistic test scenarios as if they are some kind of poison. Just admit that you lack sufficient practical experience and are a fish out of water. Get your face out of academic books that nobody cares about and do some damn real work for a change for real people and real companies instead of working for ex-classmates. (I fixed some of the spelling you complained about. Topics that are too large overwhelm my spellchecker on slower PCs.) -t
- Perhaps people are embarrassed to associate their name with a stubborn ignoramus who isn't even respectful enough to offer his own.
- Nice excuse, blowhard. Anyhow, I don't expect RealNames?, just consistent handles.
- [Why? So you can better target your AdHominem attacks and avoid tackling the salient points even more frequently?]
- You mean something like: Mr. Square-bracket-quoter, you are an arrogant verbose round-about excuse-making detached-from-reality self-important fool!
- [Yes, that would be it.]
- Well, handles won't affect the frequency, only help dole them out to the proper persons.
- [Yes, there's a bit of that, too. Top, even in a philosophical/architectural sense, HTTP has nothing to do with PipesAndFilters. You have been proven wrong repeatedly; it is only you who can't see it. I have not defined "types". As for practical experience, I've been developing custom business applications since 1984. Despite being employed as a full-time academic, I continue to do real development and real work for real companies. It is precisely on that basis that I claim you are ignorant. I suspect you wish you were knowledgeable or else you wouldn't be here, but what you lack is the determination and dedication to become knowledgeable, and so you attempt to substitute the impression of knowledge but provide none of the substance. Perhaps there's a place where you get away with that, and are highly regarded for your "wisdom". If so, fine. Here is not that place.]
- If you are a full-time academic, then likely you are hired by companies closely associated with academics and they are used to academic-speak and academic-centric techniques (obscure techniques and languages that look wonderful on paper). CompaniesHireLikeMinded. Anyhow, your refusal to demonstrate your claims in terms of semi-realistic scenarios is both very strange and very frustrating. You are soooo against RaceTheDamnedCar that we will just never be able to communicate effectively. You are an alien life-form with an alien language. And you confused the whole Unix thing, as described elsewhere. -t
- [I've been developing custom business applications since 1984. I've been a full-time academic since 2003. In that time, I successfully worked with a wide variety of companies, some of which were indeed academic, but others were exceedingly pragmatic and even anti-academic. Actually, I have demonstrated my claims using not only semi-realistic scenarios, but with code based directly on production systems. See PayrollExampleTwo. It was you who has not addressed my request for an equivalent using TableOrientedProgramming. So, who's against RaceTheDamnedCar? As for "the whole Unix thing", I've no idea what you're referring to.]
- As far as PayrollExampleTwo, I *agreed* in PayrollExampleTwoDiscussion that TOP was not appropriate for the formula-oriented portion as long as there was not a large quantity of formulas (TheRightToolForTheJob). There were also some WetWare issues as far as code size, style, and readability that are not subject to objective analysis (without a large sample of volunteers). If there is a specific point/metric you wish to revisit, then please do so there. Otherwise, we have no goal and it's not relevant to anything here.
- [...Other than the fact that you accuse others of doing what you do yourself, thus making you a big, fat hypocrite.]
- Whaaaat? You pointed out a weak-spot in TOP and I fricken agreed with you. What else do you want? Jeez Louise!
- [From you, I expect no less than scientific rigour, balance, depth, impartiality, and ROBOT-LIKE PERFECTION. Mere agreement is of such negligible significance as to be ignored.]
- Huh?
- Note my criticism of your articulation abilities is not necessarily a claim that mine are A+. Some of your personal digs suggest that I claimed A+ ability, which is false. However, I would be ashamed of myself if I thought that my pet ideas were objectively better but was not able to demonstrate it in any usable objective form.
- You should be ashamed of yourself for pretending "it works better in MY WetWare" is a rational statement. Even if it was true, you can't logically prove it - not even to yourself. It is impossible for you to separate nature from nurture with a sample-size of one, thus your entire argument devolves to: "but this is what I'm familiar with whether or not the other approach would work better for me if I could be bothered learning it". Thus, your arguments about subjective superiority are at best dubious and indefensible.
- Please clarify re: "the simple examples and lists I present are very subject to mental analysis." And I wish to see scenario analysis with SOME kind of numeric metric. Your "mental analyses" tend to meander into your own personal convoluted round-about mental WalledGarden. We need scenarios and numbers to escape that, to ground them to reality. -t
- Consider the first thing I listed above: "multi-instance concurrency, especially problematic when processes access global resources, including a global database, FileSystem, or UI." This, itself, contains a list of global resource classes: "global database, FileSystem, or UI". If you know Unix, then you can think of plenty of examples of each of those without any prompting from me (i.e. files in /etc or /usr/share, the password file/database, etc.) - indeed, very few processes in Unix escape the need for configuration or persistence of some sort. If you know also about concurrency issues (race-conditions, time-reversals, etc.) then you should be fully capable of the minimal "mental analysis" to apply those concurrency issues to these global resources. You seem to think it a "personal, convoluted round-about mental WalledGarden", TopMind, but I think anyone who isn't either ignorant or hostile could make those very-heavily-implied connections. And, if you invoked 'Unix philosophy' as a defense in ignorance of the problems it ran into historically, I consider that to be the result of arrogance and ignorance - your problem, not mine.
- You are talking about specific technologies first and needs second, if at all. You might be using the wrong specific technology for the job. I cannot tell if that's what happening without knowing what "the job" is. Create a semi-realistic scenario that Unix cannot handle. There's 5000 stores and each store needs to blah blah blah every 3rd Thursday of the month because etc. etc. etc. If you do X, then Unix will burst at port 7 and explode in the operator's face. That kind of thing. -t
- Anyhow, I'm switching to the term TextAndAttributeBasedMessaging because if I mention Unix, then you over-focus on specific commands or features, missing my point entirely. -t
These specific examples should be obvious to anyone who understands Unix. For example, if you consider which files are global resources you could quickly identify anything in the /etc and /usr/share directories, including the password file and a large number of configuration files. Many non-trivial components end up using global resources, often for both input and output. If you also know what 'multi-instance concurrency' means - which follows its denotation very simply: multiple instances of a process (i.e. for multi-users) - and you also grok the various concurrency issues (race conditions, time reversals, etc.), then no more needs to be said.
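To make the implied connection explicit, here is a self-contained Python demonstration of the lost-update race on such a global resource (the file path is hypothetical, standing in for any /etc-style shared state; POSIX process semantics are assumed):

 import multiprocessing, time

 # Lost-update race: two instances of the same tool do an uncoordinated
 # read-modify-write on a shared file.
 PATH = "/tmp/shared_counter.txt"

 def increment(path):
     with open(path) as f:
         value = int(f.read())          # read current state
     time.sleep(0.1)                    # window in which the other instance runs
     with open(path, "w") as f:
         f.write(str(value + 1))        # write back a now-stale result

 if __name__ == "__main__":
     with open(PATH, "w") as f:
         f.write("0")
     procs = [multiprocessing.Process(target=increment, args=(PATH,)) for _ in range(2)]
     for p in procs:
         p.start()
     for p in procs:
         p.join()
     with open(PATH) as f:
         print(f.read())                # prints 1, not 2: one update was lost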
As far as pipe-based GUIs go, it could be done if a decent language and conventions are created.
Where's your realistic example? And if you want to defend that the Unix approach works, then you need Unix PipesAndFilters, TopMind. Resorting to "creating a decent language" sounds like something I've said. Yes, with a decent language, we could make GUIs out of PipesAndFilters again. Now you just need to perform and justify a RequirementsAnalysis for this 'decent language'.
So far everyone tries to make app-language-specific GUI engines out of historical habit. People tend to copy what they know already, and our GUI habits were born of ParcPlace SmallTalk. I may present a draft desktop/crud-friendly markup language one of these days. I've been studying desktop GUI idioms, looking for common patterns such that I can factor them into fewer idioms that are sufficiently close to actual instances. -t
If you look into history, you'll find that some of the older forms - especially alert boxes - have indeed been used as elements of a pipeline. Problem was dealing with style-issues and such... i.e. you need to push all the GUI IO back through a central locus in order to consistently apply style and internationalization and accessibility transforms, and to deal with modalities, and so on.
I see nothing wrong with pushing many of those to the server. It's better OnceAndOnlyOnce to upgrade a central server than 3,000 clients with 20 different browser versions/brands. And does it make sense for the client to download 100 language mappings and then just use one?
What exactly are you pushing to the server, and how are you achieving Unix PipesAndFilters while doing it? (This discussion, if you've forgotten, is about why the Unix approach failed, not about the best way to do GUIs.)
As far as GUIs, are you saying that a markup-over-HTTP approach cannot be made to produce effective GUIs because of some universal flaw, or merely that it has not been demonstrated as possible so far? -t
I have said nothing about the markup-over-HTTP approach, and I especially haven't said anything about universal flaws (only about Unix flaws, which you'd know if you were paying even the slightest bit of attention...). However, I would say that the design of modern web-servers, and HTTP itself, is a major departure from the original Unix philosophies and design methodologies (such as PipesAndFilters). For example, you cannot take an arbitrary process that needs HTTP input, and use a script to hook it to a process that provides an HTTP service. Instead, you need expensive and round-about approaches, usually going through a global DomainNameService?. Even CommonGatewayInterface itself broke the mold of EverythingIsaFile, since it needs to treat processes as files (which Unix never did natively, though PlanNine made a reasonable attempt).
- You don't have to go through global DNS if you don't want to, especially if it's an intranet. If it is on the internet and you want to make repeated access cycles, then you can fetch the IP address once during the start of a session and then use the IP address for the rest of the session instead of the domain name, saving some time (a minimal sketch follows this thread). Any system that has a logical and physical address space will potentially offer up such choices, along with the natural trade-offs that come with each. At least HTTP offers that as a choice. That's generally a good thing.
- Even just hooking through IPs is still via a global service.
- I highly suspect that ANY global service with similar abilities will have something similar. It's all about name/address-space management. I doubt it can be improved upon significantly without taking something else away.
- If you start by assuming a global service, then of course you'll end up with one. One can improve both system flexibility and system security by taking away the "global". The reason Unix PipesAndFilters can be hooked together by scripts is that the input and output ends on each process are initially owned and manipulated entirely within the shell's command-line process.
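A minimal sketch of the resolve-once tactic mentioned above (the host name is a placeholder; note that virtual hosting and TLS certificate checks can complicate addressing a server by raw IP, so treat this as an illustration, not a recipe):

 import socket, urllib.request

 HOST = "example.com"                      # placeholder host

 ip = socket.gethostbyname(HOST)           # one global-DNS round-trip per session
 print("resolved", HOST, "->", ip)

 # Later requests in the session address the IP directly; the Host header
 # preserves virtual-host routing on the server side.
 req = urllib.request.Request("http://%s/" % ip, headers={"Host": HOST})
 with urllib.request.urlopen(req, timeout=5) as resp:
     print(resp.status)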
Indeed, for GUIs I believe that markup-over-HTTP has already been proven to make effective GUIs (WebApplications?). I think we can do much better, but the fact that we could do well at all is itself an indicator of various flaws in the ApplicationsAndLibraries? approach to software composition.
Programming with HtmlDomJsCss stack is butt-ugly compared to dedicated desktop GUI tools. So far, improved deploy-ability is why they are often favored over desktop apps, not the GUI development itself. Most who used VB, Delphi, PowerBuilder, etc. to build custom biz apps will generally agree. I believe the situation could be improved if we dump the e-brochure-based GUI paradigm that HtmlDomJsCss is built around. -t
NovemberZeroNine
CategoryHumanFactors, CategoryUnix