Role Of Computer Science

I'm a ComputerScience guy. In my role as a ComputerScience guy, I study computation, communication, calculation, and the properties thereof. I trust the developers and software engineers to know what properties they want, and I find ways to give those properties to them.

How does ComputerScience help developer productivity, code management, and such?

To me, that is an irrelevant question. You don't ask a rocket scientist how much 'rocket science' helps the human endeavor, because knowing that isn't part of the rocket scientist's job. The rocket scientist might point out the satellites orbiting Earth, but likely does so without knowing their cost-to-benefit ratio.

What I can note is that my skills are in heavy demand, and that works of ComputerScience are often used today in ways that others claim aid "developer productivity" and "code management". (Examples of such works include programming languages, concurrency models, garbage collection, data models, database management systems, commit protocols, version control software, configuration management utilities, etc.) I can't offer you exact numbers on how much these 'help', or a precise 'model' of 'developer productivity'. Knowing those things isn't part of my job.

I've looked at a few ComputerScience papers. It seems they're just offering opinions or observations about how something will 'help' developers.

Authors of ComputerScience papers aren't expected to prove demand for a given set of computational properties. They need only prove, empirically or mathematically (or both), that the set of properties is achieved. This means that the 'reasons' for such properties often get waved away in the introductions to the papers - the reasons aren't critical to the contents of the paper, except insofar as knowing the reasons for a set of properties might help users of the paper decide (e.g. due to an analogous situation) whether its contents are relevant to them.

Among those properties we are asked to achieve for developers in particular: OnceAndOnlyOnce, support for ad-hoc ChangePatterns, the ability to detect mistakes prior to shipping products, the ability to break large projects into bite-sized pieces that can be contracted to independent developers, the ability to guarantee there are no deadlocks or race conditions, the ability to code without worrying about buffer overruns, the ability to ensure that a change to many cells of persistent data is atomic in the event of certain hardware failures, and on and on and on.

ComputerScience people don't need to ask "why"... except out of curiosity, perhaps a need to feel their work is meaningful, or as observations from their own roles as developers. If you need proof that these properties are worthwhile, you'll need to wait until the authors (or the employers of the authors) put on their SoftwareEngineering hats and write SoftwareEngineering papers.

I've read about "provably correct software", but it has proven too time-consuming in practice.

As a practitioner of ComputerScience I have only once been asked to prove a particular unit of software correct (a RAID driver). Most of the time, when it comes to 'correctness', we are asked only to demonstrate resistance to certain classes of errors. For example, the ability of a commit protocol to avoid partial writes in the event the computer is shut down mid-transaction. Or the ability to identify obvious instances of undefined behavior prior to shipping a product.
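To make the partial-write example concrete, here is a minimal sketch (my own illustration, not any particular published protocol) of one common trick: write the new state to a scratch file, flush it, then atomically rename it over the old file. On POSIX systems, rename() replaces the target in a single step, so a crash leaves either the old state or the new state on disk, never a mix. The file names and the atomicCommit function are hypothetical.

  // Sketch (TypeScript/Node.js): crash-resistant commit via write-then-rename.
  import { writeFileSync, renameSync, openSync, fsyncSync, closeSync } from "fs";

  function atomicCommit(path: string, newState: string): void {
    const scratch = path + ".tmp";    // partial writes land here, not in 'path'
    writeFileSync(scratch, newState);
    const fd = openSync(scratch, "r");
    fsyncSync(fd);                    // force the bytes to disk before committing
    closeSync(fd);
    renameSync(scratch, path);        // the atomic "commit point"
  }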

Your claim that it "has proven too time-consuming in practice" is probably a claim you'll need to take up with SoftwareEngineering people. All I can do is observe that people are still paid to do it on occasion, and that a great many organizations use automated tools (type checkers, lint-style code combers, memory leak detectors) to keep certain forms of errors from being shipped. And they do so on a regular basis.

Fans of "type-heavy" programming also claim it allows the compiler to verify more and prevent tons of bugs with few downsides, but we have enough debates about that already.

Yes, let's not start a debate here. Let's, instead, only point out that some people believe the cost-to-benefit ratio is a profitable one, and that some people do not. And perhaps they're both right. The world is a big place, and the economic incentives can vary from one group to another.

It is not a fundamental role of science to decide what people "should" do, only to support them in enhancing what they "can" do. Type-heavy programming is one means by which people "can" verify code and prevent certain classes of software error.
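As a toy illustration of that "can" (a sketch of my own, with made-up names, offered as one example rather than anyone's specific claim here): in a language such as TypeScript, two kinds of ID can be declared as distinct types even though both are plain numbers at runtime, and the compiler will then reject code that confuses them before it ever ships.

  // Sketch: "branded" types let the compiler reject mixed-up IDs.
  type CustomerId = number & { readonly __tag: "CustomerId" };
  type OrderId    = number & { readonly __tag: "OrderId" };

  function cancelOrder(id: OrderId): void {
    console.log("cancelling order", id);
  }

  const customer = 42 as CustomerId;
  // cancelOrder(customer); // compile-time error: CustomerId is not assignable to OrderId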


RE: "Different mechanisms of accomplishing a particular goal are evaluated economically, not scientifically. Scientists and engineers might work to make a particular mechanism cheaper, of course. For example, type-checks were a lot more expensive and less reliable when they were done by hand."

Don't the type-heavy proponents here claim that heavy typing is objectively more economical, and that those who disagree are simply not smart enough to understand their "economic" proof?

I'm certain fanatics of all sorts make claims that I'm not willing to defend. But, honestly, I've never seen this claim that types are objectively more economical in all situations... perhaps someone claimed it, but I've not seen it. I have seen the claim that software-provably-without-certain-classes-of-bugs is 'better' than software-that-may-or-may-not-have-those-bugs. It's not a claim I'd dispute; the only question is whether or not that level of 'better' is worth the cost of achieving it.

Now you are suggesting that there is no such economic truth to type-heavy.

There is no universal economic truth except for supply and demand. It is likely that "type-heavy", however you're defining it, is objectively more economical in some circumstances, uneconomical in others, and on the borderline, with a cost-to-benefit ratio near 1.0, in yet others.

Which is it? Is this smart-person-only type-heavy proof scientific or economic in nature? Or just a wrong view? --top

If they claimed it was universally more economical, it would be bullshit. If they claimed it was more economical but didn't specify 'universally', then perhaps they were referring to a situation described in context. If they claimed that software provably without bugs is better than software possibly without bugs, then you've grossly misinterpreted their claim. I guess I'd need to see the assertion if I'm to answer this question of yours.

If I encounter it again, I'll link to it. I had hoped this insistent typing proponent(s) would chime in. They implied a pretty wide scope as I remembered it: that even "nimble" stuff would be quicker and smoother because well-defined types would allow more to be automated.

RE: "I had hoped this insistent typing proponent(s) would chime in."

(Moved "troll" complaint to BeingUnpopularHere.)

I'm not certain why you'd wish to enshrine personal stuff in its own topic, unless you refer to your home page.

Too full. Eventually we can consider deleting that topic anyhow.

Note that I may have inadvertently deleted a reply or two of yours. I apologize. It was not intentional.


I'm not quite sure what you mean. Determining which behaviors optimize customer satisfaction scores (or whatever success metric is being used) is still useful even if some developers ignore the recommendations.

You seem to be working hard to justify a weak showing. You can only dress up a canary so much, but it's still not an Xmas turkey.

Top, you are declaring ComputerScience to have a broader scope than it does, then you are blaming ComputerScience for its lack of accomplishment in areas that you just declared to be part of its scope. Perhaps the breadth of ComputerScience isn't as grand as you had hoped. As a ComputerScience guy, that your hopes are dashed doesn't bother me. Your dashed dreams don't diminish the demand for ComputerScience in the real world.

SoftwareEngineering is a cousin branch to ComputerScience, sharing much terminology and little in terms of subject matter. If you want to claim that SoftwareEngineering hasn't had any successes or improved productivity or improved consistency and reliability above whatever would be accomplished by a bunch of disorganized hackers in a room, then feel free to take it up with the SoftwareEngineering guys. I expect they'll know what has proven successful better than I do with my one, lonely, introductory course in SoftwareEngineering relative to my dozen courses in operating systems, language theory, automated theorem proving, etc.

"Computer science" provides some raw ideas to be tested in production, sort of like the way mutations are the fuel of bio evolution but not the engine. Some of these ideas stick, but a good many don't. There are so many dead-end ideas from CS that it's hard to tell if the success rate of "computer science" is any higher than random ideas. --top

Science isn't about the creation of ideas. Science is about the systematic development, testing, analysis, and mechanical rejection of models (a limited class of 'ideas').

It's both. You cannot test models if there are no models to test. You should know this if you are the big ball of education you claim you are.

With a little self-education, you'll find all real sciences have a great many dead-end ideas. ComputerScience fits right in.

Yeah, the ones you back.

Yes, I back the sciences that kill ideas. That's what science is: a systematic approach to ensuring bad ideas (in particular, ineffective models and erroneous hypotheses) reach a dead end. I back 'real' science. Which one do you back?

What you do is NOT science. Science actually expects empirical tests. Science measures stuff. You don't measure. You come up with convoluted notations or convoluted chains of reasoning and believe that to be sufficient. It's not. You must race your car, not merely talk about your car or just show math about your car.

Theories and hypotheses are repeatedly put to the test and verified empirically and mathematically then documented in various theses and papers written by the more avid practitioners of ComputerScience. Far more CS-related analysis and testing goes undocumented or remains proprietary. Most CS practitioners aren't proposing GoldenHammers that are better in every possible way and situation; they're just finding systematic approaches to achieving sets of properties that are useful in a given domain. Testing and derived measures and mathematical proofs demonstrate that the full set of targeted properties is achieved. Perhaps the properties they're testing and measuring aren't the ones you care about, but that doesn't mean there isn't empirical testing going on. Perhaps you should delay your assertions about what people are and are not doing until after you start paying attention to what is happening in the world around you. Read the occasional CS thesis, ACM journal article, conference paper, design presentation, lessons-learned document, etc.

That's a pleasant and diplomatic response (which is rare around here), but otherwise free of usable content and usable specifics, to be frank. It's similar to "if you go to church with me every Sunday, you will eventually 'get it'."

[I'm not sure what you're looking for... A specific ComputerScience article, containing mathematical and empirical testing, that helps you with an everyday and common issue? How about http://www.minet.uni-jena.de/dbis/lehre/ws2005/dbs1/Bayer_hist.pdf or http://portal.acm.org/citation.cfm?id=359839 ? Without this type of indexing, TableOrientedProgramming and relational DBMSes would not be feasible.]
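[The point behind such indexing can be shown with a toy sketch (mine, not taken from the cited papers): a sorted index turns an O(n) scan into an O(log n) binary search, the idea that B-trees generalize to disk pages. The data and function names below are made up.

  // Sketch: lookup via a sorted index (binary search).
  function indexLookup(sortedKeys: number[], key: number): number {
    let lo = 0, hi = sortedKeys.length - 1;
    while (lo <= hi) {
      const mid = (lo + hi) >> 1;              // halve the search range each step
      if (sortedKeys[mid] === key) return mid;
      if (sortedKeys[mid] < key) lo = mid + 1; else hi = mid - 1;
    }
    return -1;                                 // not found
  }
  // A million-key index needs about 20 comparisons this way, versus
  // about 500,000 on average for a full scan.]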

Again, again, again: that is about "performance issues", which I don't dispute with regard to "the gap". CS has done quite well in the performance arena (although it could perhaps be argued that only incremental improvements in indexing techniques have been found since the 1970s).

[ComputerScience, when concerned about pragmatic issues, is principally about finding ways to improve or reliably predict machine performance, reduce or reliably predict resource consumption, or provide mathematical guarantees of reliability, integrity, security, or correctness. I suspect what you're looking for are definitive results from SoftwareEngineering, information technology, business management studies, or other Computing research (e.g., in HCI), but unfortunately these fields don't work like that. The amorphous and unquantifiable nature of qualities like "better" or "easier" makes achieving singular results exceedingly difficult. Hence, these fields tend to produce subtle results rather than dramatic ones, except for studies like the Standish Group's CHAOS Report (for example) that found a consistently poor success rate for very large IT projects. That is significant, and no doubt part of the general industry impetus for moving from traditional development methodologies to Agile approaches. Surely, that has had some impact on the organisation you work for (and, therefore, you as a developer and your productivity), if indirectly, and even if only to influence attitudes and/or discussions around the water cooler.]


Scope of CS Improvements

This is still not very specific. What specific "computer science" research can you point to as helping with everyday and common issues related to code management and developer productivity?

You are repeating questions that I already explained are irrelevant. Revisit the answer to: "How does Computer Science help developer productivity and such?"

But those are areas that a common practitioner has little control over, even if I did agree that they came only from CS (semi-disputed). I don't make compilers, I make custom software. I was not hired to make compilers from scratch and would probably get fired if I did. You seem to be dismissing such concerns as "economic and engineering questions, not science" (paraphrased). If that's the case, then a detailed knowledge of CS may not be of much help to a custom software practitioner, which is what I originally stated. (I'm not saying such knowledge has zero value for such circumstances, only that you are greatly exaggerating its worth.) -t

[Is this not the same as, "I'm a car mechanic, what physics research can you point to that helps me fix a transmission?" Obviously, physics does not play a direct role in this mundane job function, yet it indirectly (and obviously) underpins it in a fundamental way. Yet, for the car mechanic who does understand physics, specialised jobs can represent profitable opportunity. For example, a mechanic's ability to understand the forces involved in a spinning transmission assembly can have a direct impact on his or her choice of which vendor's gears to choose when re-building it for racing purposes. The corner garage grease-monkey might not find anything of value in physics research, but the F1 engineer -- or even a serious hot-rodder or modder -- certainly will. In short, understanding ComputerScience may not be of much relevance to the average programmer, but to one who wishes to transcend the mundane and create something new (such as a code generator, i.e., compiler, that automates certain tedious grunt-work), it can be very relevant. And profitable...]

I suspect you (top) greatly underestimate the worth of ComputerScience. As a pseudo-metric, perhaps you should count as the "cost of ignoring ComputerScience" the man-years in real software projects spent re-implementing software architectures into designs known twenty or more years before the version of the software being re-implemented. Projects go through such re-implementation on a regular basis, re-inventing AbstractFactory and PluginArchitecture and Lisp and DatabaseReinventedInApplication and ScriptingLanguageAgnosticSystem and AlternateHardAndSoftLayers, in addition to reinvention of specific things like cryptography and plain-text protocols. The degree to which such rework is avoided is the degree to which the current state of ComputerScience is aiding the 'common practitioner'. ComputerScience done today probably won't help the common practitioners of today, but that doesn't mean it wasn't worthy of investment. Start counting in two decades.

Your answer is not specific enough to be usable. It comes across as nebulous patronization, and does not even seem to address the issue raised. Many of the tools/techniques you mentioned came about by experimentation, not raw math/science anyhow.

Huh? What about science excludes experimentation? You seem to have some very far-fetched ideas about what 'science' is.

Tinkering about is not usually what people think of when they hear "computer science". However, I do agree the working terminology could use some refinement. Even a bear hunting for food under a new log is doing some degree of "science", and except for Yogi, has no formal education.

I'm not sure what 'people usually think of' when they hear "computer science", but I doubt you know either, and I'd appreciate it if you stop polluting the discussion with invented facts.

It strikes me as HandWaving and causes me to distrust your words. I suspect that most people think of 'tinkering' and 'invention' when they think about the science industry in general, be it chemistry, physics, psychology, or computers. Tinkering, documenting success and failure, and capitalizing on success via production are all parts of industry. Hypothesis (about what works, about why it works, about which different things can be combined), application of hypotheses to future activities, and especially the killing of hypotheses by observations outside the pool used to construct them, are the heart of science.

Fundamentally, the monetary value of a formal education is the cost of every mistake and every unit of rework that would otherwise be incurred gaining equivalent experience without it, minus the value of the years of labor spent on the education. As a prospective employer, I'd rather not have you making mistakes and gaining experience at my expense unless there is no other choice. I'd only be willing to hire people who are either educated or who have gained a lot of experience at someone else's expense. I think most employers feel the same. This suggests a formal education in CS is still of considerable value to potential employees. Make of that what you will.
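As a rough formula (my notation; just a restatement of the pseudo-metric above):

  $V_{\text{education}} \approx \sum_i C_i - W \cdot Y$

where each $C_i$ is the cost of a mistake or unit of rework avoided, $W$ is the yearly value of the person's labor, and $Y$ is the years spent in formal education. The education pays off when the avoided costs exceed the forgone labor.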

But you didn't say "formal education in CS is overestimated". You said "knowledge of CS is overestimated".

I don't think many practitioners would usefully achieve much if they had no access to the knowledge artifacts of ComputerScience, such as examples of what works and what doesn't, explanations as to 'why', tutorials, language idioms, rules of thumb, etc. Science is largely about the filtering, so imagine taking away all the filters: no way to tell a good hypothesis or example from a bad one, no way to tell which examples work and which don't, no way to distinguish pure speculation from ideas produced by analysis and filtering against known observations. Or at least no way to tell without trying it yourself. I'm pretty bright (or so I'm told), but I have a feeling I'd get lost very quickly in such an environment. How far do you think you'd make it? (And if this sounds too far-fetched, consider that secrets in many trades have been hoarded over much of history. Don't make the mistake of assuming the freedom of information and knowledge available today to be a natural law of human nature.)

Or here's an alternative: let's replace ComputerScience with a religion with a bunch of irrational models and rules that have little basis but that thou shalt follow OR ELSE be thee doomed to hell. {Gee, that sounds a lot like the type-heavy zealot's talk. -t} I wonder how well practitioners would do if they came to you with preconceptions, beliefs entirely on faith, beliefs that are obviously wrong in the face of evidence (but cannot be swayed otherwise), and arbitrary rules and restrictions on their programs and programming behavior. Which software development artifacts might suffer in such an environment? Would we have compilers that do different things on Sundays and that refuse to operate on Black Friday?

ComputerScience is a science, and it does involve experiment, it does involve hypothesis, and it does reject assumptions and axioms derived entirely from faith. It is not math, in that the axioms and models of ComputerScience must relate to and make useful predictions about something real: computations.

I have a strange feeling that TopMind tends to view ComputerScience as a religion carried by men wearing white togas in an ivory tower, and so he feels it's okay to preach his own religion called TableOrientedProgramming. Feh. TopMind is mistaking language or paradigm zealots for computer scientists.

Sometimes people indeed do exaggerate the importance of their pet field, HobbyHorse, or technique almost to the point where it starts to resemble a religion. The human ego blinds and distorts. We tend to re-project the world in terms of our own personal interests, strengths, and careers, and this often biases us, as humans.

[TopMind, I find it rather hypocritical for you to post that when you are unquestionably the worst offender here. Unless, of course, you're admitting it in yourself.]

Projection, dude. You are the wrongo, not me. But, look at it this way: if I can "deceive myself" without being aware of it, what makes you immune from the same malady?

[Simple: (a) I'm not promoting any pet field, HobbyHorse, or technique; and (b) I've not drawn extensive criticism, as you have, for promoting TableOrientedProgramming -- which is obviously your pet field, HobbyHorse, and technique.]

When I first started criticizing OOP, proponents would make very similar-sounding claims, such as that OOP is "better theory" and only ignorant dum-dums say otherwise, etc. But I'd also get at least an equal number of emails from people who supported me and said they were too timid to take on the critics. (I've since pulled the email address due to spam.) Some of them were from established writers. O'Reilly's "Oracle Programming" book even had a reference to my website for at least two editions. Further, popularity does not automatically mean "truth". Otherwise, mobs could vote the world flat. Defensive blow-hards often protect their turf using intimidation disguised as the academic high-ground. Academic knowledge does not make one immune from human nature. -t

I don't forgive the 'OOP proponents' who make unjustifiable claims, either (and I've met fanboys who are every bit as bad as TopMind in this matter). But I do suspect that TopMind often misinterprets a claim of "better" by stripping it of the context in which the claim was made.


Re: if I can "deceive myself" without being aware of it, what makes you immune from the same malady? - TopMind

Nothing makes a scientist immune to self-deception. Wisdom requires that one look with even greater skepticism at his or her own thoughts and ideas to help compensate for inherent confirmation bias. I constantly question my approaches; if I've forgotten a justification for a particular design, I often spend a day or two sketching out the pros and cons and reasons and requirements justifying that approach over alternatives, and often I come to the same conclusion. Sometimes I learn that a change made in a related design obviates the design element I'm reviewing. And when someone is skeptical of my ideas, I take that as a sign of "where there's smoke, there's fire" until and unless I can explain - most of all to myself - why that skepticism is unwarranted.

However, TopMind, you often give the impression of failing to be appropriately critical of your own ideas. You answer skepticism by ShiftingTheBurdenOfProof to others to 'prove' flaws in your ideas. You usually set the burden for said counter-proof unreasonably high, such as a formal counter-proof for what you originally presented as a 'sketched' idea. You find excuses to dismiss skepticism rather than confront it. This is exacerbated by your provision of (often ridiculously) weak justifications for your ideas, such as appeal to 'the masses', or arguing for app-language-neutrality on the basis of avoiding fanboyism, or claiming without evidence that there is some sort of 'psychology' benefit or that it 'fits your mind' better. You need statistics about 'the masses' and the market issues surrounding 'fanboyism', or the first two are appropriately called HandWaving. You need to control for nature vs. nurture and your own fanboyism, or the latter claim about 'fitting the mind' is worse than uninformed speculation.

Compensating for confirmation bias and other forms of observer bias means becoming your own worst critic. You cannot be a fanboy, a salesman, or a soft-hearted mother. You mustn't fear taking a use-case gladius and stabbing your intellectual baby forty times before either marking its resilience as 'adequate' or throwing it out with the bloody bathwater. Can you do this?

I disagree with your assessment. Your criticism of my ideas is poor. If you give good and clear criticism, I will listen, I promise. I think part of the problem is the kind of evidence that I accept compared to what you accept. I believe in RaceTheDamnedCar while you believe that talking about the alleged elegance and "nice theory" of the engine is sufficient. I want to see scenarios where the language or tool crashes and burns or has noticeably more lines of code or requires more lines/routines/blocks to be changed per typical change scenario, etc. The Frank Gilbreth school of software efficiency, if you will. I want to witness specific cases of it being good or bad. Your analysis technique, on the other hand, is much more indirect and round-about, risking the error of ignoring factors or skipping steps. You view it through your pet academic models instead of looking at the practical impact. I simply don't trust long chains of reasoning when non-discrete and subjective variables are involved. You lean toward the Greek approach in EvidenceEras. -t

I think the problem is that you accept illogical and irrelevant evidence for your positions, and reject or fail to recognize logical counter-arguments. You don't "RaceTheDamnedCar" either; RaceTheDamnedCar is a hypocritical expectation required only of your opponents. Anyhow, your disagreement is noted. I'll LetTheReaderDecide.

If it's clearly illogical, then use a formal-logic-based outline to debunk it. State the givens, the symbols used to simplify the formulas, etc., and then use formal logic in all its glory. (English-form, not ASCII-math, at least not exclusively.)

Wow, that's convenient: you just clearly demonstrated two of my above assessments (with which you disagreed): (a) you practice ShiftingTheBurdenOfProof by demanding others 'prove' flaws in your ideas; (b) you set the burden of proof unreasonably high, such as demanding a formal counter-proof for what you originally presented as a sketched idea.

Anyhow, you should improve your understanding of logic, TopMind. Learn this: an argument or statement S is illogical if it cannot be proven by logic from available evidence, givens, and axioms. There is a set of false statements that can be disproven by valid deductive logic or by a combination of evidence and sound inductive logic. For any consistent logic, that set of false statements is a subset of the set of 'illogical' statements; the remaining 'illogical' statements cannot be disproven at all. From this you should conclude that asking me to 'disprove' statements I call illogical is, itself, an unreasonable and illogical request on your part, because not all illogical statements can be disproven. You should apologize for making such unreasonable requests.
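That subset claim can be sketched in symbols (my formalization, not part of the original argument; $\Gamma$ stands for the available evidence, givens, and axioms):

  $\text{Illogical} = \{ S : \Gamma \nvdash S \}$  (statements not provable from $\Gamma$)
  $\text{False} = \{ S : \Gamma \vdash \neg S \}$  (statements disprovable from $\Gamma$)

Consistency means $\Gamma \vdash \neg S \Rightarrow \Gamma \nvdash S$, so $\text{False} \subseteq \text{Illogical}$; the statements in $\text{Illogical} \setminus \text{False}$ are exactly those that cannot be disproven.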

A related property: a statement S is 'irrelevant' to a given argument A if there is no logical relationship between S and A. Attempting to use an irrelevant statement in an argument, however, is either illogical (a form of NonSequitur) or a distraction tactic that falls in the field of rhetoric rather than logic (aka RedHerring). I would note that 'code' serves no logical or rhetorical purpose unless you can bridge the gap by arguing or demonstrating that the code is 'relevant' to a particular argument.

TopMind, many of your 'illogical' arguments consist of illogical leaps in logic (NonSequitur), of unjustifiable claims (HandWaving), and of irrelevant rhetorical distractions (RedHerring). As noted above, formal logic does not disprove these things. If I don't feel like using short-hand descriptions of your argument, I might say that 'there is no rule of sound or valid reasoning that allows you to make that leap' (for a NonSequitur), or 'it is unclear how this is relevant to the argument' (for a RedHerring), or 'that assertion doesn't seem justifiable' (for HandWaving). Alternatively, I can give you the benefit of the doubt and ask you to fill in the leap that doesn't seem validly filled, ask you to demonstrate the relevance of the claim that seems irrelevant, and ask you to justify the assertion that doesn't seem justifiable. I tend to give the benefit of the doubt to persons who don't regularly demonstrate an inability to fill these gaps.

Well, we have a terminology conflict here. Lack of sufficient evidence (assuming that's the case) is not the same as "illogical". For example, let's take a recent event: the decision about the future of the Afghanistan military conflict. Some experts want to add more troops, and some want to reduce them. Neither path can be "proven by logic", since nobody can predict the future with sufficient certainty, especially given complex interactions. However, to go around saying that both sides are "illogical" is misleading, if not slanderous. You may have a personal working definition of "illogical", but please don't ruin the common usage by substituting your own personal vocabulary willy-nilly. -t

Merely lacking sufficient evidence is not illogical, TopMind. However, asserting a statement to be "true" in an argument, despite insufficient evidence, IS illogical. If "neither path can be proven by logic", then stating either path as a point in the argument is not logical. And your 'example' itself is misleading - the debate over the Afghanistan military conflict is about principles and priorities and assumptions, not about 'predicting the future with certainty'. One can still make an "illogical" argument from a given set of principles, and one may have "irrational" principles (Consider: It's our duty to waste resources and money! That's the American way! Therefore, we should continue expending resources and money in Afghanistan. Military activity expends a great deal of resources and money. Therefore, I support the war!) But principles themselves (what you consider 'better') are outside the domain of logic - asking whether a principle is "true or false" doesn't even make sense. Ask, rather, whether they are rational.

Just because you claim it's "insufficient evidence" doesn't make it so.

I agree with that. OTOH, I don't recall ever attempting to claim "insufficient evidence" in order to make it so.

And I believe the Afghanistan analogy is sufficient to compare to the HTML++ debate. You are abusing the term "illogical". -t

How is the Afghanistan analogy relevant to the HTML++ debate? And how is it relevant to misuse of the term "illogical"? You often make or assume statements for which I see little or no logical reason to believe, such as "GML is CrudScreen friendly" or "disabling on-display actions improves computer security". It isn't as though you can argue for GML by simply stating principles. You must also argue that GML fulfills those principles. I don't think it an abuse to call various among those bridging arguments "illogical". One example of an illogical argument: "starting over with a particular goal in mind sometimes results in a better product for that goal, therefore GML is CrudScreen friendly (relative to HTML++)".

With regard to "GML is CrudScreen friendly", that is misleading. You imply it's an isolated self-contained claim, which it's not. The argument was that starting from scratch makes it easier add CRUD-friendliness without worrying about non-crud-baggage (eBrochure baggage). Remember the car-to-boat analogy? Your confusion is not my sin there. You seem to also mistake explanations for formal statements, and "debunk" that statement in isolation. Explaining something and providing a water-tight logical claim for a given portion of text are not necessarily the same thing. Obviously, the one "sometimes" sentence above begs the question of whether this is a case of that situation, which was addressed in other parts. If the above was all I said about that topic, then your complaint would be justified. But it was not, so you see I am not wrong and bad and evil. Be careful about the context implications of your examples. -t

To contradict, the above 'sometimes' sentence is, in fact, the only logical reason I have identified in the entire GuiMachineLanguage page to believe "GML is CrudScreen-friendly" relative to HTML++. You have also appealed to irrelevant market properties and fanboyisms. You have also provided much verbiage for 'security' benefits, which mostly make me wonder whether you even know what computer security means (see IwannaLearnComputerSecurity if you're interested in the initial bits). But, if you think there's another reason for me to believe GML is CrudScreen-friendly, then I'm at a loss for what it is. Further, you seem to ignore the greater context of development-side CrudScreen-friendliness and server-side CrudScreen-friendliness and multi-organization CrudScreen-friendliness and concurrent-user CrudScreen-friendliness and so on, which leaves me wondering what 'CrudScreen friendly' means to you. (Admittedly, 'CrudScreen-friendly' is pretty vague, but to me that means looking at it from as many perspectives as possible.)

With regard to GML, you claim the forces I mention are "irrelevant", but I disagree with your evidence, which is largely anecdotal. A disagreement is not the same as being "illogical". It is soft evidence against soft evidence. You have no OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy that shows them irrelevant. You only have "soft" evidence. Stop mistaking personal notions and personal assessments for universal truths. (To keep the text train simple, I will not address the security sample here.) -t

Pay attention to immediate context. I said the market forces you mention are "irrelevant [to whether or not GML is CrudScreen friendly]". That is, at least, how I read the sentence. Sure, market forces are relevant to market success. That's a different argument. Note my earlier statement: "a statement S is 'irrelevant' to a given argument A"; relevance is relative to a particular argument, not a universal principle. I assumed, apparently incorrectly, that you could reasonably infer which argument was being discussed. Also, it is your burden to show relevance for evidence you present. Are you going to argue these fanboyism market properties are relevant to whether "GML is CrudScreen friendly"? If so, I ask that you explain the connection between the two.

Again, I find your writing style strange and difficult. (Others who think like you maybe don't. So be it.) But I did make several related sub-cases of why I think HTML++ won't bend smoothly enough. You agreed that there are reasons to start over rather than try to force-fit a semi-related standard in some situations, as I interpreted it. There is essentially a threshold at which point starting over makes more sense than bending the existing. That threshold is a key issue.

Then we each gave our arguments about why HTML++ will or won't bend. Neither of us has presented a double-blind peer-reviewed certified published study that proves what HTML++ will do in the future. We both can only make educated assessments based on observations of both technology and behaviors of peoples' interaction with the technology (AKA marketing). Some of our evidence is anecdotal, based on past observations. In this respect it is similar to the Afghanistan analogy, where calling both sides "illogical" is inappropriate, rude, and misleading. Using "soft" evidence is NOT justification alone for calling the evidence "illogical". If you simply disagree, simply say so. Don't use overbearing language to exaggerate the disagreement.

There's also the issue of MS sabotage. Even if it is technically possible to bend HTML++ to do GUI's well, MS can still throw a monkey wrench into the pond because of the dominance of IE (per arguments found at LimitsOfHtmlStack). Thus, even if my assessment of HTML++ bending is wrong, MS's behavior could make that side of the argument moot. My argument is thus tied to an OR: HTML++ will fail if it cannot bend OR if Microsoft sabotages it. -t

It is, I agree, rude to point out arguments as "illogical"; that's part of an AmericanCulturalAssumption: it's always rude and offensive to say someone is wrong, even if they are wrong. Some people are taught to avoid "you" phrases for anything less than flattering: "I disagree" or "we should expect" or "one would be wrong to". Anyhow, if you wish to practice politics and careful inoffensive wording, feel free. I suggest that one should not expect academics and practitioners in SoftwareDevelopment to necessarily share in that philosophy in the context of a technical forum, and that such individuals may not appreciate (and even find 'rude') repeated attempts to inflict upon them a value system requiring such political effort.

Anyhow, "educated assessments" can be illogical, TopMind. Each "based on" offers potential for an illogical leap.

Also, I do not acknowledge your position on the Afghanistan analogy. I reiterate: the primary argument there comes from differences in priorities and principles, and calling these "illogical" is indeed inappropriate, but for reasons that do not serve your argument. One may still make illogical arguments "based on" a given set of priorities and principles, observations, and hear-say anecdotes. I suggest we desist with this ArgumentByAnalogy.

Rather than further expanding the HTML++ line of argument here (not sure how you got off of GML, it isn't as though an argument against HTML++ is necessarily an argument for GML), I offer: EditHint - move to LimitsOfHtmlStack.

It started as an example of my allegedly ignoring principles of computer science and/or "logic". It's clear now you are either wrong or greatly exaggerating. I believe you to be a drama queen, repainting professional disagreements as blatant and clear violations of math or logic. That aspect *is* relevant to this topic. Further, "priorities and principles" apply to GUI kits also. For example, how the down-arrow grid issue affects how likely a user is to accept a new standard or tool. -t

You do ignore principles of logic on a regular basis, TopMind.

And, yes, "priorities and principles" apply to GUI kits too. For example, being 'CrudScreen-friendly' is your 'priority'. But that does not mean it logical to claim "GML is CrudScreen friendly" simply because that is your goal,

nor was GML's CrudScreen-friendliness well supported by your other arguments.

And sure, the default behavior of the 'down-arrow' being undefined or non-conventional may determine market success. But 'CrudScreen-friendly' is not defined by 'successful in market'; it's defined, in the broad sense, by how well a standard optimizes performance, portability, safety, security, etc. for server-side and client-side use and for productive development of CRUD applications (including widgets, database hookup, etc.). So, your appeal to "how likely a user is to accept a new tool" is irrelevant to whether 'GML is CrudScreen friendly', and using irrelevant points in an argument is illogical. If you were better skilled at organizing your thoughts, you could take the same damn information and make a logical argument: "if default key-bindings match common convention, then developers of CrudScreen applications will not need to repeat efforts to bind them properly. Further, the implicit default key-bindings should better allow clients to achieve localizations to non-American keyboards. Additionally, since these key-bindings do not need to be piped along the network, that may offer an (admittedly marginal) performance improvement. For these reasons, conventional default key-bindings are more CrudScreen-friendly in terms of productive development, portability, and performance." But noooo. TopMind can't possibly make a logical argument even if the information is available. Even for a logically justifiable conclusion, TopMind resorts to HandWaving and NonSequitur.

Market acceptance is a pre-requisite. If nobody adopts it, its gloriousness matters nil. YOU are the illogical one here, because you want people to care about your pet technologies and thus pretend everybody cares about what you care about. And I didn't exclude those; I simply didn't include the gold-plated, bloated versions that you personally favor. PersonalChoiceElevatedToMoralImperative

"If the widgets don't support default behaviors, it decreases their chance of acceptance" was plenty sufficient for normal human beings. And, if you wanted more detail, simply ask instead of insult me. Solve problems, don't create them by running around shouting "Illogical! Illogical!" like a Vulcan on crack.

I'm done with you. You see everything through the lens of your HobbyHorses, have a convoluted writing style, put words in my mouth, are trigger-happy with vague insults, and are rude to boot. I might finish individual sub-arguments, but I'm done with any debate about summary conclusions. Debates with you regarding more than one variable invariably turn into a big mess. -t

Very well, TopMind. Other than your goal at large, I doubt anyone is convinced by your reasons. You keep bringing things like 'market success' or 'HTML++ sabotage' up in response to my questions about technical merit, and you claim vehemently you weren't being illogical. I must say, I don't know anyone other than you who would think it logical to point at market success as a 'pre-requisite' for CrudScreen-friendliness. I just can't reason with that sort of 'logic'.

Any discussion of "technical merit" with you will almost certainly lead to your HobbyHorse topics. I know you think those are the most important things in the entire universe and that puppies will all get cancer if not done your way and I disagree and don't want to dance that dance yet again. -t

And apparently a discussion of "technical merit" with you hops (with nary any mention of technical properties) to discussions of sabotage, fanboyism, and marketing. I suppose you think that's the better route.

I'm just pointing out that one must consider the audience. You are hinting at your Xanadu tendencies again: "I don't care if nobody wants it: it's great, dammit!" -t


JanuaryZeroNine and OctoberZeroNine

