I'm a ComputerScience guy. In my role as a ComputerScience guy, I study computation, communication, calculation, and the properties thereof. I trust the developers and software engineers to know what properties they want, and I find ways to give them those properties.
How does ComputerScience help developer productivity, code management, and such?
To me, that is an irrelevant question. You don't ask a rocket scientist how much 'rocket science' helps the human endeavor, because knowing that isn't part of the rocket scientist's job. The rocket scientist might point out the satellites orbiting Earth, but likely does so without knowing their cost-to-benefit ratio.
- Why can't it help there? What is holding it back? As a practitioner, this stuff is key. Our "customers" want more features with fast turn-around for less cost. (I skipped "and fewer problems" because you'll start talking about type checking again if I dare mention it.)
- YOU FOOL! LOOK WHAT YOU'VE DONE! You had to go and mention it! NOOOOOooooo!!!!
- The reason it cannot help is because it is not the scientist's job to apply his findings to problems. That's the job of the engineer. It is your responsibility to take what is known of your field and apply it to solve the problems most important to you. For what it's worth, computer science already has numerous publications on the topics you mention. I recommend visiting CiteSeer and perusing their vast library of papers. However, something tells me that you'll just call BookStop on me.
- You give me a specific example and details about where it has helped (excluding those already mentioned), and I'll consider it.
What I can note is that my skills are in heavy demand, and that works of ComputerScience are often used today in ways that are claimed by others to aid "developer productivity" and "code management". (Examples of such works include programming languages, concurrency models, garbage collection, data models, database management systems, commit protocols, version control software, configuration management utilities, etc.) I can't offer you exact numbers on how much these 'help', or a precise 'model' of 'developer productivity'. Knowing those things isn't part of my job.
I've looked at a few ComputerScience papers. It seems they're just offering opinions or observations about how something will 'help' developers.
Authors of ComputerScience papers aren't expected to prove demand for a given set of computational properties. They need only prove, empirically or mathematically (or both), that the set of properties is achieved. This means that the 'reasons' for such properties often get waved away in the introductions to the papers - the reasons aren't critical to the contents of the paper, except insofar as knowing the reasons for a set of properties might help users of the paper decide (e.g. due to an analogous situation) whether its contents are relevant to them.
Among those properties we are asked to achieve for developers in particular: OnceAndOnlyOnce, support for ad-hoc ChangePatterns, ability to detect mistakes prior to shipping products, ability to break large projects into bite-sized pieces that can be contracted to independent developers, ability to guarantee there are no deadlocks or race conditions, the ability to code without worrying about buffer overruns, ability to ensure a change in persistent data to many cells is atomic in event of certain hardware failures, and on and on and on.
ComputerScience people don't need to ask "why"... except due to curiosity, perhaps a need to feel their work is meaningful, or as observations from their own roles as developers. If you need proof that the properties are worthwhile, you'll need to wait until the authors (or the employers of the authors) put on their SoftwareEngineering hats and write SoftwareEngineering papers.
I've read about "provably correct software", but it has proved too time-consuming in practice.
As a practitioner of ComputerScience I have only once been asked to prove a particular unit of software correct (a RAID driver). Most of the time, when it comes to 'correctness', we are asked only to demonstrate or resist certain classes of errors. For example, the ability of a commit protocol to avoid partial writes in the event the computer is shut down mid-transaction. Or the ability to identify obvious instances of undefined behavior prior to shipping a product.
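The "no partial writes" property mentioned above can be sketched in a few lines. This is a hypothetical helper (the name `atomic_write` is mine, not from any cited work) using the common POSIX write-temp-then-rename idiom; it is one simple way to get that guarantee for a single file, not a full commit protocol:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write `data` to `path` so that a reader (even after a crash) sees
    either the old contents or the new contents, never a partial write."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # Stage the full payload in a temporary file in the same directory,
    # so the final rename stays within one filesystem.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the bytes to stable storage
        # os.replace (rename) is atomic on POSIX: a crash here leaves
        # either the old file or the new one, never a mix of the two.
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise

atomic_write("state.txt", b"balance=100")
```

The point of the sketch is only that the property is demonstrable: either rename happened or it didn't, so "partial write" is excluded by construction.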
Your claim that it "has proved too time-consuming in practice" is probably a claim you'll need to take up with the SoftwareEngineering people. All I can do is observe that people are still paid to do it on occasion...
- I'm not disputing that, but it's a small niche.
and that a great many organizations use automated tools (automated type checkers, code combers, memory leak detectors) to resist certain forms of errors from being shipped. And they do so on a regular basis.
- Most of these are just accounting and look-ups of sorts: making sure there are no reference cycles, that called functions are actually defined, that parameter signatures match based on the type DAG, etc. There aren't a lot of brand-new ideas there (unless you're working with an odd language). DAG-traversal algorithms are probably older than I am (a possible case of PostSeventiesIdeaSlump). Plus, you suckered me into talking about @#%* types again. Enough about types, please.
- Then stop complaining about types and just accept that they're useful if you want certain benefits from programming. And, yes, type checking et al. are forms of automated proof verification; it just happens that they're not as rigorous as some applications demand. They're not all about making sure there are no reference cycles (what would a heart defibrillator need with this, anyway?), that called functions are defined (a linker enforces this admirably anyway), or that parameter signatures match based on type DAGs (hey, look, it's static type checking!). If they were, there'd be no need for FormalMethods at all. Remember, we lost a $327.6 million space probe (the MarsClimateOrbiter, doomed by a pound-force/newton unit mismatch) because of insufficient application of FormalMethods. That's $327.6 million of our tax dollars. It all depends on what problem you're trying to solve. --AnonymousDonor
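The probe mentioned above is a concrete case where even lightweight type discipline pays off: the failure was an unchecked mix of pound-force and newton quantities. A minimal sketch (class and method names are hypothetical, not any library's API) of unit-carrying types that refuse to mix silently:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Newtons:
    value: float
    def __add__(self, other):
        # Only like units combine; mixing raises an error instead of
        # silently producing a physically meaningless number.
        if not isinstance(other, Newtons):
            raise TypeError("cannot add %r to Newtons" % (other,))
        return Newtons(self.value + other.value)

@dataclass(frozen=True)
class PoundsForce:
    value: float
    def to_newtons(self):
        # The conversion is explicit and visible at the call site.
        return Newtons(self.value * 4.4482216)

# Fine: the conversion is spelled out.
thrust = Newtons(100.0) + PoundsForce(10.0).to_newtons()

# A bare `Newtons(100.0) + PoundsForce(10.0)` would raise TypeError
# at the first test run, rather than surfacing as a lost spacecraft.
```

This is the runtime flavor of the idea; a static checker can reject the bad addition before the program runs at all.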
- I agree there are apps/situations where very fastidious techniques pay off. I never said otherwise. But there are many different ways to do fastidiousness, and no scientific way to evaluate the alternatives.
- [Different mechanisms of accomplishing a particular goal are evaluated economically, not scientifically. Scientists and engineers might work to make a particular mechanism cheaper, of course. For example, type-checks were a lot more expensive and less reliable when they were done by hand.]
- This implies economics is not a "science". I'm not sure there will be universal agreement with that. And I've already agreed that "computer science" has helped with computer machine performance. The "hardware" is not where the gaps are. --top
- [Perhaps I should clarify: economics is not among the studies of ComputerScience. And I did not recently mention machine performance.]
- Yet another reason to rethink the term "computer science". And most of my concerns as a rank-and-file developer generally relate to or lead to economics. If CS chooses to not help there much outside of faster hardware, then it's of little use to us rank-and-file developers. I'm asking you again: where does CS help us rank-and-file developers? (excluding those already discussed) -t
- [CS provides tools for rank-and-file developers, of which I've already supplied a plethora of examples. An education in CS also teaches rank-and-file developers of approaches to solving common development problems based on what has been proven empirically or mathematically to work in the past.]
- If somebody invents a flying car, that would help also. But it is very indirect. It is not a reason for the rank-and-file developer to study more CS.
- [If you're asserting that the tools mentioned have only been of 'very indirect' support to rank-and-file developers, I'd like to see you try working without them.]
- I couldn't work without a car either (my wife's job and mine are too far apart), but that does not mean I should study car engineering.
- Actually, it does. I did study car engineering. I bought my latest used BMW at a wholesale price and replaced the cracked radiator myself for the cost of the parts. I've done the same with a long string of cars and motorcycles. For the negligible cost of gaining a little knowledge, I save a lot of money. An analogous saving applies to knowing ComputerScience.
- I'm skeptical of the total economics of that. Plus, BMW tends to have a high cost of ownership anyhow. If you buy an expensive-to-repair car, it will tilt the calculations. It would be like buying Oracle when MS-Access or FoxPro would do and claiming your $30k of Oracle training paid off.
- [I'm with top on this one. If a person has an interest in car engineering, then by all means they should study it... but you need to account for the time-cost and interest-level of repairing those vehicles as well. Those without time or interest would have listed a lot more than "for the cost of the parts".]
- Despite clever positioning (and a festoonment of non-critical luxury gadgets, plus often-exorbitant scheduled maintenance fees) by BMW Corporate to ensure BMWs are regarded as "premium sports sedans" in North America, in much of the rest of the world they're just cars, put together with bolts and nuts like any garden-variety Chevy. Learning basic car maintenance, diagnostics, and straightforward repairs is conceptually no more difficult than changing computer cards and hard drives. For the cost of a repair manual (a do-it-yourself oriented Haynes manual will do, though factory service manuals are often inexpensive and better-written) and/or a Google search plus some common hand tools, anyone with a bit of mechanical inclination (typical among Computing types) can change fluids, filters, cooling system components, brakes, and so on. However, this is intended as an analogy, not a topic unto itself. The important point is that for a minimal investment in greater knowledge (e.g., in ComputerScience), you can save money and effort by being able to do tasks that you'd otherwise have to (a) hire out, or (b) waste time having explained/debated to/with you by Wiki participants.
- [The analogy has some value. I think it worth pointing out that: if you aren't willing to study car engineering, mechanics, techniques, tools, and terminology related to car construction or repair, then you probably shouldn't be offering opinions on them. A certain degree of knowledge is expected. Similarly, while you might not need ComputerScience or SoftwareEngineering to do your job, you probably shouldn't be offering opinions about them if you aren't willing to learn their associated techniques, tools, and terminology. If the 'practitioner of cars' is someone who drives to work, his 'pipe-drive' model of power steering is not going to be well accepted on a forum that expects more of its participants.]
- I believe you have the burden-of-proof backward, for reasons already described in BookStop under "vetting". It would be fair to ask for a cost/benefit analysis: cost of time for learning car care, keeping up with technology changes, cost of tools, used-oil disposal fees and drives (some States), compared to direct service, etc. One does not have to read the entire book to ask for such. That would be a silly request. If you truly "know" why such car study is better, then you should be able to supply such numbers. Otherwise, it comes across as a finger-in-the-wind, feeling-based assessment. Some of you appear to be doing the same with ComputerScience claims, mistakenly producing a PersonalChoiceElevatedToMoralImperative. --top
- You have a valid point about cost/benefit analyses -- it is appropriate to ask for overall costs vs benefits of learning car repair and do-it-yourself vs paying a mechanic. Or, more topical to this Wiki, asking for overall costs vs benefits of (for example) using a SQL DBMS vs rolling your own persistent container class for some purpose. However, what has spurred this thread is not the merits of such questions -- which, it seems, rarely get asked here -- but your tendency to offer off-the-cuff opinions or proclamations on ComputerScience and/or SoftwareEngineering subjects (such as type theory, or object oriented programming, or the RelationalModel and its implementations) with apparent authority whilst obviously lacking anything more than a casual knowledge of these subjects. As someone with a background in ComputerScience (and auto mechanics), it comes across to me as naive and arrogant -- like a non-mechanic who suggests with authority and adamance that pistons would be better made of wood than aluminium. In that case, asking you to learn some ComputerScience (or automotive engineering) seems entirely appropriate.
- If I say something objectively wrong, then simply point out why it's objectively wrong with semi-formal logic or objective evidence. It should be easy, but you make it hard by not presenting evidence properly because you are either too lazy to present it properly or it's your own PersonalChoiceElevatedToMoralImperative. Either way, you are not doing your job. You have not earned the right to insult me yet. (I do make mistakes on trivia, but so far not on key issues of long pivotal debates.) --top
- Your claims have been pointed out as wrong often and well. In most cases, the first rebuttal is sufficient and obvious to those of us even with minimal knowledge in the relevant areas. It is you who persists in arguing that pistons should be made of wood when anyone with more than a layman's understanding knows the idea has no merit. It's testimony to our patience and persistence that we actually debate with you, and don't merely roll our eyes and delete your contributions. In most cases, you are a classic example of UnskilledAndUnawareOfIt, or more accurately, IgnorantAndUnawareOfIt.
- Projection bullsh8t! You are just personally biased against me. You present no clean evidence. It's always murky and indirect like a mumbly senile professor on LSD. Plus, you are a sloppy goto-like documenter and unaware of it. (One of you two even admitted you were a crappy documenter.) The ivory towers should pull your damned funding until you justify your MentalMasturbation. GoodMetricsProduceNumbers, and you ain't got any. Nor formal logic. Lacking these two things are an orange-alert of a charlatan.
- No, the reality is that you largely don't know what you're writing about, and insist that you are right despite being corrected. You are repeatedly caught doing this and inevitably react defensively. In the future, you can prevent this unpleasantness (and elevate the level of discourse) by educating yourself. Stop insisting aluminium pistons are unproven without metrics until you understand engineering including materials and metallurgy, because you're only embarrassing yourself.
- Projection again. You wouldn't know real science and real evidence if it bit your dick off. You always turn to insults ("you are not smart enough") when at the end of the evidence rope. You don't know how to turn the vague fuzzy notions in your head into formal logic and numeric metrics. YOU ARE UNABLE TO DO THIS, ADMIT IT, FUZZBOY!!!! YOU are the dumb one, NOT ME. Your ego writes checks that your documentation skills cannot cash. And stop pretending like you are lots of people. You are one or two arrogant idiots trying to justify the crap you wasted your life on; hallucinating that your shit is of value like John Nash, but without his lucky one-hit-wonder. You have his people skills too. Use CS to stop being so crotchety. Now THAT would be a real-world benefit you could brag about.
- This cute rant defends your position... How? By the way, I didn't claim you're not smart enough. I claimed you're not educated enough. There's a big difference.
- You started the downward spiral, hypocrite. The education thing is a smoke-screen. You are unable to produce formal logic and macro-rigor numeric metrics and are trying to turn it back on me. If you had sufficient knowledge, you would do it. But you cannot.
- For what would you like formal logic and "macro-rigor" (what?) numeric metrics?
- Whatever the hell your GoldenHammer CS-derived product is.
- [I'll accept that logic. I wouldn't suggest everyone who works on a computer needs to know ComputerScience. I'll posit that if your domain involves SystemProgramming or architecting the back-end of a large application then an education in ComputerScience will offer a considerable head start over programmers and engineers of other classes (electrical engineers often become programmers) in knowing what will succeed, what will fail, and why. Further, the education will help you achieve this knowledge in a tiny fraction of the time it would take to gather it from experience alone (unless you're a super-genius).]
- That's largely because it is close to the hardware. Again, I don't dispute the performance claims. (It would be interesting to see what would happen if hardware issues were largely removed.) You two keep coming back to hardware and types no matter how hard you try to get away from them. This is telling. -t
- [How does the above statement come back to 'hardware and types'? I suspect this claim of yours has no basis outside your active imagination. Are you implying the only properties CS has a handle on are type-safety and physical performance? Because a great many other properties CS analyzes were named in DesignVersusResultsEvidence and AbsolutismHasGreaterBurdenOfProof.]
- I already debunked your delusions in those topics. The details are always only in your head and not at your keyboard. Don't you realize that you are hallucinating? You provide no usable rigor, it's that fricken simple. It's only authoritarian evidence. That's all you have and all you use.
- [Sorry, I don't recall you debunking anything. Perhaps you can point me to a claim that I made in delusion that you then 'debunked'. I do remember you waving your hands about 'macro-rigor' and other products of your own imagination. Perhaps rigor that I find perfectly usable doesn't meet your ludicrous standards. Can you even demonstrate that 'macro-rigor' exists in a science you consider 'real'?]
- My standards are not ludicrous; they are basic science 101 standards. And YOU are the hand-waver as far as empirical evidence, not me.
- (Your "by hand" was a reference to better equipment, as I interpreted it. That's why I mentioned hardware.) -t
Fans of "type-heavy" programming also claim it allows the compiler to verify more and prevent tons of bugs with few downsides, but we have enough debates about that already.
Yes, let's not start a debate here. Let's, instead, only point out that some people believe the cost-to-benefit ratio is a profitable one, and that some people do not. And perhaps they're both right. The world is a big place, and the economic incentives can vary from one group to another.
It is not a fundamental role of science to decide what people "should" do, only to support them in enhancing what they "can" do. Type-heavy programming is one means by which people "can" verify code and prevent certain classes of software error.
RE: "Different mechanisms of accomplishing a particular goal are evaluated economically, not scientifically. Scientists and engineers might work to make a particular mechanism cheaper, of course. For example, type-checks were a lot more expensive and less reliable when they were done by hand."
Don't the type-heavy proponents here claim that heavy typing is objectively more economical, and that those who disagree are simply not smart enough to understand their "economic" proof?
I'm certain fanatics of all sorts make claims that I'm not willing to defend. But, honestly, I've never seen this claim that types are objectively more economical in all situations... perhaps someone claimed it, but I've not seen it. I have seen the claim that software-provably-without-certain-classes-of-bugs is 'better' than software-that-may-or-may-not-have-those-bugs. It's not a claim I'd dispute; the only question is whether or not that level of 'better' is worth the cost of achieving it.
Now you are suggesting that there is no such economic truth to type-heavy.
There is no universal economic truth except for supply and demand. It is likely that "type-heavy", however you're defining it, is objectively more economical in some circumstances, and not economical in others, and on that borderline where the cost-benefit ratio is near 1.0 in yet others.
Which is it? Is this smart-person-only type-heavy proof scientific or economic in nature? Or just a wrong view? --top
If they claimed it was universally more economical, it would be bullshit. If they claimed it was more economical but didn't specify 'universally', then perhaps they were referring to a situation described in context. If they claimed that software provably without bugs is better than software possibly without bugs, then you've grossly misinterpreted their claim. I guess I'd need to see the assertion if I'm to answer this question of yours.
If I encounter it again, I'll link to it. I had hoped this insistent typing proponent (or proponents) would chime in. They implied a pretty wide scope, as I remember it: that even "nimble" stuff would be quicker and smoother because well-defined types would let more be automated.
RE: "I had hoped this insistent typing proponent(s) would chime in."
(Moved "troll" complaint to BeingUnpopularHere.)
I'm not certain why you'd wish to enshrine personal stuff in its own topic, unless you refer to your home page.
Too full. Eventually we can consider deleting that topic anyhow.
Note that I may have inadvertently deleted a reply or two of yours. I apologize. It was not intentional.
I'm not quite sure what you mean. Determining which behaviors optimize customer satisfaction scores (or whatever success metric is being used) is still useful even if some developers ignore the recommendations.
You seem to be working hard to justify a weak showing. You can only dress up a canary so much, but it's still not an Xmas turkey.
- You are demonstrating sheer ignorance of the differences between what science is and what engineering is. I would recommend you terminate this argument now.
- That was a non-answer. Insults flow free and loose, details don't. Physical engineering and software engineering are too different to compare nicely. (I think there's an existing topic on this somewhere.)
- *sigh* This is a statement of fact, not an insult. Objective proof: look at how many enemies you've made on this wiki. Worse, it's not your ignorance that is offensive, it is your abject refusal to learn. Another statement of fact.
- ArgumentFromAuthority again. Your articulation skills are the problem, not my knowledge. Stop deflecting your weaknesses by claiming I'm dumb. If you are not smart enough or motivated enough to produce SelfStandingEvidence, don't dump on me, idiot! Everybody on the web thinks they are Einstein. You're just another one of those self-puffers. (I'm not popular because I readily expose sacred cows, not because I am wrong.)
- [On whose 'authority' do you declare CS to be responsible for developer productivity? And SelfStandingEvidence doesn't exist - it's another thing you invented to support your sophistry. You are unpopular because you bray loudly that you've exposed sacred cows when all you've really done is burn a straw cow before charging off to joust with a windmill.]
- You dismiss SelfStandingEvidence because you have NO real evidence. LIVE WITH THE TRUTH! FACE UP TO IT! Fix yourself, not me. It's not my fault you wasted all that time and money and cannot turn your MentalMasturbation into real results.
- [I dismiss SelfStandingEvidence because there is no such thing for any non-trivial claim. You can't even prove TheEarthIsRound with SelfStandingEvidence. That is the truth. I live with it.]
- If the other side agrees to certain things, then it can be relatively short. Otherwise, it's still doable, but just takes more text.
- ["if the other side agrees to certain things", it is not "SelfStandingEvidence". In math and science both, a one-page demonstration of a property can require five-hundred pages of background to understand. That is truth.]
- In an absolute sense, you are correct. In a practical sense the other side usually does agree to much. At least one knows where the differences lie and can end the conversation leaving "solving" the dependency to the reader. For example: "Well, if Foo is TuringComplete, then I am right, but you are right if it's not." The problem is that with software there are too few absolute truths.
- [There are quite a few absolute truths in software. We call them 'invariants'. Discovering mechanisms to consistently achieve certain computational invariants (actually, whole sets of invariants, such as achieving a certain safety guarantee without compromising performance) is among the regular tasking and workload assigned to those involved in ComputerScience. Also, despite the deceptive fact that software can be used to simulate certain classes of physics and rules other than those of our own universe, there are quite a few physical and mathematical laws that limit software itself.]
- I've already agreed with hardware-related performance improvements above. That's not where the contention is. But outside of input/output restrictions and hardware performance, I disagree that there are many physical limits. I will agree with the "math" side because software is very similar to math in its flexibility and need to be internally self-consistent (to be useful). Thus, we agree on these "invariants" on typical software:
- Hardware performance/resources
- Any domain input/output requirements (customer requirements)
- Internal consistency (such as referencing operations or variables that actually exist)
- [Just so you know, even a simple fact, such as that a message cannot be received before it is sent, is a physical limitation, not a mathematical one. We can create mathematical universes with axioms such that messages are received before they are sent... we just can't implement them due to the physics of our own universe. But if you want to call this a limitation of 'math', that is acceptable.]
- What keeps them from being implemented? More likely, we just have a hard time relating to them. Also, there's a similar conversation about "time" somewhere on this wiki. I just don't remember where.
- [Messaging is by nature interactive - a model of messaging can tell you step-by-step what happens to a message after it is received by the model. Bitemporal messaging could only be implemented if you declare in advance all the inputs to the system, but could not be implemented as a messaging model. Physics constrains us.]
- I will comment further on this after I find the other "time" discussion.
- [And I suspect you have the wrong idea about invariants. Invariants in software and computation systems are often along the lines: "this variable will always have a greater value than that variable", or "if the machine shuts down at any point in this process, the system can recover to a consistent state". They aren't something that exist because people 'agree' they exist. These are properties and truths that exist because people aim to achieve them. (No less true, and far more valuable, for being arbitrary.)]
- Please elaborate on this.
- [Ask a more specific question.]
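To give the invariant discussion above one concrete shape: a minimal sketch (class and method names are hypothetical) of the first kind of invariant mentioned, a structure that actively re-checks after every mutation that one field stays below the other:

```python
class Range:
    """Maintains the invariant lo < hi across all mutations."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self._check()

    def _check(self):
        # The invariant doesn't hold because people 'agree' it holds;
        # the code enforces it at every step, and any violation is
        # caught at the mutation that caused it.
        assert self.lo < self.hi, "invariant violated: lo must stay below hi"

    def widen(self, amount):
        self.lo -= amount
        self.hi += amount
        self._check()

    def set_hi(self, hi):
        self.hi = hi
        self._check()

r = Range(0, 10)
r.widen(5)   # lo=-5, hi=15: the invariant still holds
```

Crash-recovery invariants ("the system can always recover to a consistent state") are the same idea applied to persistent data rather than in-memory fields.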
Top, you are declaring ComputerScience to have a broader scope than it does, then you are blaming ComputerScience for its lack of accomplishment in areas that you just declared to be part of its scope. Perhaps the breadth of ComputerScience isn't as grand as you had hoped. As a ComputerScience guy, that your hopes are dashed doesn't bother me. Your dashed dreams don't diminish the demand for ComputerScience in the real world.
- How are you measuring "demand"?
- $$$, Jobs, Requests, Workload
- I've seen a lot of wasteful, rambling, unread papers at my local university. Maybe politicians or deans are funding crap that shouldn't be funded, not unlike spelling-bees.
- I've no doubt. On the other hand, nobody has the power to divine what should be funded from what should not... or, if they do, they aren't sharing.
SoftwareEngineering is a cousin branch to ComputerScience, sharing much terminology but little in terms of subject matter. If you want to claim that SoftwareEngineering hasn't had any successes, or hasn't improved productivity, consistency, and reliability above whatever would be accomplished by a bunch of disorganized hackers in a room, then feel free to take it up with the SoftwareEngineering guys. I expect they'll know what has proven successful better than I do, with my one lonely introductory course in SoftwareEngineering relative to my dozen courses in operating systems, language theory, automated theorem proving, etc.
"Computer science" provides some raw ideas to be tested in production, sort of like the way mutations are the fuel of bio evolution but not the engine. Some of these ideas stick, but a good many don't. There are so many dead-end ideas from CS that it's hard to tell if the success rate of "computer science" is any higher than random ideas. --top
Science isn't about the creation of ideas. Science is about the systematic development, testing, analysis, and mechanical rejection of models (a limited class of 'ideas').
It's both. You cannot test models if there are no models to test. You should know this if you are the big ball of education you claim you are.
- [Models and hypotheses come from observation, intuition and imagination. Science is the process by which the erroneous and inaccurate ones are rejected. Models and hypotheses in the absence of science become speculation, entertainment (e.g., sci-fi), and sometimes lies.]
With a little self-education, you'll find all real sciences have a great many dead-end ideas.
ComputerScience fits right in.
Yeah, the ones you back.
Yes, I back the sciences that kill ideas. That's what science is: a systematic approach to ensuring bad ideas (in particular, ineffective models and erroneous hypotheses) reach a dead end. I back 'real' science. Which one do you back?
What you do is NOT science. Science actually expects empirical tests. Science measures stuff. You don't measure. You come up with convoluted notations or convoluted chains of reasoning and believe that to be sufficient. It's not. You must race your car, not merely talk about your car or just show math about your car.
Theories and hypotheses are repeatedly put to the test and verified empirically and mathematically then documented in various theses and papers written by the more avid practitioners of ComputerScience. Far more CS-related analysis and testing goes undocumented or remains proprietary. Most CS practitioners aren't proposing GoldenHammers that are better in every possible way and situation; they're just finding systematic approaches to achieving sets of properties that are useful in a given domain. Testing and derived measures and mathematical proofs demonstrate that the full set of targeted properties is achieved. Perhaps the properties they're testing and measuring aren't the ones you care about, but that doesn't mean there isn't empirical testing going on. Perhaps you should delay your assertions about what people are and are not doing until after you start paying attention to what is happening in the world around you. Read the occasional CS thesis, ACM journal article, conference paper, design presentation, lessons-learned document, etc.
That's a pleasant and diplomatic response (which is rare around here), but otherwise free of usable content and usable specifics, to be frank. It's similar to "if you go to church with me every Sunday, you will eventually 'get it'."
[I'm not sure what you're looking for... A specific ComputerScience article, containing mathematical and empirical testing, that helps you with an everyday and common issue? How about http://www.minet.uni-jena.de/dbis/lehre/ws2005/dbs1/Bayer_hist.pdf or http://portal.acm.org/citation.cfm?id=359839 Without this type of indexing, TableOrientedProgramming and relational DBMSes would not be feasible.]
Again again again, that is about "performance issues", which I don't dispute with regard to "the gap". CS has done quite well in the performance arena (although it could perhaps be argued that only incremental improvements in indexing techniques have been found since the 1970s).
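To make the cited indexing result concrete, here is a toy sketch of an ordered index. It is a hypothetical illustration using Python's standard `bisect` module as a stand-in for the B-tree structures described in the papers above, not code from any real DBMS, and the class and method names are invented. The point is the one the citation makes: keeping keys ordered turns each lookup into O(log n) probes instead of an O(n) scan, and also supports range queries, which is what makes large relational tables practical.

```python
from bisect import bisect_left, insort

# Toy ordered index -- a rough stand-in for a B-tree (names are invented
# for illustration). Sorted keys give O(log n) point lookups via binary
# search, plus efficient range scans, which plain hashing cannot provide.
class SortedIndex:
    def __init__(self):
        self.keys = []   # sorted list of keys
        self.rows = {}   # key -> row payload

    def insert(self, key, row):
        if key not in self.rows:
            insort(self.keys, key)   # keep keys in sorted order
        self.rows[key] = row

    def lookup(self, key):
        i = bisect_left(self.keys, key)   # binary search: O(log n)
        if i < len(self.keys) and self.keys[i] == key:
            return self.rows[key]
        return None

    def range_scan(self, lo, hi):
        """Return rows for keys in [lo, hi] inclusive -- the operation
        that makes ordered indexes useful for relational queries."""
        i = bisect_left(self.keys, lo)
        out = []
        while i < len(self.keys) and self.keys[i] <= hi:
            out.append(self.rows[self.keys[i]])
            i += 1
        return out

idx = SortedIndex()
for k in range(1000):
    idx.insert(k, "row-%d" % k)
print(idx.lookup(417))        # -> row-417
print(idx.range_scan(5, 8))   # -> ['row-5', 'row-6', 'row-7', 'row-8']
```

A real B-tree additionally bounds node fan-out to match disk pages, but the asymptotic behavior sketched here is the same.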
[ComputerScience, when concerned about pragmatic issues, is principally about finding ways to improve or reliably predict machine performance, reduce or reliably predict resource consumption, or provide mathematical guarantees of reliability, integrity, security, or correctness. I suspect what you're looking for are definitive results from SoftwareEngineering, information technology, business management studies, or other Computing research (e.g., in HCI), but unfortunately these fields don't work like that. The amorphous and unquantifiable nature of qualities like "better" or "easier" makes achieving singular results exceedingly difficult. Hence, these fields tend to produce subtle results rather than dramatic ones, except for studies like the Standish Group's CHAOS Report (for example) that found a consistently poor success rate for very large IT projects. That is significant, and no doubt part of the general industry impetus for moving from traditional development methodologies to Agile approaches. Surely, that has had some impact on the organisation you work for (and, therefore, you as a developer and your productivity), if indirectly, and even if only to influence attitudes and/or discussions around the water cooler.]
Scope of CS Improvements
This is still not very specific. What specific "computer science" research can you point to as helping everyday and common issues related to code management and developer productivity?
You are repeating questions that I already explained are irrelevant. Revisit the answer to: "How does Computer Science help developer productivity and such?"
But those are areas that a common practitioner has little control over, even if I did agree that they came only from CS (semi-disputed). I don't make compilers, I make custom software. I was not hired to make compilers from scratch and would probably get fired if I did. You seem to be dismissing such concerns as "economic and engineering questions, not science" (paraphrased). If that's the case, then a detailed knowledge of CS may not be of much help to a custom software practitioner, which is what I originally stated. (I'm not saying such knowledge has zero value for such circumstances, only that you are greatly exaggerating its worth.) -t
[Is this not the same as, "I'm a car mechanic, what physics research can you point to that helps me fix a transmission?" Obviously, physics does not play a direct role in this mundane job function, yet it indirectly (and obviously) underpins it in a fundamental way. Yet, for the car mechanic who does understand physics, specialised jobs can represent profitable opportunity. For example, a mechanic's ability to understand the forces involved in a spinning transmission assembly can have a direct impact on his or her choice of which vendor's gears to choose when re-building it for racing purposes. The corner garage grease-monkey might not find anything of value in physics research, but the F1 engineer -- or even a serious hot-rodder or modder -- certainly will. In short, understanding ComputerScience may not be of much relevance to the average programmer, but to one who wishes to transcend the mundane and create something new (such as a code generator, i.e., compiler, that automates certain tedious grunt-work), it can be very relevant. And profitable...]
- Again, I'm not saying such knowledge has zero value for practitioners, only that you are greatly exaggerating its worth. Also, there's room for practical experiments and documenting the results of practical experiments, perhaps the kind of stuff that leans more into economics and "engineering" instead of pure science. For example, in SolutionsSought, I describe how I'm seeking a better mix/combo of the idioms found in existing data tools. I don't claim I'll get it right the first time. It'll probably require the try-adjust-try-again cycle by me or somebody else. -t
I suspect you (top) greatly underestimate the worth of
ComputerScience. As a pseudo-metric, perhaps you should count as the
"cost of ignoring ComputerScience" the man-years in real software projects spent re-implementing software architectures into designs known
twenty or more years before the version of the software being re-implemented. Projects go through such re-implementation on a regular basis, re-inventing
AbstractFactory and PluginArchitecture and Lisp and DatabaseReinventedInApplication and ScriptingLanguageAgnosticSystem and AlternateHardAndSoftLayers, in addition to reinvention of specific things like cryptography and plain-text protocols. The degree to which such rework is avoided is the degree to which the current state of
ComputerScience is aiding the 'common practitioner'.
ComputerScience done today probably won't help the common practitioners of today, but that doesn't mean it wasn't worthy of investment. Start counting in two decades.
Your answer is not specific enough to be usable. It comes across as nebulous patronization, and does not even seem to address the issue raised. Many of the tools/techniques you mentioned came about by experimentation, not raw math/science anyhow.
Huh? What about science excludes experimentation? You seem to have some very far-fetched ideas about what 'science' is.
Tinkering about is not usually what people think of when they hear "computer science". However, I do agree the working terminology could use some refinement. Even a bear hunting for food under a new log is doing some degree of "science", and except for Yogi, has no formal education.
I'm not sure what 'people usually think of' when they hear "computer science", but I doubt you know either, and I'd appreciate it if you stop polluting the discussion with invented facts.
- I have a right to make observations about language usage. If you disagree, that's fine. Just say so instead of turning it into a conspiracy. You have a penchant for exaggeration and drama.
- Yes. Observe all you want. Make statements about your observations, if you wish. But a right to make observations is not a right to make false statements, or a right to be illogical, irrational, or stupid on WikiWiki. You have a habit of inventing facts, later saying they are 'observations' (which is rubbish; observation doesn't imply statistical fact about what people 'usually think of' or anything else), then ShiftingTheBurdenOfProof. In debate, that's sophistry, and it is one of the greatest sins. You try to excuse your habit of sophistry by blaming those who point it out. That's just bad sportsmanship.
- "Where's your double-blind study that most people think foo implies bar also?" is silly. Do you demand your wife deliver sealed photographs of the top being off the toothpaste before you admit you forgot to screw it on?
- I wouldn't marry someone I didn't trust, and I wouldn't consider a debate over toothpaste caps worth defending. False analogy. I suggest you review the basics of ArgumentByAnalogy.
It strikes me as
HandWaving and causes me to distrust your words. I
suspect that most people think of 'tinkering' and 'invention' when they think about the science industry in general, be it chemistry, physics, psychology, or computers. Tinkering, documenting success and failure, and capitalizing on success via production, are all parts of industry. Hypothesis (about what works, about why it works, about which different things can be combined), application of hypothesis to future activities, and especially the killing of hypotheses by use of observations outside the pool used to originally construct each hypothesis, are the heart of science.
Fundamentally, the monetary value of the formal education is the cost of every mistake and every unit of rework avoided due to gaining experience without the education minus the years of labor lost in formal education. As a prospective employer, I'd rather not have you making mistakes and gaining experience at my expense unless there is no other choice. I'd only be willing to hire people who are either educated or who have gained a lot of experience at someone else's expense. I think most employers feel the same. This suggests a formal education in CS is still of considerable value to potential employees. Make of that what you will.
But you didn't say "formal education in CS is overestimated". You said "knowledge of CS is overestimated".
- It's not clear to me what this is quoting.
- It's quoting you. Is it not clear to you what you've been saying? First you said: "then a detailed knowledge of CS may not be of much help" - t. Later you objected: "Even a bear hunting for food under a new log is doing some degree of "science", and except for Yogi, has no formal education." -t. You have implicitly equated knowledge and education.
- That's back to the problem of what is considered "CS", and the boundary of science, math, observational tinkering, economics, engineering, etc.
- How so? How does equating knowledge vs. education come back to "what is CS"? What does it have to do with the 'boundary' of science, math, tinkering, etc? Reasoning, please. I'm completely not following you here.
- It harkens back to the statement made by one of your alter egos: "Different mechanisms of accomplishing a particular goal are evaluated economically, not scientifically." This implies a dichotomy of subjects and that economics is not science [or, from context, just not part of the same science used to describe and model and understand and explain the mechanisms]. If we want to address what the role of "science" is and what is not the role of science, we first need to distinguish "science" from non-science.
- How so? What harkens back? What does that have to do with knowledge vs. education? It seems completely NonSequitur for you to bring it up here, and so arguing about it at the moment is silly. How did you get "back" to this alleged problem?
- Without a rigorous definition of "computer science", I cannot address the alleged "bear" conflict you pointed out. You appear to toggle between a loose and tight definition of CS as you raise various issues. Thus, it's time to nail it down now.
- Sigh. You're still talking illogical nonsense to me. The conflict between knowledge and education in X has to do, if anything, with the definition of 'knowledge' and 'education', not the definition of X.
I don't think many practitioners would usefully achieve much if they had no access to knowledge artifacts of
ComputerScience, such as examples of what works and what doesn't, explanations as to 'why', tutorials, language idioms, rules of thumb, etc. Actually, science is about the filtering, so simply take away all filters such that you have no way to identify a good hypothesis or example from a bad one, which examples work from which don't, no way to distinguish pure speculation from ideas produced by analysis and filtering against known observations, etc. Or at least no way to tell without trying it yourself. I'm pretty bright (or so I'm told), but I've a feeling I'd get lost very quickly in such an environment. How far do you think you'd make it? (And if this sounds too far-fetched, consider that secrets in many trades have been hoarded over much of history. Don't make the mistake of assuming the freedom of information and knowledge available today to be a natural law of human nature.)
Or here's an alternative: let's replace ComputerScience with a religion with a bunch of irrational models and rules that have little basis but that thou shalt follow OR ELSE be thee doomed to hell. {Gee, that sounds a lot like the type-heavy zealot's talk. -t} I wonder how well practitioners would do if they came to you with preconceptions, beliefs entirely on faith, beliefs that are obviously wrong in the face of evidence (but cannot be swayed otherwise), and arbitrary rules and restrictions on their programs and programming behavior. Which software development artifacts might suffer in such an environment? Would we have compilers that do different things on Sundays and that refuse to operate on Black Friday?
ComputerScience is a science, and it does involve experiment, it does involve hypothesis, and it does reject assumptions and axioms derived entirely from faith. It is not math in that the axioms and models of ComputerScience must relate and make useful predictions over something real: computations.
I have a strange feeling that TopMind tends to view ComputerScience as a religion carried by men wearing white togas in an ivory tower, and so he feels it's okay to preach his own religion called TableOrientedProgramming. Feh. TopMind is mistaking language or paradigm zealots for computer scientists.
Sometimes people indeed do exaggerate the importance of their pet field, HobbyHorse, or technique almost to the point where it starts to resemble a religion. The human ego blinds and distorts. We tend to re-project the world in terms of our own personal interests, strengths, and careers, and this often biases us, as humans.
[TopMind, I find it rather hypocritical for you to post that when you are unquestionably the worst offender here. Unless, of course, you're admitting it in yourself.]
Projection, dude. You are the wrongo, not me. But, look at it this way: if I can "deceive myself" without being aware of it, what makes you immune from the same malady?
[Simple: (a) I'm not promoting any pet field, HobbyHorse, or technique; and (b) I've not drawn extensive criticism, as you have, for promoting TableOrientedProgramming -- which is obviously your pet field, HobbyHorse, and technique.]
When I first started criticizing OOP, proponents would make very similar-sounding claims, such as that OOP is "better theory" and only ignorant dum-dums say otherwise, etc. But I'd also get at least an equal number of emails from those who supported me and expressed that they were too timid to take on the critics. (I've since pulled the email address due to spam.) Some of them were from established writers. O'Reilly's "Oracle Programming" book even had a reference to my website for at least 2 editions. Further, popularity does not automatically mean "truth". Otherwise, mobs could vote the world flat. Defensive blow-hards often protect their turf using intimidation disguised as the academic high-ground. Academic knowledge does not make one immune from human nature. -t
I don't forgive the 'OOP proponents' who make unjustifiable claims, either (and I've met fanboys who are every bit as bad as TopMind in this matter). But I do suspect that TopMind often misinterprets a claim of "better" by stripping it of the context in which the claim was made.
Re: if I can "deceive myself" without being aware of it, what makes you immune from the same malady? - TopMind
Nothing makes a scientist immune to self-deception. Wisdom requires that one look with even greater skepticism at his or her own thoughts and ideas to help compensate for inherent confirmation bias. I constantly question my approaches; if I've forgotten a justification for a particular design, I often spend a day or two sketching out the pros and cons and reasons and requirements justifying that approach over alternatives, and often I come to the same conclusion. Sometimes I learn that a change made in a related design obviates the design element I'm reviewing. And when someone is skeptical of my ideas, I take that as a sign of "where there's smoke, there's fire" until and unless I can explain - most of all to myself - why that skepticism is unwarranted.
However, TopMind, you often give the impression of failing to be appropriately critical of your own ideas. You answer skepticism by ShiftingTheBurdenOfProof to others to 'prove' flaws in your ideas. You usually set the burden for said counter-proof unreasonably high, such as a formal counter-proof for what you originally presented as a 'sketched' idea. You find excuses to dismiss skepticism rather than confront it. This is exacerbated by your provision of (often ridiculously) weak justifications for your ideas, such as appeal to 'the masses', or arguing for app-language-neutrality on the basis of avoiding fanboyism, or claiming without evidence that there is some sort of 'psychology' benefit or that it 'fits your mind' better. You need statistics about 'the masses' and the market issues surrounding 'fanboyism' or the first two are appropriately called HandWaving. You need to control for nature vs. nurture and your own fanboyism, or the latter claim about 'fitting the mind' is worse than uninformed speculation.
Compensating for confirmation bias and other forms of observer bias means becoming your own worst critic. You cannot be a fanboy, a salesman, or a soft-hearted mother. You mustn't fear taking a use-case gladius and stabbing your intellectual baby forty times before either marking its resilience as 'adequate' or throwing it out with the bloody bathwater. Can you do this?
I disagree with your assessment. Your criticism of my ideas is poor. If you give good and clear criticism, I will listen, I promise. I think part of the problem is the kind of evidence that I accept compared to what you accept. I believe in RaceTheDamnedCar while you believe that talking about the alleged elegance and "nice theory" of the engine is sufficient. I want to see scenarios where the language or tool crashes and burns or has noticeably more lines of code or requires more lines/routines/blocks to be changed per typical change scenario, etc. The Frank Gilbreth school of software efficiency, if you will. I want to witness specific cases of it being good or bad. Your analysis technique, on the other hand, is much more indirect and round-about, risking the error of ignoring factors or skipping steps. You view it through your pet academic models instead of looking at the practical impact. I simply don't trust long chains of reasoning when non-discrete and subjective variables are involved. You lean toward the Greek approach in EvidenceEras. -t
I think the problem is that you accept illogical and irrelevant evidence for your positions, and reject or fail to recognize logical counter-arguments. You don't "RaceTheDamnedCar" either; RaceTheDamnedCar is a hypocritical expectation required only of your opponents. Anyhow, your disagreement is noted. I'll LetTheReaderDecide.
If it's clearly illogical, then use a formal-logic-based outline to debunk it. State the givens, symbols used to simplify the formulas, etc., and then use formal logic in all its glory. (English-form, not ASCII-math, at least not exclusively.)
Wow, that's convenient: you just clearly demonstrated two of my above assessments (with which you disagreed): (a) you practice ShiftingTheBurdenOfProof by demanding others 'prove' flaws in your ideas. (b) You set the burden of proof unreasonably high, such as a formal counter-proof for what you originally presented as a sketched idea.
- You said my arguments are "illogical". Generally one should only use that term if they identify specific logic contradictions. -t
- Disagree. The 'terminology difference' is, however, explained immediately below this interjection, so no reason to expand here.
Anyhow, you should improve your understanding of logic, TopMind. Learn this: An argument or statement S is illogical if it cannot be proven by logic from available evidence, givens, and axioms. There is a set of false statements that can be disproven by valid deductive logic or a combination of evidence and sound inductive logic. For any consistent logic, the set of false statements is a subset of the set of illogical statements. Among illogical statements that cannot be disproven are several subcategories:
- Statements that are undecidable (that is, for which the logic cannot make a decision).
- Statements that may be decidable, but are still 'unknowable' because it is impractical, physically impossible (as opposed to logically impossible), or illegal/unethical to gather sufficient evidence and information to make a decision.
- Statements for which there is insufficient evidence or information to make a sound or valid decision, but for which it may be possible to gather such evidence and information (aka 'unknown' statements). The act of asserting unknowns to be true is sometimes called 'HandWaving'.
- By your standards, what the hell is NOT HandWaving?
- A variety of arguments from insufficient information are perfectly reasonable. Argument from a hypothesis (if X is true then...), argument from circumstantial evidence (though we can't conclude it, we have good reason to believe X, these reasons are ...), argument from accepted information (according to the almanac (brand) 1981, the ..., therefore ...), argument with reference to prior arguments (if we accept argument XYZ we know ...), argument from principles (language implementations that trash HDDs are 'better' according to my weights, therefore ...), even argument from personal observations (I have seen ABC, therefore I cannot accept that ABC is impossible) - so long as you don't elevate personal observations to 'statistics'. In addition, there are a lot of fallacious arguments that don't involve HandWaving, but I don't think you were asking for those.
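The taxonomy above can be made concrete with a small brute-force propositional checker (a hypothetical sketch; the premises, atoms, and function names are all invented for illustration): a conclusion that holds in every model of the premises is provable, one that fails in every model is disprovable, and one that varies across models is merely unsupported, so asserting it as true would be "illogical" in the sense defined here without being false.

```python
from itertools import product

# Brute-force propositional entailment over a fixed set of atoms.
# A conclusion is "provable" if it holds in every truth assignment
# satisfying the premises, "disprovable" if it fails in every such
# assignment, and otherwise merely "unsupported" -- asserting an
# unsupported statement as true is the illogical-but-not-false case.
ATOMS = ("P", "Q", "R")

def models(premises):
    """Yield every truth assignment under which all premises hold."""
    for values in product((False, True), repeat=len(ATOMS)):
        env = dict(zip(ATOMS, values))
        if all(p(env) for p in premises):
            yield env

def classify(premises, conclusion):
    results = [conclusion(env) for env in models(premises)]
    if all(results):
        return "provable"
    if not any(results):
        return "disprovable"
    return "unsupported"  # neither provable nor disprovable

# Premises: P holds, and P implies Q. Nothing is said about R.
premises = [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]]

print(classify(premises, lambda e: e["Q"]))       # -> provable
print(classify(premises, lambda e: not e["Q"]))   # -> disprovable
print(classify(premises, lambda e: e["R"]))       # -> unsupported
```

Note that "unsupported" here covers only the insufficient-information subcategory; genuinely undecidable statements require a richer logic than a finite truth table.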
From the above you should conclude that asking me to 'disprove' statements I call illogical is, itself, an unreasonable and illogical request on your part because not
all illogical statements can be disproven. You should apologize for making such unreasonable requests.
A related property: a statement S is 'irrelevant' to a given argument A if there is no logical relationship between S and A. Attempting to use an irrelevant statement in an argument, however, is either illogical (a form of NonSequitur) or a distraction tactic that falls in the field of rhetoric rather than logic (aka RedHerring). I would note that 'code' serves no logical or rhetorical purpose unless you can bridge the gap by arguing or demonstrating that the code is 'relevant' to a particular argument.
TopMind, many of your 'illogical' arguments consist of illogical leaps in logic (NonSequitur), of making unjustifiable claims (HandWaving), and of irrelevant rhetorical distractions (RedHerring). As noted above, formal logic does not disprove these things. If I don't feel like using short-hand descriptions of your argument, I might say that 'there is no rule of sound or valid reasoning that allows you to make that leap' (for a NonSequitur) or 'it is unclear how this is relevant to the argument' (for a RedHerring) or 'that assertion doesn't seem justifiable' (for HandWaving). Alternatively, I can give you the benefit of the doubt and ask you to fill in the 'leap' that doesn't seem to be legally filled, ask you to demonstrate the relevance of the claim that seems irrelevant, and ask you to justify the assertion that doesn't seem justifiable. I tend to give the benefit of the doubt to persons who don't regularly demonstrate inability to fill these gaps.
Well, we have a terminology conflict here. Lack of sufficient evidence (assuming that's the case) is not the same as "illogical". For example, let's take a recent event: the decision about the future of the Afghanistan military conflict. Some experts want to add more troops, and some want to reduce them. Neither path can be "proven by logic" since nobody can predict the future with sufficient certainty, especially given complex interactions. However, to go around and say that both sides are "illogical" is misleading, if not slanderous. You may have a personal working definition of "illogical", but please don't ruin the common usage by substituting your own personal vocabulary willy-nilly. -t
Merely lacking sufficient evidence is not illogical, TopMind. However, asserting a statement to be "true" in an argument, despite insufficient evidence, IS illogical. If "neither path can be proven by logic", then stating either path as a point in the argument is not logical. And your 'example' itself is misleading - the debate over Afghanistan military is about principles and priorities and assumptions, not about 'predicting the future with certainty'. One can still make an "illogical" argument from a given set of principles, and one may have "irrational" principles (Consider: It's our duty to waste resources and money! That's the American way! Therefore, we should continue expending resources and money in Afghanistan. Military activity expends much resources and money. Therefore, I support the war!) But principles themselves (what you consider 'better') are outside the domain of logic - asking whether a principle is "true or false" doesn't even make sense. Ask, rather, whether they are rational.
Just because you claim it's "insufficient evidence" doesn't make it so.
I agree with that. OTOH, I don't recall ever attempting to claim "insufficient evidence" in order to make it so.
And I believe the Afghanistan analogy is sufficient to compare to the HTML++ debate. You are abusing the term "illogical". -t
How is the Afghanistan analogy relevant to the HTML++ debate? And how is it relevant to misuse of the term "illogical"? You often make or assume statements for which I see little or no logical reason to believe, such as "GML is CrudScreen friendly" or "disabling on-display actions improves computer security". It isn't as though you can argue for GML by simply stating principles. You must also argue that GML fulfills those principles. I don't think it abuse to call various among those bridging arguments "illogical". One example of an illogical argument: "starting over with a particular goal in mind sometimes results in a better product for that goal, therefore GML is CrudScreen friendly (relative to HTML++)".
With regard to "GML is CrudScreen friendly", that is misleading. You imply it's an isolated self-contained claim, which it's not. The argument was that starting from scratch makes it easier to add CRUD-friendliness without worrying about non-CRUD baggage (eBrochure baggage). Remember the car-to-boat analogy? Your confusion is not my sin there. You seem to also mistake explanations for formal statements, and "debunk" that statement in isolation. Explaining something and providing a water-tight logical claim for a given portion of text are not necessarily the same thing. Obviously, the one "sometimes" sentence above begs the question of whether this is a case of that situation, which was addressed in other parts. If the above was all I said about that topic, then your complaint would be justified. But it was not, so you see I am not wrong and bad and evil. Be careful about the context implications of your examples. -t
To contradict, the above 'sometimes' sentence is, in fact, the only logical reason I have identified in the entire GuiMachineLanguage page to believe "GML is CrudScreen-friendly" relative to HTML++. You have also appealed to irrelevant market properties and fanboyisms. You have also provided much verbiage for 'security' benefits, which mostly makes me wonder whether you even know what computer security means (see IwannaLearnComputerSecurity if you're interested in the initial bits). But, if you think there's another reason for me to believe GML is CrudScreen-friendly, then I'm at a loss for what it is. Further, you seem to ignore the greater context of development-side CrudScreen-friendliness and server-side CrudScreen-friendliness and multi-organization CrudScreen-friendliness and concurrent-user CrudScreen-friendliness and so on, which leaves me wondering what 'CrudScreen friendly' means to you. (Admittedly, 'CrudScreen-friendly' is pretty vague, but to me that means looking at it from as many perspectives as possible.)
With regard to GML, you claim the forces I mention are "irrelevant", but I disagree with your evidence, which is largely anecdotal. A disagreement is not the same as being "illogical". It is soft evidence against soft evidence. You have no OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy that shows them irrelevant. You only have "soft" evidence. Stop mistaking personal notions and personal assessments for universal truths. (To keep the text train simple, I will not address the security sample here.) -t
Pay attention to immediate context. I said the market forces you mention are "irrelevant [to whether or not GML is CrudScreen friendly]." That is, at least, how I read the sentence. Sure, market forces are relevant to market success. That's a different argument. If you'll note an earlier statement of mine: "a statement or S is 'irrelevant' to a given argument A", not as a universal principle. I assumed, apparently incorrectly, that you could reasonably infer which argument was being discussed. Also, it is your burden to show relevance for evidence you present. Are you going to argue these fanboyism market properties are relevant to whether "GML is CrudScreen friendly"? If so, I ask that you explain the connection between the two.
Again, I find your writing style strange and difficult. (Others who think like you maybe don't. So be it.) But I did make several related sub-cases of why I think HTML++ won't bend smoothly enough. You agreed that there are reasons to start over rather than try to force-fit a semi-related standard in some situations, as I interpreted it. There is essentially a threshold at which point starting over makes more sense than bending the existing. That threshold is a key issue.
Then we each gave our arguments about why HTML++ will or won't bend. Neither of us has presented a double-blind peer-reviewed certified published study that proves what HTML++ will do in the future. We both can only make educated assessments based on observations of both technology and behaviors of peoples' interaction with the technology (AKA marketing). Some of our evidence is anecdotal, based on past observations. In this respect it is similar to the Afghanistan analogy, where calling both sides "illogical" is inappropriate, rude, and misleading. Using "soft" evidence is NOT justification alone for calling the evidence "illogical". If you simply disagree, simply say so. Don't use overbearing language to exaggerate the disagreement.
There's also the issue of MS sabotage. Even if it is technically possible to bend HTML++ to do GUI's well, MS can still throw a monkey wrench into the pond because of the dominance of IE (per arguments found at LimitsOfHtmlStack). Thus, even if my assessment of HTML++ bending is wrong, MS's behavior could make that side of the argument moot. My argument is thus tied to an OR: HTML++ will fail if it cannot bend OR if Microsoft sabotages it. -t
It is, I agree, rude to point out arguments as "illogical"; that's part of an AmericanCulturalAssumption: it's always rude and offensive to say someone is wrong, even if they are wrong. Some people are taught to avoid "you" phrases for anything less than flattering: "I disagree" or "we should expect" or "one would be wrong to". Anyhow, if you wish to practice politics and careful inoffensive wording, feel free. I suggest that one should not expect academics and practitioners in SoftwareDevelopment to necessarily share in that philosophy in the context of a technical forum, and that such individuals may not appreciate (and even find 'rude') repeated attempts to inflict upon them a value system requiring such political effort.
- It is simply an inappropriate word for the situation. It implies a fallacy in formal logic. What you hint at is that there's insufficient evidence to sway YOU personally. Why is it all your errors in terminology are negative? Why don't you ever make positive errors? I suspect it's because you are biased or intentionally rude. If you want to "fix" people with accusations and insults, then use a scalpel, not a sledgehammer. Disagree like a civilized professional instead of a grumpy arrogant asperger who fires off official-sounding insults like a drunken Cheney. -t
- Yes. "Illogical" does imply a fallacy. Fallacies do not mean the conclusion is 'false' or can be 'disproven'. A fallacy is any form of invalid, unsound, misleading reasoning, and describe a case where an argument towards a conclusion is not justifiable logically (such as requiring illogical leaps - NonSequitur). The error in terminology is yours, TopMind. And you are a fucking hypocrite when it comes to this 'civilized' stuff; either lead by example even when it means politely replying to insults, or simply accept that you're a rude and grumpy bastard who throws temper-tantrums and blames everyone else for his issues. If I considered you a civilized professional (and I consider you neither civilized nor professional) perhaps I'd avoid rudeness as I do for everyone else on this WikiWiki.
- Then say NonSequitur instead of "illogical". I disagree it's a NonSequitur though. Just because you claim it is does not make it so.
- "Illogical" is more general than "NonSequitur". The latter is just one example of an illogical argument. And I don't claim things to 'make them so'.
- No. "Fallacy" would be the more general term.
- "Fallacy" is also more general. I didn't say "Illogical is THE more general term", did I?
Anyhow, "educated assessments" can be illogical, TopMind. Each "based on" offers potential for an illogical leap.
Also, I do not acknowledge your position on the Afghanistan analogy. I reiterate: the primary argument there comes from differences in priorities and principles, and calling these "illogical" is indeed inappropriate, but for reasons that do not serve your argument. One may still make illogical arguments "based on" a given set of priorities and principles, observations, and hear-say anecdotes. I suggest we desist with this ArgumentByAnalogy.
Rather than further expanding the HTML++ line of argument here (not sure how you got off of GML, it isn't as though an argument against HTML++ is necessarily an argument for GML), I offer: EditHint - move to LimitsOfHtmlStack.
It started as an example of my allegedly ignoring principles of computer science and/or "logic". It's clear now you are either wrong or greatly exaggerating. I believe you to be a drama queen, repainting professional disagreements as blatant and clear violations of math or logic. That aspect *is* relevant to this topic. Further, "priorities and principles" apply to GUI kits also. For example, how the down-arrow grid issue affects how likely a user is to accept a new standard or tool. -t
You do ignore principles of logic on a regular basis, TopMind.
- You just mistake disagreements with objective central universal truths. I committed no clearly-identifiable objective fallacies. None. Zilch. You can pretend all you want, but they simply have not been identified.
- Everyone commits fallacies. Even you. Even I. I correct and learn from mine. You deny yours, even after they have been pointed out to you... which happens often.
- I don't remember any pivotal ones that you pointed out recently.
- I suppose it helps to have a selective memory.
And, yes, "priorities and principles" apply to GUI kits too. For example, being 'CrudScreen-friendly' is your 'priority'. But that does not make it logical to claim "GML is CrudScreen-friendly" simply because that is your goal,
- That is false. I did not do such. You imagined it. It outright never took place. This is yet more evidence that you are self-deceiving. One of us is just plain delusional, seeing stuff that is simply not there, or distorting arguments in their brain to fit their world-view.
- Yes, one of us is just plain delusional indeed.
nor was GML's CrudScreen-friendliness well supported by your other arguments.
- I have to disagree with that assessment.
- You say that like it's a duty.
- SEE, you instinctively see the negative side/angle first and foremost out of reflex.
- I see many sides, but I favor darker humor.
And sure, the default behavior of the 'down-arrow' being undefined or non-conventional may determine market success. But 'CrudScreen-friendly' is not defined by 'successful in market'; it is defined, in the broad sense, by how well a standard optimizes performance, portability, safety, security, etc. for both server side and client side, and for productive development of CRUD applications (including widgets, database hookup, etc.). So your appeal to "how likely a user is to accept a new tool" is irrelevant to whether 'GML is CrudScreen-friendly', and using irrelevant points in an argument is illogical. If you were better skilled at organizing your thoughts, you could take the same damn information and make a logical argument: "if the default key-bindings match common convention, then developers of CrudScreen applications will not need to repeat the effort of binding them properly. Further, implicit default key-bindings should better allow clients to achieve localization for non-American keyboards. Additionally, since these key-bindings do not need to be piped across the network, they may offer an (admittedly marginal) performance improvement. For these reasons, conventional default key-bindings are more CrudScreen-friendly in terms of productive development, portability, and performance." But noooo. TopMind can't possibly make a logical argument even if the information is available. Even for a logically justifiable conclusion, TopMind resorts to HandWaving and NonSequitur.
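The key-binding argument above can be made concrete with a small sketch. This is purely illustrative: 'GridWidget', its 'keyBindings' table, and 'handleKey' are hypothetical names, not any real toolkit's API. The point is that conventional defaults (down-arrow advances the selected row) ship with the widget and run locally, so each CrudScreen application need not rebind them, while remaining overridable.

```javascript
// Hypothetical grid widget with conventional default key bindings.
// All names here are invented for illustration; no real toolkit is implied.
class GridWidget {
  constructor(rowCount) {
    this.rowCount = rowCount;
    this.selectedRow = 0;
    // Conventional defaults: arrow keys move the selection, clamped to
    // the grid's bounds. An application may override any entry.
    this.keyBindings = {
      ArrowDown: (g) => {
        g.selectedRow = Math.min(g.selectedRow + 1, g.rowCount - 1);
      },
      ArrowUp: (g) => {
        g.selectedRow = Math.max(g.selectedRow - 1, 0);
      },
    };
  }
  handleKey(key) {
    const action = this.keyBindings[key];
    if (action) action(this); // handled locally; nothing piped over the network
  }
}

const grid = new GridWidget(3);
grid.handleKey("ArrowDown");
grid.handleKey("ArrowDown");
grid.handleKey("ArrowDown"); // clamped at the last row
console.log(grid.selectedRow); // → 2
```

An application that wants non-standard behavior replaces one entry in 'keyBindings' rather than re-implementing navigation from scratch, which is the "no repeated binding effort" claim in the argument above.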
Market acceptance is a pre-requisite. If nobody adopts it, its gloriousness matters nil. YOU are the illogical one here because you want people to care about your pet technologies and thus pretend like everybody cares about what you care about. And I didn't exclude those; I simply didn't include the gold-plated bloated versions that you personally favor. PersonalChoiceElevatedToMoralImperative
"If the widgets don't support default behaviors, it decreases their chance of acceptance" was plenty sufficient for normal human beings. And, if you wanted more detail, simply ask instead of insult me. Solve problems, don't create them by running around shouting "Illogical! Illogical!" like a Vulcan on crack.
I'm done with you. You see everything through the lens of your HobbyHorses, have a convoluted writing style, put words in my mouth, are trigger-happy with vague insults, and are rude to boot. I might finish individual sub-arguments, but I'm done with any debate about summary conclusions. Debates with you regarding more than one variable invariably turn into a big mess. -t
Very well, TopMind. Other than your goal at large, I doubt anyone is convinced by your reasons. You keep bringing things like 'market success' or 'HTML++ sabotage' up in response to my questions about technical merit, and you claim vehemently you weren't being illogical. I must say, I don't know anyone other than you who would think it logical to point at market success as a 'pre-requisite' for CrudScreen-friendliness. I just can't reason with that sort of 'logic'.
Any discussion of "technical merit" with you will almost certainly lead to your HobbyHorse topics. I know you think those are the most important things in the entire universe and that puppies will all get cancer if not done your way and I disagree and don't want to dance that dance yet again. -t
And apparently a discussion of "technical merit" with you hops (with nary any mention of technical properties) to discussions of sabotage, fanboyism, and marketing. I suppose you think that's the better route.
I'm just pointing out that one must consider the audience. You are hinting at your Xanadu tendencies again: "I don't care if nobody wants it: it's great, dammit!" -t
JanuaryZeroNine and OctoberZeroNine