These "attacking" titles ("fraudulent") need to go. It is not helpful. I suggest something like "IntegrityAndSafetyVersusNimblenessAndDynamism?". I'm leaning toward PurityVersusOrganic?.
- Well, there is a lot of fraud in the OOP communities. Maybe hype too.. but I consider it fraud when it comes to the point of people violently protecting things like OOP as if it is the only one true way. I've seen similar borderline-fraud hype in functional communities too, but the OOP crowd seems to be the worst.
- We rarely have evidence that anybody does it intentionally. People over-latch onto ideas for different reasons, usually familiarity (which is not necessarily a bad thing: MindOverhaulEconomics) or personal preference. The spirit of AssumeGoodFaith would dictate we don't call it "fraud". "Zealotry" perhaps, but not accusations of fraud. --top
- Doing it for money - being sued for malpractice/monopoly - is quite the evidence. However, since programming isn't mission-critical in many cases, such as web apps that are allowed to break, people usually don't get sued for saying "Reuse, Reuse, Reuse... our paradigm saves 6,320 hours of development time!"
- "Doing it for money" is open-ended. Car manufacturers would have to put 2-foot thick pads around cars if they were to avoid all preventable safety problems and deaths, but it would make cars expensive and fuel thirsty, and cut into profits. The boundaries for what's a "greedy shortcut" are fuzzy. Is SafetyGoldPlating for money also "fraudulent"? -t
This pattern discusses programmer mindsets that are fraudulent, i.e. the idea that one mindset is obviously so much better than another.
- Are you saying that 'the idea that one mindset is obviously so much better than another mindset' is an inferior mindset? I find your judgment of this to be far more fraudulent than the mindset, and incredibly hypocritical (and arrogant). There should be laws against that level of hypocrisy.
- {It is fraud to encourage and mislead people into thinking that one single mindset is better, using hype, FUD, etc. It is trickery, which is the definition of fraud.}
- With this I'll agree.
- {Having several views of all paradigms, for example, is not fraud. But believing in one pure paradigm as if it is the best mindset, is fraud.}
- That statement is completely unjustified. How do you possibly qualify a belief as fraud? Who is one tricking?
- It is self-fraud, internal fraud. When I started my first company, I believed everything was perfect and I did very well.. I even named my company something that was very perfect-sounding. Fraud can actually help one succeed in this world.. for example fraudulently believing that everything is okay and nothing is going to go wrong, and that whatever we do in good faith will land us in heaven. Similarly, believing in purism is fraud to oneself.
- {How is this hypocrisy? All intelligent programmers know that one single mindset is not the way to address problems.}
- It is hypocrisy to claim that 'the idea that one mindset is better than another' is an inferior mindset. Period. Think about it carefully.
- But it is not a mindset to have multiple mindsets.. it is mindsets in the plural.. hence a group of mindsets. So having one mindset is not the same as having a group of mindsets (your claim is that a group of mindsets is itself a mindset.. but 1 plus 1 does not equal 1.. well, it does, but that would be going into EverythingIsRelative nonsense again :).
- So am I to understand that your (singular) mindset that you believe greater than all others is to make a habit of entertaining (plural) other mindsets?
- Viewing mindsets (not embracing one) is not a set-in-stone mind. Viewing several mindsets as a third person, instead of going into first-person mode and becoming the one mindset itself, isn't a set mind at all. A mindset is where one sets one's mind on something. Viewing multiple mindsets with a critical eye and an open mind is therefore really not a mindset at all, but more like a critical open mind that isn't set - one that sees the advantages of other, ridiculously pure mindsets without becoming set in stone. It's more an open mind, not a set one. Having more focus on one paradigm if it suits the task better is okay.. but to set on something is bad (CategoryWeenie). That's why, for example, when I discuss modular programming I clearly state that I am not against objects or against OOP or against functions and procedures.. even a module can contain objects in it. It is not a war of procedural vs object-oriented programming. Many people mistake me for a procedural coder, sadly, because they only see black and white: procedural vs object. They are comparing first-person mindsets instead of getting a clue and becoming a third person. For example, if functional programming has few side effects, a person with a third-person view sees that as an advantage and applies it to his program even if he isn't using a functional language. A person stuck in first-person mode must immediately use only a functional language and nothing else, because functional is perfectly pure and this is the holy grail..
- ...The fanboys on the Haskell wiki, the fanboys that yap about OOP reuse (without realizing their OS and OS API aren't OOP), the Lisp fanboys, and the fanboys that think Standard Pascal is the way of the future (there are some on Usenet, seriously); these people are all first-person fanboys. They aren't taking a third-person view, seeing all the mindsets, and un-setting the mind. This leads to religion. Religions are essentially fraud, when you think about it.. people are only fooling themselves and others, i.e. trickery.
- Wow. I'll have to remember this button so I can poke it in the future...
OOP Mindset
A quote from paulgraham.com (PaulGraham). Although it talks explicitly about OOP, the mindset itself is not limited to OOP.
- (begin quote) "Object-oriented programming is popular in big companies, because it suits the way they write software. At big companies, software tends to be written by large (and frequently changing) teams of mediocre programmers. Object-oriented programming imposes a discipline on these programmers that prevents any one of them from doing too much damage. The price is that the resulting code is bloated with protocols and full of duplication. This is not too high a price for big companies, because their software is probably going to be bloated and full of duplication anyway."
- So: Object-oriented programming is unpopular at small companies, because it doesn't suit the way they write software. At small companies, software tends to be written by small (and never changing) teams of super programmers. The lack of object-oriented programming allows them to do much damage. The price is that the resulting code can only be maintained by the single person who created it. This is not too high a price for small companies, because their software is probably going to be unmaintainable or not worth maintaining anyway. Is that what you are implying? (Note that this is just one of many possible interpretations)
- I did not write it. The dot-com millionaire Paul Graham did.
- I hope you are not implying ArgumentFromAuthority... I know you didn't write it.. the question is what are YOU implying by copy&pasting it here?
- {The lack of object oriented programming allows them to do much damage? Please, let's discuss this in a page like Modular Versus Object oriented programming where I can beat the living boobies out of you for saying such a comment. Sometimes OOP causes brain damage, if abused. As for one person only being able to maintain any code that is not OOP? Excuse you, but such tools as the Unix API and other modular interfaces, including kernels that don't use languages that even have OOP built into them.. are maintained daily by many people. OOP does not magically create a setup where now programmers can read the code. This sounds like an ObjectWeenie speaking above. }
- I agree completely with you. I only wrote "The lack of object oriented programming allows them to do much damage" to show how absurd it was to say that "Object-oriented programming imposes a discipline on these programmers that prevents any one of them from doing too much damage". Both statements are wrong.
- The ObjectCapabilityModel is a proven security model based strictly around the fact that OOP helps prevent damage. However, any statement that 'OOP prevents any one programmer from doing too much damage' should (if you aren't a fan of OOP) simply be interpreted as 'OOP doesn't offer a great deal of power to the programmer'.
- {Well, the OOP code I work with has fatal exceptions, dangling pointers, etc. In a garbage-collected language OOP is not as damaging.. but from the languages I've worked with, where one has to free his objects and where exceptions are used and encapsulated where I cannot see them.. I have to say that OOP can be dangerous. Recently I was working with a web server using threads that someone else wrote, with the threads encapsulated in objects... it would not work and I could not find the bugs, nor did I have any patience to dig deep into the class inheritance mess to find them. So I decided to not use objects and just use a direct BeginThread? call instead of a class that TThread.Creates, since the thread class encapsulated so many bugs. So much crap was happening behind my back in OOP that it was IMO dangerous for my server. With more direct API calls in sequential order, with error checking after each call, it can be less dangerous than throwing some hidden exception. That being said, if the OOP code is written well, it can be safe too. But I just don't see OOP as a holy grail or a safer way of programming (doing less damage). By the way, the web server that I was working on, which doesn't use a thread class and just uses BeginThread?, is now located at http://powtils.googlecode.com/svn/dev/tools/aservia/. }
- "Object-oriented programming generates a lot of what looks like work. Back in the days of fanfold, there was a type of programmer who would only put five or ten lines of code on a page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming is like crack for these people: it lets you incorporate all this scaffolding right into your source code. Something that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods. So it is a good tool if you want to convince yourself, or someone else, that you are doing a lot of work." (end-quote)
Relational Mindset
Discuss why relational weenies have the wrong mindset here, etc etc
The entire idea of relational purism being perfect is fraud.. for example, thinking in "thin tables" just doesn't work. It's too hard, so it is a bad mindset. Unless, somehow, we can make it easier to visualize, the idea that relational purism is the best way to do data management in every app is just fraud.
Thin tables rule; they are the right mindset if our tools make them easier to create (so that we don't see them as overhead or as separate, hard things to grok and visualize). Think of them like different lists of data.. why do we keep separate arrays or linked lists in a program instead of one huge master wide array or list? No one has a fear of creating more arrays or lists in programs to manage data, but people seem to have FearOfAddingTables.
I often use only lists and maps in apps because the tools don't easily support local tables anymore, not because I like lists and maps. Stuff that's hard to grok and visualize creates its OWN problems.[DemandForEvidence] Trading visibility and simplicity for security/integrity-oriented bloat can be the wrong trade-off. It just trades one group of problems for others [DemandForEvidence]. WaterbedTheory. It can create conceptual mistakes that a clear system may have prevented [DemandForEvidence]. Conceptual mistakes are just as bad as security/integrity/type errors, and may even cause them [DemandForEvidence]. Trading KISS for safety may result in neither [DemandForEvidence]. Big companies often have armies of people who specialize in a particular portion of a convoluted setup ("complexity lawyers" of sorts) when a cleaner design would put them out of a job. Perhaps it is possible to throw bodies at the problem: if you have 500 rowers in your Roman-era ship, you never have to worry about the wind dying down (your integrity/safety). However, it is often not economical except in very critical apps (banking, medical, etc.). If the pre-motor ship absolutely must make a given trip on time regardless of the weather, then 500 rowers is indeed the right solution. However, it would bankrupt the economy if every ship did that instead of using sails.
I must admit to being curious as to exactly how thin tables are "hard to grok and visualize". Maybe for you they are. But big tables are like putting tons of facts together with one big conjunctive AND clause. (Santa is fat AND he's hungry AND he likes cookies AND he wears a big red suit AND he rides the sleigh AND he calls the reindeer AND he is generous AND he keeps a list AND carries presents AND he enslaves elves...). Further, it all visualizes as one HUMONGOUS table that is difficult to properly view, much less visualize.
As a compromise, what if the DB provides a "wide" table view to the users and DBA as a default, while the column-centric sub-tables are more or less hidden most of the time and are just an implementation detail? (A rough sketch of such a view appears below the replies.) And if it's mostly a matter of safety, why not use constraints and triggers instead of skinny tables? Maybe make the constraint/trigger language/protocol simpler for such purposes.
- Yes, why not invent something like CREATE VIEW and allow joins inside it so that one can denormalize at will... hey, databases already do that!!!
- That is backward in my opinion. It should start the other way around. Plus, I can't always control what the DBAs do.
- [Seconded. Skinny tables should be default. Wide 'views' can easily be implemented, and skinny tables don't have the OTHER problems associated with wide tables.]
--top
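For concreteness, here is a minimal sketch of the "wide view over skinny tables" compromise discussed above. The table, column, and view names are invented for illustration only, and the syntax may need adjusting for a particular DBMS:

  -- Skinny tables underneath; each holds one kind of fact about an employee.
  CREATE TABLE employee       (emp_id INTEGER PRIMARY KEY,
                               emp_name VARCHAR(80) NOT NULL);
  CREATE TABLE employee_phone (emp_id INTEGER PRIMARY KEY REFERENCES employee,
                               phone VARCHAR(20) NOT NULL);
  CREATE TABLE employee_dept  (emp_id INTEGER PRIMARY KEY REFERENCES employee,
                               dept VARCHAR(40) NOT NULL);

  -- The "wide" presentation that users and report writers see by default.
  CREATE VIEW employee_wide AS
    SELECT e.emp_id, e.emp_name, p.phone, d.dept
    FROM employee e
    LEFT JOIN employee_phone p ON p.emp_id = e.emp_id
    LEFT JOIN employee_dept  d ON d.emp_id = e.emp_id;

Whether such a view should be updatable, materialized, or generated automatically from the schema meta-data is exactly the kind of tooling question raised in the replies above.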
- {Top, I'm curious: What is it you do for a living? You have what I would consider to be unusual views on databases, and some curious priorities. Comments like "[t]rading visibility and simplicity for security/integrity-oriented bloat can be the wrong trade-off" I find literally shocking -- as if a doctor told me he still believed in treating syphilis with mercury. However, rather than get angry about it (again), I'd like to try to maintain some composure and attempt to determine where such views are sustainable. You wouldn't happen to be developing vertical market applications for a retail industry, would you? E.g., software for video rental stores? Years ago, I worked with some people in that domain who had surprisingly (to me) lax views on data integrity, and their clients didn't seem to mind. I, on the other hand, used to develop payroll, accounting, health records, and medical billing systems, so I suppose it's not surprising I have an almost diametrically opposite view on such things.} -- DaveVoorhis
- As already mentioned, I do believe in PickTheRightToolForTheJob. With critical systems such as medical and banking, the "overengineered" approach is indeed probably the way to go. I am not disputing that. You seem to like that kind of work, and thus gravitated to it. It's usually good money too. I, on the other hand, have gravitated toward custom data-analysis tools, or "interactive reporting systems", and data-extraction tools for spreadsheet generation/imports. These are for cases where the canned reports are not flexible enough. I've honed UI and programming approaches to allow more flexible reports on the cheap. The thing is, the users, usually domain analysts (like marketers or customer-pattern analysts) and management, do not quite know what they want. They only know that the canned reports are not flexible enough, and asking for custom queries to be written takes too long because the database staff is overworked. If the user does not quite know what they want, one needs to be able to experiment without breaking the bank. This has become my sub-niche. I also bang out small departmental CRUD web apps, the sort of thing MS-Access is often used for, but web-based. Generally I am automating something that people already do, but manually or semi-manually. Thus, even if it isn't perfect, it's usually about 10 times more reliable than the prior manual approach. The more critical it is, the more careful I am with it, and I factor that into the estimates I give. I am often who people turn to when the "BigIron people" are too swamped or give too high a cost estimate. I use meta techniques to deliver something almost as good as the "big team" approach, but for about 1/5 to 1/10th the cost. Most of the "bugs" in my apps are due to not understanding the domain (business rules) and NOT related to lack of type-checking or formal constraints. I could probably make more money being a BigIron developer, but I like my McGuyver? niche/role. It's fun to pull rabbits out of hats and surprise people. The other day I replaced an 80-grand system written in the mid 90's with an 8-grand one using off-the-shelf components. (True, technology improvements helped some, I must admit.) --top
- {Thank you. This is quite enlightening. I can certainly see how the focus of your domain (it's the reports, stupid!) could engender a difference in priorities from my domain (it's the database, stupid!)} -- DaveVoorhis
- I believe most people settle into specific areas and their biases are shaped by those areas. That's why there are often conflicts between DBAs and app developers (DbasGoneBad). If there were an army of DBAs to supply better views, then perhaps I wouldn't care as much about table design, because it wouldn't matter. But most companies don't want to hire in such a way, making table design matter for those concerned heavily with queries and the output. --top
- There does not need to be an army of DBAs to supply better views. Nor does there need to be an army of DBAs to normalize a table. A huge wide table is a bad view in itself; you cannot grasp it on a single screen as it gets wider and wider - just as in program code you don't throw everything into one single global variable. Things get modularized and split up with time. GetOverIt. And tools can be improved to show the wide view if you need to see the wide view. KeepAnOpenMind. If your current tools are so perfect, and there is no room for innovation.. then I'll ask: why argue? Everything is perfect. PeopleArgueToFindOut and trigger new innovation. Stop taking this as an attack and start learning.
- There are times I've split such stuff up, but it confused the users. As far as being too wide for the screen, the user can often select which columns they want and even the order they want them in. They are not bound to any internal grouping. I've learned some UI techniques that easily permit such. I haven't found any for "skinny tables" yet. I am not saying they don't exist, only that I have not discovered them yet. (And I do have "soft grouping" in the column pull-down lists.) --top
- The user has nothing to do with this - the developer maintaining the database is what we are discussing. What the user sees is whatever you allow the GUI to show. If you let the user have access to the DB directly, then he is more a developer, not a user. A user can see any view that you give him and is not tied to normalization. Designing the database poorly in order to suit the user makes as much sense as a farmer laying out his fields to suit how the customer likes the crops to grow. The customer doesn't care what the farmer's field looks like or how he organized his rows of vegetables and fruits - the customer cares what the end fruit looks like when it is handed to him at the store.
- I thought *you* were talking about the user, and I addressed that. Anyhow, if most consumers of the tables (be they developers or users) do NOT use the narrow form, then we waste a lot of time and joins on translations. It's premature classification.
- Straw man argument. Premature classification: assuming a wide table is what the customers need. You just do not understand normalization. The products you use have caused BrainDamage because joins are too hard, foreign keys are too hard, etc. It's not all your fault, some of it is the products' fault.
- That's not evidence, merely ArgumentFromIntimidation and YouJustDontGetIt-syndrome. Typical calling cards from those lacking real evidence to present. Spend more time shoring up your evidence instead of insulting me. --top
- //The evidence is provided in ConstantTable (your own fricking pattern), PDF files from experts, websites, etc - but as soon as we give them to you, you say "that is just academic, not evidence, a bunch of PDF files from well known authors. Not evidence!" Even when they include case studies (just as ConstantTable contains a case study, and just as the bitmask example on Fabian's site does). You do not have an ounce of evidence that wide tables are better in a database, nor have we seen PDF files, trusted websites, case studies, or well known experts proving that wide tables are better. Your own ConstantTable page is hypocrisy, and you don't even realize what normalization is.. probably not even realizing that your own ConstantTable page goes against your every stated view.//
[The only place I'd prefer lists to tables is when doing certain forms of systems programming where I need to make guarantees on access times for message queues and such. That said, I believe skinny tables are often good things because, fundamentally, DatabaseIsRepresenterOfFacts and the use of hundreds of 'skinny' tables allows you to have each row represent exactly one fact. This also makes it much easier to add tables and update facts because (a) you never need to mess around with any 'central' table to add new facts, and (b) you don't need to specify the maintenance of the dozens of columns you aren't updating, and (c) there are far fewer problems with properly describing integrity constraints.]
It will be easier to use relvars (tables) as replacements for lists if they are simply made easier to use than what is currently offered by SQL and TCP/IP connection-based databases. I find the main reason I don't use databases often enough is that they are very hard to access without a lot of upfront planning and work.. such as setting the server up, figuring out how to get the SQL sent over the connection, making the user accounts and passwords, etc. Sometimes we just need an embedded relvar similar to SQLite.. but we need a proper database and not a bastardized relational model such as SQLite with no types and no integrity. I suggest that this replacement for SQLite even be easier to use than SQLite, such as not requiring hard-to-do stuff like constructing SQL strings.. rather something like RelProject. The relational model hasn't been successful so far because it has always been hard to use, and frankly no products out there are even truly relational.. except some infamous ones that are approaching true relational, such as RelProject. I will continually harp on the fact that relational could be so much more widely used if we just offered some products that made it easy to use and easy to tap into. That is the main problem today.
[As far as 'simplicity' goes, a hundred skinny tables is far, FAR simpler to grok, view, secure (with per-table or per-fact policy), optimize, verify, AND manage (all at the same time) than is one big table with a hundred columns. Even top should know that a relational database already allows one to look at any 'view' one wishes, and any good RDBMS or TableBrowser can remember or suggest views, so visibility is an utterly fallacious excuse. I'm quite convinced that TopMind has never really grokked the nature of simplicity - not with the way he goes on and on declaring 'KISS' as an excuse to defend solutions that cause far greater complexity than do the alternatives. 'The Sun goes round the Earth' seems simpler than the Monty Python Galaxy Song, but it forces diligent observers (those who don't just wave away observations and blind themselves to all other evidence) to start AddingEpicycles to the motion of everything else. There is such a thing as essential complexity, or necessary complexity, and going simpler than that makes a notion 'simplistic' instead of 'simple'; simplistic solutions force everyone else to add epicycles. TopMind likes the simplistic. The EinsteinPrinciple reminds the wise: "As simple as possible but no simpler", but I doubt Top will wise up before he dies. So just take a warning: the moment Top starts waving his hands and throwing around the word 'KISS' (usually performed without justification, all the while dismissing other concerns and even insulting those who consider them relevant ("security/integrity-oriented bloat")), run. I should have; I think I become less intelligent and devolve into an angry ape whenever Top starts saying simplistic, simple-minded things in the name of 'keeping it simple'.]
- If you want me to "wise up", then SHOW ME realistic scenarios with realistic probabilities of occurrence, not chicken-little exaggerations, and properly address all the down-sides given. Name-calling is not a short-cut to doing real scenario-analysis. --top
- [Trying to show you anything has proved to be a stupid venture because YouCantLearnSomethingUntilYouAlreadyAlmostKnowIt and, frankly, (a) the depth of your ignorance is surprising to the point that I have often thought your statements to be malign rather than naive, and (b) your ignorance is willful and stubborn; I can point a top at knowledge but I can't make him think. You'll persist in your fallacy regardless. You also display an appalling amount of arrogance and hypocrisy; have you ever given 'realistic scenarios with realistic probabilities of occurrence' or 'properly addressed the downsides' to types, security, or integrity? If so I've never seen it - all I ever see from you is name-calling ("red e-tape", "bloat"), sometimes an anecdote (something you'd vociferously reject as evidence if anyone else were providing it) and the same sort of shrieking/whining you display here, demanding greater proof than you're ever willing to provide. So, no, I've given up caring whether you 'wise up'. I would much rather you go away.]
- Why, that's a wonderful description of YOUR own debate techniques. Except, that I don't claim any "one true way" that is universal and/or external to all intelligent minds. ProgrammingIsInTheMind (I didn't create that topic, BTW). AbsolutismHasGreaterBurdenOfProof. Why should I give an objective proof of something I never claimed was objective. If skinny tables fit your work-habits and psychology better, so be it. But don't over-extrapolate that to everyone. And, I did not start the name-calling. The person who used the accusation "lazy" did IIRC. And insulting an individual is a bigger WikiSin? than insulting an idea. --top
- You started the ConstantTable page, right? You are being a hypocrite by claiming wide tables are most often better, since you yourself use skinny tables.
- [Only the individual making a claim bears burden of proof. That means you, even if you aren't claiming a 'one true way', have greater burden of proof than any absolutist who is skeptical of a claim. And, should an absolutist make a claim and point you towards an 800-page BookStop that contains the proof, that DOES count for the absolutist. You've made conscious decisions to avoid reading, to thumb your nose at academia, to discard rigorous proof as 'MentalMasturbation', to dismiss concerns of others as irrelevant or bloat, to treat your opinion as though it had equal standing to any other fact, to avoid critically examining your own reasoning lest you discover flaws, and, generally, to condemn yourself to deep, abiding, and willful ignorance. You'll never get past it, top - not unless you take great efforts to change. Even if you write a book proffering your ideas, it will be met with the same derision you see on this wiki, and critics who have never even heard of you before will call you an unprofessional hack. You've grown so used to this treatment that you've started (in your extreme arrogance) thinking of yourself as some sort of genius pariah like Galileo, pushing unpopular ideas in an ObjectOriented world, thinking that you MUST be right - that your own ideas are only dismissed because they go against the 'fads' of the time. Sadly, you're no genius - you're just a pariah.]
- You don't even have a BookStop on your side. See "absolutism challenge" under AbsolutismHasGreaterBurdenOfProof. --top
- [You assume that you've made no 'absolute' claims. If you want proof regarding the reasons to choose skinny tables over wide ones, the paragraph below suffices. If you want proof of your own fallacy described above, cast aside everything you think you know and critically examine your own statements.]
- See PageAnchor: Claims Bee
[There are only a few (albeit significant) benefits to choosing one super-wide table. First, it makes queries a bit shorter - a simple select instead of a massive join. This is partially an accident of the query language; it would not be difficult to design query languages based more on DataLog that would be far better for working with 'skinny tables' (being designed for it). Second, the table will be nigh-guaranteed to possess exactly the same physical representation in the RDBMS as it possesses logical representation; this allows far easier complexity guarantees and reduces access costs. This is something that could be solved by allowing programmers to make explicit suggestions to the RDBMS optimizer. Finally, it's easy to remember; you don't need a smart browser because all you need is one word: a table name. The cost is, of course, the need to represent NULLs (which can be avoided entirely with skinny tables where each row-entry represents exactly one fact), the need to make 'special' changes (e.g. splitting off columns) the moment some columns can simultaneously carry multiple values, making any sort of automated physical-storage optimization far more difficult, making security of data (which is already hard) even more difficult because everyone needs access to the same table, making updates and inserts with the data-manipulation-language more difficult (because one must represent every column for both the input and output record), and even making maintenance more difficult (because changes to a big-arse table occur by adding and removing columns, forcing one to touch nearly all existing DML statements each time; with skinny tables one only needs to touch the DMLs for just that table, and even that event would be rarer since there is never cause to split a column off a table). Oh, and there are also vast hidden conceptual costs: the inability to maintain meta-data for individual facts (e.g. temporal databases, security policy, source or indications of how the fact was derived, degree of confidence) is a big one; it isn't something someone can even readily think about before moving to skinny tables - an example (among many one can find in science) of simplistic notions making for stupid people who can't easily have other good ideas.]
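To make the trade-off above concrete, here is a small sketch contrasting the two layouts. The table and column names are invented purely for illustration:

  -- One wide table: optional facts force NULLs, and a new kind of fact means
  -- ALTER TABLE plus revisiting every statement that lists all the columns.
  CREATE TABLE person_wide (
    person_id  INTEGER PRIMARY KEY,
    name       VARCHAR(80) NOT NULL,
    phone      VARCHAR(20),          -- NULL when unknown
    birth_date DATE                  -- NULL when unknown
  );

  -- Skinny tables: each row is exactly one fact, so there is nothing to leave NULL.
  CREATE TABLE person       (person_id INTEGER PRIMARY KEY);
  CREATE TABLE person_name  (person_id INTEGER PRIMARY KEY REFERENCES person,
                             name VARCHAR(80) NOT NULL);
  CREATE TABLE person_phone (person_id INTEGER PRIMARY KEY REFERENCES person,
                             phone VARCHAR(20) NOT NULL);
  CREATE TABLE person_birth (person_id INTEGER PRIMARY KEY REFERENCES person,
                             birth_date DATE NOT NULL);
  -- Adding a new kind of fact later is just another CREATE TABLE; existing
  -- insert/update statements against the other tables are untouched.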
In visual tools, why couldn't the foreign keys be visible as if it were a wide table? Example: the data is duplicated in the visual view.. each table shows you a view of the entire wide table.. but in actuality, it is really a bunch of thin tables cleverly put together at the system level. Also, how to display normalization with the values plunked into a wide view, from the other normalized data in other tables? I.e., when you map "M" to "Monday" in another table, is Monday a piece of text, an enumeration, just a dumb label.. hmmm. Interesting to think about. In fact I'm going to load up MySQL Query Browser right now and see what it offers for different views with foreign keys - but I doubt it will have views for things like mapping a normalized table to a wide table - it may just have foreign-key views, or maybe not even that.
[Answer: there is no reason that a TableBrowser couldn't take one look at the table meta-data and figure out how tables relate and allow views as though it were one wide table. Of course it would require a smarter browser than some we've seen. Good of you to not let TopMind's 'simple'-minded excuses interfere with your own thinking.]
Burgerkinging is one of the main advantages of relational: "See It Your Way". Although existing RDBMSes may not support such very well, the best compromise may be to make schema systems more flexible (table-driven) so that one can "query the schema" to present and find the view that they want, regardless of the physical layout underneath. --top
{OMG!!11!! I actually agree with this.} -- DaveVoorhis
When one is overly excited about TopMind finally showing some signs of agreement, we accidentally type out the number 1 on our keyboard instead of the exclamation mark. PwnAge.
Functional Mindset
What a bunch of hype and fraud:
Haskell is so much better because it doesn't have side effects, therefore more reliable than other programming languages.
- Ah, the ol' side-effects purity enthusiasts (a euphemism for a stronger title that I hold back). Haven't seen them around here lately. We're due for a storm.
- There are a great number of advantages that are gained with ReferentialTransparency - advantages that apply to optimizations, correctness reasoning, security, etc. And if all you've experienced is procedural programming, you can learn a lot by learning Haskell; it has been said that it isn't worth learning a language if it doesn't teach you to think in a new way, and Haskell will do that. That said, ReferentialTransparency and being side-effect free isn't in and of itself a justification to claim Haskell is 'better'.
- Functional languages include a procedural/imperative nature in them, even though no FunctionalWeenie will admit it. See also http://z505.com/cgi-bin/qkcont/qkcont.cgi?p=Functional-Programming-Is-Procedural
Practical
FunctionalProgramming is still in its infancy. There is a lot of research going on in mapping functional paradigms down to the relational database layer. A bigger advantage of functional languages is analyzable programs. You should be able to transform a succinct functional map or filter operation into an optimized DB query behind the scenes. The managed side effects of monads (OnMonads) help make the programs tractable to analysis.
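As a rough, hypothetical illustration of that kind of translation (the collection, field names, and status value are all invented): a pipeline such as map(orderTotal, filter(isOpen, orders)) could in principle be pushed down to the database as something like:

  -- Hypothetical push-down of filter(isOpen, orders) followed by map(orderTotal):
  SELECT order_total
  FROM orders
  WHERE status = 'open';

The point is only that a pure, analyzable expression gives the compiler or library enough information to generate such a query; the actual mechanism would depend on the language and its database bindings.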
Lisp is better because everything else is based on lisp.
- Lisp is better because it's older than every other 'extant' language and already has the features we'll be integrating into mainstream languages ten years from today.
Everything Is A File Mindset
It is a useful abstraction to consider some things files, but not everything should be viewed as a file. Some Unix gurus such as RobPike consider nearly everything a file - and this can limit their understanding of the relational model or other views. Not everything should be a file. See also the SawzallLanguage and the PDF file linked on that page.
Math Mindset
Math is fraud because ZeroIsWrong and one group of apples plus one group of apples can equal one group of apples, not two.
Physicists and scientists sometimes have quibbles and arguments with mathematicians about issues in logic and in the world.
I recall my physics teacher once getting into a great argument with my math teacher over what seemed like such a minor detail regarding a discrepancy between math and physics. Unfortunately I can't remember the exact topic they were arguing about - but the physics teacher came in, sat down in our math class, and argued with the math teacher for about an hour straight (without a resolution at the end).
PageAnchor: Claims Bee
1. Stuff that's hard to grok and visualize creates its OWN problems.
- If I'm looking for a list of available bitmask options, they are more visible and easier to grok if they are in their own table or their own enumeration type. They are harder to find if they are hidden away in some huge wide table, without any clear specifications and constraints telling me WHAT bitmasks are available (i.e. stored in the application logic).
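- For instance, a minimal sketch (with invented names) of what "in their own table" could look like - the available options become rows you can query, and a foreign key states exactly which values are allowed:

  -- Hypothetical lookup table that makes the available options visible and enforceable.
  CREATE TABLE access_option (
    option_code CHAR(1) PRIMARY KEY,        -- e.g. 'R', 'W', 'X'
    description VARCHAR(40) NOT NULL
  );
  CREATE TABLE user_access (
    user_id     INTEGER NOT NULL,
    option_code CHAR(1) NOT NULL REFERENCES access_option,
    PRIMARY KEY (user_id, option_code)
  );
  -- "What options exist?" is now a query, not a hunt through application code:
  SELECT option_code, description FROM access_option;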
Don't you agree with this in general? I find it strange you would challenge this. I want you to confirm that you disagree with it.
- Wide tables are hard to grok and visualize when they get fat.. that is why Excel users do not continually scroll left to right with huge spreadsheets but instead start making separate workbooks and separate sheets for storing lists of different stuff. When it becomes a huge mess of visual basic macros and cross references, it blows up. Blame Excel, not normalization.
I often find the things you disagree with entirely strange, to the point I almost believe you malicious. For the sake of this argument, I am not accepting your claim on your word alone. After all, you've claimed elsewhere (AbsolutismHasGreaterBurdenOfProof) that psychology arguments and assumptions don't count.
I am not going to defend something that you do not explicitly disagree with. There's enough on our plates such that we shouldn't bother arguing for the sake of arguing. Thus, please commit one way or the other.
I honestly believe that seeing a proper defense from you for anything you claim is too much to expect. But feel free to waffle longer and prove it.
You are trying to reverse the burden here. It is fair and reasonable and text-saving to only defend something that you explicitly disagree with. I am not going to defend the existence of the chair just for the sheer hell of it. If you feel otherwise, then this experiment is over and I consider you unreasonable. --top
Hah! No, Burden of Proof IS on you to defend your own statements, top, so there is no 'reversal' involved. It is fair and reasonable that anyone can, for any reason, ask that you defend statements that you make. 'Disagreement' is only among the reasons - skepticism, lack of comprehension, distrust, etc. are also potential reasons. Call me a distrustful skeptic if you need a reason to defend your statement. I can, after all, think of cases where I believe 'difficult to grok and visualize' is par for the course, but where it is overall 'simple' and solves problems - negative numbers would be one example, and monads would be another.
The fact that YOU think that I should prove something about my motives, or even offer or explain disagreement, makes it seem to me that YOU are the one attempting to shift Burden of Proof. It would fit what I believe to be your normal pattern: say whatever you want, make false promises that you'll justify it should anyone ask, when called to do so resist (as you are doing here) until you can get someone to explicitly disagree, when someone disagrees ask them to explain their disagreement, then force them to defend their disagreement and NEVER get around to actually justifying your own claim. Even better if someone offers a counter-example up front! then you can avoid most of the steps and jump straight to attacking. Are you planning to do this, consciously or unconsciously, in this case?
I stand behind my request as fair and reasonable and that it keeps wiki cleaner. Would other WikiZens like to comment on this stance? --top
Damn, if you aren't unreasonable. You even need second opinions before you become willing to justify your claims to a skeptic? I think you just seek excuse to dodge your burden of proof when you can't figure out how to shift it. Would other WikiZens like to comment on this stance?
Would you rather we stop at this stalemate? Fine by me. Admit it dude, it's utterly ridiculous to waste time justifying something that you don't even disagree with. It's dumb. Think about it.
I've already told you I don't agree with what you say, though it appears you require some remedial lessons in reading comprehension. And I think you've now proven yourself to be a ridiculously unreliable flake - you'd much rather waste time attempting to get a justification to justify things you should have justified in the first place, or waste time quibbling over slights against you or how 'rude' people might be, or demanding people "ask nice" before you provide a rational argument for an irrational claim, than you'd ever like to spend being productive. That's dumb. Think about it.
PageAnchor: "Top is Done" (hopefully in more ways than one)
Rational or not, normal human beings typically put up with only so much personal abuse before they shut out the perpetrator. In your very paragraph above you both accuse me of "wast[ing] time quibbling over slights against you" and ALSO call me an "unreliable flake". I am not going to put up with this any more. I find you too difficult a person to debate with. You have an odd sense of web etiquette that seems foreign to me and are anal about the wrong things. I'm done with this. I've grown a thick skin over the years taking on sacred cows, but not thick enough yet to deal with the likes of you. Maybe I'll cool down in a few months and revisit the issue. In the meantime, go harass somebody else with your excessive name-calling and zealotry. --top
- I humbly suggest you reconsider your approach when "taking on sacred cows." If you're going to do it well -- and by "well" I mean doing it in a way that is not going to draw reactionary responses and abuse -- please consider presenting your views as cogent essays supported by extensive references, rational arguments (in the academic sense, not the quibble sense) using accepted terminology, comprehensive experimental results, and clear case studies and examples. This is especially necessary if you're going to tackle the generally-accepted wisdom, e.g., you're going to counter the views that well-normalised databases are inherently better than non-normalised or that object-oriented programming is better than procedural programming. It's worth applying the maxim of skeptics everywhere: extraordinary claims require extraordinary evidence. Claims that run counter to generally-accepted wisdom (whether said "wisdom" is correct or not is another matter) should always be considered extraordinary, whether you regard them as self-evident or not. Do this, and you will find acceptance for your ideas. For inspiration, it's worth noting that the cause of stomach ulcers -- which generally-accepted wisdom held was stress and/or excess stomach acid -- turned out to be mainly a result of a bacterial infection. The doctor who discovered this had to spend years developing, presenting, and defending his arguments against the generally-accepted "wisdom", despite what is now considered obvious evidence. (See http://www.vianet.net.au/~bjmrshll/features2.html) For better or worse, this is the way of science, and you'll do far better to embrace it. -- DaveVoorhis
- Science applies to everything, including "generally accepted wisdom". Otherwise, fads would be locked in forever. Being popular is not a Get-Out-Of-Science-Free card. By the way, generally accepted wisdom is to normalize to 3rd normal form. Any normalizing beyond mere duplication removal has always been contentious. Also note that I will not prove that sacred cow X is not objectively worse, because that is not my claim. I believe our selected tools/languages/paradigms/practices are largely personal preferences. I cannot "prove" they are personal preferences, but the fact that nobody has proven any of these objectively "better" outside of machine performance lends some credence to this. Software design is similar to planning the layout of our desk (phone, papers, folders, etc.) [DemandForEvidence - this is an absolute claim.]. Math and science cannot dictate the "proper" way without taking human and personal psychology into account [DemandForEvidence - this is an absolute claim]. If you disagree, please supply the info requested under AbsolutismHasGreaterBurdenOfProof [mark: shifting BurdenOfProof illegally]. Remember: equal or unknown until shown otherwise [mark: hypocrisy].
- I essentially agree. However, it changes nothing of what I wrote. My advice stands; ignore it at your own peril (or abuse...) -- DV
- But you imply the burden-of-evidence is on me. It is not. I can't objectively disprove something that is not objective to begin with. Think about it. You are assuming this is symmetrical. It is not. The burden is on the absolutism claimers. It is contradictory to objectively prove subjectivity. It's like asking me to prove that unicorns don't exist just because some raving unicorn fanatic claims they do. And, what is your frame of reference for "extraordinary claims"? --top
- You have a responsibility to prove (or be willing to prove) any claim you make in the context of a reasonable debate. The corollary to that is you should not make any claims that you cannot or are unwilling to prove. BurdenOfProof IS on you, guaranteed, the moment you make a claim - just because you logically and physically CANNOT fulfill BurdenOfProof is an INVALID excuse for not fulfilling it. If you won't prove a claim, either reduce it (e.g. to a claim of belief) or recant it. You seem to have some serious misunderstandings about BurdenOfProof. If you say "unicorns don't exist" while arguing with a bunch of unicorn fanatics, BurdenOfProof is on you - even if it is impossible to fulfill. If you say "I have seen no convincing evidence that unicorns exist", then BurdenOfProof is still on you... but only to prove that you "have seen no convincing evidence" - which may require you to defend your position by examining what they have thus far provided as evidence and explaining why it is not convincing. If you say "I remain skeptical that unicorns exist", then you don't need to prove anything - in reasonable debate (at least where one isn't playing lying games), your statements about your own immediate thoughts are considered true since there is no reasonable way to contradict them. This may seem anal, but keep in mind that the correct interpretation of most English statements is the absolute one. Only by explicitly adding a 'this is my opinion' clause do you get around that, but even 'opinions' need reasons (only statements of belief do not). And, frankly, reasonable debates aren't about bandying beliefs about - they're about stating opinions and arguing reasons for holding them. And DV is right about the 'conventional wisdom': in practice "equal or unknown until shown otherwise" is NOT true, no matter what your ideals be - if you challenge conventional wisdom, you should consider it to be 'established fact' that you have the extra burden of toppling. -- db
- Re: "You have a responsibility to prove (or be willing to prove) any claim you make in the context of a reasonable debate." - No I don't. Often it is informal talk and illustrations, NOT formal claims.
- That is irrelevant, Top. If you make ANY claim, formal or not, you have a responsibility to support it. This is the fundamental principle of debate. If it isn't true for you, then it isn't true for anyone - and anyone could make any claim they want without ever supporting it and call it "reasonable debate". And that is very unreasonable. It is true that illustrations aren't quite the same as claims (as they may be hypothetical), but saying that you don't have to fulfill BurdenOfProof for claims you make simply because "often it is informal talk and illustrations" is COMPLETELY irrational - in particular, non-sequitur - a bit like saying you feel no need to wipe before you flush because geese migrate south in the winter.
- Does this curious rule apply to everybody on wiki? If so, why are you only policing me? --top
- It applies to me. It applies to you. It applies to everybody on this wiki. It applies to everybody in this world. I can't be bothered to police everybody, so I'm policing the one ASSH^H^H^H^H person on this wiki who thinks he's above reasonable debate. Frankly, you're the only person I interact with on an irregular basis who thinks this a "curious rule". It's as though you're standing there saying, "Modus Ponens? feh. Just say whatever you darn well please then shift BurdenOfProof like I do. Heheheh. Pssst - I'll even give ya' a little advice: create a bunch of random topics, support them with your word, invective, implied insult, and HotAir? alone. If unsure, use 'I think' without offering reasons as though statements of faith were meaningful in debate. On each page demand that anyone who opposes you convince you that you are wrong, then you can use the PageName as though it were gospel truth everywhere else. When someone scoffs, demand: 'If you disagree, please supply the info requested under [strike: AbsolutismHasGreaterBurdenOfProof, insert: PageName of choice].'"
- As it stands, there is no known objective proof in favor of, say, skinny tables. If you have some, please present it. Otherwise, it shouldn't be preached as gospel truth. If you claim or imply it is objective truth, then you are obligated to show it.
- What the hell did I say that had anything to do with skinny tables in this last paragraph, Top? No... IIRC, I was informing you of your BurdenOfProof. Are you attempting some silly diversionary tactic? And yes, if I make a claim about skinny tables I do have BurdenOfProof for my claim - to provide supporting arguments. Indeed, where I did make a claim, I did provide supporting arguments - something you've never bothered attacking (or probably even reading).
- I don't claim my fav way to design tables is objectively better. I only know what works best for me based on experience (and I've seen some ugly skinny table designs), and a lot of it is subjective psychology. Things that model my head are easier for my head to work with. Thus, there is no objective claim from this statement. I cannot prove that it fits my personal psychology better, because that is by definition "subjective" and subjective is the opposite of objective. Systems just "feel" easier to change if you don't hard-wire specific classifications in up front. I've never felt it improving reliability to do such. I find it easier to change meta-data than to change code design or base schema "shape". I cannot quantify this "feel". --top
- The objective "proof" of "skinny tables" is inherent in the definition of FifthNormalForm? and the anomalies it avoids. <5NF == update anomalies; >=5NF == no update anomalies. QED. Extend that to SixthNormalForm? if you're using a temporal database system... -- DV
- One objective proof of "skinny tables" is that you never need to represent a NULL, which isn't part of DatabaseIsRepresenterOfFacts and causes problems described in a great many pages. Another objective proof of "skinny tables" is their ability to readily support considerably more meta-data for each fact: temporal databases (as DV mentions - time-period valid, time-period 'in database') but also secured databases (capability requirements at the per-fact level), databases that represent confidence, explicit knowledge of negation (that a fact is not true) allowing one to shift at will to the broader OpenWorldAssumption, and the ability to represent the source of each fact (e.g. who entered it, or by which rules it was derived) directly in the database - these things simply aren't feasible in a wide-table system because one has an agglomeration of facts instead of one fact per row. Another objective proof is code-change patterns: skinny tables reduce CouplingAndCohesion by decoupling only incidentally related facts; a change where, in the one-big-wide-table system, you would need to add a column and then update EVERY insert/update query that touches your database requires none of this extra labor for skinny tables - only deciding to change 'types' or integrity constraints, delete, or rename a skinny table requires touching insert/update code. Another proof is the greater simplicity regarding places where facts can take multiple values simultaneously: multi-value facts aren't some "hacked", epicycle-laden set of "special exceptions" that get torn off of "the one true table" into unique many<->many tables; instead, every individual table is simply described with that table's choice of integrity constraints, and there are no 'special exceptions'. Another proof is that 'skinny tables' become easier for programmers to add, largely as a consequence of the reduced CouplingAndCohesion, reducing the bottleneck that is the DBA to simple policy. Another proof is that automated optimization of physical storage is simpler to implement for skinny tables - trivially, of course, one can optimize further since there are more tables to combine or potentially denormalize in physical storage (much like the optimizations one can perform for temporary tables and views); and since all tables share a similar structure, it only takes a few rules to create decent optimizations, not only for the pre-existing tables but also for any tables programmers add. Another proof is that update transactions can be made considerably more optimal under concurrent load, because the need for locks becomes considerably finer-grained than in the one-big-table approach. One more proof of "skinny tables" is that they are easier to describe logically, as views, as queries, and as tabular data (all at the same time), as is demonstrated in Prolog, Datalog, Mercury, and various other logic programming languages - there are fewer distinctions or specializations, which, again, makes things simpler. But you know what? It really only takes one proof. Only "skinny tables" follow the KISS principle - they keep the database, the implementation, the query language, and even the optimizations homogeneous and simple. -- db
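- As a rough, hypothetical sketch of the "meta-data per fact" point above (temporal validity, source, confidence), with all names invented for illustration:

  -- A skinny fact table where every row carries its own meta-data.
  CREATE TABLE employee_salary (
    emp_id      INTEGER NOT NULL,
    salary      DECIMAL(10,2) NOT NULL,
    valid_from  DATE NOT NULL,             -- temporal: when the fact became true
    valid_to    DATE NOT NULL,             -- a far-future date stands in for "still current"
    recorded_by VARCHAR(40) NOT NULL,      -- source: who or what asserted the fact
    confidence  REAL NOT NULL DEFAULT 1.0, -- degree of belief in the fact
    PRIMARY KEY (emp_id, valid_from)
  );
  -- Attaching the same per-fact columns to every column of one wide table would be
  -- awkward at best; here each kind of fact gets its meta-data independently.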
- That is not "proof" but rather "evidence". There are complex trade-off and most of these have already been discussed and I disagreed with much of your assessment due to counter-factors, at least as an absolute. Sometimes it net helps to group stuff sometimes not. For example, the grouping may not fit how its actually used in the domain. In that case, it could be more tables updated per transaction rather than fewer. I thought you guys agreed that poorly-done skinny tables are a bad thing. Perhaps these should be visited one at a time rather than addressed 5 levels deep in an already-too-long topic. We keep scattering the same arguments in different places. I am not going to blame anybody at this point because I'm trying to be nice, pointing out that we need to work on organizing this debate together. --top
- Heh. If you're going to try your hand at word-games, you had best know that one definition for evidence is: "that which tends to prove or disprove something; ground for belief; proof". Convincing evidence towards a particular conclusion IS proof even in the less word-gamey of senses. I am fairly well convinced by each of the above. Your "complex trade-offs" have, indeed, been discussed, and I didn't find your counter-factors particularly convincing. I noted the few advantages of single tables generally come down to: (a) just one word to remember (ignoring columns and broken off many<->many tables and the panoply of little exceptions), and (b) better and more predictable optimizations with regards to modern RDBMSs, and (c) shorter queries. I consider (a) to be silly given the potential for browsing RDBMS meta-data. I consider (c) to be invalid given potential for simply defining a few common views. And for (b), I don't feel like I should limit my designs based on the modern RDBMS or I'll enter that same shitty little spiral you are in: people who design for the RDBMS and PrematureOptimization rather than designing on the natural curve of the DataModel and forcing RDBMSs to re-design themselves for you. I'll also note that "more tables updated per transaction" really isn't a problem: (1) an RDBMS can easily be optimized for updating many skinny tables within transactions, and (2) more tables means, despite potential for more tables per transaction, considerable potential for greater concurrency because one isn't bottlenecked by the one-true-table; even optimistic transactions become more feasible. I'll also note that skinny tables leads far more readily and naturally to CollectionOrientedProgramming, where multiple in-application databases are created, updated, read, and deleted by use of transactions. And while there are some bad ways to do skinny tables, and 'poorly done' anything is a bad thing, there are a great many more ways to poorly do the one-big-table (especially when dealing with nulls and one->many relationships). -- db
- (Also, you should resist use of that phrase - simply tossing an unjustified opinion on the table and saying "equal or unknown until shown otherwise" is fine for brainstorming, but it is exceedingly poor debate etiquette because it is no stronger and no more meaningful than any random stranger saying "Is Not! Equal or unknown until shown otherwise".) -- db
- The burden-of-evidence is most certainly on you when you present views contrary to generally-accepted wisdom. That is because generally-accepted wisdom already has established evidence, or is considered self-evident. This, for right or for wrong, is how science works. You may not like it, but you can't escape it. See my example of stomach ulcers, above. Read the provided link. To use your example, the generally-accepted wisdom is that unicorns don't exist. Your claims, in effect, are that they do. Therefore, the burden of proof is on you. If the generally-accepted wisdom was that unicorns do exist and you claim they don't, it would be exactly the same -- the burden of proof would be on you. It has nothing to do with whether unicorns actually exist or not. And, my frame of reference for "extraordinary claims" is generally-accepted wisdom. It can change over time; indeed, I remember when the notion that object oriented programming was somehow "good" was an extraordinary claim. Now the claim that it isn't "good" is an extraordinary claim. The frame of reference has shifted over the last twenty-five years, because generally-accepted wisdom has changed. -- DV
- ArgumentByTheMasses is recognized as a fallacy in debate-land, and also to some extent in science (although "peer review" is a form of the same thing). Popular wisdom is that normalization to third normal form is sufficient and that scripting/dynamic languages have their uses. It also dictates that people like using canned objects much more than they like building their own. Be careful what you ask for, guys. --top
- All true, but keep in mind that what is "popular wisdom" in the general IT world may be somewhat different from "popular wisdom" here on WardsWiki. Generally-accepted wisdom is context-sensitive. And it changes nothing of what I wrote. You can continue your current approach and endure continued abuse, or you can change your approach and gain support for your views. -- DV
- [The masses are not using the relational model, and that is the problem. The masses are not arguing for the relational model either; most of them argue the way they do on Fabian's site (clueless about relations). The masses are using OOP and the simple wide-table model. Check out FabianPascal's site for what the masses argue.]
- As a DevilsAdvocate generally agreeing with DV, I'll bite this time and argue a bit. The relational model has been far more thoroughly proven than the OOP model, though, DV. I consider OOP much like XML: just because XML (or OOP) is popular does not automatically make it sound. The problem with OOP is that no one knows what it is; see ObjectOriented. Messages were supposed to be the basis of OOP, yet most OOP languages stray far, far away from messages and in many cases don't involve them at all. The relational model, however, is not strayed far from when people speak of it (when they speak of the SQL products, they are speaking of a partial relational model). No one here can clearly point to what OOP really is, but we can clearly point to what the relational model is. The original inventor of OOP, AlanKay, claimed it was about messages, and I'll repeat that most OOP experts today, like XML experts, haven't a fricking clue. So I don't think OOP has proven itself as rigorously as the relational model has. With OOP the problem is not the usefulness of its thinking - some of it is useful - the problem is that OOP is not clearly defined the way the relational model is. OOP has a lot of the problems XML has: hype, unfounded claims, etc.
- My intent in mentioning OOP was not to defend it, but to make a point using one of Top's favourite topics in hopes it would strike a resonant chord with him. Personally, I use OOP where it seems appropriate, and don't use OOP where it doesn't seem appropriate -- much as I use a hammer on nails and a screwdriver on screws. I treat OOP as a tool, not a philosophy. -- DV
- Sounds like good advice to me. If TopMind wishes to know which sort of zealot I am - I'm a proper-reasoning zealot: anything that looks like sophistry is vile and evil in my eyes. I sometimes believe we should stick to old traditions and put sophists on trial, then make them drink hemlock, their crime being the corruption of minds, but I also happen to respect freedom of speech. If TopMind wishes to "take on sacred cows" he had better provide proper justifications for his claims, do so up front, and never demand particular motives from anyone asking him a fair question - which means any question regarding his argument. One can reasonably ask a questioner about his motives simply to gain a better idea of what sort of answer he is looking for, but demanding certain motives is just a form of hedging. Dodging and hedging in any academic argument are forms of sophistry (and if you study sophistry, you can learn to do them very well); sophistry is only productive in academic and scientific work when one's goal is fraud - a sign of a true FraudulentMindset. As for why I often harass TopMind: the vast majority of TopMind's arguments and responses just happen to look (to any trained observer) a great deal like unsupported opinion (which is a problem, because expert opinions must be supported by reasoning) followed by pointless dodging and sophistry. AND they happen to be in subjects I'm interested in, on venues I peruse, and generally contradict what I consider to be well-established (and often well-justified) wisdom, or even rigorously established fact (esp. in language and typing issues). Extraordinary claims require extraordinary evidence, and any attempt to dodge WILL get you shot down like a clay pigeon.
2. Trading visibility and simplicity for security/integrity-oriented bloat can be the wrong trade-off. It just trades one group of problems for others.
First, is there ever any tradeoff between "visibility and simplicity" and security/integrity? Is security/integrity always free?
- Visibility and simplicity increase when you don't have to reinvent security and integrity yourself; instead, let the system automate them when they are already built in. An example is URL injection: is it simpler to let every website check for URL injection itself using manual, error-prone human checks? Or is it better to provide secure, integrity-preserving functions like getUrlVarAsInt and getUrlVarHtmlFiltered (as opposed to getBlobOfInsecureText, forgetToFilterMySelf, insertIntoLooseDatabase, getWebsiteHackedInto, dieSadly)? Never checking incoming URL variables for malicious injection may make your code simpler - but so will giving your developers tools that are secure and integrity-preserving. Automatically taking care of security, or providing ways to easily check and verify it, reduces code bloat. Real-world experience speaking here. Writing manual checks and implementing all security and integrity yourself causes code bloat and creates complexity.
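A minimal sketch of the kind of helpers being described, assuming hypothetical Python analogues of getUrlVarAsInt and getUrlVarHtmlFiltered - the names and exact behavior are illustrative, not any particular framework's API:
 # Hypothetical helpers: callers get a validated int or pre-escaped text,
 # never the raw, unchecked query-string value.
 from html import escape
 from urllib.parse import parse_qs

 def get_url_var_as_int(query_string, name, default=0):
     """Return the named query variable as an int, or the default when it is
     missing or not a well-formed integer."""
     values = parse_qs(query_string).get(name, [])
     try:
         return int(values[0])
     except (IndexError, ValueError):
         return default

 def get_url_var_html_filtered(query_string, name):
     """Return the named query variable with HTML metacharacters escaped,
     so echoing it into a page cannot inject markup or script."""
     values = parse_qs(query_string).get(name, [""])
     return escape(values[0])

 qs = "page=2&comment=<script>alert(1)</script>"
 print(get_url_var_as_int(qs, "page"))            # 2
 print(get_url_var_html_filtered(qs, "comment"))  # &lt;script&gt;alert(1)&lt;/script&gt;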
Are you telling me that even you don't understand what you said? Wow, TopMind. And no, security/integrity aren't generally free (security, correctness, optimization, and reflection penetrate everything), but calling them 'bloat' will be rejected by most people, including me. So give evidence of where it is the wrong trade-off.
3. [Trading visibility and simplicity for security/integrity] can create conceptual mistakes that a clear system may have prevented,
- Building in security and integrity can make the system clearer than letting the programmer manually do it himself everywhere.
4. Conceptual mistakes are just as bad as security/integrity/type errors, and may even cause them.
- Conceptual mistakes can be caused by getting your loop range wrong and going out of bounds (not knowing exactly what the algorithm is doing), where again an integrity check - range checking - would help you. Another conceptual mistake is simply passing the wrong thing into a function, such as a value of the wrong type: whoops, we missed the whole concept of the function and passed the wrong thing in. Conceptual mistakes can be caused by a lot of things.
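A minimal sketch of those two examples (the function and data are invented): a range check catches the out-of-bounds index, and a type check catches the wrong-thing-passed-in mistake, before either can quietly produce garbage.
 # Invented example: integrity checks surface conceptual mistakes loudly.
 def nth_reading(readings, index):
     if not isinstance(index, int):
         raise TypeError("index must be an int, got %s" % type(index).__name__)
     if not 0 <= index < len(readings):
         raise IndexError("index %d out of range 0..%d" % (index, len(readings) - 1))
     return readings[index]

 data = [1.0, 2.5, 3.7]
 print(nth_reading(data, 2))    # 3.7
 # nth_reading(data, 3)         # IndexError: the off-by-one is caught at once
 # nth_reading(data, "2")       # TypeError: the wrong-type mistake is caught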
Most rational developers will agree that conceptual errors can create security/integrity/type errors. I also find it odd that you challenge this. I want you to confirm that you disagree with it.
- Errors are caused by lots of things, like not having any concept of what a type is.
I don't think you're in a position to make judgements about "most rational developers", as I don't think you're particularly rational. Can you define for me what a 'conceptual error' is? And can you prove to me that they are 'just as bad' as security/integrity/type errors? Depending on the definition, I might concede that one can cause the other, but I might deny the whole 'can create conceptual mistakes' point.
5. Trading KISS for safety may result in neither [?]
- Having safety may make it KISS. Example: both OpenBSD and Powtils are considered simple and have lots of security.
- A "root" tool generally needs much more care than end-user tools, especially custom ones. Comparing the "prevention" value at the bottom of the stack to the top is apples to oranges. See BottomLayersNeedMoreCare.
Is this about equivalent to saying, "And if I flip a coin a million times, it may land all heads!"?
AprilZeroEight