A contentious theme in many debates is what kind of evidence is accepted and how much weight it is given. The evidence can be roughly partitioned into "how" mechanisms and "why" consequences, either of which may be subject to measurement or precise qualification. Examples:
The How
- Write Code, Get Started, JustDoIt - fundamental. If we could just 'think' software products into existence, we'd be out of jobs... and there'd be a high risk that at least some of the software products would be sentient and want to kill us.
- agile methodologies, SpikeSolution, ExtremeProgramming, etc. emphasize just how important this step is.
- beware that metrics for raw code written are typically less valuable than feature throughput at some maintained level of correctness and reliability.
- Use of typing and type-checking to obtain proof of safety and resist regression.
- Use of unit-tests to obtain confidence in correctness (a minimal sketch combining typing and a unit-test follows this list).
- Modularity of code, to improve reuse and 'grokkability', and also to make other 'how' factors like unit-testing easier.
- Do the RightThing the first time - a 'how' not available in all fields. Avoids a great deal of rework.
- Simplicity - a property of a component in a model or design: it contains the necessary or EssentialComplexity so as not to burden the users of the component, while simultaneously avoiding any additional complexity. This relates to the EinsteinPrinciple: as simple as possible, but no simpler. Both halves of that principle are equally important. The goals of simplicity are to reduce maintenance costs, to make correctness more provable, and sometimes to increase 'grokkability'.
- There is such a thing as too simple. A component is too simple if a significant portion of the distinct components that interact with it in the overall system must add similar or identical forms of complexity to make up for its deficiencies - such as carrying around state, re-implementing parsers and constraint checkers per application, or putting 'if NULL' checks on every statement. In these cases, that complexity should have been part of the component or model in question. The word for too-simple things is "simplistic". One might say that simplistic approaches cause users of the model to start AddingEpicycles.
- It should be noted that simplicity DOES NOT offer "grokkability". While that may be an aim of simplicity, it usually fails unless the simplicity is of a form that users already grok. The problem is that, once people grok a simple concept, it seems really easy to grok; at some point the concept 'clicks', and from that day forward they JustGetIt?, and, short of brain damage or a medical condition, they won't forget it -- much like riding a bike. We don't remember how difficult we found counting, negative numbers, fractions, functions, recursion, or the concepts of propositional and predicate logic. That doesn't mean they were easy to grok; it only means that the brain forgets the struggle to acquire such 'simple' concepts when the memory of that struggle isn't bound up with having to memorize vast sums of raw data. People remember that learning organic chemistry or calculus is difficult because they do it in part by memorizing a large number of formulas, but only teachers can truly appreciate how difficult it is for even the brightest people to grasp advanced concepts of symbolic logic and algorithmic performance. That said, many people even on this WikiWiki have difficulty grasping monads and the basics of CategoryTheory, both of which are extremely simple as concepts. In fact, I suspect that all cases of true simplicity are like the GameOfGo - extremely simple to define (e.g. an accurate description of type-theory or category-theory or set-theory or monads can often be done in less than a page), rather difficult and slippery to grok (often requiring enough effort and play with the idea to earn a 'eureka' moment), and a lifetime to master... and anyone who tells you different is trying to sell you something. [EditHint: This should probably be moved to another page, perhaps just keeping the bold declaration and a link. Perhaps 'ToGrokTheSimple?' as in "It is difficult ToGrokTheSimple?"]
- Personally, I understood the reasons for Monads and Haskell as soon as I heard a recorded talk by the inventor. Before hearing that talk, I didn't understand their purpose simply because no one had explained it in plain English (all I heard were BuzzPhrases, and Modularity wasn't really mentioned). Most likely the problem stems from people who try to describe Monads but aren't capable of describing them in plain English (whereas the inventor of Haskell was able to, in his talk). If one cannot understand something as simple as modularity and Monads, it may not be the fault of Monads or Modularity - it may be that we can blame the person who did not describe them clearly and concisely. Whether or not Monads and Haskell in practice actually live up to their claims is another story (for example, I'd like to see some real-world, large, complex systems using hundreds of processors with them).
- Understanding the reasons for X doesn't mean understanding X. Understanding the reasons for accounting doesn't mean you understand accounting. Understanding the reasons for statistics doesn't mean you understand statistics. Understanding the reasons for type-theory doesn't mean you understand type-theory. And understanding the reasons for monads doesn't mean you understand monads. Perhaps you understand both, but I do wish to note that this difference is often critical.
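To make the typing and unit-testing bullets above concrete, here is a minimal sketch in Python. The function and test names are invented for the example, not taken from any particular project. The type annotations let a checker catch misuse, while the tests give confidence in correctness and resist regression:

 import unittest

 def median(values: list[float]) -> float:
     """Return the median of a non-empty list of numbers."""
     if not values:
         raise ValueError("median() requires at least one value")
     ordered = sorted(values)
     mid = len(ordered) // 2
     if len(ordered) % 2 == 1:
         return ordered[mid]
     return (ordered[mid - 1] + ordered[mid]) / 2.0

 class MedianTest(unittest.TestCase):
     # The unit test gives confidence in correctness and resists regression:
     # if a later "optimization" breaks the even-length case, this fails.
     def test_odd_length(self):
         self.assertEqual(median([3.0, 1.0, 2.0]), 2.0)

     def test_even_length(self):
         self.assertEqual(median([1.0, 2.0, 3.0, 4.0]), 2.5)

     def test_empty_is_rejected(self):
         with self.assertRaises(ValueError):
             median([])

 if __name__ == "__main__":
     unittest.main()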
The Why
- Obtain 'services' or 'features' - similar to 'JustDoIt', service is the fundamental reason to write any software. This 'service' can range from 'entertainment' (for a video-game) to manufacturing to automated delivery of drugs to translating natural languages, etc.
- The word 'feature' is more often used relative to a base 'package' of services and in comparison to other products that offer the same base package - i.e. a "web-browser" offers a vague but commonly-understood set of services, and "tabbed browsing" (to avoid window-clutter) is a 'feature' atop that.
- Feature and service metrics by Qualification are somewhat difficult to compare on any absolute scale. Where one product fully eclipses another (such that its feature/service set is a strict superset of the other's), one can say for certain that it is 'higher' on this metric - this happens in practice most often between versions of the same product. Otherwise the feature-set can only be compared to similar products in the context of use.
- Features by themselves are useless; features with a customer are useful. The value derived from a feature depends on the number of users who rely upon it and the degree to which they do. Based on this, it might be possible to come up with a pseudo-metric for total derived value. High-value features are those useful for a variety of purposes and that are difficult or too painful for users to obtain without dedicated support for the feature - examples include KeyLanguageFeatures - noting that the total added value of a new feature that can already be easily emulated is very low.
- Note that 'services' and 'features' must also answer to various restrictions. While such things as performance guarantees and small package-size, simplicity, minimal bloat, etc. could be considered 'features', they aren't service features and so aren't included under this heading.
- If the customers of a product include the developers and maintainers, then features internal to the design also qualify as features. These can include support for reflection, debugging, rapid addition of new features, good support for unit-tests, etc. This may be a critical point in some arguments, and for many products (especially those that are never 'feature-complete') it is largely true. For code that is just going to run a few times, it isn't quite as important.
- To make money - a not-so-fundamental reason to develop software, but an often important one. In this case, the following might also be aims to further the making of money, and metrics to examine your success: time-to-market, monopoly, vendor lock-in, maintaining incumbency, theft-prevention, reducing opportunity-loss, and sales.
- Note that ethics often fail to maintain any grasp on people focused on making money. Use of FUD, bully-tactics and legal threats, patent-hoarding, bribery of reviewers, buying up potential competitors before they become competitive, etc. are all quite common. That isn't to say most people who work on such products aren't perfectly good; it's more of an institutionalized evil, as seen with the (* self-censoring for GodwinsLaw *)...
- Avoid ReWork?
- Feature extension without re-deployment or modification (often via plugins, aspect-oriented, FirstClassTypes, functors, blocks)
- Portability (ability to use the same product without modification on a variety of platforms)
- Precision (just reuse the pieces you need; reduced requirement for 'baggage'. Often sought by modularity.)
- Correctness - a state in which software performs as required when operating within expected conditions (distinct from robustness, resilience, and reliability). Not all macro-level products can be qualified here because the "as required" clause is often not resolvable (e.g. what is 'correct' for a video-game? what are the requirements?). But many domains do have clearer requirements (e.g. software for a pacemaker) and almost all software domains can discuss 'internal' correctness at the sub-component level (e.g. the 'scheduler' is minimally 'correct' if processes authorized to share time do receive it). It should be noted that software can provide services and features in excess of its requirements; these features, even if incorrect, do not affect the correctness of the product unless and until they are added to the requirements (though the requirement for their correctness may be implicit).
- Provable Correctness - distinct from merely being correct; provable correctness is itself a valuable property to end-users in many domains (e.g. aviation, medical, aeronautics, deep-sea, robotics and manufacturing, etc.). It can sometimes be achieved deductively with the aid of 'design by contract' or advanced type-systems; otherwise, it can be partially achieved inductively via other forms of correctness testing. In systems that continue to receive service updates, provable correctness must be maintained across updates.
- This can go under either. Formal proofs are one way to achieve reliability ("why"), but it is one technique ("how") among many.
- Eh, as you've redefined it below, anything can go under either (depending on stated goals of the customer and the degree to which the developers/maintainers are allowed to qualify customers).
- That's why we need to rethink our categories here. Who is requesting the goal or metric may matter more than what is being measured.
- As stated here, provable correctness is a product of its own - something you can sell to customers in many domains (i.e. "I'm not just saying it's correct. I can prove it's correct.")
- Yes, but it may crank up the cost, hurting another factor.
- Indeed. After all, handing the client nothing at all is very cheap; anything beyond that cranks up cost, reduces grokkability, introduces potential for unreliability, is less likely to be portable, etc..
- This appears to be an exaggeration. There are lots of factors involved, and it's often the case that most need to meet a minimum level to be acceptable to the customer. This should go without saying.
- What appears to be an exaggeration is your implication that aiming for provable correctness 'cranks up the cost' more than do other properties useful to the customer.
- In any case, Correctness is not Reliability, nor does it imply it. Software can be reliable and completely wrong, correct but unreliable, reliable and correct, or incorrect and unreliable. Obviously 'reliable and correct' is best, but if you have to choose just one, then which is 'better' depends on context (I generally prefer reliability > correctness so long as the degree of error is predictable). In terms of a WebServer, reliability might be measured by the system's average weekly uptime while serving pages under a regular load (i.e. excluding periods where it is down or deadlocked), while correctness would be serving the right pages, executing the right server-side scripts, using the right CGI environment variables, etc. Simple type-systems target 'safety' only, rather than correctness or reliability, but safety in particular is useful for reliability (albeit not necessary for it). A way of improving reliability that has nothing to do with safety would be a heartbeat monitor that periodically (e.g. once every two minutes) requests a few pages to see if the WebServer is working, then takes action to restart the WebServer if not (a minimal sketch follows this list).
- Predictability - ability for a user to predict program behavior. Only slightly related to grokkability - a simple AI or random-number based scene-graph builder could be easily grokked but still be unpredictable. In terms of correctness, a system can be incorrect but predictable if the degree to which the system is incorrect is predictable.
- Reliability - a state in which software is actually operating under expected conditions (distinct from correctness). It shouldn't be difficult for people to think of unreliable but correct software, such as a calculator that occasionally crashes.
- Proof of Reliability -
- 'Grokkability' - a property of being readily 'grokked', or understood. This is not really measurable, since it depends so much upon the education and experience of an individual, but a useful property. Perhaps best measured by having children use it (e.g. children trying out a prototype flight cockpit and simulator for an F-22, and skilled behaviorists tracking their activity and the time it takes to accomplish pre-determined tasks).
- Productivity - again, not easily measured, though expert vs. expert productivity is possible to compare for select tasks. Depends heavily on HCI.
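As promised in the reliability bullet above, here is a minimal sketch of the heartbeat-monitor idea in Python. The page URLs, the restart command, and the interval are placeholder assumptions, not recommendations; a real deployment would substitute whatever pages and service-management command apply:

 import subprocess
 import time
 import urllib.error
 import urllib.request

 # Hypothetical values - substitute your own pages and restart command.
 PAGES_TO_CHECK = ["http://localhost/index.html", "http://localhost/status"]
 RESTART_COMMAND = ["systemctl", "restart", "my-webserver"]  # placeholder
 CHECK_INTERVAL_SECONDS = 120  # "once every two minutes"

 def server_is_healthy() -> bool:
     """Return True only if every monitored page answers with HTTP 200."""
     for url in PAGES_TO_CHECK:
         try:
             with urllib.request.urlopen(url, timeout=10) as response:
                 if response.status != 200:
                     return False
         except (urllib.error.URLError, OSError):
             return False
     return True

 def main() -> None:
     while True:
         if not server_is_healthy():
             # A reliability measure only: restarting says nothing about
             # whether the pages being served are the *correct* pages.
             subprocess.run(RESTART_COMMAND, check=False)
         time.sleep(CHECK_INTERVAL_SECONDS)

 if __name__ == "__main__":
     main()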
The Evidence (under destruction)
- "Elegance" of code
- Modularity of code
- ExpressivePower - reduction in semantic and syntactic noise; degree to which code 'matches' programmer's intent instead of dealing with layers of translations, casting, insertions, bailing wire, duct-tape, and hand-waving to realize intent. Generally requires correct level of abstraction AND the correct abstractions.
- LiterateProgramming - code combined with documentation
- OnceAndOnlyOnce expression of ideas
- CodeChangeImpactAnalysis, reducing or making more predictable the DiscontinuitySpikes when adding desired features {disputed below}
- Elegant management of fault (e.g. exceptions vs. segfault, GracefulDegradation)
- Element ratios, such as lines per function, methods per class, tokens per statement, etc.
- Proof of safety (won't crash, won't do anything with 'undefined' consequences)
- Proof of performance (HardRealTime, embedded - limited space)
- Proof of security (won't let someone do something awful for a variety of known classes of attack)
- Proof of secrecy/privacy (won't let someone learn something awful for a variety of known classes of attack)
- Proof of correctness (design by contract; fulfills specified services; properly handles resources; performs no 'wrong' steps according to procedural constraints; handles predictable errors in accordance with requirements) - a minimal sketch follows this list
- Optimized machine and network performance (more proven assumptions = more access to correct optimizations)
- Known robustness (stochastically analyzed resistance to random faults (via detection, recovery) of various classes; includes known redundancy)
- Known resilience (ability to recover from certain classes of attack or damage and restore service once the cause is alleviated, delay and disruption tolerance)
- Support for Reflection (ability to examine what is going on inside the running system; includes logging and support for debugging)
- Design for Portability (run service on many machines and OperatingSystems without changes to code)
- Design for Accessibility (access service from many machines, mobile phones, web-browser, local and remote, multi-language support, reduced modality - not tied to machine or application, etc.)
- Design for Extensibility (scripting, plug-ins, RuntimeUpgradeableCore; ability to add new features, bonus if at runtime)
- Design for Configurability (to modify the GUI, apply skins, modify data to particular or known effect, change registry keys. Also, ability to maintain multiple different configurations and switch easily between them. Bonus if ability to do this at runtime rather than just between runs.)
- General Adherence to Rules and BestPractices? (DontModeMeIn, ZeroOneInfinity, ProgressiveDisclosure, Normalization, UnitTest, YagNi, etc.) - note that this is independent of any requirement to adhere to them, and is only as strict as one decides it ought to be, but is often in pursuit of some other property such as correctness, resilience, avoiding DiscontinuitySpikes, or increased accessibility. Some rules are more objectively proven than others.
- Supporting KeyLanguageFeatures (when it comes to language design)
- Following DesignPatterns (within application) where the appropriate language-features are missing, based on the idea that established design patterns have already hammered out most of the kinks and walls often met by clever programmers with naive and untested solutions. (Similar in purpose to general adherence to rules and best practices.)
- Time from conception to market (even if broken or incomplete)
- Time to completion (feature-complete/time), or, for products that are never 'complete', average latency between a feature request and a service update that provides it (waiting feature-requests / throughput)
- 'Popularity' (Marketing, Sales, Monopoly, Vendor lock-in, Incumbency)
- Ratings (esp. in games market and products with existing competition)
- Reputation (very informal, but an indicator of popular opinion; word-of-mouth advertising or defamation)
- Glitz, glitter, eye-candy, audio, graphics, skins; also glitches, visible deficiencies, audio skipping, etc.
- Application speed of learning (via user tests; difficult to test except relative to a competing product, and need unbiased test group so more difficult if product has cornered the market and pool of unbiased applicants is small)
- Application productivity (via user tests; difficult to test except relative to a competing product, since productivity is affected by many things; while experience with the existing product will bias any such tests, this one at least can wait until users have learned all products thoroughly, or can compare experts with equivalent experience by some approximate measure - e.g. expert vs. expert productivity).
- Logged/reported Bugs; Latency between bug identification and bug extermination; Regressions (bugs reintroduced after adding a new feature); Average severity of bugs (e.g. crasher or stopper vs. minor inconvenience);
- Logged/reported Security Violations and Security Risks (esp. for systems software), also with regressions/severity/latency/etc. measures
- WhatIsSuccess
- WorseIsBetter
- Hours spent on maintenance
- Benchmarks and Performance (for computation-intensive applications, and application-platforms)
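For the 'Proof of correctness (design by contract ...)' bullet above, here is a minimal sketch of the DesignByContract idea in Python. It only checks the contract at runtime with assertions, standing in for the static proofs that real DbC tooling or advanced type-systems aim at; the decorator, the withdraw function, and its contract are invented for the example:

 import functools

 def contract(pre=None, post=None):
     """Attach a precondition and a postcondition to a function.

     Runtime checking only - a stand-in for stronger static proofs.
     """
     def decorate(func):
         @functools.wraps(func)
         def wrapper(*args, **kwargs):
             if pre is not None:
                 assert pre(*args, **kwargs), f"precondition of {func.__name__} violated"
             result = func(*args, **kwargs)
             if post is not None:
                 assert post(result, *args, **kwargs), f"postcondition of {func.__name__} violated"
             return result
         return wrapper
     return decorate

 @contract(pre=lambda balance, amount: amount > 0 and amount <= balance,
           post=lambda result, balance, amount: result == balance - amount and result >= 0)
 def withdraw(balance: int, amount: int) -> int:
     """Return the new balance after withdrawing 'amount'."""
     return balance - amount

 print(withdraw(100, 30))   # 70 - both contract clauses hold
 # withdraw(100, 500)       # would fail the precondition at runtime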
Other than possibly brief pro/con descriptions, please put discussion paragraphs below with a heading that corresponds to the bullet point.
(discussion here)
Why did somebody move CodeChangeImpactAnalysis to design? Amount of code that needs changing is a "result", not something you do at design time. Well, I suppose we could split it to test scenario CCIA before release and actual CCIA.
I believe the amount of code that needs to be changed to get a new capability or result is intrinsic to the design. A great number of design features are aimed, at least partially, at reducing maintenance costs and the total cost of code-changes: modularity; plugins; aspect-oriented programming for cross-cutting concerns; domain-specific languages, abstraction, and OnceAndOnlyOnce so code-changes don't need to be duplicated; and even KeyLanguageFeatures and DesignPatterns to help do the RightThing - or a more flexible thing, such as VisitorPattern - the first time, so you spend less time duplicating code and less time changing code later to add new capability. CCIA is something often considered greatly at design time; in many ways, it's among the reasons programming languages, libraries, and frameworks were designed and now exist.
- But that is true of anything. The internal design will of course *affect* any metric having to do with change (including time spent fixing bugs). But the design affecting the metric and the metric being classified as "design" are two different things. "Affect" is not sufficient. It is a results metric.
- If you believe that CCIA is a 'result' metric, then so must be proof of safety or correctness, "Element ratios, such as lines per function, methods per class, tokens per statement, etc.", security, OnceAndOnlyOnce in the code, modularity, centralization of cross-cutting concerns in the code, etc. These are also 'results' of the code and analysis over it. CCIA is just the result of analysis over code, just as much as StrongTyping. If you're going to split 'design' from 'result', you're going to need to find a consistent means of doing so. I see three justifiable possibilities:
- (1) Properties observable in the end-result are always 'result' rather than 'design'. This would include all properties that affect further 'development', such as typing properties, element ratios, CCIA, tokens per statement, security, OnceAndOnlyOnce, modularity, CouplingAndCohesion, code organization, etc.
- (2) Properties observed by users of the product are 'result' and everything else is 'design and development'. In this case CCIA, typing, element ratios and such would NOT be 'results', but would be 'design & development' issues.
- (3) Split the above two categories into three: design, development (including property-results of design and all previous development), and end-user results, and attempt to properly divide them.
- If you can find some other consistent split that seems practical, please let me know. Otherwise, please carefully reflect upon your motives for seeking placement of CCIA into the 'results' list.
Besides, CCIA is about stuff in the 'black box', observing properties over the code just as much as looking at safety, correctness, fault tolerance, or anything else. Customers (outside the aviation, medical, NASA, etc. industries) don't overtly care about these things.
- I disagree. It is not a black box to the maintainer. Targeting developer maintenance is just as important as user-side issues. The program code is the "interface" to the developer. The design hopefully is geared to help in servicing the program. How easily a Ford allows a mechanic to change the transmission is a results metric, not a design metric. If you want to make a new classification - user-side or user-interface metric - be my guest. But that is different from result metrics.
- A 'maintainer' is just a late life-cycle developer. Targeting developer maintenance is a design and development issue. CCIA isn't any more of a 'result' than is 'final code organization', 'design patterns actually used', 'proof of correctness', and 'total lines of code'.
Users observing results DO care about time-to-completion (feature requests / throughput - i.e. avg. latency between a request and an update that services it), which is affected by the cost of code-change and the cost of getting the code correct; but whether the reason it takes a long time to fix code is the lack of refactoring tools, the amount of code that needs to be changed, the difficulty in identifying which code needs to be changed, the difficulty in determining whether a change breaks things, or the cost of fixing actual breakage, is all quite irrelevant to them.
However, if you feel like splitting things, and you don't believe the 'latency' measure is sufficient, you can add it to runtime as well. You can also add design correctness vs. result detectable error, design safety vs. result detectable safety failure, etc. I would not object.
It seems we need to come up with a clearer definition or description that separates the two. Result metrics focus on "why" while design metrics focus on "how". Result metrics are what one is trying to achieve for the "customer" when making or choosing design metrics. (The customer may be future code maintainers, not just users.) For example, some suggest that methods should be kept small. This is allegedly to make it easier to grok the code and system in parts. The "why" is grokkability for the developer/maintainer. Design metrics are "how" rules for satisfying the "why".
In that case, any developer is always his own customer. The future starts now... no, wait for it, NOW! Darn. Missed it again.
If you seek to divide the evidence along this dimension, I'll help you do so, but calling it 'Design Versus Results' evidence seems misleading. Both vision AND results are "why" issues, and 'design' is always part of 'vision' (you never have a 'vision' without knowing where it fits into some greater 'design'). The 'how' is everything between the two.
A better page-title might be: WhyVersusHowMetrics?
Perhaps our dichotomies are too simplistic a classification system altogether. This page has gotten away from my original intention. People injected their own personal view of TheOneRightWay? into it, melding together stuff I was trying to keep apart. Thus [1], I'll try again:
PageAnchor: Customer
Try 2: Engineering Metrics Versus Customer Metrics - Engineering metrics are the "how to achieve" side and customer metrics are related to "what to achieve". Customer-side metrics focus on what the "end-user" ultimately wants. For example, a customer wants a car that's cheap to buy, cheap to maintain, uses minimal fuel, has plenty of room, and has decent power [along with a hundred unnamed metrics and qualities, such as being street-legal, not blowing up after a rear-end collision, being reliable, having good visibility, and tons of other things that are invariably desired but rarely named] {Many of these can be rolled up into "safety"}. The engineer may propose various ways to achieve this and even have their own metrics, such as power-to-weight ratio. Whether these match up to user-side metrics/desires or not depends. An engineer may claim that rotary engines are the best way to maximize these goals, but another engineer may have a different opinion. Ultimate user-centric tests such as test-driving and Consumer Reports Magazine will settle such issues.
For software development and maintenance issues, things get a little tricky because the end-user may also be the engineer. If the rotary engine proponent above had to also serve as the builder or auto-mechanic of his/her own designs, and not just be a "drawing board" engineer, he/she would fill a role similar to the software developer or maintainer.
It's true that sometimes the customer focuses on engineering-side metrics. They may read an article on rotary engines and then decide that "must have rotary engine" is one of their requirements. Similarly in software, one may convince the customer that "provably correct software" is the way to go because one can use math-like precision to guarantee results. However, if they later learn how expensive and time-consuming it may be (hypothetically), they may back away from endorsing this one particular way of achieving accuracy. The typical customer of course wants accuracy, but they also want to keep costs down. If formal methods can't achieve that, then forcing them down the customer's throat is not the way to go. You cannot make something a customer-side metric just because you like the technology. You have to show the customer how it maximizes *all* their weighted goals, in terms of the customer's own goals. If you want to use your own engineering metrics, you need to convince the customer that your metrics are a good stand-in or ruler for theirs. This is the way it is because they are paying you. If you give the customer what you think they want instead of what they want, you may find yourself with a different customer or none at all.
What I want to emphasize is the separation between the nuts-and-bolts and what the customer actually wants. Somebody who wants to get to the moon cares about the choice between solid and liquid rocket fuels if and only if it affects their weighted goals (safety, cost, accuracy, time, etc.).
--top
Overall, I can agree with your sentiment as it is reworded here.
I would note, however, that as SoftwareEngineering becomes a real discipline, it becomes OUR responsibility - not that of the customer - to enforce certain parameters. An Architectural Engineer of a skyscraper would never be forgiven for intentionally skimping on safety features, such as state-of-the-art resistance to earthquakes and fire, simply because the customer, who wasn't planning on living in the building, wished to keep costs down. The same thing will become more and more true of software designers as the field matures. Not all software projects are safety- or security- or performance- or correctness-critical, but since these are all emergent properties (requiring pervasive implementation) that are desirable in almost all fields, it will likely become more and more the case that every software product needs these properties simply to be able to integrate with every other software product that might run or interact, even indirectly, with a secure/safe/correct system... which, in accordance with SixDegreesOfKevinBacon, will likely mean every product.
- Are there any topics here related to "properties needed to integrate"? We seem so far away from such standards existing right now that it is almost not worth considering. -t
- Not worth considering from an application standpoint, certainly. But worth considering when comparing methodologies or performing language design for a new GeneralPurposeProgrammingLanguage or seeking to build the next generation OperatingSystem. I think that depends on your purpose, really - what should we be doing in the future? or what should I be doing yesterday?
For the moment, we can get away with whining, pouting, throwing our hands in the air about how difficult everything is, etc..
RealProfessionalsGetSued. Real professionals are expected to know more about their field than the customers do. And, as a consequence, real professionals can't blame the customer for everything wrong with the product. But, at the moment, we aren't real professionals; we're just a bunch of hackers with a bad case of DisciplineEnvy, and (until we get our act together) that is how we should be viewed by big business.
We are indeed supposed to know more about our field than the customers, but this is where communication comes in. We need to describe the trade-offs. If they want it cheap or soon instead of reliable then, as things stand, that's their decision. Our responsibility is merely to communicate the trade-offs, not select them. (You can walk out if you want and give the job to somebody else - I've done that before - but it hurts the wallet.) Product, civil, and building engineers have safety regulations and trial lawyers breathing down their backs as an incentive. It would be nice if we could get customers to sign off on risky decisions. The Ford Pinto and O-ring Shuttle engineers were sort of off the hook because documents indicated that managers ignored engineer recommendations or concerns. But if people don't die from our software, no such conventions or regulatory body or paper trail will likely exist, at least not in the USA. The problem is that people usually don't die from software. Thus, SafetyGoldPlating (in a good sense) is not valued by customers. Sure, they grumble about a report with wrong numbers, but they grumble equally about costly software contracts.
You often have the choice to pay up front or pay out the back. Many tradeoffs that exist today don't seem to be consequences of EssentialComplexity; they are consequences of AccidentalComplexity, myopic Simplicity, and other forms of PrematureComplexity. But I understand that you can't easily justify passing off costs to a small-business customer - e.g. there is no way you could quickly tweak an OpenSource RDBMS like MySQL then push the updates back out to the public repository just to have the features you want for a particular business venture. OTOH, that isn't necessarily the case for much bigger business ventures... nor is it the case for the OpenSource persons who are starved for money no matter which choices they make. Problem is, these 'up-front' costs are often huge and beyond easy reach without more capital and time than a typical hobbyist programmer possesses - the low-hanging fruit has already been picked, and the more software designed on any modern platform, the further the next platform would need to catch up just to match what has already been done. Inertia is a b*tch. But we'll get past it. Somehow. ComputerScience as a field is really only fifty years young.
- There are a lot of unstated implications in this, such that it is hard to make heads or tails of. What is a "public repository", for example?
- Sorry about the logical leap there. If you had the capability to change an RDBMS for a small-business customer, it would need to be OpenSource. And OpenSource projects have public repositories - SourceForge, for example.
And, besides, more people will start to "die from software" as more systems become reliant upon it, and thus we won't have that - what did you call it? - "problem" anymore. ^_^ As a note, the military now uses software quite heavily for targeting, communications, etc. Medicine now uses software quite heavily to track prescriptions, for remote operations, etc. Aviation now uses software quite heavily for navigation, autopilot, collision detection and avoidance. Automobiles now use software more heavily for fuel-injection, maintenance, braking, wheel-power-distribution, etc. Traffic now uses software quite heavily for lights and control. Firefighters, Police, the list goes on. Fortunately for software, it's just one component, so deaths can rarely be attributed solely to it. It would be nice, though, if people collected enough stats to add software-failure to the death counter (http://www.cavernsofblood.com/deathcounter/default.html).
Yes, and such software is usually much more expensive per feature. In many domains, management does not want to pay the "medical" rate for stuff. They want it cheap and nimble.
I believe it is more expensive per feature only because we haven't paid the big up-front costs to make it cheap per-feature for certain classes of features that are commonly desired from one project to another. As I was saying: you pay up front or you pay out the back.
Heavily-engineered systems tend to stay heavily-engineered and remain maintenance-heavy. Maybe you know how to do such systems right; but if so, that is not the norm.
Will you please define "heavily-engineered"? And would I be right to infer that you are assuming that obtaining such features as safety and provable correctness require "heavy engineering" and are necessarily maintenance-heavy? I find myself very skeptical of that pair of assumptions - I expect at least one of them is false no matter how you choose to define "heavily engineered". I do agree that codebase inertia is a rather fundamental property - a bit like 'mass' in physics - so that if something qualifies as "heavily engineered" it is unlikely to change.
It is an observational anecdote only. The counter-evidence is probably also anecdotal. "Heavily engineered" here usually means projects that involve critical line-of-business transactions (usually financial) or human safety. They have teams of at least 3 developers, formal version tracking, QA staff, and formal logged non-originator code reviews. They usually use compiled languages and strong typing. ("Heavily-engineered" could also mean lots of bell-and-whistle features in some contexts, but that is not the meaning here.)
This all very much reminds me of the dichotomy between assets and liabilities in financial accounting. For example, DesignByContract would be an asset, but for the project to be "balanced", its use must be balanced by some kind of liability. In other words, for every piece of design, there must be a balancing, measurable result. --SamuelFalvo?
The equation, IIRC, is Assets = Liabilities + Owners Equity, or Owners Equity = Assets - Liabilities. There isn't any reason that Owners Equity couldn't be very, very positive. I'll admit I don't much respect the field of accounting and its voodoo arts for deciding values or costs of labor and services and everything else - any profession with competing models for hedging ought to do some serious reflection! But if you were going to place it, access to DesignByContract would be an asset - the liability being restricted choice of programming language (which might cost very little or a great deal depending on context). However, use of DesignByContract would be a liability - a statement of properties you are required to meet (and continue meeting), which is some metaphorical equivalent to services and such. I.e. Using DesignByContract voluntarily restricts your own behavior, and is thus a liability. The balancing asset would be the guarantee that those properties are implemented correctly in accordance with a code-proof - i.e. a guarantee of correctness, resistance to regression, etc. (Not that one can easily place dollar values on such restrictions or guarantees... but you can always hedge!)
I think you're confusing basic accounting with CreativeEconomics?. Economics, as a whole, is the study of resource allocation. CreativeEconomics? is what happens when economists attempt to game the system for their benefit, trying to pass off their crimes against humanity as valid economics. At any rate, assuming no code ownership (which I find is increasingly prevalent in organizations today), owners' equity will always be zero, so in my opinion, my original analogy still stands.
That being said, I like your precise example much better than mine! --SamuelFalvo?
There is always code-ownership, just not always exclusive code-ownership. Public domain = owned by everyone who manages to get themselves a copy. How well does accounting deal with open source?
{Easily. The asset value of OpenSource source code is zero. The (tangible & intangible) asset values of producing OpenSource software are derived from expertise, support or consulting contracts, associated publications, promotion, goodwill (in the classic sense), etc. The asset values of using OpenSource software derive from modifiability, reduced licensing costs, and whatever value using the software is to the organisation.}
What is 'correct' for a video-game?
What's wrong with "It plays the game the developers intended it to play."?
First, it doesn't touch on any service goals, such as entertaining those who play it. Second, which developers, among a group, get to answer that question?
Which question? The "Is the video game correct?" question or the "What is the game we are intending to build?" question? The former would be answered the same way it would in other domains: they would pick a level of formality (ranging from not answering to formal proofs) and do that. The latter requires an answer in any case, and would be done as it is now.
Correctness doesn't cover every goal for software in other domains, why require it to do so in gaming? I wouldn't consider the entertainment value of a game as part of its correctness. I don't find tic-tac-toe particularly entertaining anymore, but I would still consider software that plays it to be correct if it actually plays tic-tac-toe.
Correctness for a well-defined game, like Checkers or Chess or Tic-Tac-Toe, would be easy to determine. A Checkers game is correct if it puts the initial pieces in the right spots, allows you to move them along diagonals, allows you to leap other pieces, takes the pieces you leap off the board, kings your piece when it touches the end, otherwise supports and enforces rules, properly determines the winner and terminates the game when all is done, etc. The 'service' here is the automated provision of a particular game.
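To make that concrete, here is a minimal sketch in Python of one such testable requirement - that a plain (non-king) piece may move only one square diagonally forward onto an empty square. The board representation and function name are invented for the example:

 # Board: dict mapping (row, col) -> "red" or "black"; absent keys are empty squares.
 # Assumption for the example: red men move toward higher row numbers.

 def legal_simple_move(board, piece, src, dst):
     """Check the 'move along diagonals' requirement for a non-king, non-jump move."""
     row_step = dst[0] - src[0]
     col_step = dst[1] - src[1]
     forward = 1 if piece == "red" else -1
     return (
         board.get(src) == piece          # the moving piece really is at src
         and dst not in board             # destination square is empty
         and row_step == forward          # one square toward the far side
         and abs(col_step) == 1           # and one square sideways: a diagonal
         and 0 <= dst[0] <= 7 and 0 <= dst[1] <= 7   # stays on the 8x8 board
     )

 # Tiny correctness checks against the stated requirement:
 board = {(2, 1): "red", (3, 2): "black"}
 assert legal_simple_move(board, "red", (2, 1), (3, 0))       # diagonal forward, empty
 assert not legal_simple_move(board, "red", (2, 1), (3, 2))   # occupied square
 assert not legal_simple_move(board, "red", (2, 1), (2, 2))   # not a diagonal move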
Correctness doesn't touch on reliability or maintainability or money acquired or any of those other features. However, it does relate to services and features - it is defined, after all, in terms of meeting service requirements. So, except where there are clear service requirements, there is no clear definition for 'correct'. Any software product that provides services in addition to its service-requirements is simply offering more than is required to be 'correct'.
The problem with most video-games is they possess no clear idea of the service they intend to provide. What is the service provided by 'The Sims'? How do you know when Harvest Moon is 'correct'? You are correct that if someone can (with authority) answer this question in a manner that lends itself to measurement or objective qualification, we could determine correctness. If it goes unanswered, however, we can only presume that the intended service is to impart some measure of entertainment upon players of a target audience, and, perhaps, to tell a story. For edutainment, one might also attempt to teach a skill. And, perhaps, creating then servicing an addiction by manufacturing then appealing to a lust for cheap rewards is an aim for certain games (e.g. World of Warcraft or Ecchi).
In any case, video-games are just one example of a field in which there is often no clear definition of 'correct'. I invite you to properly contest it in the general case, though, as it would be really neat to find a clear understanding of 'correct' for video-games and other things with vague requirements - something you could post in a gaming journal, perhaps.
I would define it just as was done for other domains, "a state in which software performs as required when operating within expected conditions". You appear to be including the step of finding out what those requirements are as part of this. While I don't want to downplay how important that step is, I don't include that as part of correctness. It's a prerequisite to be sure, but a separate issue.
Perhaps we're just viewing these from slightly different points of view. You're applying the definition of 'correct' as though it were universal: it takes two arguments (requirements and product) and returns a logic-value (from MultiValuedLogic) - i.e. correct :: Requirements -> Product -> Logic Value. I'm using the word 'correct' in a particular context, in this case 'video-games', implying that the product and requirements are available in the context. In this case, the Logic Value associated with 'correct' is simply not defined, or perhaps (a possibly better word) not resolved, until both the requirements and the product are resolved. As such, I say that 'correct' isn't well understood for video-games. I can't disagree with your view, but I can say we have likely been talking past one another a bit.
Yes, the original definition I provided for 'correct' doesn't need to change at all, but it is still important to recognize that it isn't necessarily meaningful in a given context, and communicating this was my original intent.
RE: Understanding the reasons for X is not the same as understanding X... (supremely OffTopic... to be moved or deleted later)
[Well, back in the day, I got a ninety-nine percent grade in accounting in high school and I hated it, so I dropped out of that class the next year instead of continuing. Understanding Monads is wonderful - but proving that they actually work, with some supportive multi-processor benchmarks/demos (such as the information Google has offered on how Sawzall works), would be far better than simply rambling on about how Monads do this and do that. In theory, and on some web page, Standard Pascal could be the greatest language in the world - until it is tested in the real world by pragmatists like Pike, Kernighan, Ritchie, and Thompson: people who had real-world experience implementing compilers. (Pages such as FutureOfProgrammingLanguages have no real-world pragmatic public tests yet either - so be careful about what we predict on such pages. We might need a team as large as GNAT Ada's to implement any of it.) So RightBackAtYou - understanding programming languages is not just about having idealistic wiki pages about them.]
The above wasn't an attack on you, though you seem intent on making it into one. And I agree - an idealistic wiki page about programming languages (such as FutureOfProgrammingLanguages) does not mean an understanding of programming languages. OTOH, a bunch of pragmatic wiki pages about how to implement and analyze various aspects of languages and interactions - ways that have often been tested and proven already and exist in various languages around this world - do, if combined into one person, form the basis for a claim of understanding programming languages. And designing and implementing lots of LittleLanguages and compilers/interpreters helps, too.
[Already tested/proven and already existing in various languages - sounds a lot like 'reinventing and reusing old ideas and tweaking them for the future'.]
That doesn't seem so relevant to this context. And tweaking is fine... it's the other half that is implied to not be so good: taking an old idea and thinking it your own. People who twitter "I am great! Look how high I am!" like little birds upon the shoulders of giants are amusingly arrogant.
[Please provide some exact URL's and places which I will change and update for you (I aim to please), to be less arrogant sounding. Also, if the URL's are clearly marked with humor, then don't bother providing that as a URL since that's clearly sarcasm and humor.]
Rather than start a fight here and direct you back through silly arguments involving wheels, tires, QompReactions, CapArrays, and the pages that started them - all without any real purpose or ultimate benefit to anyone - how about you simply tone down arrogance a notch or two on a more permanent basis? I mean, you can still be arrogant (LazinessImpatienceHubris), but you don't need to inflict it on everyone else.
RightBackAtYou - TopMind and others probably think you are very arrogant (self reflect).
As far as hypocrisy goes, I've made no claims of old ideas as my own... in fact, I search and find that most ideas I think were original have already been done, or are shortly put into progress by some company or another - sometimes the idea is just 'ripe', I guess (YouCantLearnSomethingUntilYouAlreadyAlmostKnowIt applied globally, much like various mathematical discoveries made independently within years of each other by different people across the world). It saddens me to know I could still get a patent on many of these old ideas in a jiffy, just like Microsoft does. In any case, I suspect I've rarely come up with a fully original idea in my life. I just happen to think that coming up with ideas is not particularly valuable - it's making them work that is ultimately of consequence - only 1% inspiration, after all.
- It is making them work that is ultimately of consequence - which is why I harp on the fact that many of your pages are just ideas.
And TopMind has actually been quite reasonable in his more recent posts; I hope he continues to be so, because if he keeps that up, and holds to the promise he made in the BookStop page, and avoids reversing BurdenOfProof for any claims he makes, he'll earn my respect in a jiffy, perhaps even joining those whom I admire for their greater ability to maintain some semblance of intellectual argument in a heated discussion (such as DaveVoorhis and GunnarZarncke).
I prefer people who are intelligent, regardless of their arrogance, by the way. If one is arrogant, that is okay. Most intelligent people are fed up with the world and must be at least a little bit arrogant in order for their ideas to follow through (Dijkstra was called a grumpy, critical, arrogant old man - so?). In the Frasier Crane show, there was a quote, "I prefer being called pompous and arrogant over...", which I'll have to look up. Your claim above is also very arrogant and hypocritical - since you are arrogantly criticizing me. Let's not start the NotNiceEnough page again.
It is very arrogant to claim that CapArray has no purpose... and QompReactions was not created by me, but by arrogant, critical people who hated the idea of Qomp. If they were humble, they would have said "oh, that is nice" and passed on by. Instead, arrogant people fired flames and created the QompReactions page (which, by the way, was very good - I don't wish for people to remain quiet; I prefer they speak up).
CapArray is a perfect example of an old idea that a young man thinks his own.
I clearly state that it has probably already been invented, but that I'd label the pattern anyway with a precise term: CapArray. Vectors and other vague terminology used in Java don't explain what the pattern or algorithm is. We might as well call them "triangles" or "funny arrays" if we're going to call them cute names like vectors (oh, how specific). Then you've got the scripting folks who use arrays in Ruby and similar languages which don't label the algorithm as anything - instead they just call them associative arrays (without telling us what kind of associative array - assuming that we need the same algorithm, a hash list, for every program, which can clog a server up, since hash lists are overkill for many situations). Then we've got arrogant people claiming that malloc and realloc are good enough, and that no one needs this CapArray thing. If CapArray is a duplicate page, then please point us to the pattern on this wiki which explains it - and we will merge it, instead of moaning and complaining here on this page about how useless CapArray is.
And then you said we're all jealous for not labeling it first. That was very amusingly arrogant. As is your entire rant here. Cute. And CapArray is pointless for reasons that have everything to do with runtime efficiency relative to far simpler solutions.
Far simpler solutions, such as using dangerous freehand Cee functions like malloc and realloc... Yes, that often saves the day.
Hmmm? Is CapArray somehow avoiding any malloc (or equivalent)? And, being a big fan of safety and correctness and concurrent programming, I'd never suggest use of freehand malloc and realloc.
- Whether it uses malloc or 10 assembly calls at a low level for its implementation is not applicable when understanding the pattern and concept. If it uses SetLength, and SetLength indirectly calls malloc somewhere, this again does not matter when understanding the pattern. It's like discussing whether or not a copy() function uses assembly calls or operating-system library calls. This does not help anyone understand the high-level abstract concept of a pattern.
The algorithm found in some Dobbs journal still isn't clearly patterned anywhere on the net (please provide URL's if it is, and we'll merge the info into the CapArray page). By the way: "Design versus Results" is about the vaguest, most useless, most meaningless wiki page. It's possible it should be deleted too. As if people don't know the difference between results and design - how patronizing.
Not discussed anywhere on the net? Why don't you start here and follow the links. http://en.wikipedia.org/wiki/Dynamic_array
Yes, how patronizing. We all know what a dynamic array is. FreePascal and Delphi have had them built in for years now. That is what triggers one to avoid using built-in dynamic arrays - since a dynamic array algorithm can vary. Apparently, you still don't understand - nor will you probably ever - so I'd suggest you just use realloc, SetLength, and hash tables. Or use a hash list - no one is stopping anyone from doing that. Or just use a dynamic array. Whatever a dynamic array is. Well, it isn't static - so what's the algorithm behind the dynamic array? (Rhetorical question.)
If you had bothered reading, the algorithm behind the dynamic-array growth was on that page, in the section on 'geometric expansion and amortized costs': grow bigger => resize by a multiplier, copy from the older array, properly discard the older copy (though it isn't clearly labelled as some pattern we can easily coin, which is what this wiki is all about - so "geometric expansion" doesn't count, I suppose). It works pretty darn well until concurrency or versioning or backtracking or transactions are thrown in - and no worse than the CapArray under those or other circumstances. So, maybe I don't understand the value you see in it; maybe I never will.
- "The" algorithm? There is "one" algorithm is there? The implementation of a dynamic array varies, which is the entire point being made. Rhetorical questions will be clearly marked in the future.. sorry.
It is a replacement for regular arrays when you need to access a[1], a[2], a[3]. It's not so great for inserts - but many of my programs do not require random inserts (and if they do, they are sometimes very rare, in which case it wouldn't matter - this is where profiling comes in handy). Wikipedia explains a whole bunch of dynamic arrays without ever clearly labeling the cap dynamic-array pattern. A CapArray is a form of dynamic array and is just one choice we have. Instead of calling it a CapArray, people resort to long sentences and phrases like "this array here uses a technique that has some initial length set and then later it automatically resizes unlike a typical dynamic array". Why not just call it precisely by its pattern instead of reinventing and re-explaining it each time - that is my point. That is what this wiki is all about: labeling common patterns.
Some who discover the CapArray don't even quite understand it and just see it as some basic dynamic array; i.e. maybe it needs even clearer explanation of how and when I've used it and found it handy. Sadly, I see hundreds of FreePascal and Delphi programmers (and Cee programmers) reinventing the CapArray in their programs using SetLength? and ReAlloc? - which causes bugs because humans don't always index the array properly, don't allocate enough memory, forget to free memory, etc. That is why I labeled the pattern and wrote about it. When I first brought up the CapArray and CapString on mailing lists several years ago, the idea was rejected and ridiculed. A few days later it was accepted as self-evident, obvious, and useful. This clearly maps to a quotation from A. Schopenhauer.
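Since the pattern keeps being re-explained in prose, here is a minimal sketch of one reading of the CapArray idea in Python, assuming the pattern amounts to: reserve a capacity ("cap") up front, index freely up to the logical length, and let the structure grow its own backing store when an append would exceed the cap, so callers never hand-write realloc/SetLength bookkeeping. The class and method names, and the grow-by-increment policy, are assumptions for illustration, not a definitive statement of the original algorithm:

 class CapArray:
     """A dynamic array that pre-reserves a capacity and grows itself.

     One reading of the CapArray pattern: the caller sets an initial cap and
     (optionally) a growth increment; appends past the cap trigger a single
     internal resize instead of hand-written realloc/SetLength bookkeeping.
     """

     def __init__(self, initial_cap=16, grow_by=16):
         self._cap = max(1, initial_cap)
         self._grow_by = max(1, grow_by)
         self._length = 0
         self._cells = [None] * self._cap   # pre-reserved backing store

     def append(self, value):
         if self._length == self._cap:      # out of reserved room: grow once
             self._cap += self._grow_by
             self._cells.extend([None] * self._grow_by)
         self._cells[self._length] = value
         self._length += 1

     def __getitem__(self, index):
         if not 0 <= index < self._length:  # bounds check instead of silent overrun
             raise IndexError("CapArray index out of range")
         return self._cells[index]

     def __len__(self):
         return self._length

 # Usage: a[0], a[1], a[2] access without manual capacity bookkeeping.
 a = CapArray(initial_cap=2, grow_by=4)
 for n in range(10):
     a.append(n * n)
 print(len(a), a[0], a[9])   # 10 0 81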
See also: TheoreticalRigorCantReplaceEmpiricalRigor
AprilZeroEight
CategoryMetrics