Where and how much does psychology matter in software engineering?
(Used to be named "PsychologyMatters")
This is the view that, outside of issues of machine performance (speed and resource usage), "software engineering" is primarily about psychology and fitting the human mind(s) of the developers. The better the fit, the better the productivity. The implications of this viewpoint include:
Attempting to make this BickerFest? into something more productive...
Hard vs. Soft Psychology
I've noticed several times in this discussion some advantage of 'hard' psychology being pointed out as though it were a point in favor of 'soft' psychology mattering. Doing so is a form of equivocation, and it needs to stop immediately if a discussion on WherePsychologyMatters is to be productive.
One might generally distinguish fields of psychology into two classes: 'hard' and 'soft'. These are perhaps over-general, but the distinction is well accepted among psychologists. Hard psychology is concerned with metrics and statistics, and includes studies of behavior, perception and pattern recognition, memory, conditioning and learning, response and reaction times, error rates while under stress or pressure or various drugs, etc. Hard psychology also has strong relationships with information theory, computation theory, and language theory. Hard psychology does not concern itself with the subjective, which it deems inaccessible. Soft psychology concerns itself with aesthetics, tendencies, preferences, beliefs, feelings, etc. - e.g. making a user-interface look 'cool'. It tends to be less formal, and includes a great deal of speculation.
I don't believe anyone will contest that 'hard' psychology matters, and that one should take such things as reaction times and error rates and learning curves and color-blindness into account when designing tools. These are matters of psychology, certainly, and are a place WherePsychologyMatters. But they are also very objective, and shouldn't be conflated with 'psychology' in general if one is to then make conclusions regarding the value of 'soft' psychology.
This page purports that various aspects of what is (at least currently) 'soft' psychology 'matter'. That is, Top proposes that tools aimed at subjective 'preferences' and such provide significant real, objective benefits. Of course, 'soft' psychology lacks the sort of metrics, data, and predictive models possessed by 'hard' psychology that would be necessary to substantiate this belief. Until such are provided (thereby moving 'preferences' et al. firmly into the 'hard' psychology camp), the notion that soft psychology 'matters' is itself a matter of pure speculation.
At the moment, we can't substantiate any claim about where, why, or how much "soft" psychology matters when it comes to tool design... but we also can't substantiate a claim that it doesn't matter. Therefore, whether it is a 'Truth' or a 'Lie' remains an open question.
Hardening "Soft" Psychology:
Part of formulating and verifying any predictive model is controlling a variable and observing consequences. There isn't any reason one cannot do this with 'preferences' or 'tendencies' or anything else that is currently part of 'soft' psychology. One can tweak variables and make predictions as to how the population, subgroups of the population, and even individuals will respond to this tweak. Upon measuring the (observable) response (where stated impressions qualify as 'observable'), one can validate or invalidate the model and potentially tweak it to improve its quality.
Doing so iteratively, one could come up with models that will, with a known accuracy, precision, and confidence, successfully predict what individuals, groups, and populations will find (or at least claim to find) 'cool' or any other desired impression.
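As a concrete (if toy) sketch of that loop, consider the following Python fragment. Everything in it - the variants, the hidden 'appeal' numbers, the tolerance - is invented purely for illustration; a real study would need proper experimental controls and statistics. The shape of the loop is the point: predict, measure, validate, tweak.

  # A minimal sketch of the tweak-predict-measure-validate cycle.
  # All names and numbers here are hypothetical illustrations.
  import random

  def run_trial(variant, true_appeal):
      # Stands in for one observable user response ("cool"/"not cool").
      return random.random() < true_appeal[variant]

  def harden(variants, true_appeal, trials=200, rounds=5):
      model = {v: 0.5 for v in variants}   # prior: no idea
      for r in range(rounds):
          for v in variants:
              prediction = model[v]        # a falsifiable prediction
              hits = sum(run_trial(v, true_appeal) for _ in range(trials))
              observed = hits / trials
              validated = abs(prediction - observed) < 0.05
              model[v] = (model[v] + observed) / 2   # tweak the model
              print(f"round {r}, {v}: predicted {prediction:.2f}, "
                    f"observed {observed:.2f}, validated={validated}")
      return model

  # Hidden 'ground truth' that the iterated model converges toward.
  harden(["blue_ui", "green_ui"], {"blue_ui": 0.7, "green_ui": 0.4})

Iterated long enough against real subjects, the model's accuracy, precision, and confidence become measurable quantities - which is exactly what 'hardening' means here.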
It is worth noting that expert confidence men, magicians, cold readers, martial artists, professional counselors, interior decorators, etc. are already impressively well trained in this sort of prediction... and that even untrained individuals have learned to predict much of this to a lesser degree simply through exposure. This suggests that much of 'soft' psychology can, in fact, readily be 'hardened'. Further, many of these individuals can work whole audiences with very little advance knowledge of their audience, which implies that variation among individuals either isn't all that important or that individuals can be readily pigeonholed (e.g. into 16 classes of people, as per MyersBriggsTypes).
Unfortunately, very few of them know the ins and outs of the models they've built in their heads (known from 'hard' psychology: brain structures are based on associative recognition and are awful at recall), and several classes of those that might know (magicians, confidence men, etc.) aren't sharing.
If we were to sufficiently 'harden' the models these users possess, especially those that don't depend heavily on knowledge or models of particular individuals, we could probably create something like a MindControlWithDerrenBrown-influenced HCI. Given a learning machine, we might be able to further usefully keep an ExplicitUserModel so that the computer knows what you find 'cool', and knows what you want to do next, and knows which subtle reminders will influence you into avoiding common classes of errors while programming.
Until it is sufficiently hardened, it will be difficult to directly take advantage of "soft" psychology by any automated mechanism. But it might be a tad unfair to argue that all approaches then are 'pure' speculation: we can, with some unknown-but-known-to-be-much-better-than-random probability, usefully take advantage of our 'informal' exposure-based training regarding how humans (as individuals, groups, and populations) will respond to a wide variety of stimuli. Such educated guesses are no replacement for metrics (and I 'anticipate' Top would be among the first to agree), but they are better than nothing. I.e. what we imagine people will find cool, people will likely find cool. We might be wrong, or we might fail to be competitive with the 'coolness' in competing products, but even if we're wrong we're probably closer to 'cool' than to 'white noise'.
Not that any of this diminishes the relevance of already-'hard' psychology or not-psychology properties like safety, security, optimization, correctness, etc. - though one might point out that those are in some ways already accounted for: a calculator program that gives bad results, crashes often, or takes several minutes to number-crunch '1+1' is, very decidedly, uncool.
So, for which fields does 'psychology' - especially this 'soft' psychology - matter? One obvious answer is: selling a UserInterface. I.e. 'coolness' sells. So does 'familiarity'. One can validly consider ProgrammingLanguages to be UserInterfaces between a programmer and a computer, and perhaps a 'cool', 'familiar' ProgrammingLanguage would sell better than one that is stuffy but 'better' by most objective means (more correct, safer, better optimized, more secure, better integrated concurrency and communications management, less verbose, catching more classes of typographical and logic errors, etc.).
While we can agree (assume) that soft psychology will likely influence popularity, what we cannot support is what Top proposes at the top of this page and on various others: that soft psychology somehow matters when it comes to the productivity of a UserInterface.
Hard psychology is a bit more forthcoming here. E.g. given a 99x99 matrix, we can assert with a great deal of well-researched confidence that the latency between highlighting a point in that matrix and selecting it is much lower when selecting by mouse than when navigating to it with arrow keys or entering the four characters specifying its location. This suggests that for latency-sensitive tasks, such as shooting virtual avatars in the head with virtual rocket-propelled grenades, a mouse is a better choice than a keyboard. OTOH, complex mouse gestures and patterns tend to become difficult for humans to get right... and so keyboard macros may be a better decision.
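For what it's worth, 'hard' psychology supplies actual numbers for this comparison. The paragraph above doesn't name a model, but the Keystroke-Level Model of Card, Moran, and Newell is the standard tool for exactly this kind of estimate. A back-of-envelope sketch in Python, using the commonly cited nominal operator times (the arrow-key press count assumes the cursor starts at the center of the matrix - an assumption, not a datum):

  # Keystroke-Level Model operator times (seconds), nominal values
  # from Card, Moran & Newell; real values vary by user and device.
  K = 0.28   # one keystroke (average skilled typist)
  P = 1.10   # point at a target with the mouse
  B = 0.10   # press/release a mouse button
  H = 0.40   # move hand between keyboard and mouse
  M = 1.35   # mental preparation

  # Mouse: home to the mouse, point at the cell, click.
  mouse = H + P + B                  # ~1.60 s

  # Typed coordinates: think, then type four digits plus Enter.
  typed = M + 5 * K                  # ~2.75 s

  # Arrow keys: think, then ~50 presses on average from the
  # center of a 99x99 matrix (an assumption for illustration).
  arrows = M + 50 * K                # ~15.35 s

  print(mouse, typed, arrows)

The exact figures vary by user and device, but the ordering - mouse, then typed coordinates, then arrow keys - is robust.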
'Productivity' is determined based on the need to accomplish particular goals, of course, and it could be that soft psychology is more significant to accomplishing some goals than others. However, determining which goals those might be, and estimates at the degree to which soft psychology is 'more' relevant than the hard-psychology and math/information-theory/computation-theory stuff, seems to be more 'speculation' than anything else. We can't rely on our 'informal' training for this because we are not naturally encouraged to distinguish those difficulties and accomplishments that arise from our subjective nature from those that are either objective or arise due to our own incompetence.
SovietShoeFactoryPrinciple of "Soft" Psychology
Top often harangues about the SovietShoeFactoryPrinciple being a problem associated uniquely with focus on 'objective' measurements. But consider for a moment where weight is given instead to 'soft' psychology measurements: subjectives, opinions, views of how people 'find' a product. Going far in this direction, it isn't difficult to imagine products that do nothing productive at all, but instead hypnotize, cajole, threaten, deceive, and impassion users into believing and promoting the product as good. Essentially, you'd be encouraging the technical equivalent of a culture of yes-men and sophists: don't be good; it's much easier to spend money on making people think you're good and your competition is worse. A positive impression from the boss is more important than a useful opinion or a valuable warning. Dressing well is more important than doing good work.
Ensuring my flame-proof undies are securely in place, I might go so far as to suggest that 'religion' seems to fall in this category. So would fad, fashion, and various other cult phenomena.
On the other hand, it's difficult to argue with success, even if it is achieved at great cost to society in resources and lives and freedoms all with no measurable benefit whatsoever...
Re: This is the view that outside of issues of machine performance (speed and resource usage), that "software engineering" is primarily about psychology and fitting the human mind(s) of the developers...
I second this. A nice short exposition of items that are correct. I agree that PsychologyMatters a lot and I wonder how it is possible to make such a fuss out of this (below). Maybe this could even be expanded into FromHardToSoftComputerScience. -- GunnarZarncke
See also MyersBriggsForProgrammers.
I guess fuzzy statements without support appeal to people who embrace logically inconsistent concepts like EverythingIsRelative. If you can embrace one contradiction on appeal alone, it is hardly a stretch to accept any idea as 'correct' based only on appeal rather than due consideration or evidence. None of Top's bulleted points makes a falsifiable prediction, and every single one of them starts by assuming the unsubstantiated. The only falsifiable prediction he makes is: "the better the fit, the better the productivity" - which, while appealing to people who want to do things their way, stands completely unproven. In this section, Top's just preaching his religion and wrapping it in slightly more ceremonial dressings than normal.
If you cannot objectively prove a benefit, even one supposedly derived of psychology, then you aren't in a position to rationally or reasonably claim that the benefit exists at all. In previous attempts to get Top to substantiate this position, he rails against the notion of "objectively proving psychology" - but that is exactly what he must be able to do if this claim is more than hot air.
By what authority? The psych view is a working hypothesis. It is at least as strong as the alternatives. DontComplainWithoutAlternatives. Where is your evidence for mind-independent betterment?
We couldn't even find objective evidence that nested blocks are better than gotos (ObjectiveEvidenceAgainstGotos). If there is no universal proof for the simple things, then complicated proofs, like thin tables or type-heaviness being objectively better, are not very likely. And if by chance EverythingIsRelative, and that makes objective proofs impossible, I am just the messenger. I did not build the universe, I only observe its properties. If the truth bothers you, see your therapist; don't whine to me and call me names.
"Better than" is always relative to a set of goals, and there certainly is ObjectiveEvidenceAgainstGotos when a set of non-subjective goals is stated - such as degrees-of-freedom when it comes to error. Your argument regarding 'type-heaviness' is completely non-sequitur, and EverythingIsRelative is absolutely false like any other logical paradox - it can't even be true "by chance". And the truth doesn't bother me... If you got off your high horse and began to methodically obtain and speak truth, I'd have no cause to whine. But you don't speak truth - instead, you attempt to educate others in your fallacy, and THAT bothers me. Your tendency to turn unsubstantiated hypotheses into working premises is a reasonable cause for objection - objection that you rudely call "whining". And I didn't call you any names, except 'Top' - a name you've chosen to be called; implying I did is quite rude.
I asked on this page for evidence. You've used 'psychology' repeatedly as a foundation for arguments on other pages, and I've asked you before on those other pages for evidence. Your pretending I called you names, focusing on guilt and blame, and otherwise "bickerfying" this page is your own decision - perhaps a mechanism for avoiding reasonable discussion. And if you want "pleasant little explorations of ideas", then write fiction... but the WikiWiki is not the place for it. Is 'PsychologyMatters' a fiction? Maybe. That's why I ask for evidence.
The evidence:
I will agree the claim is speculative, but let us see the evidence for the other side, that there is One True Way and that pure logic or math can find it....
I'm calling you on this blatant ShiftingTheBurdenOfProof and ArgumentumAdIgnorantiam. Nobody needs to prove the opposite to call bullshit on your unsubstantiated claims, and demanding evidence of the opposite as a defense of pure speculation is only something a fraud, a charlatan, or a troll would do.
In science the leading theory/hypothesis may be a weak one, but it's still the leading theory. To rank a theory, one has to look at the competitors. So far "psycho theory" may indeed be weak, but its competitors are at least as weak, if not more so. You appear to want to personalize this into a "burden of proof" issue focusing on individuals so that you can verbally spank them, because that gives you the most pleasure. Instead, let's look at the bigger picture and compare the competing theories. --top
Proper scientists don't promote weak theories as truth or assume them as premises in arguments - even leading theories. They seek evidence to confirm or disprove them. You seem to seek excuse and justification for avoiding BurdenOfProof while promoting pure speculation. Despite all your words and claims of favoring evidence, your actual behavior betrays your hypocrisy - you clearly don't concern yourself with evidence for your own views. I'd bet you'd make a better cult leader than a man of science.
Let's delay the issue of how sinister I allegedly am and just present the evidence for alternatives. I want to talk about software engineering, not about me.
I'm willing to delay the issue of how fraudulent you are when YOU present REAL evidence for YOUR CLAIM. Nobody here is claiming alternatives - except the default: "it hasn't been reasonably supported that PsychologyMatters in an objective way so it is irrational to assume it as a point or working premise in any argument". So nobody, except YOU, has a BurdenOfProof. Go find other pages where people are making explicit claims if you wish to demand evidence for them - asking for it here looks, to me, like hedging and sophistry. Asking me not to point out fallacy where I see it isn't a reasonable request.
Are you this annoying in real life, Lars? If you don't want to talk about the alternatives, just say so without turning it into a crime drama.
Check out Lars's personal wiki and make your own conclusions. Are you this much a fraud and troll in your business interactions, Top? When you go to a company with your ideas, do you demand they provide evidence to defend the opposite of what you suggest? That sort of shitty behavior isn't acceptable there OR on WikiWiki.
Where did I "demand"? You appear to be visualizing bad things in my writing that are not really there. Generally people sit down and discuss the pro's and con's of the alternative scenarios in a civilized way. This is perfectly normal. I'd like to explore the scenarios and ask that all sides put their evidence on the table so that we are all looking at the same thing so that we can compare and discuss. If this is being "bad", then I guess I am delusional and evil and brain-damaged all at the same time like you imply. Comparing and discussing is perfectly normal in my world as I perceive it. If there is a better alternative, I am not knowledgeable in it. From my perspective, you are either being overly sensitive, or have an agressive chip on your shoulder left over from a past discussion.
Pros and Cons are for plans, not theories. Theories need support absent contradiction. As far as where you keep ShiftingTheBurdenOfProof: "Where is your evidence for mind-independent betterment? And what's the opposite? Some single magic universal equation or type system? Where's the proof for it? let us see the evidence for the other side [...]". Theories don't benefit from compare and contrast sessions unless you're applying something like OccamsRazor to choose between two equally well supported theories... and that is clearly NOT a situation that applies in this PsychologyMatters page. Support your own theory.
Here's another viewpoint: Software is the developer's "user interface" into the software. It is analogous to an application user's interface to an application, say a spreadsheet. Most would agree that psychology plays a large role in user interface design. By this analogy (as far as it holds), then psychology matters for the developer's interface also. --top
Do you mean the model/language is the developer's user interface into the software? In any case, you say that psychology plays a role in "user interface" for applications - and it does, insofar as psychology drives user requirements - but I don't believe that's true when it comes to requirements for correctness and such (e.g. if you were writing a desktop calculator, psychology plays no role in deciding whether the numbers should add properly). The model/language must meet real and non-subjective developer requirements to meet user requirements - so, following the analogy, developer psychology would generally matter less than user psychology because there is already a set of requirements in place (driven by the user). That the analogy can go either way makes it much less valuable for exploration of viewpoints.
I thought we weren't considering the issue of wrong output here.
You determine the "best" solution relative to a set of cost, risk, and optimization requirements. Without a set of such requirements, there is no 'best'. Are you attempting to imply that PsychologyMatters, in some objective way, to determining these requirements? If so, I'm interested (as I've stated many times) in your evidence.
And similarly, I'm interested in your counter evidence or evidence for alternative explanations.
ShiftingTheBurdenOfProof. Again. I don't need to present counter-evidence for a claim that isn't substantiated. And I don't need to provide alternative explanations for 'observations' that haven't been provided or verified.
Nobody "needs" to do anything. Wiki is not a dictatorship. I'm just asking.
Asking questions in order to avoid providing answers isn't "just" asking. Now, will you provide cold, hard, objective evidence for this as-yet unsupported claim, 'PsychologyMatters', that you keep relying upon in your arguments? I'm just asking.
I don't have "cold hard objective" evidence. It is "soft" evidence. If that still bothers you, then see a therapist.
Speculation and faith don't qualify even as "soft" evidence.
This page seems to be yet more evidence that Top's view of software development is fundamentally at odds with that of a number of participants here. The latter seem to be computer scientists of one stripe or another, for whom theoretical and mathematical rigour take precedence. The former and his supporters (who are in short supply here, but no doubt can be found elsewhere) are pure practitioners who can be consistently relied upon to deliver working screens and reports today to solve a problem that was supposed to be finished yesterday -- whilst the theoreticians may still be debating what fundamental model is appropriate -- but the result will typically demonstrate the "kludginess", inefficiency and inflexibility of most ad-hoc implementations.
Not really. This page is yet more evidence that Top doesn't believe he requires evidence for his own claims. The other participant isn't promoting any theory, only demanding practical evidence for Top's pet theory - one that he has consistently applied in arguments but never substantiated.
It is indeed an age-old battle. But I do dismiss the notion that my solutions are inherently "kludgey". I strive for reasonably "clean" designs, and whenever they do get messy, I review the design to see if there is not a better solution. Often there is just EssentialComplexity caused by the domain itself. Forcing too much abstraction often makes a system inflexible because changes in requirements may not fit the overly-pruned abstraction. HelpersInsteadOfWrappers is a good example. Finding the right balance requires a "feel" for how things change and why they are the way they are. If the theory purists can show obvious advantages like less code or fewer change points, not the indirect roundabout obtuse metrics they are fond of, then I'd be happy to take a look. --top
Your solutions may not be "kludgey" at all. Indeed, you may be one of those practitioners for whom programming itself is a form of intuitive mathematical reasoning, subject to its own implicit rules and drives toward elegance. However, among practitioners who openly deprecate theory in favour of empirical results, non-kludgey results are arguably rare. That doesn't mean they don't get the job done, of course, but they'll make the pure theoreticians grind their teeth in frustration.
Perhaps "kludgey" and "elegance" are in the eye of the beholder.
PageAnchor: Natural_Laws
(extracted from above)
Re: ...And I don't really agree on the "natural laws" issue: all software also must obey the laws of physics in addition to artificial and often arbitrary limitations and interfaces imposed by the machine.
Outside of performance/resources, I disagree with the "natural laws" part. One could create a fictitious universe in software to achieve the desired goal. The results may have to resemble our universe to some extent in order for the user to relate to them, but the internals don't have to. Same with math.
Software written in such an 'artificial' universe doesn't escape the need to obey the laws of our own universe. We can't create an artificial universe capable of reversing the flow of time, delivering a message before it is sent, or deriving more information out of a signal than allowed by entropy. Instead, software of the sort you describe must simultaneously obey two sets of laws: the laws of our universe (including those of physics, logic, information, computation, and complexity) PLUS the laws of this artificial universe. I.e. the efforts of constructing an artificial universe have, at best, only introduced extra constraints and requirements to follow - that is, above and beyond what the user demands.
Constraints are features when well chosen for acceptable reasons (optimization, security, correctness verification, modularity, reflection/debug-ability, protocols and interoperability, flexibility when comes time to make changes, etc.). Or, more accurately, constraints are a necessary price we pay for useful features. So there is merit in the notion of creating a 'universe' within software that constrains certain behaviors... indeed, programming languages (other than assembler), and especially their runtimes, engines, or virtual machines, and frameworks, would all seem to qualify. Of course, poorly chosen constraints can exact a price without providing any associated benefit, so one ought to choose wisely.
In any case, regardless of the potential benefits of constrained artificial universes, you won't be able to bend or break the laws of ours. You say: "the results may have to resemble our universe to some extent in order for the user to relate to it, but the internals don't have to be", but the truth is that the internals must and will adhere to a subset of what is possible in our universe, and thus will also (in that manner) resemble our universe. A truly fictitious universe rarely has such constraints... magic, time travel, etc. are all possible. Software doesn't give us god-like powers, and doesn't allow us to create fictitious universes. A software universe might be 'artificial', like a toy box, but it is still a real one.
I'm not sure what you mean by the last part. Logic and math may not be inherently tied to our universe. Cosmologists can and do model alternative universes. (True, they usually leave some parts the same as ours.) As far as "can't make time flow backward", if one can create an algorithm for it, it can indeed be done. Hell, I make it move backward by sliding the position bar leftward on some AVI movie players (doesn't work on MS's). An operating system or a database is its own little universe, I would note.
Please don't start waving your hands and inventing false devil's advocate arguments that even you don't agree with. Sure, "if one could create it, it could indeed be done" is (trivially) true (and utterly vacuous). But even you know that you don't make time flow backwards in the program or its model by shifting the position bar on an AVI movie player - you only deliver a signal indicating that you wish to review in the immediate future an audio-visual signal that has played in the immediate past. Those concepts aren't at all the same. You can't create an algorithm to make time flow backward - laws of our universe prevent it. The closest we get in practice is working with ACID transactions, and those are rather limited in scope.
As far as logic being "tied to our universe": keep in mind that logic, and accepted laws of logic (axioms, law of excluded middle, etc.) prove themselves for valid use in our universe by the exact same mechanism as every other theory. I.e. under the hypothesis they are true, and given a set of knowns about our universe, these axioms are utilized to make falsifiable predictions, which are then verified or falsified. Given a long period of use, those that survive become 'laws'. So, while it may be the case that logic (in general) is not tied to our universe, our universe certainly seems to be tied to a particular logic. You might consider physics in the same sense: is physics (in general) tied to our universe? alternative rules of physics (within limitations of computation) can certainly be modeled in the same sense as alternative logics. I expect it more accurate to assert that our universe is tied to a particular physics. Based on human observations thus far, the rules of our universe (be they for logic or physics) seem to be particular, not general and encompassing of all possibilities.
Anyhow, it is my impression that you are focused on simulations of alternative universes. Keep in mind that the software for these simulations is what can't violate the laws of our universe. I.e. the software is written in our universe, runs in our universe, obeys the rules of our universe, and is incapable of violating them. Further, software can't take advantage of alternative 'laws' or magic of any simulated universe because it must still run in our universe. Where you say "one could create a fictitious universe in software to achieve the desired goal," the truth is the opposite: a fictitious universe can only constrain your approach in achieving a desired goal. Constraints can be good and helpful (buying features, helping programmers stay on task, etc.), but they never achieve a solution on their own... and they are rarely 'good' when created for an artificial universe. The best that can happen is that these constraints help guarantee some features. The worst that can happen is that these constraints are just loose enough that you continue pushing your way forward against continuous resistance until you hit some unforeseen dead-end (this, as anyone who has had a ScreamLoudlyBangHeadRepeatedly moment with a framework would tell you, being much worse than the constraints being so tight as to force immediate dismissal).
I never meant that simulations could bend the rules of our "real" universe. Thus, a simulation cannot make time flow backward *outside* the simulation.
Correct. But a simulation written in software ALSO cannot make or take advantage of time flowing backward *inside* the simulation. Nor can you do or benefit from other things impossible in our universe, such as deliver a message from one agent (inside the simulation) to another agent (also inside the simulation) before that message is sent.
The existential nature of where our universe and its rules come from aren't particularly relevant. And I'll state it again, outright - there is nothing software can do that will break a rule of our universe, and it is impossible to take advantage of any 'magic' or rules that would violate our own. From our perspective, we can only be constrained. The ability to "create a fictitious universe in software to achieve the desired goal" does not exist in this universe.
Well, it's a matter of semantics surrounding "achieve the desired goal". But we seem to be wandering away from the relevant issue. To produce a result (computation), we are not restrained by the laws of this universe. One potential example is imaginary numbers used in electronics. They are a UsefulLie, a shortcut to a computation. A slide-rule is another UsefulLie that takes advantage of lower-level operations acting like higher-level operations when used on "compressed" scales (such as logarithms). Or, in a relational database we create "tables" even though those tables may not really exist in the real world. They are just an artificial conceptual model, even if they can produce results usable in the real world. These conceptual models are potentially wide open. OOP, relational tables, type systems, functions, etc. are all internal conceptual models. They are a "dummy little universe" for systems designers and programmers.
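To make the imaginary-number example concrete: in AC circuit analysis, phasor arithmetic with complex numbers replaces solving a differential equation - a pure computational shortcut, with nothing 'imaginary' surviving into the measurable answer. A small Python sketch (the component values are arbitrary illustrations):

  # Phasor arithmetic: complex numbers as a computational shortcut
  # for AC circuit analysis, replacing a differential equation.
  import cmath, math

  f = 60.0                     # supply frequency (Hz)
  w = 2 * math.pi * f          # angular frequency (rad/s)
  R, L, C = 100.0, 0.5, 10e-6  # ohms, henries, farads

  # Series RLC impedance as a single complex number.
  Z = R + 1j * w * L + 1 / (1j * w * C)
  V = 120.0                    # RMS supply voltage, taken as phase 0
  I = V / Z                    # one complex division does the work

  print(f"|Z| = {abs(Z):.1f} ohms at {math.degrees(cmath.phase(Z)):.1f} deg")
  print(f"|I| = {abs(I):.3f} A at {math.degrees(cmath.phase(I)):.1f} deg")

The intermediate complex values are fictions of the model; only the magnitudes and phases at the end need to correspond to anything measurable in the real world.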
I suspect what you mean regarding "not restrained by the laws of this universe" has almost nothing to do with the ability to violate the laws of this universe. In the important, literal sense we ARE restrained by the natural laws of this universe - we can't write software that violates natural law. This is especially true when comes time "to produce a result", which, from your own SimulationRelationshipToParent page, would clearly violate the 'Ability Leakage' rule. I.e. if you CAN violate the laws of this universe inside the simulation, then you CAN'T obtain a result from it - you can't take advantage of it. Your examples of imaginary numbers and slide-rules, and the algebras and mechanisms guiding their use, certainly don't violate any natural laws. Oh, and in a relational database, any 'table' we create has a physical representation in the real world - they do "really be in the real world".
Our models often have "things" that correspond more or less to the "real world", but that is because modeling the real-world with abstractions has generally proven useful, not a necessity. However, that still does not force us to model the real world. And "violating laws" may not necessarily be the goal. Imaginary numbers may not violate any laws of this universe, but they also don't model the real world. "Violating" is not really the important goal here. They are a "clever shortcut". If one finds that making anti-gravity in a model serves a purpose, they may use it similar to how imaginary numbers are used. (Perhaps our thinking is clouded by trying to stay too close to the "real world" out of comfort or habit.) Related replies in the "to parent" topic. --top
To the contrary, modeling the real-world with abstractions is a necessity. Our brains lack the computational power and memory, and our senses lack sufficient capability, to process raw sensory data without constructing abstractions atop them. We infer abstractions from sensory inputs, and we call that reality: keyboards, typing, clouds, cars, numbers of cars, numbers, even 'imaginary' numbers are all equally 'real' and objective when derived via induction and abduction from sensory inputs. Fundamentally, they must be equally real because they are all the exact same sort of thing: patterns with predictable properties inferred over sensory inputs and memory. And language, which communicates in abstractions, ties it all together. To say I'm 'typing' this 'reply' on a 'keyboard' is a 'truth', not a UsefulLie - I would not be a liar for saying so, but I would be a liar for saying otherwise. You seriously overestimate your own capacity for thought and communication if you believe abstractions are unnecessary. Sensory neurons fire in response to light, heat, pressure, agitation, and chemical stimuli. Beyond that, everything our senses tell us is inferred abstraction.
In answer: imaginary numbers model the real world just as much as any other sort of number or color or object or verb. They need only be attached to the real world through our senses via some repeatable derivation.
Here's what I'd readily concede to, regarding this "natural laws" discussion:
There isn't even one natural law that software can break, but some natural laws that often constrain physical systems rarely constrain software systems. OTOH, physics laws related to signal and information theory tend to constrain software systems more than physical ones... there aren't very many bridges constrained in their design by the laws observed in the development of information theory.
Some of the problems encountered by physical systems that software neatly avoids are very expensive ones related to fabrication, storage, transport, and duplication. These aren't 'necessary' problems imposed by natural laws (Feynman and other physical scientists well above my caliber insist that there is no known constraint preventing nanolithography or the Star Trek style object synthesizers). But they're real problems in today's world. As such, they're far more analogous to (in software engineering) the arbitrary limitations imposed by the machine, the language, the OperatingSystem, and other frameworks. One might assume that software developers have an easier time designing around these arbitrary limitations than do mechanical designers, making the internal software be anything they want it to be, and they'd be right - to a degree. Anyone who has wrestled with a language, a framework, or an operating system (all of which are more or less the same thing) and lost won't hesitate to tell you differently, and most software developers won't even try. "Our internal models can be just about anything we want as long as they produce correct answers" applies equally to any design project, be it for software or bridges or electrical transformers... but not just "anything we want" will "produce correct answers".
Physical constraints and logical constraints both can cause headaches, but this does not mean they are the same thing. Creating an internally consistent model that acts how we expect is indeed a difficult task. Related: SimulationRelationshipToParent
Infinitely-Powerful Brain Thought Experiment
An infinitely smart "brain" could write spaghetti machine code, read spaghetti machine code, and change spaghetti machine code to fit new requirements very fast. Computers executing code don't give a flying flip about how it's organized (assuming speed is not the overriding concern). If the machine doesn't care, then what does care and why? We use higher-level languages and abstraction because we don't have infinitely-powerful brains. The code must "communicate" with our limited brains. That's the main purpose of higher-level code. It has very little to do with making machines happier.
The "power" of a brain is studied as part of "hard" psychology... e.g. how fast to solve a problem, how many things can be remembered at once, average time between errors while writing under duress, etc.. Appealing to it as part of PsychologyMatters is frivolous because "PsychologyMatters" has only been contested in the context of "soft" psychology.
That limitation applies equally to the SufficientlySmartCompiler and the human programmer, and that's even before considering information limits regarding black-box compositions, incomplete requirements, and runtime input. Higher level languages allow us and our language interpreter/compiler/optimizer/linker to make useful assertions, assumptions, and predictions about properties of code, its composition, and future changes. I'm not going to anthropomorphize and say this makes the machine 'happy', but it certainly helps 'satisfy' some very real limitations.
Well, I am going to anthropomorphize in a way. It could be argued that compilers/interpreters (C/I) "like" certain constructs in order to analyze and optimize more effectively, and the same for humans trying to grok code, which is more or less trying to predict how it behaves at run-time, similar to what the C/I is trying to do. Thus, we have a lot in common. The main difference is that C/I's can be completely redone if we don't like the approach we took, but not so much the human mind. We are more or less stuck with primate wiring, the psychology of the human species.
Not really. Humans are perfectly capable of learning skills that weren't wired into them from the start. I was just reading the other day about humans learning to see with their tongues (http://discovermagazine.com/2003/jun/feattongue). And the relative cost of teaching such skills is almost invariably smaller than the cost of changing the languages or frameworks late in a project. You overestimate the flexibility of C/I and underestimate the flexibility of humans.
This brings us back to the question: Is it more economical to bend tools to fit the mind, or vice versa (MindOverhaulEconomics)? Compilers/interpreters (C/I) that study code for optimization hints have a "psychology" also, and you seem to be agreeing that this artificial psychology matters. This would imply that natural psychology also matters.
Algorithms and techniques (what you might call artificial skills) matter. Skills matter to humans, too. Processing power and memory and sensory devices and output devices and information are necessary to learn and utilize these skills. Languages and protocol and restricting behavior are mechanisms to allow information (in the form of assumptions, assertions, and predictions) in the face of RicesTheorem, GoedelsIncompletenessTheorem, black-box composition, runtime inputs, and incomplete requirements. But your attempt to equate skills to psychology is a fallacy on your part. Psychology is not a study of chess, driving, or the skills for optimizing code. At most, psychology would study what happens in a human brain to learn and apply these skills, and the differences between a neophyte and an expert brain.
If we want really fast C/I's, we need to design the language such that the C/I can study it easily. Similarly, if we want humans to be able to study patterns and constructs in order to mentally simulate/predict code run-time behavior (including mental code modification candidates to fit new requirements), we need to design the language to fit the way humans study it most effectively.
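As a toy illustration of 'design the language such that the C/I can study it easily', consider constant folding: it is only possible because the arithmetic constructs are fully transparent to static analysis. A sketch over Python's ast module (the expression being folded is arbitrary; a construct with hidden side effects would defeat the same analysis):

  # Constant folding: an optimization enabled by analyzable constructs.
  import ast

  class Folder(ast.NodeTransformer):
      def visit_BinOp(self, node):
          self.generic_visit(node)       # fold children first
          if (isinstance(node.left, ast.Constant)
                  and isinstance(node.right, ast.Constant)
                  and isinstance(node.op, (ast.Add, ast.Mult))):
              op = {ast.Add: lambda a, b: a + b,
                    ast.Mult: lambda a, b: a * b}[type(node.op)]
              folded = ast.Constant(op(node.left.value, node.right.value))
              return ast.copy_location(folded, node)
          return node

  tree = Folder().visit(ast.parse("y = 60 * 60 * 24 + x"))
  print(ast.unparse(ast.fix_missing_locations(tree)))  # y = 86400 + x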
Also, "neophyte" versus "expert" is only part of the issue. There are inherent differences between the way different people extract and process information. See PhysiologicalAndPerceptualFactors for examples.
If SapirWhorfHypothesis is valid (and it seems to be holding against tests) then it is more likely that the language itself is the tool with which the humans will be examining the problems to be solved. If so, the goal is not to design a language to "fit" the way humans think, but rather to design a language that helps humans think by allowing them to make useful predictions and assumptions. Further, we wish to embed the language in an IDE that can analyze code and help the humans write. I posit that the exact same features help both C/Is and humans, doubly so in the context of an IDE.
Suppose we designed a language so that it is readily speed-optimizable using the set of techniques, call it "A", known at the time. So we build this language around technology set "A" and get a pretty fast language. However, new techniques are discovered or new technologies come along. For example, multiple CPUs are more common, so we can use parallel processing more than before. Let's call this "technology set B". However, it may require a very different language organization to take advantage of B than it did A. In A we focused on pipeline management, but now we have more independent pipelines at our disposal. Excess pipeline management may even get in the way of forking portions off because we had to pre-commit a given pipe to certain predicted behavior, making it more difficult to re-assign a task to a different pipe on short notice. The "mechanical psychology" in the C/I that our super-fast language project needs to target has changed. Mechanical psychology mattered here. There is no "one right way" to make a language for C/I optimization speed. It depends on the "personality" of the hardware and the optimization technology available.
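A minimal sketch of the A-versus-B shift, in Python: the same computation organized as one sequential stream versus forked across CPUs. The workload and names are made up; the point is that set B only becomes available once the work is organized as independent, side-effect-free units - a different code organization, just as the paragraph above suggests.

  # Same answers, two organizations: sequential ("set A") vs.
  # forked across processes ("set B"). Workload is hypothetical.
  from multiprocessing import Pool

  def crunch(n):
      # Stand-in for an expensive, independent unit of work.
      return sum(i * i for i in range(n))

  if __name__ == "__main__":
      jobs = [200_000] * 8
      serial = [crunch(n) for n in jobs]      # set A: one pipe
      with Pool() as pool:                    # set B: many pipes
          parallel = pool.map(crunch, jobs)
      assert serial == parallel               # same results either way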
I have not posited that there is "one right way", but I will say "there are a great many wrong ways" and "there are 'worse' ways". If you don't believe there are 'worse ways', just take any design and add arbitrary AccidentalComplexity (gotchas and restrictions and funky behaviors on Tuesday afternoons) just for the hell of it.
If there are worse ways to do things, there are likely also to be better ways (unless we have found an optimal 'best' way... but how often does that happen?). These better ways are unlikely to be low-hanging fruit in any mature language or framework. Stronger or more widespread support for CrossCuttingConcerns and NonFunctionalRequirements (introducing concurrency in your example) often requires a different language and code organization exactly because such overhauls haven't yet been performed to reach the 'high-hanging fruit'. And, after doing so, B likely can still do everything A ever did (in terms of functional requirements). The overhaul doesn't even imply a performance loss in the non-concurrent case. If B is not worse when it comes to performance and other non-functional properties at doing what A did, I'd call B's additional support for concurrency to be "net better".
And while learning to use this new language for the case solved by "A" may involve some rework for the human that (by happenstance) is skilled in technology A, we have no valid reason to believe 'psychology' will somehow make learning and using technology B just for the case solved by A significantly (in the statistical sense) more difficult than learning and using technology A to solve the problem solved by A. I.e. there is no reason to believe that PsychologyMatters here after the other variables (the problem being solved and the relative quality of the techniques for solving that specific problem) are controlled. Since this example was intended to demonstrate that PsychologyMatters, you need to provide a reason to believe it matters here.