Software design and physical science (Newtonian and quantum physics, chemistry) are two different animals. As described under LaynesLaw, software can transcend physical limitations. We now have the ability to make a world in software with almost any rules we want, as long as we solve the problem at hand (right output). If removing gravity makes the job easier, do it! We are thus not bound by physics in software (or at least less and less bound over time, thanks to MooresLaw and related technical improvements). The "science" of software may not have to be hard, unlike physical science. We are stuck with the laws of physics, but we can bend the laws of software in many ways as needed. It's somewhat like using Newtonian physics when modeling quantum physics would be overkill, except that we don't even need to approximate nature if we can find a better model for our goal. StringTheory makes a crappy assembly language for us humans to work with (except maybe Stephen Hawking). But we can play God with software and change the rules, as long as the output is correct. -- top
If software gives us God-like powers, then perhaps I'll go write an application that solves the HaltingProblem, cures cancer, brings about world peace, deposits (legally) a couple million in my checking account, and causes <insert annoying pop-star here> to develop a permanent case of laryngitis. This is a bunch of pseudo-scientific claptrap, Top. Software is most certainly bound by well-known limitations, which have been demonstrated via axiomatic proof. But once again, when confronted with real evidence that your suggestions are so much crapola, you sing and dance, invoke LaynesLaw, and make the asinine suggestion that little in our discipline can be said with any certainty - therefore, anything goes, including any unsubstantiated claim you happen to make.
Obviously not. Being inside the virtual world puts a very finite bound on the rules. You can't simply hack into mankind's mainframe... yet.
Just because we can play God does not necessarily mean we are smart Gods with limitless knowledge and wisdom. A chimp given a "launch nuke" button is one extreme example. In other words, power and wisdom are orthogonal. Technology gives us this power, but not necessarily wisdom. Nor did I imply that software's power is open-ended. It is simply transcending the physical world more and more over time. But it has not reached infinite ability by any stretch. Maybe "weak god" is a better description. "Levels" might look something like:
Oh, wonderful. TopMind has ventured beyond PseudoScience into PseudoTheology?, inventing hierarchies of deities to try and find a nail to hang his argument on. {continued below}
[Not a hierarchy, but a set of deities with relations to various "wisdom" and "power" attributes no doubt. LimitationsOfHierarchies?. Nyuk Nyuk, just teasing top :P]
Software is moving into the second level. -- top
Software, like any other science/art/craft/profession/engineering discipline/whatever, has utterly little to do with religion. This analogy is so laughably bad, that I don't really know where to start. Maybe software is like PhilPrinceOfInsufficientLight, rather than a full-blown demon.
I think you are missing my point. I am suggesting that traditional science is failing to help us with software issues (DisciplineEnvy) because software is not bound by nearly as many constraints. I notice that in many debates somebody will say something like, "you can't do it that way because it would run too slow". I then ask whether, if speed were not an issue, it would change their viewpoint. At that point they seem to lose interest in the topic for some reason, perhaps because it is no longer dealing with the here-and-now but with theoretical ether-land. Too many of our definitions and viewpoints seem based on the limits of our physical machines instead of what software could potentially do if cycles were much cheaper. For example, if I suggested an "expanding map" that would allow more than one value per "cell" (essentially a kind of MinimalTable), the most likely reaction would probably be, "But then traditional maps would run too slow." It is a "dumb idea" because it "runs slow". But that is a machine limit, not a language or software limit. I like to factor out speed to explore the real limits of software. -- top
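The "expanding map" above can be sketched in a few lines. This is a hypothetical illustration only - the class name and methods are invented here to show the idea of a cell that expands rather than overwrites, not any established MinimalTable implementation:

```python
from collections import defaultdict

class ExpandingMap:
    """A map whose cells can hold more than one value - a sketch of
    the hypothetical MinimalTable-like structure described above."""

    def __init__(self):
        self._cells = defaultdict(list)

    def put(self, key, value):
        # Unlike a traditional map, a second put on the same key
        # expands the cell instead of overwriting the old value.
        self._cells[key].append(value)

    def get(self, key):
        # Return the list of values in the cell (empty if absent).
        return list(self._cells.get(key, []))

m = ExpandingMap()
m.put("color", "red")
m.put("color", "blue")
print(m.get("color"))  # ['red', 'blue']
```

Whether such a structure "runs too slow" is exactly the machine-limit question: the interface is no harder to define than an ordinary map's.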
<RANT long-winded="true" meandering="definitely">
Again, BullShit?. ComputerScience - the parts of that which admit axiomatic proof or falsifiable empirical evidence - is alive and well. The RelationalModel, of course, is founded on it (and you are fond of claiming other paradigms aren't and thus should be given less consideration). The biggest problem I see is that many people - for various reasons - choose to ignore the science. I may piss a few folks off here (which is a GoodThing, all considered), but many in certain noisy programming communities like to ignore science and extrapolate their own personal preferences to scientific doctrine, just because they think they're clever. But that is not science. The fellows whom we often refer you to - JohnReynolds, BenjaminPierce, etc. - are engaging in science. If you would read them, you'd find what they say interesting.
The other problem I see here is that many people - yourself included - confuse science with engineering. The selection of an appropriate data structure or algorithm for a given problem, assuming correctness is satisfied, is an engineering problem, not a problem of pure science. No good ComputerScientist would utter a claim like "tables are better than all other data structures" - such a claim is inherently unscientific and untestable, especially without qualification. No good SoftwareEngineer would make that claim either; analyses of what is good and what is not always have to be made in the context of the problem at hand, although some technologies admit themselves to more problems than others. That would be like a civil engineer saying that all bridges should be built out of Georgia pine - many bridges might be better served with concrete and steel.
Of course, many here on Wiki ought to be offended by that remark - including TopMind - as pronouncements like "language/technology X is the best language in the world" seem to be popular here. For Top, it's the RDBMS and the underlying RelationalModel. For others, it's SmalltalkLanguage or CommonLisp or Erlang or Haskell - maybe even C++.
I started the page ComputerScienceIsaSoftScience (which shares quite a bit with DisciplineEnvy). I've come to modulate my view on this a bit; the science part of our discipline is in fine shape. It's the engineering part that is for the most part shoddy, and tends to ignore the underlying science. Were a fellow like TopMind (or even a brighter light such as a RichardGabriel) to burst into a civil engineering convention and make pronouncements about civil engineering topics similar in nature to what Top and RG (and numerous others) make about software, they'd be laughed out of the fucking room.
-- ScottJohnson
</RANT>
UnBullShit?. For one, I have never claimed that relational is objectively or scientifically better. For all the alleged duplication I am accused of, you should know my position on that by now. I just lobby for my pet paradigms and techniques the way that Lispers and OO'ers do.
And on those grounds, yer just as obnoxious as the SmugLispWeenies. Only not as clever.
One wonders why that is.
Keep in mind, Costin and I and a few others here are not ObjectWeenies who want to replace OracleDatabase with GemStone or PrevaLayer? just because OO-whatever gives us a hard-on. We (at least I) object to all nitwits who think everything (or most things) should be done in a particular way. Costin, I'd wager, knows far more about relational than either you or I do; he is decidedly not in the pro-OO camp, and yet he thinks you're nuts.
Type theory, SetTheory, and BooleanAlgebra are also techniques. BooleanAlgebra is not "science" either.
They are scientific theories: axiomatic systems which have been shown to be consistent, and which have also been shown to be useful.
These techniques developed from somewhat simple premises (givens), and conventions and patterns (maths) are built on top of them. They are like spoons, forks, and knives. They have proven useful but may not be the only game in town (chopsticks, sporks). Aliens may have eating utensils that we have yet to think of.
And such aliens are welcome to publish the details. There is no conspiracy of TypeTheoryists thwarting publication of well-thought-out theories which might compete with TypeTheory for applicability. Conjecturing about the existence of things, however, doesn't an argument make.
Plus, one can probably still build software without BooleanAlgebra etc. It is not a requirement.
Well, one could always program in MalbolgeLanguage, which uses base-3 logic. (I forgot - SQL provides that already - see NullConsideredHarmful. :). Coming back to practical ComputerScience, you pretty much need BooleanAlgebra 'cause that's how the underlying machine works. But use of base-3 logic, or base-75321 logic, wouldn't give you any advantage in computational power; AlanTuring himself proved that much many years ago.
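The SQL aside is worth unpacking: SQL's NULL really does give its predicates a three-valued logic (true, false, unknown), which is why `x = NULL` never evaluates to true. A minimal sketch of that logic, using Python's None to stand in for "unknown" (the function names are invented for illustration):

```python
# SQL-style three-valued logic: True, False, and None ("unknown").
def tri_and(a, b):
    # False dominates AND; otherwise any unknown makes the result unknown.
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def tri_or(a, b):
    # True dominates OR; otherwise any unknown makes the result unknown.
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

print(tri_and(True, None))  # None - unknown, like SQL's "x = NULL"
print(tri_or(True, None))   # True
```

As the text notes, none of this buys any extra computational power over plain BooleanAlgebra - it is a convenience (or hazard, per NullConsideredHarmful), not a stronger machine.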
Then why the diversion into discussing what level of "god" software is at?
Bridge building has a lot of potential objective metrics. Software has almost none. Code size (LinesOfCode) is probably the most objective (measurable), but even that is subject to subjective counting rules. We have speed (performance), and that is about it. A fallen bridge is a fallen bridge, but we can't rule out that BrainfuckLanguage is the most productive language for some people/alien. -- top
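The "subjective counting rules" point is easy to demonstrate: the same snippet yields different LinesOfCode figures depending on whether you count physical lines, non-blank lines, or non-comment lines. A small sketch (the sample source and counting conventions are illustrative only):

```python
# The same code, measured under three different LOC conventions.
source = """\
# compute total
total = 0
for x in [1, 2, 3]:
    total += x   # accumulate

print(total)
"""

lines = source.splitlines()
physical = len(lines)
nonblank = len([l for l in lines if l.strip()])
# "Logical" here: non-blank lines that aren't pure comments.
logical = len([l for l in lines if l.strip() and not l.strip().startswith("#")])

print(physical, nonblank, logical)  # 6 5 4
```

Three different "objective" sizes for one snippet - the metric is measurable, but the counting rule is a judgment call.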
I think we can safely state that BrainfuckLanguage is not productive for most programmers. And were someone to do the study, I think that it could be demonstrated scientifically. The problem with this sort of science is not that it isn't possible, but that it's statistical in nature (and statistically valid surveys are often time-consuming and expensive).
Stepping out of ComputerScience and into the realm of politics, there was an interesting study done recently. Several groups of individuals were presented with a fictional (the participants knew it was fiction) account of a war crime perpetrated by US troops in Iraq. In one group, the account was backed up with lots of hard "evidence" - photographs, witness testimony, etc. In the other group, there was no evidence; the account was an unsubstantiated rumor. The participants were then asked how they felt about what had happened. They were also quizzed about their political leanings and other socioeconomic data.
What the researchers found was significant, but not surprising: There was almost no correlation between the subjects' opinion and the amount of evidence presented. There was a high correlation between their opinion on the matter, and their political beliefs. The Republicans thought that the American soldiers in question were being railroaded, no matter how damning the evidence; and the Democrats thought that they were guilty of the deed, no matter how flimsy the evidence.
While it may be a stretch to extrapolate this result back from the realm of politics to science, I think the same principle applies. People believe what they want to believe, and they include or exclude evidence based on their pre-existing beliefs. Some do this more than others, obviously.
Well, if the software world is wide open, then you can believe what you want to believe and make a little world however you want it to solve the problem at hand in your way. There is no objective reality in software other than the rules you choose to make. Politics deals with the external world; software does not, other than the input and output. What is in between is open-ended. One could do computing by creating a "right wing" world in which computations are done by hunting down and jailing evil liberals. (Ann Coulter OS?) Efficiency aside, such a world could be TuringComplete, and thus solve any problem any other paradigm or language can. We already use a slightly milder form of such processing-intensive virtual realities to perform work: GUIs. The buttons and pretty icons are not needed to perform the necessary computations, but are instead there to help humans relate to the thing. -- top
But you're missing the point. (Not to mention once again hypothesizing rather bizarre worlds in a seeming attempt to prove some point)....
My point was that an externally invalid world view may not necessarily be internally invalid. This has big ramifications. People believing what they want to believe in the external world is different from the same thing in an internal world. One can be "wrong" in the internal world without the same ramifications, as long as their internal world is consistent (follows its own rules) and as long as the output to the external world is correct. GUIs are inefficient in many ways, but make (most) people comfortable. That is why we use them. Similarly, a bad, stupid, naive, etc. external world view can perhaps be turned into a workable internal view (software world) that does real work. How is an Ann Coulter interface conceptually different than a GUI interface?
You seem to think that because a discipline isn't subject to axiomatic proof or "hard" scientific claims, that means that no (authoritative) claims can be made about it at all. And that therefore, all scientific claims are merely "opinions" and that your opinion - however uninformed or pulled-out-of-your-ass - is as valid as mine or that of DonKnuth.
Yeah, it's this kind of crap that makes the job of any user interface designer, or web designer in general, so miserable. Human behavior is a lot more than opinions and gut feeling.
Were that to be true, then many fields of study would cease to be effective, including psychology. Yet psychology has seen tremendous progress in the past 100 years since Freud laid down the foundations of the discipline (including the repudiation of most of Freud's work). Likewise with medicine, economics, and numerous other "soft" fields; this despite being "soft" and tainted with political/moral considerations. "Objective reality" is not a prerequisite for scientific inquiry; the suggestion that it is (and that it doesn't exist, so therefore neither does science) is the grand lie of PostModernism. The ScientificMethod handles soft science just fine.
Re: "Objective reality" is not a prerequisite for scientific inquiry;
Really. To clarify, many in the PoMo camp claim that because scientific inquiry requires scientists to do the inquiring, that even the hardest science is influenced by human factors - and that therefore, the conclusions of science are always suspect. In other words, they essentially make the claim that a tree falling in the woods with nobody to hear it doesn't make a sound. But the ScientificMethod, by its use of independently repeatable and falsifiable experiment, provides a filter for all of this. Don't like the results of an experiment; or think the experimenter induced bias or error (intentionally or otherwise)? Run the experiment yourself. Analyze the methodology of the original experiment.
Psychology and economics are generally based on models. These models sometimes reflect actual reality and sometimes don't. They are both immature disciplines. Economists still argue over the primary cause of the Great Depression even though they have had plenty of time to analyze it, and the world was simpler back then.
Physics and chemistry are also based on models, though the models in the harder sciences tend to fare better when experimentally compared with reality. Models are necessary in order to make predictions, which is one of the reasons (some say the reason) that we do science in the first place. At any rate, all scientific disciplines adapt their models as new evidence requires. The sure sign of PseudoScience is a failure or refusal to do so.
Psychology often relies on anecdotes because there is nothing better to replace it.
Horseshit. Don't confuse the "psychology" peddled in liberal arts classes (or by armchair politicians) with the scientific discipline; the two have nothing to do with each other. Real psychology is a clinical science much like medicine. Anything else isn't real psychology.
But it is mostly just measuring customer satisfaction. IOW, ArgumentByVotes?. Psychologists gauge how the patient is doing, or claims to be doing, and write that down. Then statistical sampling is done to see if technique X resulted in better satisfaction grades. But I don't see much analogy to that in software engineering, other than "what technique do you like the best?" type questions (MyFavouriteProgrammingLanguage).
That ain't psychology. Or science.
I think the problem with "soft sciences" like psychology and economics is one or more of:
Examples
One of my favorite examples is library book searching. The physical nature of books and card catalogs limited the searches to mostly hierarchical taxonomies and perhaps author.
Unless you "denormalized the schema", which in the physical world means keeping multiple card catalogs, each sorted in a different order.
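In software, keeping those multiple "card catalogs" is cheap: several indexes over the same set of books, built on demand. A minimal sketch (the book data and field names are invented for illustration):

```python
# Several "card catalogs" (indexes) over one physical set of books.
books = [
    {"title": "Dune", "author": "Herbert", "subject": "sci-fi"},
    {"title": "Emma", "author": "Austen", "subject": "romance"},
    {"title": "Hyperion", "author": "Simmons", "subject": "sci-fi"},
]

def index_by(field):
    # Build one "catalog" keyed on the given field; no card-copying
    # or re-filing is needed, unlike the physical card-catalog world.
    catalog = {}
    for book in books:
        catalog.setdefault(book[field], []).append(book["title"])
    return catalog

by_author = index_by("author")
by_subject = index_by("subject")

print(by_subject["sci-fi"])  # ['Dune', 'Hyperion']
```

The physical world charges for each extra sort order (more cabinets, more filing labor); software charges almost nothing, which is the transcending-physical-limits point above.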
But that's not a "god-like power" by any stretch of the imagination.
To somebody in the 1800's it might be. Columbus's ability to predict eclipses was "god-like" to the native Americans at that time. But he was just using math that was not known to the native Americans. Things are often god-like until you know how they work. A notebook computer might similarly convince Columbus. Perhaps our "God" is really some geek in a torn T-shirt running our universe as a simulation on a trillion-core PC using 4D chips in his 4D realm. He could perform (insert) supernatural acts into our universe like 40-day floods, parting seas, and talking bushes. However, once we get to see that he's just a geek with a big computer, he loses his "God" persona to us (although we may still take care to be nice to him for obvious reasons). --top
ArthurCeeClarke had something to say about this.
GUIs
GUIs present a fictitious paper-desk and/or control-panel-like visual interface. This is merely a UsefulLie, and command-line aficionados will claim it is not that useful to them. GUIs are a phony world. Software has no manila folders, buttons, or slide knobs. These are purely for human relation.
Programming techniques may be doing the same thing. Does OOP reflect the actual world it is modeling? Probably not. Does relational? Probably not. The real world is not in tables for the most part. No GoldenHammer clearly fits the actual world better.
Re: "TopMind has ventured beyond PseudoScience into pseudo theology..."
You know, you may be right. But I am forced to because there is no real science in software so far. See DisciplineEnvy. However, I would not call it "theology" because it is not meant to suggest human actions (morals, rituals, etc.), but merely as a UsefulLie to explore, talk about, and hopefully explain software. What is the alternative? There is no real science for it yet, so my approach may be the next best thing. Think about it. Stop complaining about having to use leaves for toilet paper when the only alternative at present is your hand. DontComplainWithoutAlternatives.
Religion in fact may be a UsefulLie that allowed humans to encode useful survival tips and social structures. For example, people may not have understood microbes, but thinking of the flu as "a demon that possesses children and can hop onto nearby people to possess them also" supplies a mental model that reminds one to stay away from people with the flu. The demon is an accidental approximation of germs. It is an abstraction that happens to generally model the real thing (germs). Dogma that increases the survival of a group will spread via a kind of Darwinian selection process. The most useful lies spread because they keep the population holding them alive to bear more children, who in turn learn the dogma. It is a UsefulLie that works! (Of course, sometimes religious doctrine goes awry, but it is the net benefits that count in the end.) It is "natural selection of UsefulLies". [I just entered this theory into Wikipedia, surprised that it was not already there.]
Similarly, software apps may merely be UsefulLies that allow us to use the computer to do work. Rarely are our software models "proved correct" in the purest sense. They just run a damned model that produces something we want to use. Having value or utility, and being "correct" or "ideal" may be different things. Theory and logic may improve or speed up achieving the utility or value, but that's not the same as being it.
Were EpiCycles "pseudo-science"? They provided predictive value better than anything else at the time. Science is a journey, and the journey sometimes takes the wrong road. I suppose if someone lied about the effectiveness of the predictive value of epicycles, it would be fair to call that "pseudo-science".
Pseudo-science would require science as a reference point, and if there is no "science" in software, then my statements about software cannot be pseudo-science in the general sense. Science is about testing theoretical models against the "real" model: our environment. If we can model environments that have no relation to the "real" environment, then there's nothing to test against, at least nothing "real". Software allows us to make our own realities, such that any "falseness" has to be in reference to a stated "reality" (virtual world/universe).
-- top
Re: You seem to think that because a discipline isn't subject to axiomatic proof or "hard" scientific claims, that means that no (authoritative) claims can be made about it at all. And that therefore, all scientific claims are merely "opinions" and that your opinion - however uninformed or pulled-out-of-your-ass - is as valid as mine or that of DonKnuth.
Let's explore some examples of alleged external proofs provided by DonKnuth, outside of performance. What absolutes do we have?
In domain-space, yes we do have more rigor. But I'm talking in general here. -- top
Quote from Slashdot poster 585321: The comparison with algorithms/programming is also weak, since it is perfectly possible to have two different algorithms (with different complexity, in all senses of the word) that solve exactly the same problem, but it is arguably not possible that you have two different physical theories explaining the same phenomenon that are both true at the same time (although I guess this is a rather philosophical question).
[In science you can have two models that explain the same phenomenon. For example classic atomic theory where atoms were like billiard balls with electrons (dots) spinning around the atom, versus the newer theory which explains electrons more like a cloud (and/or wave) and less like the familiar dot that we knew before. Algorithms are more like math than science. In science we use math, though - so there is some overlapping.]
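The Slashdot poster's point about multiple algorithms for the same problem is easy to make concrete. Here are two different algorithms - different in structure and complexity - that are both "true" in the sense of producing the same answer (a standard textbook example, summing 1..n):

```python
# Two different algorithms solving exactly the same problem:
# summing 1..n by iteration (O(n)) vs. Gauss's closed form (O(1)).
def sum_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # n*(n+1)/2, using integer division to stay exact.
    return n * (n + 1) // 2

print(sum_loop(100), sum_formula(100))  # 5050 5050
```

Both are correct; neither falsifies the other - unlike two competing physical theories, where at most one can match the experiment.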
Definition of Omnipotent
Suppose as a human you create a software simulation of another universe and sentient beings eventually arise in or are custom-created in this simulated universe (SU). As the "creator", you have full practical control over your SU: you can delete beings, go back in time (restore from backup), mess with their heads by moving their furniture around when their back is turned, give them rewards, etc.
However, this doesn't mean that you have perfect knowledge of your creation. You cannot possibly grok every simulated atom of your SU, being a human. Sure, you can stop the simulation and study any particular atom in detail, but you cannot grok every atom and its influence at every time-point. Even if you lived forever, you may not want to bother inspecting every atom at every point in time, and it's possible you would still miss something subtle. After all, if you could run the whole thing in your head, you wouldn't need a computer.
With this perfect potential control but imperfect knowledge of your domain, are you still "omnipotent" from the perspective of the beings in the SU?
As a working definition, I'm defining "omnipotent" as having unlimited power (for a given universe/world), not necessarily unlimited knowledge. Note that even a lone chimp in the same lab room as the simulation server has "unlimited power" in that he/she can potentially tinker with it and end it as she/he pleases, stomping on the motherboard, for example. But that doesn't mean the chimp will actually do anything "smart" with the simulation.
Another way of saying this is that omniscience ("all knowing") is not the same as omnipotence ("all powerful"). Within our software universe, we have omnipotence, but not omniscience (so far), because no human can grok the influence/interaction of every bit.
--top
Topics with related issues, and perhaps refactoring material:
Related Links
CategoryPhilosophy, CategorySubjectivityAndRelativism, CategoryScience, CategoryMetaphor