Too Many Variables For Science

Are there too many variables in software design and maintenance for science to be used effectively in practice?


No, there are not. -

Based on discussion in TypelessVsDynamic:

[If you have problems with that grouping, then I'd be happy to modify my statement to "Software Engineering and Computer Science sites like C2". Software development classes, including education in patterns for algorithms, are part of the CS curricula required to graduate at the schools I've observed, as opposed to IT classes, which (at the four schools I've observed as data points) teach how to use computers and how to use specific software like spreadsheets, webservers, and Oracle databases. And unless you mean to imply that "there is nothing to test against reality" is a common occurrence (a claim that this person, who tested most CS models in a lab during schooling, doesn't believe), your "CS is not even CS when..." statement is hardly justified.]

Outside of performance and machine resources, current CS has provided almost no scientific help in determining which paradigms or languages are "better". It appears there are simply too many variables to apply the scientific process to. See MacroAndMicroRigor for a list of some of the factors that should be considered.

The closest I've seen is Ed Yourdon's "satisfaction surveys", whereby he asks managers how successful their software projects were, and then sees which technologies and languages score higher. However, there were no sure-shot stand-outs.
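For a sense of how weak such survey signals tend to be, here is a minimal Python sketch of that kind of comparison. The language names and satisfaction scores below are invented placeholders, not Yourdon's actual data; the point is only the method of comparing group means.

 import math
 import statistics

 # Hypothetical 1-to-10 manager satisfaction scores per language.
 # Invented for illustration; these are not Yourdon's survey results.
 surveys = {
     "LanguageA": [7, 5, 8, 6, 7, 4, 8],
     "LanguageB": [6, 7, 5, 8, 6, 7, 5],
     "LanguageC": [5, 8, 6, 7, 7, 6, 8],
 }

 for lang, scores in surveys.items():
     mean = statistics.mean(scores)
     # Rough 95% confidence interval: mean +/- about 2 standard errors.
     sem = statistics.stdev(scores) / math.sqrt(len(scores))
     print(f"{lang}: mean {mean:.2f}, 95% CI roughly "
           f"[{mean - 2 * sem:.2f}, {mean + 2 * sem:.2f}]")

With small samples and noisy scores like these, the confidence intervals all overlap, so the honest conclusion is "no detectable winner", which is consistent with the lack of stand-outs.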

--top

The number of variables, and (in particular) the difficulties in isolating and controlling them, is certainly an issue. However, of considerably greater difficulty is establishing a rigorous, unambiguous, generally-agreed definition for "better" outside of performance and machine resource usage. Even measuring something as obvious as "satisfaction" is fraught with pitfalls due to (among other things) the ambiguous and amorphous nature of its definition.

I agree, and this is why the state of ComputerScience is in the doldrums. We already shook the low-hanging fruit out of it, and it served us well for hardware-centric issues, but the higher fruit, from a human-productivity standpoint, is proving very difficult to reach.

[I can't speak for SoftwareEngineering, but ComputerScience remains in high demand and continues to make and test advances. It does trickle down slowly to the common practitioner, I agree, but so does all science. See RoleOfComputerScience.]

Are you measuring the degree by name, or something else?

[I discuss the discipline. A bachelor's degree in ComputerScience often results in a person who has learned from ComputerScience, and only rarely in a person who will continue to study and advance it.]


Re: "outside of performance and machine resources"

[CS also makes useful and usually accurate predictions of program correctness, failure, vulnerabilities, etc., and offers models to study these properties. That's much more than just performance. But it doesn't define "better", so I agree it doesn't help there.]

I haven't seen any evidence that these models have proven practical outside of narrow niches.

[For the most part, CS helps SystemProgramming, including development of languages, frameworks, analyzers, relational databases, commit protocols, socket-based communication, operating systems, fair scheduling, deadlock detection, etc. The tools you use to write, build, and run programs embody evidence of CS's practicality. From a person such as yourself, saying "I haven't seen any evidence that CS has proven practical" is about the same as a person microwaving a pita pocket while complaining "What's the big deal about magnetrons?"]

That might be true, but what has it done to help software engineering, such as code management and developer productivity? (And please don't bring up strong-typing. There's enough on that topic already.) --top

That there are even identifiable topics called "code management" and "developer productivity" is a result of academic research in software engineering, which is a sub-field of computer science. This naturally overlaps with the academic research into human behaviour (psychology and sociology) and business management.

I agree that discussions exist among academics, but where are the successes and/or scientific results? Without specifics, this discussion will not lead anywhere useful.

Go to http://acm.org/dl and search for "developer productivity", "code management", etc.

Reading them is fee-based, but the abstracts suggest that most are just opinions or observations, not much different from the anecdotes found here. It's not my job to find the evidence for your side of the debate. Please provide something *specific* that has already made an impact in the field. You appear hesitant now, but you started out with such bold certainty. What happened? --top

I don't know what debate you claim is "my side", or what I started out with "bold certainty". On this page, I've only contributed the italicised paragraphs in this section, purely to indicate that there is a significant amount of scientific research into "developer productivity", "code management" and so on. It is primarily up to the applied engineering disciplines, not pure science per se, to find application for this research. What distinguishes science from casual conversation is not (by any means!) a lack of opinion or observation, but the rigour with which these are handled.

[See RoleOfComputerScience.]


Testing Many Variables

Let's examine this problem from this perspective. Software design probably has roughly 25 to 200 variables to optimize, depending on how you classify things. How does one go about testing systems (option packages) with that many variables in a systematic way? The only comparable thing I can think of right now is the human body. However, the components of the human body are fairly fixed, such that one can specialize in, say, a given organ and study it in more or less isolation. But what if one could build millions of aliens with all kinds of different body, cell, and biochemistry configurations? Software is kind of like that: you can build many aliens that satisfy the requirements.
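To put numbers on that, here is a minimal Python sketch, assuming (as a deliberate simplification) that every design variable is a binary choice; real variables such as "language" or "methodology" have many more settings, which only makes the totals worse. The all-pairs figure uses log2(n) as a rough order-of-magnitude estimate for binary covering arrays, not an exact design size.

 import math

 for n in (25, 50, 200):
     # Full factorial: one trial project per combination of settings.
     full_factorial = 2 ** n
     # All-pairs ("pairwise") covering designs for binary factors need only
     # on the order of log2(n) trials, but each trial is still an entire
     # software project built and evaluated end to end.
     pairwise = math.ceil(math.log2(n))
     print(f"{n} variables: full factorial ~ {full_factorial:.2e} projects, "
           f"all-pairs on the order of {pairwise} projects")

Even the all-pairs column, small as it looks, means funding several complete projects just to cover pairs of settings, and any interaction among three or more variables would slip through untested.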

Rather than try to figure out how each one works, it may be easier to put the various bodies through tests, such as sports contests and mental contests. This is difficult to do for most domain software because there are too many features, and how to rank them can be subjective. But it does suggest the use of "user satisfaction surveys". The problem is testing enough variables this way with the same application. At best we could pick only a few language and methodology combinations. This may be better than nothing, but it's hardly thorough, and still expensive and unrealistic: no functioning business could afford such disruption. And it doesn't address individual WetWare differences. -t

I propose testing software user satisfaction non-intrusively and in an automated fashion -- via phallometry.

For certain entertainment industries perhaps.

No, all. Without some measure of user satisfaction, how do you know to optimize A over B? In other words, how do you know the weight of factor A in relation to factor B? Make them all the same? That may be an error.

[You do know what "phallometry" is, don't you? Here's a hint: It will only work with 50% of the population.]

It's a tool for measuring how tall YOU are.

{What a fantastic reply. It's a wonder of WikiWiki that such one-liners can come months after the original comment, and still be funny.}

Three months from now you'll be snarked also ;-)


Weather forecasting has too many variables!

Especially with the problem of predicting the distribution of volcanic ash in the atmosphere.


See also: DisciplineEnvy, ArgumentByLabToy


CategoryEvidence, CategoryMetrics

