What Is Computer Science Good For

Thesis

The following topics are useful to know only for making very unusual kinds of software, though in the rare cases when you need them, you really need them:

Computer science's relation to the day-to-day business of designing software is analogous to the relation of physics to consumer product design (i.e. choosing functional and aesthetic shapes and colors for gadgets). For example, learning about Hamiltonians or symplectic geometry or principal bundles won't help you design a good layout for the controls and display of a car stereo. Similarly, learning about universal Turing machines won't help you design or implement a good input form.

So if you're not a computer scientist, and you don't work in one of the very few areas that require computer science, what good is it for you to learn computer science?


A request

One option I am considering for next year is to go to graduate school in ComputerScience. So I am very interested in facts and ideas about what ComputerScience is good for. If you have ideas about it, would you please post some content? (No more insults, please.)

BTW, I am disappointed by the answers given under "Opposing view?" (as of 02-Feb-2006). They make it sound like ComputerScience is only good for solving a few odd problems that come up once in a great while on the job as a programmer, and the level of knowledge needed is pretty shallow, not the sort of thing that requires extensive study (except maybe how to analyze and prevent deadlocking in multithreaded systems). If there is a strong case that learning ComputerScience is good for something, I'd like to read it.

Many thanks,

-- BenKovitz

What's commonly called ComputerScience is in reality a mishmash of a great many very different things. There are classes in the programming craft (e.g., DesignPatterns), in software engineering, and in computer science proper.

For programmers, computer science largely serves as a common language shared with the software engineers who develop tools for programmers' use.

This probably isn't the answer you're looking for, and please don't take this the wrong way, but as a university lecturer responsible for teaching grad students (SoftwareEngineering, not ComputerScience per se), might I suggest that you do a grad degree in ComputerScience because you love ComputerScience, not because you seek an application for ComputerScience. You'll appreciate it far, far more, and (speaking entirely selfishly) so will your teachers. ComputerScience is good because it's good, not just because it's good for something.

Now then: What's it good for? First, studying it in depth will tune your brain. Second, in studying it in more depth, you might discover things that will help you define or improve the state of the art. -- DaveVoorhis

Don't worry, I wouldn't do grad school except out of love for the subject (I agree that that is good advice). I am curious to hear people answer this question only in order to get some perspective, not to determine the end for which advanced study would be the means. The other main option I'm considering is math, which is similarly "useless" but has some well-defined applications (i.e. stuff you can't do without knowing some advanced math). -- BenKovitz


Opposing view?

Is there any disagreement about the above? If so, please post some specific ways in which average programmers/designers/whatever apply knowledge of the above topics in their day-to-day work.

In short, the more tools you have in your mental toolbox, the more problems you can solve, and the better you can solve them.

Anecdotes

Almost every enterprise Java app I see contains one or more interpreters for scripting and configuration, usually with a dreadful XML syntax and poorly designed semantics. Yes, that reflects inadequacies in the Java language itself, but it does show that an understanding of many of the items on that list is still necessary for day-to-day enterprise drudge work. -- AnonymousDonor
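
For the curious, here is a minimal sketch (in Java, with made-up names like TinyEval; it is not taken from any actual product) of the kind of little expression interpreter that tends to get reinvented inside configuration handling. A passing familiarity with grammars and recursive descent is what keeps code like this small and correct; the sketch deliberately omits whitespace and error handling.

 // Hypothetical illustration: a recursive-descent evaluator for "+"/"-" expressions
 // over unsigned integers, the sort of thing that ends up buried in config code.
 class TinyEval {
     private final String src;
     private int pos;
     TinyEval(String src) { this.src = src; }

     int parseExpr() {               // expr := term (('+'|'-') term)*
         int value = parseTerm();
         while (pos < src.length() && (peek() == '+' || peek() == '-')) {
             char op = src.charAt(pos++);
             int rhs = parseTerm();
             value = (op == '+') ? value + rhs : value - rhs;
         }
         return value;
     }

     private int parseTerm() {       // term := one or more digits
         int start = pos;
         while (pos < src.length() && Character.isDigit(peek())) pos++;
         return Integer.parseInt(src.substring(start, pos));
     }

     private char peek() { return src.charAt(pos); }

     public static void main(String[] args) {
         System.out.println(new TinyEval("10+20-5").parseExpr());  // prints 25
     }
 }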

Many times I've run across big messes of for loops and if statements to do the work of a fairly simple state machine. The loops with complicated if statements are not just complicated, they are buggy. They're buggy because the programmers knew of no simple, systematic way to structure the computation so that it addressed all possible cases. These programmers had no way of even conceptualizing what "all possible cases" would be; they had no clear conception of the problem they were trying to solve. Once on the job I ran across a programmer who panicked at trying to parse a phone number out of a string and declared that it couldn't be done because people type phone numbers in different formats. A basic understanding of finite state machines solves all these problems, by giving you an appropriate mental "lens" through which to see these problems and make them easy. This knowledge takes only an hour to learn if someone shows you. -- BenKovitz
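
To make the "lens" concrete, here is a minimal sketch (in Java, purely illustrative; the class and method names are made up, and it is not the code from that job) of the state-machine view of the phone-number problem. The state is simply how many digits have been seen so far, plus an implicit reject state; every format people type falls out of the same handful of cases.

 // Hypothetical illustration: accept any string containing exactly 10 digits,
 // possibly interspersed with common separators, and extract the digits.
 class PhoneNumberFsm {
     private static final String SEPARATORS = " ()-.+";

     // Returns the 10 digits if the input is a recognizable phone number, else null.
     static String tenDigitsOrNull(String input) {
         StringBuilder digits = new StringBuilder();
         for (char c : input.toCharArray()) {
             if (Character.isDigit(c)) {
                 if (digits.length() == 10) return null;   // too many digits: reject
                 digits.append(c);                          // state transition: n -> n+1
             } else if (SEPARATORS.indexOf(c) < 0) {
                 return null;                               // unknown character: reject
             }                                              // known separator: stay in state n
         }
         return digits.length() == 10 ? digits.toString() : null;
     }

     public static void main(String[] args) {
         System.out.println(tenDigitsOrNull("(555) 123-4567"));  // 5551234567
         System.out.println(tenDigitsOrNull("555.123.4567"));    // 5551234567
         System.out.println(tenDigitsOrNull("not a number"));    // null
     }
 }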


Opportunity cost

For the ordinary programmer/InteractionDesigner/whatever, learning the above computer-science topics would take up a lot of time that could be better spent programming/designing/whatever or learning something more relevant.

See EconomicsOfInformation for discussion of how all knowledge carries with it an opportunity cost.


. . . all knowledge carries with it an opportunity cost.

Disagree. Objectively not true. The term "all knowledge" is too sweepingly general to be correct.

Case in point: People who learn multiple languages (e.g. English, Spanish, Latin, German, Swedish ...) actually gain from the additional knowledge, as relationships that would otherwise be missed are revealed.

This also applies in computer language contexts. A person who knows CeeLanguage can add AssemblyLanguage and not only is no understanding sacrificed, the net understanding of code behaviors is actually enhanced.

Additionally: People who learn land and sea navigation (how to find where you are and where you're going) and who also learn geography (where everything else is) will find they now have complementary bodies of knowledge. To this can be added learning to drive, sail, and fly. Nothing is lost, everything is gained.

I know people who have educated themselves in all facets of construction. They can design, frame, roof, plumb, finish, glaze, and do every other aspect of building. The more they know, the easier it gets. According to them, that is.

Now, I will grant that the time spent learning something new is a cost, but the time recovered by virtue of its not being wasted in pursuing futile dead ends is certainly compensation.

The only thing that I can see that's truly lost in the acquisition of new knowledge is plausible deniability. Once one knows one can no longer credibly claim aloofness and unfettered "creativeness" by virtue of not having been "contaminated" by knowledge.

-- GarryHamilton

Not to be rude, Garry, but you seem to misunderstand "OpportunityCost". The phrase refers to a) the initial investment in something, and b) the lost opportunity to use the resources involved (time, money) to do something else. And non-trivial learning does indeed bear an opportunity cost, as it requires an investment of time - time which could have been spent doing something else useful (such as learning a different foreign language).

The phrase has nothing to do with the ultimate gain/loss that one realizes from the investment.

Suppose the government were to issue a bond which costs me $10k today and pays me back $20k in one year's time - an excellent ROI. The net benefit to me, of course, is $10k, ignoring the TimeValueOfMoney. The opportunity cost, though, is $10k - money I cannot spend elsewhere during that time. That opportunity cost would be the same regardless of whether the payout were $20k, $0, or $20 million.

Just because something has an opportunity cost doesn't make it a bad investment. Most worthwhile investments do have one.

-- sj

Knowledge has an opportunity cost in time but also in lost knowledge and lost creativity. When you learn a fact, it becomes much more difficult to learn an alternative fact that serves the same function. Who was it who said that physicists no longer try to convince their fellows of the rightness of a radical new idea, but merely wait for them to die and so outlast them? He meant it as a complaint, but it was understood as inspired strategy.

So for instance, if you learn that war is good, it becomes more difficult to learn that it is evil to butcher hundreds of thousands of people. Or if you learn procedural programming then it becomes more difficult to learn object orientation. Or if you learn classical mechanics then it becomes more difficult to learn quantum mechanics.

Knowledge also bears a creativity cost because knowing a solution to a problem precludes being able to come up with a different solution to the same problem. This is why not only is some knowledge not worth acquiring, but some knowledge is not worth having even after you've acquired it. And this applies to more than just known-obsolete knowledge; it applies to all knowledge you have a reasonable hope of making obsolete, and perhaps even beyond that.

If you want concrete examples of this in action, consider technologies (which are essentially knowledge) and the cost of investing in soon-to-be-obsolete technology versus waiting for a bleeding-edge technology to mature, or developing an alternative oneself. Also consider the cost to an engineering or IT shop that's heavily invested in some technology, and how it hampers innovative solutions. Everything that's true for individuals is replicated in groups. -- rk


See also

RobPike's claim that systems software research is now irrelevant to industry: http://www.eng.uwaterloo.ca/~ejones/writing/systemsresearch.html


Discussion

Old question: I'm trying to boil this page down to some content (sans invective). Is the above pretty much what you're saying, Richard? -- BenKovitz

Wow, you sure read these interactions differently than I did. Maybe I wasn't paying enough attention, but I had been under the impression that RK thought that most of the elements of the two categories above were irrelevant to his notion of InteractionDesigners' concerns (with exceptions, like I think he said InternetTrafficIsFractal was an exception, although I wasn't paying enough attention to have a clue as to why; I don't currently understand RK's thinking on any of this, in general).

Regarding that, I'd like to see RK's definition of "computer science" and of "software engineering", since those two terms have been ill-defined in general for decades, but RK seems to mean something very specific, so I've been kind of wondering what his definitions are. (There aren't any really truly standard specific and rigorous definitions for either term that are also generally accepted, despite claims otherwise here and there over the years, on this wiki and in the literature.) -- DougMerritt

Actually, I didn't strongly expect that RK intended what I proposed. That's why I asked. I worded it without proposing or asking definitions in order to avoid the semantic arguments and get to the substance. -- BenKovitz

You're right, neither category is useful for making software. Both are irrelevant; they're just irrelevant for different reasons. InternetTrafficIsFractal is relevant only to network software engineers because it means there's no point in implementing QoS. It is completely irrelevant to 99% of software engineers and 100% of programmers. This is the general pattern of computer science, that each area is deeply relevant to a minuscule fraction of software engineers and not at all relevant to any programmers.

By computer science and software engineering I mean something approximating those terms' meanings in other fields. So by science I mean the discovery of pure knowledge in accordance with a rigorous social process, and by engineering the application of this knowledge to some purpose. It's fuzzy, but most of the fuzziness lies in how rigorous is rigorous enough, and that's a question that's always plagued science. Good definitions are usually bug-for-bug compatible with the concepts they define.

On a completely different subject, it's a common attitude of some programmers to want designers to produce only implementable designs so as to reduce their own workload as programmers. Of course, this isn't the correct tradeoff for a designer to make. The correct tradeoff is to minimize false negatives, to ensure that all implementable designs are actually produced, not to ensure that only implementable designs are produced. -- RK


It was written, "...every bit of knowledge in the domain of software invests you into a paradigm of thinking that will rapidly become obsolete."

I'm not entirely clear what you mean about "every bit of knowledge in the domain of software invests you into a paradigm of thinking that will rapidly become obsolete." Do you mean current thinking about software development, to the degree that it requires knowledge of (say) algorithms and data structures, will rapidly become obsolete? I have my doubts about this - it reminds me of the claims once made about COBOL replacing programmers (i.e., the folks at the time who knew algorithms and data structures), which was true only to the degree that COBOL retained different kinds of programmers and attracted new ones, and those who liked algorithms and data structures moved elsewhere. I will grant you that typical COBOL programming, and probably a significant portion of business programming in current languages, does not require any knowledge of ComputerScience. However, business software is merely one category of problem domain, with characteristics that are by no means common to all domains.

In other words, some software development tasks certainly don't require knowledge of ComputerScience - painting CrudScreens, for example, certainly doesn't (is it programming or layout design?) - but that doesn't diminish the overall industry demand for programmers knowledgeable in ComputerScience. As a case in point, note the significant quantity of UK development job postings that require a "first" or a "2:1" (i.e., a high GPA) ComputerScience degree from a top-ranking university. Graduates with lower qualifications cost less, so why demand it if it isn't deemed necessary?

First, to establish very clearly what you're saying. I interpret "current thinking about software development" to refer not to computer science nor even software engineering, but to the mishmash of programming lore that relies rather heavily on known-obsolete ideas and down-and-dirty engineering details. In that light, yes, this is already obsolete and will soon become deprecated. Software development is a rapidly moving field and you can't stay on top of it while retaining a grip on its most backwards elements.

Programming in Smalltalk, I've never found anything taught in Data Structures to be of the least use whatsoever. I've never implemented an ordered collection or sorted collection in my life and I don't foresee ever having to do so. If I ever did, I'd be sure to offload the job onto a competent software engineer rather than doing an incompetent job of it based on whatever hearsay I happened to pick up. Wasting massive amounts of time replicating the knowledge that software engineers have already assimilated also doesn't appeal.
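
As a small illustration of that division of labour (in Java rather than Smalltalk, since the point carries over; the class name is made up), the programmer simply asks the library for a sorted collection and never implements the underlying balanced tree personally:

 // Hypothetical illustration: use the library's sorted collection, built and tuned
 // by the people playing the "software engineer" role, instead of hand-rolling one.
 import java.util.SortedSet;
 import java.util.TreeSet;

 class SortedCollectionUser {
     public static void main(String[] args) {
         SortedSet<String> names = new TreeSet<>();   // red-black tree, but the caller never sees that
         names.add("Voorhis");
         names.add("Kovitz");
         names.add("Hamilton");
         System.out.println(names);                   // [Hamilton, Kovitz, Voorhis]
     }
 }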

The whole point here is that software engineers are busily at work creating tools that divorce programmers from software engineering. Languages, VMs, concurrency techniques, garbage collection, everything that has to do with software engineering is being systematically automated so that programmers don't have to deal with it. They may have had to deal with it in the past, and programmers working in the more backwards areas (the mainstream is 20 or so years behind the avant-garde) still have to deal with it even now, but the trend is clear.

Finally, while I divide software developers into programmers and software engineers, I don't think characterizing the former as lower-class or associating them with CRUD screens is doing anyone any favours.

I think I understand now. There was an element of LaynesLaw at work here, as I was interpreting your use of "programmers" and "software engineers" quite differently. In my Canadian company, we generally avoided those terms due to ambiguity, but preferred to divide development roles into "tool developer" and "application developer." Either role could be called "programmer" or "software engineer," but I think "software developer" covers the range of skills and roles without leading to meaningless quibbles. For any given project, the tool developers used knowledge of ComputerScience to develop libraries, utilities, and sometimes languages, or integrate and/or enhance existing ones into a cohesive set of development tools. The application developers used the tools to concentrate solely on the problem domain and produce one or more applications. Both roles were jointly responsible for choosing off-the-shelf tools, source code, or other resources as appropriate.

Depending on the scale and requirements of the project and the degree to which we could re-use prior development, both roles might be embodied by a single developer wearing two hats, or by multiple developers of either role. Some developers liked to work both roles, others preferred one or the other. The clear distinction of roles was highly effective, but the roles worked in very close symbiosis - the tool developers regularly fed new tools into the process, and the application developers fed their own requirements back to the tool developers.

At times, we tried using application developers without a strong computer science background. While they could get the job done, they were nowhere near as effective as those with computer science knowledge. Among computer science graduates, shared knowledge and common theoretical language made communication much easier, and I suspect computer science education simply made the application developers more disciplined and better able to deal with the inevitable complexity of computing in general.

I suppose an argument could be made that over time the tool developers would be needed less and less, but that was not our experience. There was always either the opportunity to improve the development process by refining existing tools, or an obvious need for new tools.

By the way, I categorically do not deprecate or diminish the job of creating CrudScreens. For applications that require them, creating them is a necessary part of the application developer's role. Supporting and facilitating their development is a necessary part of the tool developer's role. I merely used CrudScreens (above) to identify a job function of a role that I was beginning to understand was what you meant by "programmer," and for which I prefer the term "application developer."

The divide between software engineers and programmers doesn't coincide with tool makers and application developers, but it is close enough for a first approximation. Some of the deviations include embedded software, where even applications must meet exacting requirements. And at the other end, tools aimed at non-programmers, or at programmers functioning in a non-programmer role (e.g., GUI building).


The dictionary definition of engineering (n.):

The application of scientific and mathematical principles to practical ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and systems.

Now one would hope that software engineers apply ComputerScience, and more generally Math, rather than invent their own half-baked, ad-hoc thingies. It would serve no purpose for engineers to be totally ignorant of ComputerScience.

As with all science/engineering dualities, the integration between SoftwareEngineering and ComputerScience is not without problems (a kind of ImpedanceMismatch). See SoftwareEngineeringVsComputerScience. Not all the stuff that keeps computer scientists busy (mostly university professors and a few in research labs) will turn out to be relevant 20 years from now. Even of the stuff that will be recognized as relevant 20 years from now, only very little is good material for engineering the systems of today. So there's the problem of continuous transfer of knowledge and feedback between science and engineering, which is not working smoothly. Many computer scientists publish misleading papers that promise the Earth for solving engineering problems, when in actuality their research is unusable.

However, to disregard computer science in its entirety is completely foolish. This wiki, the internet, and a ton of society's infrastructure that we take for granted today would not be possible at all without some hard ComputerScience. Plus if you decide that it's safe to be ignorant, because all you do is PutTheDamnDataOnTheDamnScreen, you'll never know when your ignorance will byte you in the ass.

The problem of establishing how little is too little and how much is too much remains open.


This feeds a notion that things are changing so fast that one can be ever developing and never producing a finished product. During development, new innovations make the application archaic before it can be put into production. This is the reason software is often rushed into production before it is fully tested and finished, so that at least the previous innovations can be used for a little while before they are superseded and then deprecated by the passing of a few months (soon to be weeks, then days).

This is the myth of InternetTime, which is empirically false. Things are not changing very fast. The internet has only shaved a year or two off of the innovation uptake cycle. So from a, say, 10-year uptake, it's down to 9 or 8 years. Since this is compounded, it matters a great deal over the long term. But it matters not at all for any individual product.

It depends upon how big the product is, how long it takes to develop it, and whether new innovations are applicable. I am referring to small products taking about 6 months to produce and upon which current innovations impinge. It would not be so for processing financial transactions for a mega-bank (which would probably be done in Cobol or PL/1), to which innovation does not apply.

Now you're imagining that some products (which you call "small") are innovated more rapidly than other products. Can you provide some examples? Damn, I forgot to mention the difference between minor innovations and major innovations. The 10-year adoption length is for major innovations. And no, I don't know the difference between the two categories but it has nothing to do with length of development.


See also IsComputerScience

JanuaryZeroSix

CategoryScience CategoryDiscussion
