Mind Overhaul Economics

Is it cheaper to change the machine/model to fit our minds or vice versa?


I am a RelationalWeenie who does not seem to "get" OO. If there is any objective superiority to OO (which there may not be), then it would probably take years of close mentoring for me to see it.

Does it make economic sense to have developers bust their butt to overhaul their minds in order to fit a different paradigm (which may turn out to be a fad anyhow)? Is it even possible? Perhaps if OO was the rage when I started my education, I would have dropped out and done something else. Perhaps I should give up development and become a database administrator. But, I like development in general.

Perhaps with the influx of inexpensive 3rd-world labor, it is more economical to simply hire like-minded people rather than retrain those who don't fit in with the current "state of the art"? Trash the COBOLers too while we are at it? Why does specific technical knowledge seem to wag the dog more than domain knowledge? Are managers just stupid, or is technical knowledge really more important? Those COBOLers they often toss probably know the business as well as anybody in the company.

Regarding the COBOLers, why not upgrade with a language and paradigm similar enough to COBOL that they can do "new" things like GUIs and the Web without a total mind overhaul? I saw "CobolScript" the other day for web apps.

(Note that I don't consider my "OO problems" to be due to my age. I am not even that old. It just does not fit the way I think; see OoLacksMathArgument. OO is too hacky to me. I can't find guiding principles and consistency in it. I am not a COBOLer, by the way.)

If OO lacks math, then so does procedural. You don't have things such as "procedures" or pointers in math. Claiming that OO lacks math is like claiming that people who build houses lack math when they reuse existing designs to make more cookie-cutter houses. OO is more of an engineering thing than a math thing. Not all engineering is pure math. An engineer outside the programming field will reuse designs just as we do in OO.

This assertion belongs in that referenced topic in my opinion, not here. The main point of this topic is to consider what happens if one doesn't "get" a paradigm/tool rather than revisit why or what they don't "get". There are already plenty of existing topics on the benefits or problems of OOP specifically.


These are hypothetical "productivity" bar charts. Let's say year 4 is the first year using the new technology. Year one is an arbitrary starting point which assumes the "legacy" technology is already well-ingrained.

With overhaul:

  01 ||||||||||||||||||||||||
  02 |||||||||||||||||||||||||
  03 ||||||||||||||||||||||||||
  04 ||||||   <--- start new technology
  05 ||||||||||||
  06 ||||||||||||||||
  07 |||||||||||||||||||
  08 ||||||||||||||||||||||
  09 ||||||||||||||||||||||||
  10 |||||||||||||||||||||||||
  11 ||||||||||||||||||||||||||

Without overhaul:

  01 ||||||||||||||||||||||||
  02 |||||||||||||||||||||||||
  03 ||||||||||||||||||||||||||
  04 ||||||||||||||||||||||||||
  05 ||||||||||||||||||||||||||
  06 |||||||||||||||||||||||||||
  07 |||||||||||||||||||||||||||
  08 |||||||||||||||||||||||||||
  09 |||||||||||||||||||||||||||
  10 |||||||||||||||||||||||||||
  11 |||||||||||||||||||||||||||

After, say, 7 years their productivity might get back up to prior levels, but they have spent 6 years losing ground. One might argue that productivity will eventually be even higher than it would have been, but for one, this is not always certain. Second, there is FutureDiscounting to take into account. Third, they may be close to retirement or a career change, in which case the alleged high-productivity period that makes up for the transitional loss never comes.
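To make the FutureDiscounting point concrete, here is a minimal sketch (in Python) that discounts the yearly "bar counts" read off the hypothetical charts above. The 10% discount rate is an arbitrary assumption for illustration, not a claim about real projects:

  # Yearly productivity read off the hypothetical bar charts above.
  with_overhaul    = [24, 25, 26, 6, 12, 16, 19, 22, 24, 25, 26]
  without_overhaul = [24, 25, 26, 26, 26, 27, 27, 27, 27, 27, 27]
  rate = 0.10  # hypothetical annual discount rate

  def discounted_total(productivity, rate):
      # A unit of output is worth less the further in the future it is.
      return sum(p / (1 + rate) ** year
                 for year, p in enumerate(productivity))

  print(round(discounted_total(with_overhaul, rate), 1))     # ~144.7
  print(round(discounted_total(without_overhaul, rate), 1))  # ~185.8

Under these made-up numbers, the overhaul never pays for itself within the charted window; a longer horizon or a lower discount rate would shift the comparison.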

I go through this process several times a year, generally, so I dispute that "7 years" is anything but a random figure. It clearly varies wildly with both the individual and with the technology in question. And a lot of other things too, for that matter.

I've been programming in C++ for a decade, and I'd say I'm close to maximum efficiency with the language and OO design in general. If, suddenly, my manager decided that I should be using Java or C# or Perl or Scheme or something else instead, then my productivity would drop and take years to get back to where I am now. But the drop wouldn't be from 26 bars to 6 bars; it'd be from 26 down to 20, and then it'd prolly take me another three or four years until I was such a master at New Paradigm X that I'd be as efficient as I was with my old paradigm. The move from 25 bars to 26 bars is very subtle, but I don't want to get into that here.

However, if I were switching from writing stand-alone apps to writing web page forms, doing that in C++ would prolly be more challenging than learning C# and ASP.NET. Likewise, if you told the COBOL guy to work on MS Word 11, his productivity would be a lot lower. Right tool for the job and all that, which I think is of higher importance than this whole Mind Overhaul discussion, which seems to be centered on the religious war between different approaches to what would be the same solution.

[I also do this several times a year. Being a good programmer is all about being able to adapt and learn new things. You should be able to switch between OO, Procedural, Functional, and Relational thinking whenever the task at hand requires it, or could benefit from it. If you think it'll take you seven years to learn anything, quit, you're not cut out for it, find a new career.]

I was not suggesting that it takes "7 years to learn", but that it takes a while to become as productive as before. With regard to "should be able to switch between paradigms", what if somebody is highly productive under one paradigm but mediocre under another? That does not make them "bad", just less adaptable to some kinds of changes. For example, suppose person A can produce 10 units under paradigm X in a given amount of time but only 4 under paradigm Y, while person B produces 7 units under both paradigms.

But you can't really suppose that, because there aren't any good studies to let you know how to measure such things, so it just comes down to the old, well-known fact that people are idiosyncratic about such things. A GuruProgrammer under one paradigm is typically extremely productive under each new paradigm that he learns as well. Those who aren't GuruProgrammers must just find their own niche that works for their own particular strengths, as always. There is no general argument to be made here.

You are implying that good-in-one == good-in-all. I am not sure I agree with that. There are a lot of hints that people tend to do best under paradigms or languages which fit the way they think.

Not implying; I said straight out that it is "typically" the case (that is, not always, but often or even usually). I have heard anecdotes from people who say "oh, I don't think in pictures, I think and program only verbally", etc., and I've seen this claimed about calculating savants, so I am aware that there are "lots of hints" otherwise, but I don't believe it in general, not on the subject of GuruProgrammers. I mean, think about it. Think of one who has learned more than one paradigm. Are they lousy in the second and third ones? More likely we think of people who are only known to have learned one paradigm thoroughly (e.g. Chris Date, who has written about non-relational topics, certainly, but doesn't seem to care much about them), which doesn't help this discussion.

[Anyone may develop preferences for a paradigm, but it shouldn't affect their ability to program in the others. If you only know one paradigm, you simply can't call yourself a good programmer, period. It's not that freaking hard, and each one teaches you new things that improve your overall use of all of them. Until you know several, you don't have any real skills.]

Yes; that stronger claim sounds better than what I said, although I think you phrased it too harshly. In the example of Chris Date that I brought up, from the little I know about him I would expect him to be strong in non-relational areas too, if he cared, although he's so ornery that he'd no doubt start major controversies in whichever area he entered. But good controversies. :-)

The question isn't what he thinks about OO; I know that, which is why I mentioned him. The question is: if he decided to do OO programming anyway, would he be good at it? I think he would be, despite his views.

I still think that people will lean toward a preference in paradigms and techniques that are the most efficient or "natural" to their own head. A group of SmallTalk proponents will probably do better with SmallTalk than with Eiffel, and vice versa for Eiffel proponents. Anyhow, we have no formal research to back up correlations between one personality trait and another, or to show that knowing more paradigms significantly improves one's designs. Thus, perhaps we should just state our individual experience and move on.

For the moment, ok.

Further, I think most agree that somebody shouldn't be booted out of the industry entirely just because they happen to do well under only one technique. There's probably a continuous range between people who adapt well and those who don't. While adaptability certainly should be rewarded with more pay, I don't think it should be the primary factor (NarrowStaffSelectionFactors). There are lots of important factors to weigh. This topic is about calculating the economic costs and benefits of switching paradigms/languages/techniques. A manager may be faced with the question of switching technology, and has to weigh whether the switch is worth it and whether he/she needs to replace staff. This topic can raise questions to ask even if it cannot answer them all.


Re: "Perhaps if OO was the rage when I started my education, I would have dropped out and did something else."

It is also an open question whether people think like language X because they have been using it for so long, or whether they picked X in the first place because it was a good fit for them. Perhaps an OO-head would drop out of a computer degree if they had to use only COBOL, Pascal, or Fortran. (Not that I think highly of COBOL and Fortran.) A classic ChickenOrEgg question.

I'm not so sure. I started out in Java because I wanted to learn programming and I took my school's programming class, which taught Java and OOP. Though I don't (and won't) program for a living, my career path had very little to do with Java or OOP :P. About a year ago I took a job that requires coding strictly in C and Fortran (because my boss has to be able to read and understand all of the code after I graduate, and C and Fortran are what he knows and all he wants to know). So my programming history is entirely not chicken-and-egg. I found the changeover pretty easy; maybe that's intended, maybe it isn't. But I do know that when reading about other languages, I have few problems understanding them, and I'm positive it wouldn't take me 7 years to learn them. After maybe 3 or 4 months I was as productive in C as in Java. I remember looking up FunctionalProgramming and wondering what the difference from what I knew was, and having difficulty telling any fundamental difference between the example language and the ones I knew (I eventually found a few).

Let me put it this way. My car mechanic solves problems the same way I do. I write programs pretty much the same way I do physics (with a little more unit testing involved, to make up for the lack of a real world to compare results to). My car mechanic doesn't use C, Java, Haskell, Lisp, or Smalltalk; OOP, FP, TOP, or anything else to do his work. Yet we work in fundamentally the same way, even though I *do* use some of these things.

By the way, I hate both C and Java. Fortran is okay, but I'm going to try Prolog and see if it fits a bit better with my mathematical leanings. --JasonEspinosa


Re: I am a RelationalWeenie who does not seem to "get" OO. If there is any objective superiority to OO (which there may not be), then it would probably take years of close mentoring for me to see it.

It doesn't take years of close mentoring. The easiest way to see it is as follows: you have a button whose default color is gray. You need a button with a default color of black, and you need to change the button's default behavior. You don't want to muck up the source code for the existing, working button. You have two solutions: 1. copy and paste all the button code to new files and modify the source; 2. inherit from the button and change what you need without mucking up the original. The advantage of method 2 is that you reuse the code without copying and pasting it. If there are any errors to fix in the original button, they are fixed in one place. If you copied and pasted the code, you have duplicated it, forcing you to maintain two code bases and merge every error fix. Create a third button with copy and paste and now you have three duplicate copies to maintain. Any technique that emulates inheritance by some other method is using OO techniques, so claiming that you can do all this without OO is just emulating part of OO.
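A minimal sketch of method 2 in Python; the Button class, its "color" attribute, and on_click are hypothetical names invented for illustration:

  # Hypothetical original button; its code lives in exactly one place.
  class Button:
      def __init__(self, color="gray"):
          self.color = color

      def on_click(self):
          print("default click behavior")

  # Method 2: inherit and override only what differs.
  class BlackButton(Button):
      def __init__(self):
          super().__init__(color="black")

      def on_click(self):
          print("changed click behavior")

Any bug later fixed in Button is automatically fixed for BlackButton as well, because the shared code exists in only one copy.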

GUIs in general are not the difficult part of OOP. Plus, procedural events and/or markup-based GUIs could potentially do the same thing. You seem to imply that all non-OOP solutions would require copy-and-paste here, which is false. But this topic is not about how to make GUIs flexible. Variations of non-trivial things tend not to fit hierarchical classifications in practice. Hierarchies may be good enough for most GUIs, but not for many domain entities/objects. Set theory, or variations on it, is a better fit than hierarchical inheritance. (OOP can do non-hierarchical "inheritance", but it's usually either convoluted or a re-invention of relational from scratch.) See VariationsTendTowardCartesianProduct. One should study the nature of domain variations first, and THEN worry about applying a programming paradigm; not the other way around. -t
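For contrast, here is a sketch of the set-oriented alternative alluded to above, again in Python; the attribute names and values are made up for illustration:

  # Button variations as rows of attributes rather than a subclass
  # hierarchy; any combination of attributes is representable.
  buttons = [
      {"name": "OkButton",   "color": "gray",  "behavior": "submit"},
      {"name": "StopButton", "color": "black", "behavior": "cancel"},
      {"name": "HelpButton", "color": "gray",  "behavior": "open_help"},
  ]

  # Querying by any combination of attributes is straightforward; no
  # hierarchy has to anticipate which combinations will occur.
  black_buttons = [b for b in buttons if b["color"] == "black"]

Whether this beats inheritance depends on how the domain actually varies, which is the point of studying the variations first.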


OO and relational are orthogonal. One provides a way to structure code. The other provides a way to structure data. There is no need to overhaul one's mind to adopt either of them.

Discussion moved to AreOoAndRelationalOrthogonalDiscussion.


See: PeopleWhoDontGetOo, OldDogsNewTricks, SoakTime, EconomicsOfAdvancedProgramming


CategoryEconomics, CategoryHumanFactors, CategoryDecisionMaking, CategoryEmployment, CategoryOopDiscomfort, CategoryProductivity

