Staffing Economics Versus Theoretical Elegance

Let's see if we can get to the bottom of this long-running debate theme that's been tugging at this wiki for years. (See GreatLispWar for an example.) I believe that excessively abstract programming is uneconomical for a good many organizations because it makes it harder to find staff who can read their existing code. Poorly abstracted code (such as copy-and-paste programming) may slow down maintenance, but not being able to grok code at all can outright halt production, which management often views as the greater evil (for good or bad). Two factors are fighting against each other: 1) long-term average productivity, and 2) risk.

Even some of my own code has been called "overly abstract" in at least three organizations I've worked or contracted at, so I'm not merely defending my own style: it's a real concern in real organizations. I shall call it the "abstraction halt" as a working term.

Many organizations value people skills and domain knowledge in coders just as much as technical ability, so they are not going to go out and hire higher-caliber programmers at the expense of the other two factors. Managers eventually realize that there is a Goldilocks region of abstraction that produces a sufficient level of productivity without the risk of an abstraction halt. -t

Maybe your code was "overly abstract" (though I don't know what that means, actually), or maybe your critics were idiots, or maybe they were merely trying to tell you -- in as polite and ego-preserving a fashion as possible -- that your code was unreadable or awkward. (You didn't try sticking all the code in a table, did you?) None of these provide any indication as to what is, or is not, an appropriate level of abstraction. Done well, a given level of abstraction should always improve readability, if for no other reason than it -- by the definition of "abstract" -- means there's less irrelevant detail to consider.

By the way, shouldn't this be StaffingEconomicsVersusTheoreticalEvidence?? Elegance is merely one facet of theoretical evidence, which also includes parsimony, simplicity, composability, flexibility, reusability, readability, extensibility, power, expressiveness, safety, and so on.

One can have a theoretical model of WetWare also. I admit I am at a loss for a better single word for the concepts that tend to generate the controversy at issue, but I have yet to find a better alternative to "theoretical elegance". -t

But nobody's arguing in favour of theoretical elegance, so how can it be versus staffing economics or anything else? Theoretical evidence can, on the other hand, be considered either in opposition to, or support of, empirical evidence.

For now, we'll probably have to consider each issue on a case by case basis. The term "theoretical evidence" is too broad for our use. The economic-meets-wetware evidence I often use can also be considered "theoretical evidence": I establish economic and wetware models as approximations of reality and likely human behavior and then optimize the parameters to maximize the "good factors" of those models. But since those models have not been empirically established, they are only "theory". Psychology is often "theoretical evidence" also.

I mean "theoretical evidence" solely in terms of ComputerScience. Theoretical elegance is something else -- usually (in ComputerScience) meaning algorithmic symmetry or beauty.

"Symmetry or beauty" is pretty much the same as "elegance" in my book.

Yes, that's what I wrote. Nobody's arguing about staffing economics versus symmetry or beauty. I've never seen anyone on this wiki argue for anything on the basis of symmetry or beauty. Theoretical evidence is about parsimony, simplicity, composability, flexibility, reusability, readability, extensibility, power, expressiveness, safety, and so on, and gives only glancing notice to elegance.

Parsimony is about the only metric we agree on, but very few agree that "code size" should be the overwhelming factor. The rest are often subject to personal views or conflict with each other such that they cannot all be optimized at the same time. Something that's flexible can also confuse, for example. Grokkability (readability?) is important. See GreatLispWar. Lisp is very flexible and "compact" in terms of syntax and language rules. But it has confused typical developers for 50 or so years. Some of those goals conflict with grokkability.

Parsimony is not just about "code size", but about overall simplicity. I make no claims as to the importance of, or any other ranking of, the various categories of theoretical evidence, other than to state that theoretical evidence is important and significant evidence.

If a maintenance programmer cannot figure out the abstraction(s), the project is dead in the water; stuck! The other issues may create annoyances or temporary slow-downs, but rarely get one outright stuck. A business will often choose to have, say, five slow-down events over one outright stuckage event. Stuckages get the developer and his or her boss hauled onto the carpet to explain what went wrong to the upper echelon.

Level of abstraction is but one type of theoretical evidence, and by no means the only one. Yes, sometimes degree of readability or grokkability is at odds with the level of abstraction. An abstraction based on CategoryTheory, for example, would probably be beyond anyone without the appropriate background in CategoryTheory. Deciding what is an appropriate basis for using or rejecting a tool or technique is something a good project manager does on a case by case basis.

Yes, case-by-case basis. However, there are some general common patterns and business truisms; and techniques that risk confounding typical future maintainers are an orange flag. Using the existing staff as your reference point is taking a fairly big risk. -t

How do you objectively identify the "techniques that risk confounding typical future maintainers"?

However, I agree that using existing staff as your reference point is taking a big risk. You risk undue limitations. For example, whilst many currently-employed programmers won't have been educated in FunctionalProgramming and abstractions like HigherOrderFunctions, almost every new ComputerScience or SoftwareEngineering graduate will have extensive exposure to both.

Judging based on estimated future trends is riskier than basing decisions on the current situation. I see no evidence that the pool of skills is significantly changing. The proportions I typically encounter in non-IT businesses are roughly split evenly between:

If you can identify an industry trend that's changing this mix, please do.

I can only speak for trends in the UK, but there are currently two significant industry-impacting movements:

The first is a growing government recognition that ComputerScience and programming are fundamentally important, to the point that school children will now be taught ComputerScience and programming as part of the national curriculum, starting in September 2014. (See http://www.telegraph.co.uk/technology/news/10410036/Teaching-our-children-to-code-a-quiet-revolution.html) From the employer's side, I attended an employment conference a few years ago where I did a straw poll of about fifteen major national and international employers -- none of whom were IT per se (though one was a CPU manufacturer) -- and all required ComputerScience or EE graduates and strong technical skills in addition to business knowledge.

The second is a general movement, over roughly the last decade -- in both IT education and IT practice -- from the rather amorphous "user support" and soft skills like systems analysis, to a highly-technical focus on databases, data management, data analysis, customer insight and business intelligence. Students who once would have been taught to support Excel and draw ER diagrams -- or would know even less, coming from a non-technical background -- are now taught a variety of programming and reporting tools like Qlikview, SAS VA, Cyberquery, Cognos BI, etc., and are expected to be able to use them effectively.

Well, here in the USA, the Republican Party strongly balks at and attempts to block any federal "government interference" in commerce, including education. The education profile of IT workers has generally not changed significantly over the last three decades (since the microcomputer revolution). The proportion of CS graduates to other types of IT education has generally remained the same in my observation. Some states are considering making programming a standard high-school course, but so far that hasn't ramped up. I'm not sure it would change anything anyhow, since it's only an introduction. -t


One of the problems with lots of levels of indirection is making an exception to the rule. If you use mostly copy-and-paste, then an exception to the rule is simply a spot change. However, function calls require a way to specify that a particular instance gets slightly different treatment. Simple OO inheritance is not powerful enough because the difference may not be "tree-shaped". And many languages don't make it easy to have named optional parameters, such that one has to change every existing function call to add new behavior (a new parameter profile). There are ways to make code better able to handle exceptions to the rule (variations on a theme), but everybody has a different favorite way, and some ways confuse different coders, especially if not coded well (including poor variable names, lousy comments, etc.). -t
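
For illustration, a minimal sketch of the named-optional-parameter point in Python (the calculate_price function and its parameters are invented, not from any particular codebase): the exceptional case opts in by name, and existing call sites stay untouched.

  # Hypothetical pricing routine called from many places.
  def calculate_price(base, tax_rate=0.07, surcharge=0.0):
      return base * (1 + tax_rate) + surcharge

  calculate_price(100.00)                   # existing call sites keep working unchanged
  calculate_price(100.00, surcharge=5.00)   # the exception to the rule is a spot change at one call site

  # Without named/optional parameters, adding 'surcharge' would force edits to every
  # existing call, or a second near-duplicate function.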

Making an exception to a rule is hard no matter how you have to do it, though needing to make an exception in an abstraction suggests the abstraction wasn't right to begin with.

That's always a problem with abstractions. The existing set of cases may not be sufficient to build an abstraction able to handle future changes. The business world is filled with "soft" and dynamic abstractions such that you WILL often get it "wrong" on the first try. Related: EightyTwentyRule, AreBusinessAppsBoring. One is not modelling something stable like physics or math, but rather the world view of the customers/managers; and they change.

No, that's a problem with exceptions, or with programmers lacking understanding of the role of abstractions. Abstracting a customer category is generally doomed to failure -- it's not abstract, it's just an arrangement of characteristics. Abstracting a style of report or a type of form is likely to be highly successful -- it is abstract.

I disagree that abstracting CRUD well is a trivial undertaking. The potential for variation or tweaking to fit specific requests is large enough that a CRUD tool which handles everything will be such a massive wad of configuration as to overwhelm most users. Instead, most CRUD tools assume certain UI styles and conventions to cut down on being a God CRUD Tool.

Who said it was a trivial undertaking? Anyway, the key to abstracting CRUD well is composition, not configuration.

There appears to be some confusion here. What did you mean by "type of form"? Hierarchical "types" are often too inflexible in my experience. Future variations on a theme are hard to predict here also. The deeper one's Grand Abstraction, the more fragile or confusing-to-change it may be.

There are different types of forms. E.g., dialogue box, master-detail, master-detail-detail, master-master-detail, and so on ad infinitum. Attempting to encapsulate all current and future form requirements via a "mass wad of configuration" is doomed to a quick failure. Instead, forms can be effectively composed out of smaller parts, like assembling a house out of Lego(TM) bricks, except instead of bricks you have panels, searchbars, navigator bars, selectors, menus, popups, toolbars, etc., and ways of combining and recombining them into reusable components. Most GUI toolkits already embody this strategy; it's a matter of extending the strategy to include forms, reports, and all the usual CRUD-oriented UI components.
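
A rough sketch of the composition idea, in Python (the component classes are invented for illustration and far simpler than anything a real toolkit would provide): a master-detail form is assembled from reusable parts rather than configured from one giant option set.

  class Panel:
      def __init__(self, *children):
          self.children = list(children)
      def render(self):
          return [child.render() for child in self.children]

  class SearchBar:
      def render(self):
          return "search-bar"

  class NavigatorBar:
      def render(self):
          return "navigator-bar"

  class Grid:
      def __init__(self, table):
          self.table = table
      def render(self):
          return "grid:" + self.table

  # A master-detail form is composed from parts, not configured from options.
  order_form = Panel(
      SearchBar(),
      Panel(Grid("orders"), NavigatorBar()),   # master
      Panel(Grid("order_lines")))              # detail
  print(order_form.render())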

Hierarchical decomposition (parts, sub-parts, sub-sub-parts, etc.) often sounds like a wonderfully simple idea on paper, but integrating them all to work smoothly as a unit is often not a trivial task, especially when the screen view is very different from the underlying database tables, as often happens with legacy systems. For example, it's generally not efficient (nor ACID-friendly) for each field/widget to communicate with the database on its own. Thus, it must somehow coordinate, or participate in coordination, with other widgets and objects to communicate with the database in the appropriate way at the appropriate time, including validation concerns. And sometimes updates to the content of one widget will need to be reflected in another. These kinds of cross-cutting concerns are not trivial to coordinate and are rarely purely hierarchical. Lego bricks are an unrealistic analogy, except as a rough starting point. Further, managers/customers often want to cram as much as possible onto one screen/page. This requires fine-grained control over many features that "generic" or "overly smart" widgets may not have or do well. (Cramming may be a UI design smell, but it's what they want and they have your paycheck.) One finds that multi-dimensional (cross-cutting) concerns are usually the bottleneck even outside of CRUD, and that any hierarchical/nested/compositional arrangement is only a starting reference point -- a scaffolding upon which to wire up and tune the cross-cutting concerns, which is where most of the coding, tweaking, debugging, and hair-graying work takes place. -t
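
To make the coordination concern concrete, here is a hedged Python sketch (FormController, Widget, and the column names are invented for illustration): the widgets never touch the database on their own; a controller gathers their values, validates them together, and writes them in a single transaction. Whether such a coordinator stays manageable once the cross-cutting concerns described above pile up is exactly the point under debate.

  import sqlite3

  class Widget:
      def __init__(self, name, value, validator=None):
          self.name, self.value, self.validator = name, value, validator
      def validate(self):
          return self.validator is None or self.validator(self.value)

  class FormController:
      """Owns the database access and the transaction; widgets only hold values."""
      def __init__(self, connection, widgets):
          self.connection, self.widgets = connection, widgets
      def save(self, order_id):
          bad = [w.name for w in self.widgets if not w.validate()]
          if bad:
              return bad                      # nothing written; the UI reports the failures
          with self.connection:               # one transaction covers the whole form
              for w in self.widgets:          # widget names are trusted here; a sketch only
                  self.connection.execute(
                      "UPDATE orders SET %s = ? WHERE id = ?" % w.name,
                      (w.value, order_id))
          return []

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER, ship_to TEXT)")
  conn.execute("INSERT INTO orders VALUES (42, 1, 'warehouse')")
  form = FormController(conn, [Widget("qty", 3, lambda v: v > 0),
                               Widget("ship_to", "branch office")])
  print(form.save(42))   # [] means saved; a non-empty list would name the failed widgets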

Yes, programming is challenging. I've been developing CRUD-screen tools since the 1980s. It is possible to "integrat[e CRUD components] to work smoothly as a unit", and I agree that it's not a trivial task, but it's certainly do-able. I did it. I based a successful career on it.

Successful at what? Implementing specific CRUD-intensive applications, or devising lasting and "good" CRUD frameworks? Either way, I have no way of inspecting their quality to see if they are as nice as you claim.

Successful at both. I developed lasting and "good" CRUD frameworks and used them to build CRUD-intensive applications. Whether you would consider them "nice" or not is irrelevant. My colleagues, employees and I found them "nice" enough to be competitive.


In a way, I think that I'm agreeing with Top here. There are many shops where it's easy to out-abstract the rank-and-file programmers. Sometimes that's for a bad reason: management doesn't care, or is just looking for PlugCompatibleInterchangeableEngineers. Sometimes, it's because you simply can't hire enough smart geeks: you need hundreds if not thousands of programmers to build your operating system and/or New World Order. And sometimes, the best programmers for the jobs are poor to mediocre geeks who have other skills you need.

I once interviewed for a position at a Finite Element Analysis shop; the guys who make the software that figures out how fast you can run your engine before the piston heads come flying out. They were a FORTRAN shop considering a move to C++. The vast majority of their coders were people with PhD's in mechanical engineering and/or physics backgrounds. Maybe one in five considered themselves software developers, and they were there to write the glue and try to refactor the spaghetti that the engineers wrote.

This was exactly as it was meant to be. Certainly the programmers didn't know enough to program the solutions the company needed; they needed domain experts. You could out-abstract them easily, even though they may be smarter than you, because that's just not how they think. And especially in engineering apps like this (and especially fluid-flow simulators), you still care more about calculations per second than elegance of expression, and would rather have fast spaghetti than slightly slower beautiful code.

Where I work, OTOH, is a credit card processing network. Our domain experts don't have to write the code themselves, and the speed of our code isn't so critical--calculation time is dwarfed by disk and network I/O. We can afford to hire very good programmers, because we can work with several dozen, not several hundred of them. With all that, reaching for a higher abstraction isn't such a bad idea, so long as the tests still pass.

If you don't need Mongolian Hordes of programmers, if you can afford to have your coders be programmers first and domain experts second, and you aren't in a maximum-performance or minimal-hardware environment, then higher abstractions can quickly reduce the cost of adding your next feature. Lots of us on this board live in that world, or at least imagine that we do. Those not in that world are going to care more about other concerns, like not confusing the people that they have.

--RobMandeville

For clarification, I agree that every organization or staffing-need case can be different (ItDepends). I'm not claiming that staffing flexibility or inflexibility is always the main bottleneck in abstraction decisions; it just has been in the majority of organizations I have worked for. I am merely asking that one be mindful of staffing issues when making abstraction-related decisions, because in some cases, perhaps many, they can be the bottleneck. Thanks for your feedback. --top


I believe part of the problem is that some evaluate the code itself before, or instead of, looking at the big picture from the business perspective (owners, customers, managers, etc.). If you look at the code for "good design" alone, then you are evaluating its value through your own eyes, not those of the organization or other maintainers. How other actual maintainers will react to the code is a very important, if not the most important, factor in judging code. -t


Target Skill Level

You have to examine how much productivity you're gaining by "coding down to" the lowest common denominator. I've been on a project where there were several programmers who were deeply, fundamentally confused by the code; they simply couldn't understand it. They might have been capable programmers in some other life, but they didn't grok it there, and they made no actual effort to learn. They complained for months, and contributed nothing to the project. Meanwhile, the three programmers who understood what was happening produced tens of thousands of lines of working code. If we had written down to the lowest common denominator, we would've been abandoning many things that allowed the successful delivery of complex functionality in favor of -- what? Dynamic web page coding out of 2001?

The company could've done better just hiring the three programmers who were capable of understanding the abstractions. We were sufficient to get the work done (as demonstrated by the fact that we got work done), and would've been better off without the friction of contractors complaining that they didn't understand why we would write unit tests or what a "validator" is.

I never suggested targeting the LOWEST common denominator. I don't know where you got that idea from. There is a "Goldilocks" level somewhere in the middle that is the best balance. I don't know the details of your particular project, so I cannot comment on that case, such as by comparing potential alternatives. I also don't know whether some members' difficulties were due to project requirements or to coding/design style choices imposed by others.

Keep in mind there are generally three skill types valued by typical organizations in roughly equal proportions: people/communication skills, domain knowledge, and technical ability.

If the target developers do well in the people and domain area but are lackluster in the tech area, then an organization may prefer that outsiders and consultants make sure the design style is grokkable and maintainable by said developers, even if the abstraction level is quite low.

{How should one determine the "Goldilocks level"?}

I don't have any solid research, only experience-based anecdotes. But so far nobody around here has solid research either. Both sides can give their anecdotal experience, and then LetTheReaderDecide.


Moved from TopMind.

I find it horrifying how much of "computer science" depends solely on AdVerecundiam (ArgumentFromAuthority). This wiki is filled with ivory-tower academics, often devoid of real-world experience and knowledge of economics, who dismiss empirical science and testing using a wide range of excuses, often ranking designs on criteria they personally find intellectually titillating, not focusing on what owners and customers want out of a tool. Battling AdVerecundiam creates a lot of heat and tension, but it's good for the industry. I feel I can make IT a bit better if I can add a little science back into SoftwareEngineering, even if it steams up The Church a bit. (See StaffingEconomicsVersusTheoreticalElegance and TopOnWhyTopIsHated for example.) --top

The problem here may be that most of us prefer to discuss engineering but you prefer to discuss economics, and the two don't join very well here (or anywhere). As for AdVerecundiam, I suppose it might seem that way if you're an economist and know little about theoretical ComputerScience or practical SoftwareEngineering. It's like an accountant demanding a mechanical engineer explain why car clutches are made of metal instead of wood. The answer might be surprisingly difficult for an accountant to appreciate if he has little academic background in the relevant physics and material science -- especially if wood is cheaper to use than metal -- and might sound like AdVerecundiam to an ear not educated in mechanical engineering. As with all sciences, knowledge in ComputerScience is constructed from a chain of references to previous work, in which evidence is built from (along with logic and empirical data) citations to references. To the outsider, this can appear to be AdVerecundiam, especially if the outsider's priorities differ from those of the typical insider.

Software *is* heavily about economics whether you want it to be or not. Economics interweaves with engineering. As a thought experiment, consider how you'd do a particular project if you had a trillion dollars and 10,000 years to spare on it versus how you would do it under typical circumstances.

And clutches can be empirically tested per failure rates, with NUMBERS even, so that we don't have to take the word of a con artist trying to protect his or her expensive services. And the accountant or market analysts can consider what defect rate is acceptable to customers. If you can't find a way to explain and measure the metal/wood issue in terms of something that customers and/or shop owners care about, then you are either lacking knowledge, or are a very poor articulator and shouldn't blame the customer/owner. (If safety issues come into play, that's another matter, but that is STILL measurable with numbers, such as stress tests.) None of your "evidence" produces numbers, just wordy bullshit. If there is a chain of logic, YOU lost it and are unable to produce ItemizedClearLogic. If ComputerScience has the answer, you are not extracting it in a proper format. Further, what's often called "computer science" is not science; it lacks some key requirements of science. You cannot dismiss business and economic concerns just because you don't like thinking about them. Related: IfYouWereSmartEnoughYoudJustKnow, IsComputerScience.

Thank you. You've confirmed my point.

If you had one, it was as obtuse as your other writing. If there is a "chain" of clear logic, list it rather than leave it as some fuzzy impression in your head. (I've fleshed out the accountant analogy a bit.) Note that we should probably consider a patient accountant who is willing to spend the effort needed to do it right, not the typical accountant with a tall in-box. Also, they may not be able to find all the relevant metrics by themselves, but may rather rely on experts and end-users for suggestions, to be later pruned, ranked, and clarified.


Re: "The problem here may be that most of us prefer to discuss engineering but you prefer to discuss economics" -- Why is that? I see engineering as constraint and tradeoff management, and economics is a big part of that. We are jugglers of features, resources, and expectations. -t

We're mainly programmers, interested in programming, code, technical aspects of software tools, SoftwareEngineering, programming languages, APIs, protocols and everything related to those things. We're certainly concerned with constraint and tradeoff management, but only in terms of programming, code, technical aspects of software tools, SoftwareEngineering, programming languages, APIs, protocols, and everything related to those things. Therefore, we're (for example) interested in whether or not a particular language construct is sufficiently expressive or flexible to let us easily (or more easily) construct anything we might conceive, or whether or not a particular mechanism is reliable enough to be used in a hostile and unreliable network. And so on.

The economic concerns that interest you -- which seem mainly to suggest reduction or avoidance of language features in order to meet the human resource limitations of certain employers, or the application of certain well-known paradigms without introducing new techniques or strategies, or accepting trade-offs that suggest we not write code the way we'd like to code -- are economic concerns related to ProjectManagement, not programming. As programmers, at best they do not concern us, and at worst they suggest imposing limitations on how we prefer to work. Thus, at best we don't care about your concerns, and at worst they are hostile to us. So, as programmers, we're not interested. If this were a wiki populated with career project managers rather than coders, I suspect you'd find considerably more positive interest in your favourite topic areas.

When you are taking notes for yourself at a lecture, you probably use a certain shorthand that you know well. You know your shortcuts and abbreviations and custom notations. However, if somebody else tried to read those notes, they'd probably pull their hair out trying to decipher them. Writing software is similar. If you focus on and write for your own personal whims, preferences, and laziness, you are essentially being selfish. Me me me! If you want to read my code, you learn my way, because I...am...special! -t

One of my colleagues -- who for many years maintained the compiler back-end for a well-known language implementation, which is all machine-level nuts-n-bolts stuff -- once jokingly described my code as being layers of abstraction that never did anything concrete. However, when he had to port one of my Java projects to C#, he praised its readability and simplicity. If one is working with developers of equal calibre, there is no reason to assume that use of higher level constructs is "being selfish", even if those developers -- like my colleague -- don't normally use such abstractions. If necessary, a good programmer can learn them in a negligible amount of time.

I've occasionally been called upon to develop code that will be maintained by end-users or equivalent. In that case, I make my code as concrete as possible, in order to suit its human audience. However, unless I know otherwise, I always make my code as simple as possible by using the highest level abstractions that are available, under the assumption that my code will be maintained by developers who can understand high level abstractions. In roughly three decades of professional development, that approach has worked successfully.

Re: "If one is working with developers of equal calibre..." Like I mentioned elsewhere, in many places the skill level is highly mixed. Domain skills and documentation/communication/people skills (and perhaps hardware and networking) are also valued such that the tech side may not be the final primary qualifier. But I agree that each shop is different and the staffing profiles vary widely. Target the likely audience, not the ideal audience.

I haven't yet worked in a place where the skill level was so variable that I couldn't code with the highest level abstraction capabilities of whatever language was prevalent, and not be well assured that every coder in the place could read and maintain my code. As I mentioned, on occasion I have written code to be maintained by tolerably technically-savvy end users, and for them I limited the code to what I knew they knew.

And, what is the probability that one will have to port Java to C#? YagNi suggests you don't up-front code for such scenarios unless it's fairly likely, otherwise one can play WHAT-IF all day to justify GoldPlating. Recently I've been working on a custom project tracker for a small work-group. I can see a potential future need for version control, a more flexible permissions control system, a custom field-selector report writer, among other things; but those are not current needs and adding all those would bloat up the project.

That's interesting, but I'm not sure what it has to do with the above. Particularly, why ask -- if only rhetorically -- what the probability is of porting Java to C#, and how is it relevant?

If you say, "Look how well technique X made change Y easier". The probability of needing Y still matters. When making decisions we don't know the actual events of the future, and can only estimate. In this case Y happened, but that does not mean the probability of Y is 100% in the general sense. One sample point does not a trend make. I think you would agree that it's probably not worth designing code around an event that only has a 2% chance of happening (unless perhaps it's catastrophic) such that you agree that probability matters, at least to some degree.

There was no anticipation when writing the Java code that it would be eventually ported to C#. It could equally likely have wound up being ported to Modula 2, or Python, or Haskell, or PL/I, or Perl, or assembly language, or Lisp, or FORTRAN, or not being ported at all. I didn't code the application with any intent that the code would later be ported, but if I had known it would eventually be ported, I would have written it exactly the same way. Use of higher abstractions generally makes code more readable -- and therefore more easily portable, and by that I mean more easily re-writeable -- than lower abstractions because higher abstractions generally express intent more concisely than lower abstractions.

"More readable" varies widely between individuals and probably teams. And I've been bitten at times by higher abstractions when requirements changed way outside of their fit, at least domain-related abstractions. See EightyTwentyRule.

HigherOrderFunctions and the like aren't domain-related abstractions, they're programming language constructs. As such, any requirements change that precludes using one programming language construct such as a HigherOrderFunction is almost certainly a requirements change that requires a general re-write. As for "more readable", one programming language construct is only less readable than another if you haven't learned it yet. 'For' loops are as mysterious as HOFs to the programmer who has learned neither.

HOF's make a language or system closer to machine language, and if you build a kind of machine-language interpreter in your app language, then all you have to do is build another machine-language-like interpreter in the target language to simplify conversion. In that sense, you are correct. However, one of the reasons assembler and machine language was mostly abandoned for non-machine-intensive business and end-user applications was to introduce discipline and some degree of regimentation to coding. Some abstraction and indirection ability may have indeed been lost, but that's the price of civilizing development and staffing.

How are HOFs "closer to machine language"? I've never seen a machine language construct that directly implements closures, let alone HigherOrderFunctions. Having "higher order" in the name suggests that they're more abstract and therefore further away from the machine, rather than closer to it. Could you explain?

We are getting overly general here. How about a specific pseudo-code example of them making conversion among languages easier?

How are we getting "overly general here"? You suggested HOFs are "closer to machine language", not me. If you aren't willing to defend that (rather bold and unusual) statement, would you perhaps consider retracting it?

As I pointed out, HigherOrderFunctions can make code simpler. Simpler code is easier to read. Code that is easier to read is easier to port, because code that is easier to read is easier to understand and code that is easier to understand is easier to re-write. See, for example, http://www.javaworld.com/article/2092260/java-se/java-programming-with-lambda-expressions.html
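
For what it's worth, a brief before-and-after sketch in Python (the invoice-filtering example is invented, not taken from the linked article), showing the kind of simplification meant here:

  invoices = [{"id": 1, "total": 250.0, "paid": False},
              {"id": 2, "total": 75.0,  "paid": True}]

  # Without a higher-order function: a hand-rolled loop per report variation.
  overdue = []
  for inv in invoices:
      if not inv["paid"] and inv["total"] > 100:
          overdue.append(inv)

  # With a higher-order function: the variation is passed in as behaviour.
  def select(rows, keep):
      return [row for row in rows if keep(row)]

  overdue    = select(invoices, lambda inv: not inv["paid"] and inv["total"] > 100)
  small_paid = select(invoices, lambda inv: inv["paid"] and inv["total"] < 100)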

Fine, I'll consider removing it because I don't want to spend the energy defending it right now.

And that article looks like lab-toy kind of examples (ArgumentByLabToy), or at least doesn't reflect actual needs I encounter. After all these years, you should have known I'd complain about that.

Of course it's a lab toy. Do you really expect a magazine article to post the source code for a 1,000,000 line production application and tell you to hunt for the lambda expressions yourself? Do you expect them to ask you what example application would meet your needs? ("Please build me the project management application I'm building right now for Sam in Accounting, so I can see how HOFs would benefit me.") They could do that, but then every use of a lambda expression would look pretty much like those illustrated in the above URL. That's why lab-toy examples are just fine -- they're the same thing as production examples with the needless bits removed. If you can't derive how they might apply (or not) to your domain, then how do you learn from texts and examples at all?

I didn't ask for a general illustration. And perhaps I am just too stupid to apply such examples to my domain in a way that improves the app, because I just can't, and the only examples you guys seem to apply that are domain-relevant appear to assume poor languages or poor APIs. "HOFs are a great band-aid for crappy tools" is perhaps the lesson, because I'm not learning anything else from such attempts. But why should I or any reader assume they do have some clear benefit for general custom business apps? Either you don't know the domain well enough to comment on it, or you are not smart enough to apply HOF magic to it either (outside of the band-aid pattern). We are both stupid, it seems, and the elusive Scenario Unicorn remains at large.

I don't know how a general illustration would differ from a specific illustration. Does a general "lab toy" example like 'myButton.add(e -> myHandler(e))' differ appreciably from a real-world, domain-specific example like 'okButton.add(event -> insertCost(event))'?

If you feel HOFs are only "a great band-aid for crappy tools", do you also feel that FOR loops are "a great band-aid for crappy tools"? They're both programming language constructs for which there are alternatives.

There are already discussions on the WetWare of blocks versus GOTO. I don't need to repeat that here. And the competitors to HOF's are pretty good, while the competitors to loops are not (per personal observations related to coder WetWare).

The main competitor to a FOR loop is a WHILE loop. The only competitor to a HigherOrderFunction that comes close to doing what a HigherOrderFunction does is a FunctorObject. A FunctorObject does not implicitly close over its defining environment, and requires that context be explicitly passed to it, usually via its constructor.
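
A minimal Python sketch of that contrast (the names are invented for illustration): the closure version implicitly captures its defining environment, while the FunctorObject version must be handed its context explicitly through its constructor.

  # HigherOrderFunction / closure: 'threshold' is captured implicitly.
  def make_discount_check(threshold):
      def qualifies(order_total):
          return order_total >= threshold   # closes over 'threshold' from the defining scope
      return qualifies

  check = make_discount_check(500)
  print(check(750))   # True

  # FunctorObject: the context is passed explicitly, here via the constructor.
  class DiscountCheck:
      def __init__(self, threshold):
          self.threshold = threshold
      def __call__(self, order_total):
          return order_total >= self.threshold

  check = DiscountCheck(500)
  print(check(750))   # True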

I thought you meant the existence of loops in general. (And we could perhaps do away with FOR loops, per another discussion whose location I forget.) As for HOFs, their best competitor may depend on the situation rather than being a one-to-one swap-out. That's why I want realistic business examples/scenarios to explore, so that we don't have to dance with general claims anymore.

See above where I wrote, "Above, you wrote, '... that article looks like lab-toy kind of examples ...'"


Another Opinion on "Excess" Abstraction

"Why Ruby on Rails won't become mainstream"

http://beust.com/weblog/2006/04/06/why-ruby-on-rails-wont-become-mainstream/

BEGIN QUOTE

I’d like to take some time to explain why, in spite of all its qualities, Ruby on Rails will never become mainstream.

As you probably guessed, my conviction doesn’t come from technical grounds.

Have you ever come across Smalltalk or Lisp programmers? You know, these people who, no matter what you tell them, will always respond that "Smalltalk did that twenty years ago" or that "Nothing has been invented since Lisp". They listen to you patiently with an amused light in their eyes and when you’re done talking, they will just shrug away your points and kindly recommend that you read up on a thirty-year old technology that was the last thing they ever learned and that has been dictating every single technical judgment they have offered since then.

I believe that in ten years from now, people will look back at Ruby on Rails and will have the same reaction. I’m not sure what Web frameworks we will have by then, but I’m quite convinced that a lot of the current Ruby on Rails fanatics will have the same kind of attitude: "That’s nice, but Ruby on Rails already did this ten years ago, and better".

Interestingly, they might even be right. But by then, it won’t matter because despite its technical excellence, Ruby on Rails will still be a niche technology that only experts know about...

I find [Ruby's] syntax and concepts extremely elegant and powerful at the same time...But it’s a complex language that contains a lot of advanced idioms which will be very hard for PHP and Visual Basic programmers to absorb.

[On Rails] Sometimes, too much magic is too much magic, and it can definitely be the case that the flow of code is too direct or too clever to be understandable by regular developers. Developers were able to do the jump from imperative to object-oriented programming, but it was a hard fight. I don't believe the Web world will ever be ready to embrace the Rails cleverness.

[Emphasis added.]

END QUOTE

That's an interesting article, though now quite old. What appears to have kept Rails from the mainstream is not its "cleverness" -- as much of it is now found in various mainstream languages and frameworks -- but slow performance, awkward integration with enterprise systems, poor and/or unprofessional documentation (how many Fortune 500 IT departments are going to trust their infrastructure to a system with docs written in a hallucinatory style by a guy named "Why the Lucky Stiff"?) and support-communities that seemed entirely populated by socially-inept 15-year-old boys.

Those who are best able to master symbolic languages tend to also be "socially inept" and poor at writing and human interaction. Thus, the two factors may be related in the case of Ruby on Rails. Symbol math and "people math" both typically consume a good many brain resources to do well, so it's difficult to master both.

That's an interesting hypothesis. If generally true, we should expect the Haskell community to display an exceptional degree of socially-inept writing and behaviour, given that Haskell is considerably more abstract than Ruby or RubyOnRails. In actuality, the Haskell community is notably mature, socially supportive, and articulate. By (anecdotal) contrast, the Borland Delphi and Microsoft Access forums I participated on some years ago were rife with immaturity, and nobody would accuse Delphi or Access of being abstract, let alone symbolic. In short, I doubt there is any consistent correlation between a language's level of symbolic-ness or abstraction and its community behaviour.

Is there a TiobeIndex equivalent for people skills?


I'm amazed that a page discussing the merits of abstraction doesn't contain the word "documentation" anywhere on it.

Why do you feel it should?


See also: TopOnAbstraction, RocketAnalogyProblem, ParadigmPotpourriMeansDiminishingReturns, GoalFrameOfReferenceMismatch


CategoryMetrics, CategoryEconomics

