Let's see if we can get to the bottom of this long-running debate theme that's been tugging at this wiki for years. (See GreatLispWar for an example.) I believe that excessively abstract programming is uneconomical for a good many organizations because it makes it more difficult to find staff who can read their existing code. Poorly abstracted code (such as copy-and-paste programming) may slow down maintenance, but not being able to grok code can outright halt production, which is often viewed as the greater evil by management (for good or bad). Two factors are fighting against each other: 1) long-term average productivity, and 2) risk.
Even some of my own code has been called "overly abstract" in at least 3 organizations I've worked or contracted at, so I'm not merely defending my own style: it's a real concern in real organizations. I shall call it the "abstraction halt" as a working term.
Many organizations value people skills and domain knowledge just as much as technical ability in coders, such that they are not going to go out and hire better-caliber programmers and sacrifice the other two factors. Managers eventually settle on a Goldilocks Region of abstraction: a level that produces sufficient productivity without risking an abstraction halt. -t
Maybe your code was "overly abstract" (though I don't know what that means, actually), or maybe your critics were idiots, or maybe they were merely trying to tell you -- in as polite and ego-preserving a fashion as possible -- that your code was unreadable or awkward. (You didn't try sticking all the code in a table, did you?) None of these provide any indication as to what is, or is not, an appropriate level of abstraction. Done well, a given level of abstraction should always improve readability, if for no other reason than it -- by the definition of "abstract" -- means there's less irrelevant detail to consider.
- On paper that "sounds right", but in practice it doesn't work that way. Different coders are tripped up by different things, and over time one gets a feel for what's "safe" and what's not. I don't know why such grokking patterns exist, they just do. That's just the way human coders are. And it wasn't TOP (TableOrientedProgramming) that was the most frequent complaint, but merely too many levels of functions or general indirection. (I tended to avoid TOP outside of ExBase, where I learned early it was not appreciated. A good many ExBase contractors were just fine with TOP and used it themselves.)
- Weak coders are tripped up by many things. I don't think abstraction can be singled out as a cause for up-tripping, or even considered more likely to trip up weak coders than, say, wholly concrete up-trippers like C format(...) strings, interacting with Web services, configuring an Apache2 .htaccess file, and so on.
- Different people are tripped up by different things. Most intranets are on IIS, not Apache, in my experience, such that internal coders didn't have to deal with Apache config. IIS used to be almost plug-and-play, but because of breaches, Microsoft is gradually making it more like Apache, such as switching off services by default, such that web service configuration expertise is becoming separate from programming. Or one just hosts "on the cloud" where the common web-oriented behavior is ready to go. And if one specializes in C, they learn C's idiosyncrasies. However, most internal custom business app shops don't use C, but rather Microsoft and a handful of other tools, such as Delphi, Paradox, Oracle client tools and PL/SQL, etc. For non-mission-critical apps, Microsoft is by far the most common tool set, for good or bad (VB, Dot-Net, Access, Office VBA). One has to kind of target a lowest common denominator of abstraction, or at least roughly the 75th percentile. In other words, don't use a technique that is likely to trip up more than 1/4 of the typical developers who would walk in the door given the existing company profile and hiring practices. -t
- You're actually bothering to quibble about a list of almost completely random examples? WHY? Who cares what's used where? I simply picked examples likely to be recognised by every reader, but Office VBA has just as many gotchas as anything else. The point isn't the examples themselves, but the concept they illustrate -- that abstraction is no more likely to trip up weak coders than concrete complexity. It's complexity that trips them, not abstraction.
- I have to partly disagree. I've seen coders who handle "spaghetti code" well but were tripped up by abstractions I thought were relatively straightforward. The quantity of parts does not seem to be the primary factor.
- Spaghetti code isn't any more "complex" in a technical sense than a ball of string, because it's the same thing -- GOTO -- over and over. There's no variation or variety. It's simplicity taken to the point of ludicrousness.
- I suppose "complex" is a tricky word to tie down the exact meaning of, barring a pet metric. And by "spaghetti code" I didn't mean GOTO in particular (it's an overloaded term). It could be over-use of global variables with goofy side-effects, having multiple functions with different signatures that do the same thing, etc.
By the way, shouldn't this be StaffingEconomicsVersusTheoreticalEvidence?? Elegance is merely one facet of theoretical evidence, which also includes parsimony, simplicity, composability, flexibility, reusability, readability, extensibility, power, expressiveness, safety, and so on.
One can have a theoretical model of WetWare also. I admit I am at a loss for a better single word to place on the concepts that tend to generate the controversy at issue, but have yet to find a better alternative to "theoretical elegance". -t
But nobody's arguing in favour of theoretical elegance, so how can it be versus staffing economics or anything else? Theoretical evidence can, on the other hand, be considered either in opposition to, or support of, empirical evidence.
For now, we'll probably have to consider each issue on a case by case basis. The term "theoretical evidence" is too broad for our use. The economic-meets-wetware evidence I often use can also be considered "theoretical evidence": I establish economic and wetware models as approximations of reality and likely human behavior and then optimize the parameters to maximize the "good factors" of those models. But since those models have not been empirically established, they are only "theory". Psychology is often "theoretical evidence" also.
I mean "theoretical evidence" solely in terms of ComputerScience. Theoretical elegance is something else -- usually (in ComputerScience) meaning algorithmic symmetry or beauty.
"Symmetry or beauty" is pretty much the same as "elegance" in my book.
Yes, that's what I wrote. Nobody's arguing about staffing economics versus symmetry or beauty. I've never seen anyone on this wiki argue for anything on the basis of symmetry or beauty. Theoretical evidence is about parsimony, simplicity, composability, flexibility, reusability, readability, extensibility, power, expressiveness, safety, and so on, and gives only glancing notice to elegance.
Parsimony is about the only metric we agree on, but very few agree that "code size" should be the overwhelming factor. The rest are often subject to personal views or conflict with each other such that they cannot all be optimized at the same time. Something that's flexible can also confuse, for example. Grokkability (readability?) is important. See GreatLispWar. Lisp is very flexible and "compact" in terms of syntax and language rules. But it has confused typical developers for 50 or so years. Some of those goals conflict with grokkability.
Parsimony is not just about "code size", but about overall simplicity. I make no claims as to the importance of, or any other ranking of, the various categories of theoretical evidence, other than to state that theoretical evidence is important and significant evidence.
- Outside of code-size or something very similar, it's difficult to define objectively. "Overall simplicity" as a phrase in English is highly subjective if code size is not included. And I disagree they are all nearly equally "significant". I'd like more evidence to support such a claim. Further, most are tied to grokkability anyhow. If the maintainer cannot grok it, they are not going to get much reuse out of it, for example. If they cannot grok the "safety" features enough to use or fix them, they may hack a work-around that is uglier than not having the safety feature to begin with. KissPrinciple thus sometimes facilitates safety even more than explicit safety features. Pretty much all roads go through grokkability first, making it a cornerstone. -t
If a maintenance programmer cannot figure out the abstraction(s), the project is dead in the water; stuck! The other issues may create annoyances or temporary slow-downs, but rarely get one outright stuck. A business will often choose to have, say, 5 slow-down events over one outright stuckage event. Stuckages get the developer and his/her boss hauled onto the carpet to explain what went wrong to the upper echelon.
Level of abstraction is but one type of theoretical evidence, and by no means the only one. Yes, sometimes degree of readability or grokkability is at odds with the level of abstraction. An abstraction based on CategoryTheory, for example, would probably be beyond anyone without the appropriate background in CategoryTheory. Deciding what is an appropriate basis for using or rejecting a tool or technique is something a good project manager does on a case by case basis.
Yes, case-by-case basis. However, there are some general common patterns and business truisms; and techniques that risk confounding typical future maintainers are an orange flag. Using the existing staff as your reference point is taking a fairly big risk. -t
How do you objectively identify the "techniques that risk confounding typical future maintainers"?
- Unfortunately, one has to go by experience and anecdotal info from trusted colleagues. I know of no formal studies, either way.
However, I agree that using existing staff as your reference point is taking a big risk. You risk undue limitations. For example, whilst many currently-employed programmers won't have been educated in FunctionalProgramming and abstractions like HigherOrderFunctions, almost every new ComputerScience or SoftwareEngineering graduate will have extensive exposure to both.
Judging based on estimated future trends is riskier than basing decisions on the current situation. I see no evidence the pool of skills is significantly changing. The proportions I typically encounter in non-IT businesses are roughly split evenly among:
- No degree: self-taught or college drop-out (including associate's degree)
- Trade schools (with names like "IT Systems & Business Intelligence Institute")
- Other majors (math, engineering, biology, philosophy etc.)
- Software engineer or computer science degree
If you can identify an industry trend that's changing this mix, please do.
I can only speak for trends in the UK, but there are currently two significant industry-impacting movements:
The first is a growing government recognition that ComputerScience and programming are fundamentally important, to the point that school children will now be taught ComputerScience and programming as part of the national curriculum, starting in September 2014. (See http://www.telegraph.co.uk/technology/news/10410036/Teaching-our-children-to-code-a-quiet-revolution.html) From the employer's side, I attended an employment conference a few years ago where I did a straw poll of about fifteen major national and international employers -- none of whom were IT per se (though one was a CPU manufacturer) -- and all required ComputerScience or EE graduates and strong technical skills in addition to business knowledge.
The second is a general movement, over roughly the last decade -- in both IT education and IT practice -- from the rather amorphous "user support" and soft skills like systems analysis, to a highly-technical focus on databases, data management, data analysis, customer insight and business intelligence. Students who once would have been taught to support Excel and draw ER diagrams -- or would know even less, coming from a non-technical background -- are now taught a variety of programming and reporting tools like Qlikview, SAS VA, Cyberquery, Cognos BI, etc., and are expected to be able to use them effectively.
Well, here in the USA, the Republican Party strongly balks and attempts to block any federal "government interference" in commerce, including education. The education profile of IT workers has generally not significantly changed over the last 3 decades (since the microcomputer revolution). The proportion of CS graduates to other types of IT education has generally remained the same in my observation. Some states are considering making programming a standard high-school course in that state, but so far it hasn't ramped up. I'm not sure that would change anything anyhow, since it's only an introduction. -t
One of the problems with lots of levels of indirection is making an exception to the rule. If you use mostly copy-and-paste, then an exception to the rule is simply a spot change. However, function calls require a way to specify that a particular instance gets slightly different treatment. Simple OO inheritance is not powerful enough because the difference may not be "tree shaped". And many languages don't make it easy to have named optional parameters, such that one has to change every existing function call to add new behavior (a new parameter profile). There are ways to make code better able to handle exceptions to the rule (variations on a theme), but everybody has a different favorite way, and some ways confuse different coders, especially if not coded well (including poor variable names, lousy comments, etc.) -t
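As a rough sketch of the named-optional-parameter point -- illustration only, in C# (which does support them), with invented names and rates -- the "exception to the rule" can be added without editing any existing call site:

  // Sketch only: an optional named parameter covers the exceptional case.
  // All identifiers and values here are invented for illustration.
  static class Shipping
  {
      public static decimal Calc(decimal weight, string region,
                                 bool rushOrder = false)    // new, optional behavior
      {
          var rate = region == "EU" ? 4.50m : 3.00m;        // illustrative rates
          if (rushOrder) rate *= 2;                         // the exception to the rule
          return weight * rate;
      }
  }

  // Existing call sites compile unchanged:      Shipping.Calc(2.5m, "US");
  // Only the exceptional caller names the flag: Shipping.Calc(2.5m, "US", rushOrder: true);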
Making an exception to a rule is hard no matter how you have to do it, though needing to make an exception in an abstraction suggests the abstraction wasn't right to begin with.
That's always a problem with abstractions. The existing case set may not be sufficient to build an abstraction able to handle future changes to what the abstraction must cover. The business world is filled with "soft" and dynamic abstractions, such that you WILL often get it "wrong" on the first try. Related: EightyTwentyRule, AreBusinessAppsBoring. One is not modelling something stable like physics or math, but rather the world view of the customers/managers; and they change.
No, that's a problem with exceptions, or with programmers lacking understanding of the role of abstractions. Abstracting a customer category is generally doomed to failure -- it's not abstract, it's just an arrangement of characteristics. Abstracting a style of report or a type of form is likely to be highly successful -- it is abstract.
I disagree that abstracting CRUD well is a trivial undertaking. The potential for variation or tweaking to fit specific requests is large enough that a CRUD tool which handles everything becomes such a massive wad of configuration that it overwhelms most developers. Instead, most CRUD tools assume certain UI styles and conventions to avoid becoming a God CRUD Tool.
Who said it was a trivial undertaking? Anyway, the key to abstracting CRUD well is composition, not configuration.
There appears to be some confusion here. What did you mean by "type of form"? Hierarchical "types" are often too inflexible in my experience. Future variations on a theme are hard to predict here also. The deeper one's Grand Abstraction, the more fragile or confusing-to-change it may be.
There are different types of forms. E.g., dialogue box, master-detail, master-detail-detail, master-master-detail, and so on ad infinitum. Attempting to encapsulate all current and future form requirements via a "mass wad of configuration" is doomed to a quick failure. Instead, forms can be effectively composed out of smaller parts, like assembling a house out of LegoTM bricks, except instead of bricks you have panels, searchbars, navigator bars, selectors, menus, popups, toolbars, etc., and ways of combining and recombining them into reusable components. Most GUI toolkits already embody this strategy; it's a matter of extending the strategy to include forms, reports, and all the usual CRUD-oriented UI components.
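For illustration only, a minimal sketch of the composition idea, with invented class names rather than any real toolkit's API: each part is a component, and a form is just one particular assembly of parts.

  using System.Collections.Generic;

  // Hypothetical composition-style API: a form is assembled from reusable
  // parts rather than configured through one big options blob.
  interface IFormPart { void Render(); }

  class SearchBar    : IFormPart { public void Render() { /* draw search bar */ } }
  class NavigatorBar : IFormPart { public void Render() { /* draw record navigator */ } }
  class DetailGrid   : IFormPart { public void Render() { /* draw detail rows */ } }

  class ComposedForm : IFormPart
  {
      private readonly List<IFormPart> parts = new List<IFormPart>();
      public ComposedForm Add(IFormPart part) { parts.Add(part); return this; }
      public void Render() { foreach (var p in parts) p.Render(); }
  }

  // A master-detail screen is then just one particular composition:
  //   var orderScreen = new ComposedForm().Add(new SearchBar())
  //                                       .Add(new NavigatorBar())
  //                                       .Add(new DetailGrid());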
Hierarchical decomposition (parts, sub-parts, sub-sub-sub-parts, etc.) often sounds like a wonderfully simple idea on paper, but integrating them all to work smoothly as a unit is often not a trivial task, especially when the screen view is very different from the underlying database tables, as often happens with legacy systems. For example, it's generally not efficient (nor ACID-friendly) for each field/widget to communicate with the database on its own. Thus, it must somehow coordinate or participate in coordination with other widgets and objects to communicate with the database in the appropriate way at the appropriate time, including validation concerns. And sometimes updates to the content of one widget will need to be reflected in another. These kinds of cross-cutting concerns are not trivial to coordinate and are rarely purely hierarchical. Lego bricks are an unrealistic analogy, except as a rough starting point. Further, managers/customers often want to cram as much as possible on one screen/page. This requires fine-level control over many features that "generic" or "overly smart" widgets may not have or do well. (Cramming may be a UI design smell, but it's what they want and they have your paycheck.) One finds that multi-dimensional (cross-cutting) concerns are usually the bottleneck even outside of CRUD and that any hierarchical/nested/compositional arrangement is only a starting reference point--a scaffolding upon which to wire up and tune the cross-cutting concerns, which is where most of the coding, tweaking, debugging, and hair-graying work takes place in terms of time. -t
Yes, programming is challenging. I've been developing CRUD-screen tools since the 1980s. It is possible to "integrat[e CRUD components] to work smoothly as a unit", and I agree that it's not a trivial task, but it's certainly do-able. I did it. I based a successful career on it.
Successful at what? Implementing specific CRUD-intensive applications, or devising lasting and "good" CRUD frameworks? Either way, I have no way of inspecting their quality to see if they are as nice as you claim.
Successful at both. I developed lasting and "good" CRUD frameworks and used them to build CRUD-intensive applications. Whether you would consider them "nice" or not is irrelevant. My colleagues, employees and I found them "nice" enough to be competitive.
In a way, I think that I'm agreeing with Top here. There are many shops where it's easy to out-abstract the rank-and-file programmers. Sometimes that's for a bad reason: management doesn't care, or is just looking for PlugCompatibleInterchangeableEngineers. Sometimes, it's because you simply can't hire enough smart geeks: you need hundreds if not thousands of programmers to build your operating system and/or New World Order. And sometimes, the best programmers for the jobs are poor to mediocre geeks who have other skills you need.
I once interviewed for a position at a Finite Element Analysis shop; the guys who make the software that figures out how fast you can run your engine before the piston heads come flying out. They were a FORTRAN shop considering a move to C++. The vast majority of their coders were people with PhD's in mechanical engineering and/or physics backgrounds. Maybe one in five considered themselves software developers, and they were there to write the glue and try to refactor the spaghetti that the engineers wrote.
This was exactly as it was meant to be. Certainly the programmers didn't know enough to program the solutions the company needed; they needed domain experts. You could out-abstract them easily, even though they may be smarter than you, because that's just not how they think. And especially in engineering apps like this (and especially fluid flow simulators), you still care more about calculations per second than elegance of expression, and would rather have fast spaghetti than slightly slower beautiful code.
Where I work, OTOH, is a credit card processing network. Our domain experts don't have to write the code themselves, and the speed of our code isn't so critical--calculation time is dwarfed by disk and network I/O. We can afford to hire very good programmers, because we can work with several dozen, not several hundred of them. With all that, reaching for a higher abstraction isn't such a bad idea, so long as the tests still pass.
If you don't need Mongolian Hordes of programmers, if you can afford to have your coders be programmers first and domain experts second, and you aren't in a maximum-performance or minimal-hardware environment, then higher abstractions can quickly reduce the cost of adding your next feature. Lots of us on this board live in that world, or at least imagine that we do. Those not in that world are going to care more about other concerns, like not confusing the people that they have.
--RobMandeville
For clarification, I agree that every organization or staffing need case can be different (ItDepends). I'm not claiming that staffing flexibility or inflexibility is always the main bottleneck to abstraction decisions; it just has been in the majority of organizations I have worked for. I am merely asking that one be mindful of staffing issues when making abstraction-related decisions because in some cases, perhaps many, they can be a bottleneck to such. Thanks for your feedback. --top
I believe part of the problem is that some evaluate the code itself before or instead of looking at the big picture from the business perspective (owners, customers, managers, etc.). If you look at the code for "good design" alone, then you are evaluating its value through your own eyes, not that of the organization or other maintainers. How other actual maintainers will react to the code is a very important, if not the most important factor in judging code. -t
Target Skill Level
You have to examine how much productivity you're gaining by "coding down to" the lowest denominator. I've been on a project where there were several programmers who were deeply, fundamentally confused by the code; they simply couldn't understand it. They might have been capable programmers in some other life, but they didn't grok it there, and they made no actual effort to learn. They complained for months, and contributed nothing to the project. Meanwhile, the three programmers who understood what was happening produced tens of thousands of lines of working code. If we had written down to the lowest common denominator, we would've been abandoning many things that allowed the successful delivery of complex functionality in favor of--what? Dynamic web page coding out of 2001?
The company could've done better just hiring the three programmers who were capable of understanding the abstractions. We were sufficient to get the work done (as demonstrated by the fact that we got work done), and would've been better off without the friction of contractors complaining that they didn't understand why we would write unit tests or what a "validator" is.
I never suggested targeting the LOWEST common denominator. I don't know where you got that idea from. There is a "Goldilocks" level somewhere in the middle that is the best balance. I don't know the details of your particular project so cannot comment on that case, such as comparing potential alternatives. I also don't know whether some members' difficulties were due to project requirements or to coding/design style choices imposed by others.
Keep in mind there are generally 3 skill types valued by typical organizations in roughly equal proportions:
- Technical skills
- People skills (diplomacy, explaining clearly, etc.)
- Domain knowledge
If the target developers do well in the people and domain area but are lackluster in the tech area, then an organization may prefer that outsiders and consultants make sure the design style is grokkable and maintainable by said developers, even if the abstraction level is quite low.
{How should one determine the "Goldilocks level"?}
I don't have any solid research, only experience-based anecdotes. But nobody else around here has solid research so far, either. Both sides can give their anecdotal experience, and then LetTheReaderDecide.
Moved from TopMind.
I find it horrifying how much of "computer science" depends solely on AdVerecundiam (ArgumentFromAuthority). This wiki is filled with ivory-tower academics, often devoid of real-world experience and knowledge of economics, who dismiss empirical science and testing using a wide range of excuses, often ranking designs on criteria they personally find intellectually titillating, not focusing on what owners and customers want out of a tool. Battling AdVerecundiam creates a lot of heat and tension, but it's good for the industry. I feel I can make IT a bit better if I can add a little science back into SoftwareEngineering, even if it steams up The Church a bit. (See StaffingEconomicsVersusTheoreticalElegance and TopOnWhyTopIsHated for example.) --top
The problem here may be that most of us prefer to discuss engineering but you prefer to discuss economics, and the two don't join very well here (or anywhere). As for AdVerecundiam, I suppose it might seem that way if you're an economist and know little about theoretical ComputerScience or practical SoftwareEngineering. It's like an accountant demanding a mechanical engineer explain why car clutches are made of metal instead of wood. The answer might be surprisingly difficult for an accountant to appreciate if he has little academic background in the relevant physics and material science -- especially if wood is cheaper to use than metal -- and might sound like AdVerecundiam to an ear not educated in mechanical engineering. As with all sciences, knowledge in ComputerScience is constructed from a chain of references to previous work, in which evidence is built from (along with logic and empirical data) citations to references. To the outsider, this can appear to be AdVerecundiam, especially if the outsider's priorities differ from those of the typical insider.
Software *is* heavily about economics whether you want it to be or not. Economics interweaves with engineering. As a thought experiment, consider how you'd do a particular project if you had a trillion dollars and 10,000 years to spare on it versus how you would do it under typical circumstances.
And clutches can be empirically tested per failure rates, with NUMBERS even!, so that we don't have to take the word of a con artist trying to protect his or her expensive services. And the accountant or market analysts can consider what defect rate is acceptable to customers. If you can't find a way to explain and measure the metal/wood issue in terms of something that customers and/or shop owners care about, then you are either lacking knowledge, or are a very poor articulator and shouldn't blame the customer/owner. (If safety issues come into play, that's another matter, but is STILL measurable with numbers, such as stress tests.) None of your "evidence" produces numbers, just wordy bullshit. If there is a chain of logic, YOU lost it and are unable to produce ItemizedClearLogic. If ComputerScience has the answer, you are not extracting it in a proper format. Further, what's often called "computer science" is not science; it lacks some key requirements of science. You cannot dismiss business and economic concerns just because you don't like thinking about them. Related: IfYouWereSmartEnoughYoudJustKnow, IsComputerScience.
Thank you. You've confirmed my point.
If you had a point, it is as obtuse as your other writing. If there is a "chain" of clear logic, list it rather than leaving it as some fuzzy impression in your head. (I've fleshed out the accountant analogy a bit.) Note that we should probably consider a patient accountant who's willing to spend the effort needed to do it right, not the typical accountant who has a tall in-box. Also, they may not be able to find all the relevant metrics by themselves, but would rather rely on experts and end-users for suggestions, to be later pruned, ranked, and clarified.
Re: "The problem here may be that most of us prefer to discuss engineering but you prefer to discuss economics" -- Why is that? I see engineering as constraint and tradeoff management, and economics is a big part of that. We are jugglers of features, resources, and expectations. -t
We're mainly programmers, interested in programming, code, technical aspects of software tools, SoftwareEngineering, programming languages, APIs, protocols and everything related to those things. We're certainly concerned with constraint and tradeoff management, but only in terms of programming, code, technical aspects of software tools, SoftwareEngineering, programming languages, APIs, protocols, and everything related to those things. Therefore, we're (for example) interested in whether or not a particular language construct is sufficiently expressive or flexible to let us easily (or more easily) construct anything we might conceive, or whether or not a particular mechanism is reliable enough to be used in a hostile and unreliable network. And so on.
The economic concerns that interest you -- which seem mainly to suggest reduction or avoidance of language features in order to meet the human resource limitations of certain employers, or the application of certain well-known paradigms without introducing new techniques or strategies, or accepting trade-offs that suggest we not write code the way we'd like to code -- are economic concerns related to ProjectManagement, not programming. As programmers, at best they do not concern us, and at worst they suggest imposing limitations on how we prefer to work. Thus, at best we don't care about your concerns, and at worst they are hostile to us. So, as programmers, we're not interested. If this were a wiki populated with career project managers rather than coders, I suspect you'd find considerably more positive interest in your favourite topic areas.
When you are taking notes for yourself at a lecture, you probably use a certain shorthand that you know well. You know your shortcuts and abbreviations and custom notations. However, if somebody else tried to read those notes, they'd probably pull their hair out trying to decipher them. Writing software is similar. If you focus on and write for your own personal whims, preferences, and laziness; you essentially are being selfish. Me me me! If you want to read my code, you learn my way, because I...am...special! -t
One of my colleagues -- who for many years maintained the compiler back-end for a well-known language implementation, which is all machine-level nuts-n-bolts stuff -- once jokingly described my code as being layers of abstraction that never did anything concrete. However, when he had to port one of my Java projects to C#, he praised its readability and simplicity. If one is working with developers of equal calibre, there is no reason to assume that use of higher level constructs is "being selfish", even if those developers -- like my colleague -- don't normally use such abstractions. If necessary, a good programmer can learn them in a negligible amount of time.
I've occasionally been called upon to develop code that will be maintained by end-users or equivalent. In that case, I make my code as concrete as possible, in order to suit its human audience. However, unless I know otherwise, I always make my code as simple as possible by using the highest level abstractions that are available, under the assumption that my code will be maintained by developers who can understand high level abstractions. In roughly three decades of professional development, that approach has worked successfully.
Re: "If one is working with developers of equal calibre..." Like I mentioned elsewhere, in many places the skill level is highly mixed. Domain skills and documentation/communication/people skills (and perhaps hardware and networking) are also valued such that the tech side may not be the final primary qualifier. But I agree that each shop is different and the staffing profiles vary widely. Target the likely audience, not the ideal audience.
I haven't yet worked in a place where the skill level was so variable that I couldn't code with the highest level abstraction capabilities of whatever language was prevalent, and not be well assured that every coder in the place could read and maintain my code. As I mentioned, on occasion I have written code to be maintained by tolerably technically-savvy end users, and for them I limited the code to what I knew they knew.
- That differs from my experience. That is, the experience/skill level of developers has been highly variable in the 2+ dozen or so companies that I have worked for or contracted at. I should point out that most were not software companies: software development was not their end-product. A software company probably puts more emphasis on raw technical skill and less on the other factors I've mentioned because most won't be facing non-technical clients/customers/users/managers/owners directly. (IT shops are also generally more specialized such that the database expert isn't also expected to do GUI design, etc. They are thus pre-screened more narrowly and deeper. Somewhere we've had this discussion already.)
- I've mainly worked for or with software companies and/or departments where the software developers were highly capable and where software was either the primary product or its production was the primary focus.
- Well, that perhaps explains much of our difference in experience. I've been gradually shifting toward systems analysis such that the domain and domain managers predominate many of the hiring decisions I witness. The orgs I work for seem to prefer departmental experts rather than a centralized development shop because they want the developers to be closer to the domain. If you are being judged on a wider variety of skills, then there will be less focus on coding alone.
And, what is the probability that one will have to port Java to C#?
YagNi suggests you don't up-front code for such scenarios unless it's fairly likely; otherwise one can play WHAT-IF all day to justify GoldPlating. Recently I've been working on a custom project tracker for a small work-group. I can see a potential future need for version control, a more flexible permissions control system, and a custom field-selector report writer, among other things; but those are not current needs and adding all of those would bloat up the project.
That's interesting, but I'm not sure what it has to do with the above. Particularly, why ask -- if only rhetorically -- what the probability is of porting Java to C#, and how is it relevant?
If you say, "Look how well technique X made change Y easier". The probability of needing Y still matters. When making decisions we don't know the actual events of the future, and can only estimate. In this case Y happened, but that does not mean the probability of Y is 100% in the general sense. One sample point does not a trend make. I think you would agree that it's probably not worth designing code around an event that only has a 2% chance of happening (unless perhaps it's catastrophic) such that you agree that probability matters, at least to some degree.
There was no anticipation when writing the Java code that it would be eventually ported to C#. It could equally likely have wound up being ported to Modula 2, or Python, or Haskell, or PL/I, or Perl, or assembly language, or Lisp, or FORTRAN, or not being ported at all. I didn't code the application with any intent that the code would later be ported, but if I had known it would eventually be ported, I would have written it exactly the same way. Use of higher abstractions generally makes code more readable -- and therefore more easily portable, and by that I mean more easily re-writeable -- than lower abstractions because higher abstractions generally express intent more concisely than lower abstractions.
"More readable" varies widely between individuals and probably teams. And I've been bitten at times by higher abstractions when requirements changed way outside of their fit, at least domain-related abstractions. See EightyTwentyRule.
HigherOrderFunctions and the like aren't domain-related abstractions, they're programming language constructs. As such, any requirements change that precludes using one programming language construct such as a HigherOrderFunction is almost certainly a requirements change that requires a general re-write. As for "more readable", one programming language construct is only less readable than another if you haven't learned it yet. 'For' loops are as mysterious as HOFs to the programmer who has learned neither.
HOF's make a language or system closer to machine language, and if you build a kind of machine-language interpreter in your app language, then all you have to do is build another machine-language-like interpreter in the target language to simplify conversion. In that sense, you are correct. However, one of the reasons assembler and machine language was mostly abandoned for non-machine-intensive business and end-user applications was to introduce discipline and some degree of regimentation to coding. Some abstraction and indirection ability may have indeed been lost, but that's the price of civilizing development and staffing.
How are HOFs "closer to machine language"? I've never seen a machine language construct that directly implements closures, let alone HigherOrderFunctions. Having "higher order" in the name suggests that they're more abstract and therefore further away from the machine, rather than closer to it. Could you explain?
We are getting overly general here. How about a specific pseudo-code example of them making conversion among languages easier.
How are we getting "overly general here"? You suggested HOFs are "closer to machine language", not me. If you aren't willing to defend that (rather bold and unusual) statement, would you perhaps consider retracting it?
As I pointed out, HigherOrderFunctions can make code simpler. Simpler code is easier to read. Code that is easier to read is easier to port, because code that is easier to read is easier to understand and code that is easier to understand is easier to re-write. See, for example, http://www.javaworld.com/article/2092260/java-se/java-programming-with-lambda-expressions.html
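As a concrete (if contrived) illustration of that claim -- shown here in C#, the same idea as the Java lambdas in the linked article, with invented names like OkButton_Click and SaveRecord -- the lambda form puts the intent right at the point of registration:

  using System;
  using System.Windows.Forms;

  // Sketch only: the same handler wired up the older way (a separately
  // declared method) and the lambda way, side by side purely for comparison.
  class OrderForm : Form
  {
      private readonly Button okButton = new Button();

      public OrderForm()
      {
          okButton.Click += OkButton_Click;                 // older style: named method elsewhere
          okButton.Click += (sender, e) => SaveRecord();    // lambda style: intent at the call site
          Controls.Add(okButton);
      }

      private void OkButton_Click(object sender, EventArgs e) { SaveRecord(); }

      private void SaveRecord() { /* domain-specific persistence here */ }
  }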
Fine, I'll consider removing it because I don't want to spend the energy defending it right now.
And that article looks like lab-toy kind of examples (ArgumentByLabToy), or at least doesn't reflect actual needs I encounter. After all these years, you should have known I'd complain about that.
Of course it's a lab toy. Do you really expect a magazine article to post the source code for a 1,000,000 line production application and tell you to hunt for the lambda expressions yourself? Do you expect them to ask you what example application would meet your needs? ("Please build me the project management application I'm building right now for Sam in Accounting, so I can see how HOFs would benefit me.") They could do that, but then every use of a lambda expression would look pretty much like those illustrated in the above URL. That's why lab-toy examples are just fine -- they're the same thing as production examples with the needless bits removed. If you can't derive how they might apply (or not) to your domain, then how do you learn from texts and examples at all?
I didn't ask for a general illustration. And perhaps I am just too stupid to apply such examples to my domain in a way that improves the app because I just can't, and the only examples you guys seem to apply that are domain-relevant appear to assume poor languages or poor API's. "HOF's are a great band-aid for crappy tools" is perhaps the lesson, because I'm not learning anything else from such attempts. But why should I or any reader assume they do have some clear benefit for general custom business apps? Either you don't know the domain well enough to comment on it, or you are not smart enough to apply HOF magic to it either (outside of the band-aid pattern). We are both stupid, it seems, and the elusive Scenario Unicorn remains at large.
I don't know how a general illustration would differ from a specific illustration. Does a general "lab toy" example like 'myButton.add(e -> myHandler(e))' differ appreciably from a real-world, domain-specific example like 'okButton.add(event -> insertCost(event))'?
- I don't disagree that one can use HOF's to do things. That's not the point of contention. Likewise, GOTO's "work" in the absolute sense, but that doesn't mean they are the best tool for the job in terms of SoftwareEngineering and maintenance issues. I thought that was clear and am highly puzzled why you assumed it was about mere run-ability.
- Who said anything about "mere run-ability"? My point is that what you call "lab toy" examples are generally wholly sufficient to illustrate a concept. For example, we don't need to see the source code of a full-blown custom ERP system in order to appreciate how a HigherOrderFunction (or whatever) can simplify (say) an event handler, because it's going to be exactly the same code -- except perhaps for identifier names -- whether it's a real custom business application or a textbook example.
- I'm not sure why you wanted to make such a point. I don't see its purpose in this context. If you want to create general intro page for HOF's with links examples, please do. There's also a link to a sorting "comparer" HOF example somewhere IIRC.
- Above, you wrote, "... that article looks like lab-toy kind of examples (ArgumentByLabToy), or at least doesn't reflect actual needs I encounter. After all these years, you should have known I'd complain about that." My point was in response. You implied that "lab-toy kind of examples" are insufficient. I disagree. Not only are they sufficient, they are preferable. Realistic business examples are identical to textbook examples in all the areas that matter. For a given programming construct, a realistic business example is indistinguishable from a textbook example, as I showed with the difference (not) between 'myButton.add(e -> myHandler(e))' and 'okButton.add(event -> insertCost(event))'. Indeed, using realistic business examples invites far too much diversion into arguments over requirements (e.g., "You didn't need to code this at all; you should have bought off-the-shelf product X!"), rather than focusing on the code.
- It appears you are comparing that ability to straw-man-crippled OOP (or other tools) shaped by who knows what experience you've had. Anyhow, let's finish the GUI scenario at the GUI topic rather than spread it into here also. When that's finished, if you want to introduce an ERP example to work on, that's fine. But don't scatter such debates all over, please, for they blame such messes almost entirely on me, perhaps because I'm the only one with a handle.
- This threadlet has nothing to do with GUIs, and everything to do with your claim that "lab-toy kind of examples" are insufficient. My example was only intended to show that what you call "lab-toy kind of examples" are, in fact, perfectly illustrative of industrial code. For example, an event handler in a contrived textbook example will look exactly the same as an event handler in a real-life industrial ERP system. For that reason, I have no intention of going to the effort "to introduce an ERP example to work on". There is no point in doing so, and it would invite irrelevant debate -- over accounting practice, perhaps, or over choice of build-vs-buy -- that would merely distract from the core issues of programming languages and code. In short, textbook "lab-toy kind of examples" are ideal for discussing programming languages and code without getting distracted by irrelevancies.
- So the reader should just take your word for it that ERP apps are better off with that pattern instead of something else? Lab-toy examples can mislead because they brush aside various real-world complexities and are NOT a sufficient substitute for realistic code or scenarios. You can insist it's so a hundred times, but that doesn't make it so.
- Why would an event handler -- or (say) a FOR loop, or any other programming language construct or idiom -- in an ERP system be any different from that in any other application? Forms are forms, database access is database access, search algorithms are search algorithms, and data structures are data structures regardless of the application domain. You might be surprised to learn how much videogame code looks like ERP code, which looks like all other application code.
- It doesn't look like a pattern we could use in the 25 odd different shops I've been at as a contractor or employee. The granularity of difference (variation-on-a-theme) is usually below the algorithm level such that swapping in and out HOF's won't fly. We had this discussion already somewhere, and my detractor ended up concluding something along the lines of "your projects don't have enough "computer-science algorithms" in them to take advantage of HOF's". I'm not entirely sure what was meant.
- You don't use event handlers? If HOFs don't look like a pattern you could use, then perhaps that's because you feel the domain you work in doesn't require them. You appear to be referring to HofPattern, which describes contextualisation of standard algorithms. Maybe you don't use standard algorithms like A* for pathfinding, or QuickSort for sorting a collection. There are other uses for HOFs (and "higher order" programming in general) but it is conceivable that you're not used to using HOFs, so you automatically consider alternatives rather than thinking "I wish this language had HigherOrderFunctions!" It's a bit like matrix algebra. Matrix algebra is fundamental to implementing 3D graphics. If you do anything in 3D, you'll use matrix algebra. If you never develop 3D graphics applications, matrix algebra might seem useless, leading you to assert that matrix algebra "doesn't look like a pattern we could use in the 25 odd different shops I've been at as a contractor or employee." Fair enough, but that's only a statement about your employment history and the domain(s) in which you work; it's by no means a comment about the industry in general. 3D graphics primitives -- and reasonable use of HOFs -- are common and useful, even if you never use them.
- That's fine, LetTheReaderDecide which situation best fits their own shop. And as mentioned at HigherOrderFunctions discussions, for custom software, only one or two "equivalent" algorithms are selected after a group evaluation, and the shop pretty much sticks with those. There is rarely a need to keep switching them in and out in a plug-and-play manner like a thumb-drive. IF/CASE statements are perfectly fine to select between the few standard algorithms or sub-algorithms SETTLED ON. It's not the bottleneck of change management. Thus, I don't question their need to exist in software, but rather the mass adding/changing/deleting of such algorithms. Additions are generally rare and far between.
- As was pointed out repeatedly in the related discussions, there is no "switching them in and out in a plug-and-play manner" with standard algorithms. HigherOrderFunctions provide a way of defining the specific context of a generic algorithm. They permit using, say, the same employee scheduling algorithm implementation to handle both room bookings and staff allocations. They permit using, say, the same routing algorithm implementation to plan logistics and determine nominal employee travel costs. In most cases, implementing these scenarios using "IF/CASE statements" in the core of an algorithm would be arduously awkward, and risky, assuming the source code is even available.
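- A rough sketch of what "contextualising a generic algorithm" means here, with invented types and names rather than code from any real system: the selection logic is written once, and the domain-specific hooks arrive as functions.

  using System;
  using System.Collections.Generic;
  using System.Linq;

  static class Scheduler
  {
      // Generic "pick the best slot" algorithm; the domain plugs in how to get
      // a resource's free slots and how to score a slot. Names are illustrative.
      public static TSlot PickSlot<TResource, TSlot>(
          IEnumerable<TResource> resources,
          Func<TResource, IEnumerable<TSlot>> freeSlotsOf,
          Func<TSlot, int> score)
      {
          return resources.SelectMany(freeSlotsOf)
                          .OrderByDescending(score)
                          .First();
      }
  }

  // The same implementation serves room bookings...
  //   Scheduler.PickSlot(rooms, r => r.FreeSlots, s => s.Capacity);
  // ...and staff allocations, with different functions plugged in:
  //   Scheduler.PickSlot(staff, p => p.AvailableShifts, s => -s.OvertimeCost);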
- Well, I generally don't work on resource allocation software so won't comment on approaches. (Most such examples I've encountered were off-the-shelf, not custom made.) Usually the data is in the database, so either it's processed in the database, or copied to RAM for processing. If copied to RAM, then the conversion can happen at the copy point such that I don't see where HOF's would come in: we use the same fricken code with a database-to-RAM translation layer. But we are wandering off topic. If you wish to continue, then create ResourceOptimizationAndHofsDiscussion? or the like.
- Resource allocation software is irrelevant, and whether to build or buy is a project management decision, also (in this context) irrelevant. My examples were intended only to be illustrations. It sounds like your strategy is heavily focused on database-oriented processing, which is certainly feasible if the performance requirements aren't very high and the problems you need to solve can all be represented by a common schema.
- What do you mean by "high performance"? Possibly related: AreRdbmsSlow. Most domain data is in RDBMS in the shops I've been in (except for maybe documents, which are often on a file system).
- Typical examples requiring better-than-typical-SQL-DBMS-performance: videogames (especially multiplayer), weather forecasting, network traffic analysis.
- Well, okay, different domains need different things. I don't dispute that. Custom business apps as I encounter them (several orgs) are usually RDBMS-centric. Above you implied that HOF's are for everything everywhere. Can we agree that with HOF benefits ItDepends?
- HOFs aren't "for everything everywhere" any more -- or less -- than FOR loops are "for everything everywhere". They're a programming construct that can make it easier to define solutions to certain problems in code.
- But that's true of a lot things. We can dream up special syntax or keywords or loop-types to improve certain situations we encounter until the cows come home. Where's the point where we say "no more!". CreepingFeaturitis. Is it worth it to add another paradigm into our app to reduce total code volume by 1%?
- Programmers don't think in paradigms. They think of language features. No programmer is going to look at a Java lambda and say, "Sheesh, that's yet another paradigm I have to learn!" They'll say, "Ooh, simpler event handlers!"
- It doesn't necessarily mean they will expand on it; they may just copy the syntax pattern to avoid the long-cut that Java's imperfect design/libraries kind of force on you.
- Some might. I don't see that being a problem. Other developers will study the underlying paradigm in more depth. Most will use it without much thought, as yet another handy language feature.
- Perhaps, but a good many will be thinking, "Why this goofy syntax? Why can't I just hang an on-click method off of my button object like normal OOP?"
- Not in Java or C#. In these and other languages, typical event handlers are activated by registering them or adding them to a container. Thus, idioms like button.onClick.add(eventHandler) are familiar, whilst "an on-click method" is not. It may initially seem awkward to developers coming from older languages and environments that allowed only one event handler per event type per object, but this is a trivial conceptual hurdle.
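- As a small illustration of that registration idiom (standard C# event model; the two handler methods are invented placeholders), several independent handlers can be added to the same event, none aware of the others:

  using System;
  using System.Windows.Forms;

  // Sketch: two unrelated handlers registered on the same Click event.
  class TotalsScreen : Form
  {
      private readonly Button saveButton = new Button();

      public TotalsScreen()
      {
          saveButton.Click += (s, e) => SaveRecord();          // primary action
          saveButton.Click += (s, e) => RefreshTotalsPanel();  // independent observer
          Controls.Add(saveButton);
      }

      private void SaveRecord() { /* persistence placeholder */ }
      private void RefreshTotalsPanel() { /* UI refresh placeholder */ }
  }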
- When I first saw that approach, I and others found it awkward that the registration with a listener is exposed -- the wiring is hanging out of the box. Yes, people got used to it; QwertySyndrome. But it's still a poor design in my opinion. People getting used to it does not turn it into good design. (C# cloned a lot of Java intentionally to capture Java's audience back after developers jumped from VB to Java.) I know you disagree, you don't have to state that again. And my suggested approach doesn't preclude OTHER handlers, as already mentioned. It just simplifies the most common usage pattern, which good interfaces should. And again, this is NOT the proper topic to re-debate that. Please make a PageAnchor at a GUI topic if you wish to continue on GUI's.
- When programmers first saw FOR loops, they found them awkward too. Re GUIs, how do you intend to support multiple, arbitrary, unrelated handlers for a given event? See the paragraph starting with, "What I mean is that buttonY should be able to register an event handler with buttonX" on NodeJsAndHofGuiDiscussionTwo.
- As I mentioned elsewhere, perhaps we can do away with FOR loops and replace them with a regular function (perhaps a base library function even). It's hard to do untainted experiments/surveys on such anymore at this point in history.
- FOR loops are only an example of the fact that every language construct is unfamiliar until it's familiar. To beginning programmers, everything is unfamiliar. Unfamiliarity is thus a very weak argument. By the way, Lisp defines all constructs -- including FOR loops -- as functions. Of course, for that to work, you have to be able to pass code blocks as parameters.
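- For what it's worth, a purely illustrative sketch of the "loop as a regular function" idea, in C# terms: the loop machinery is an ordinary method, and the "code blocks" (condition, step, body) are passed in as delegates.

  using System;

  static class Loops
  {
      // An ordinary method standing in for the FOR construct; condition, step
      // and body arrive as functions.
      public static void For(int start, Func<int, bool> condition,
                             Func<int, int> step, Action<int> body)
      {
          for (var i = start; condition(i); i = step(i))
              body(i);
      }
  }

  class Demo
  {
      static void Main()
      {
          // Equivalent to: for (int i = 0; i < 5; i++) Console.WriteLine(i);
          Loops.For(0, i => i < 5, i => i + 1, i => Console.WriteLine(i));
      }
  }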
- Hogwash! "They'll get used to it" can be used to justify any awkward or non-optimal solution. In the long term, IT staff and related education is more productive if their tool conventions are judiciously chosen and the idioms are as orthogonal as reasonably possible. And while I agree that Lisp is very flexible, it is also too flexible in that it allows one to too easily reinvent similar idioms, losing the benefits of consistency. Flexibility and consistency do tend to be at odds with each other. I cannot necessarily spit forth clear general rules on how to always balance those, and thus use related experience to make judgement calls when making many recommendations.
- Indeed, "they'll get used to it" can be used to justify unpleasantness, but then people complain. Aside from you, where are all the complaints about HigherOrderFunctions? Where are all the complaints about Java and C# lambda expressions?
- Much of my view is from anecdotes and field experience. I've stated that already. Please stop forgetting; it's quite annoying. You can find rants and complaints on the web if you Google around. Also, many don't think much about alternatives; they grumble a little and then learn specific patterns for specific tools to get a paycheck, even if the feature is annoying; that's the way most work. If using assembler or GOTO's paid better, they'd hop on it (no pun intended).
- I have googled around and haven't found any legitimate "rants and complaints" about HigherOrderFunctions or Java/C# lambda expressions. You claim your view is based on anecdotes and field experience, but you haven't shared a single specific anecdote. Thus, I don't find your complaints very credible. Increasingly, they sound like a rationalisation for your own personal objections to language features you don't want to learn.
- Seems you flunked Google 101: PublicOpinionOnFunctionalAndNodeJs. And you haven't provided similar detail for your side of the argument. In other words, a hypocrite. Besides, what use is an anecdote such as: "On March 7, 2009, Lisa Stevens said in the AT&T cafeteria at Foomont, CA, "HOF's are screwy, why can't they just use normal OOP?". Almost nobody tracks conversations like that anyhow. Your request is ludicrous and unrealistic. It appears you are being a dick for the sake of the enjoyment of being a dick.
- What on PublicOpinionOnFunctionalAndNodeJs is about HigherOrderFunctions or Java/C# lambda expressions? Re-read the above. Language features -- HigherOrderFunctions and Java/C# lambdas -- are what this end of this threadlet is about, not paradigms. ThereAreNoParadigms. There was only one glancing mention of the FunctionalProgramming paradigm and no mention of evented I/O frameworks until you brought them up. Of course, philosophical paradigm wars and quibbles about products are as old as ComputerScience and as easy to find on the Internet as cat pictures. By the way, why did you lump NodeJs in with FunctionalProgramming?
- I don't know what the hell you are specifically looking for. I cannot read minds. By the way, I have not worked directly in a Java shop, and none of the already-written production C# I've seen uses them in the shop-written app code.
- What I'm specifically looking for is precisely what I asked for, above: "Aside from you, where are all the complaints about HigherOrderFunctions? Where are all the complaints about Java and C# lambda expressions?"
- Many don't know they are using screwy languages or API's, per GUI discussions. They are just told they "must" use them and thus shut up and use them for certain patterns to get a paycheck, just like in the days they were told they "must" use OOP for domain modeling. Same shit.
- And that would stop them from complaining? It seems unlikely.
- Based on my experience, it's often a form of WhenInRome: "What's this goofy syntax in this example? Oh well, I'll copy it and modify the parameters using a best guess to finish the project".
- You don't think programmers, being a naturally whingy lot, wouldn't complain about having to do that?
- I thought I just stated that they do complain.
- Where is your evidence of those complaints?
- I gave it and you rejected it for biased reasons. See EvidenceDiscussion.
- Really? Where did you give it?
- It's up your ass, check again.
- This helps your credibility and the strength of your argument, how? It appears only to confirm the non-existence of your evidence.
- You play word games with "anecdote" etc. to "make" it non-evidence, slimeball!
- How so? I've used "anecdote" in its conventional meaning -- a story.
- It's overloaded, as English often is. And "story" is overloaded/vague also.
- If it's overloaded, then I'm sure you can divine the meaning used here without much difficulty.
- Not with your writing style. How about, "There was a man from Nantucket, who looked at HOF's and said 'fuck it!' Objects are cushy, HOF fan-boys are pushy, so stab this HOF and chuck it."
- I'm sure that response will do much to enhance your credibility.
- C# lambdas were introduced in C# 3.0. Were you using C# 3.0? Were you using LINQ? They're often used with LINQ.
- None of the C# shops I've been at use Linq. The DBA's refuse to help with queries if they are written in Linq. Besides, using a library that uses HOF's and using HOF's directly in apps are two different things. I've always maintained that HOF's may be fine for SystemsSoftware. Please stop forgetting such.
- [Your reference to SystemsSoftware appears to be an irrelevant tangent. LINQ is a library that's primarily exposed to the user in the form of extension methods taking functions, i.e., higher-order functions. C# also offers a vaguely-SQL-like expression syntax for constructing LINQ queries, but this syntax is of equivalent expressive power and somewhat less composable than using the extension methods directly -- they're trivially composable as method calls may be chained -- so standard practice when using LINQ tends toward regular application of the LINQ extension methods, which necessarily accept functions as arguments. -DavidMcLean?]
- The fact that app devs CAN alter the code does not mean they will. It's still de-facto SystemsSoftware to the majority of them.
- [Again, this appears to be an irrelevant tangent. I didn't suggest that app devs would "alter the code" implementing LINQ; there is no call to do so. Standard use of LINQ involves regularly making calls to the extension methods that compose its API, which are primarily higher-order functions. -DavidMcLean?]
- They are syntactically hidden from the app dev.
- [Um, if you use the SQL-like expression syntax, sure, but like I said that syntax is less composable than the method-call syntax, and use of the methods directly is supported, expected, and exceedingly common. The higher-order functions are perfectly visible when you use the extension methods directly. -DavidMcLean?]
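For readers unfamiliar with the distinction being drawn here, the following is a minimal sketch (hypothetical data and names) of the same query written in both LINQ styles; only the extension-method form makes the higher-order calls and their lambda arguments visible:

 using System;
 using System.Linq;

 class LinqSyntaxDemo
 {
     static void Main()
     {
         var orders = new[]
         {
             new { Customer = "Acme", Total = 250m },
             new { Customer = "FooCo", Total = 75m },
             new { Customer = "Acme", Total = 125m }
         };

         // SQL-like query-expression syntax: the lambdas are largely hidden.
         var bigSpendersA = from o in orders
                            where o.Total > 100m
                            orderby o.Total descending
                            select o.Customer;

         // Extension-method syntax: the same query, with the higher-order
         // functions (Where, OrderByDescending, Select) and their lambda
         // arguments in plain view.
         var bigSpendersB = orders.Where(o => o.Total > 100m)
                                  .OrderByDescending(o => o.Total)
                                  .Select(o => o.Customer);

         Console.WriteLine(string.Join(", ", bigSpendersB)); // Acme, Acme
     }
 }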
- We have no evidence of usage rate among such options either way. Linq exists, that I'm not questioning.
- [The extension-method syntax is used sufficiently often to be relevant: It's used throughout the codebases I work on personally, it shows up in code samples all over MSDN, there are myriad Stack Overflow questions and answers that use it, and so on. -DavidMcLean?]
- That could suggest it creates more confusion than normal. Things that are quick and easy to grok generally don't need a lot of documentation or Q&A, and thus their lack of mention is not necessarily reflective of their usage rate. Those asking could also be mostly SystemsSoftware developers; SystemsSoftware developers need Q&A forums too.
- [Everything needs documentation. That's normal. Everything gets asked about on Stack Overflow. That's also normal. To presume a particular technique is inherently complex or confusing based on the presence of documentation is absurd. -DavidMcLean?]
- I'm only saying it's a poor measure of actual usage because other factors could be at play. For example, can you determine even approximately what percent of those are app-side developers versus SystemsSoftware developers versus academic projects?
- [LINQ is primarily valuable for application software, being a tool for querying databases and other collections. Honestly, though, it's not important what domain it's being used in. Why would that make a difference? -DavidMcLean?]
- As an end-user of it, yes, but how does this change anything relevant? SQL writers are end-users of RDBMS, which are SystemsSoftware.
- [Sure thing. Writing LINQ queries is analogous to writing SQL queries. When you write a LINQ query, you express it in terms of higher-order functions. So, when you write LINQ queries, you are an end-user working with higher-order functions. -DavidMcLean?]
- Like you forgot above "They [HOFs] are syntactically hidden from the app dev.".
- [If by "forgot" you meant "explicitly acknowledged", then sure, but again LINQ has both the SQL-expression syntax as baked into C# and an API based on extension methods that take functions. -DavidMcLean?]
- I see no evidence that extension-methods are heavily used. There are more LISP-dialect articles.
- ["extension methods" are a C# feature that allows methods to be "added" to a type on-the-fly, similar to the monkey-patching feature in Ruby but somewhat safer due to being lexically namespaced. LINQ is one of the major .NET libraries that uses them -- indeed, it's documented as the "most common" use of the feature on MSDN -- but by no means the only one. -DavidMcLean?]
- Yes, it "uses them" under the hood = SystemsSoftware. Anyhow, I'm tired of arguing about LINQ. I don't give a shit about it. Think whatever you want about LINQ, I don't care.
- [Um, they're not under the hood. They're the API the user works with. The fact that they're implemented as extension methods, instead of "normal" methods on every IEnumerable class, might be considered "under the hood", I suppose -- although you do still need to identify that that's the case, because you must provide a "using" directive that imports the LINQ methods when you want to use them. The fact that they're higher-order functions definitely isn't "under the hood". -DavidMcLean?]
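As an aside for readers who haven't met the feature, here is a minimal sketch of a user-defined extension method ("WordCount" is made up for illustration, not part of any real library); LINQ's Where, Select, etc. are defined the same way, as extension methods on IEnumerable<T> that accept functions as arguments:

 using System;

 namespace MyExtensions
 {
     public static class StringExtensions
     {
         // A static method in a static class whose first parameter is marked
         // "this": callers who import the namespace can invoke it as if it
         // were an instance method on string.
         public static int WordCount(this string s)
         {
             return s.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length;
         }
     }
 }

 // Elsewhere, after "using MyExtensions;":
 //     int n = "the quick brown fox".WordCount();   // n == 4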
- As for not providing "similar detail" of evidence, the entirety of these debates about "higher order" programming -- and their various pages -- contain evidence supporting the use of HigherOrderFunctions, lambda expressions, and so on, including sample code.
- I mean evidence that such features don't confuse staff. The default is not that it does not confuse staff, but rather "unknown". You are thus no less obligated than I am to provide staff-confusion-related evidence. (And your other non-mind-related evidence is poor also, but that's another sub-topic.)
- I am under no obligation to (dis)prove your hypotheses, unless you can provide compelling arguments in their favour. Otherwise, it would be up to me to disprove every randomly-suggested but reasonable-sounding hypothesis like "'Higher order' programming increases electricity consumption". It's a reasonable hypothesis -- it might even be true -- but there's no reason for me to consider it until there's strong evidence in favour of it. It's certainly not up to me to disprove it just because someone thought it up. I'm sure you can think up an infinite number of similar hypotheses -- like "'higher order' programming confuses staff" -- but I am under no obligation to (dis)prove any of them. It's your hypothesis, so the BurdenOfProof is yours and yours alone. Until you've proven your hypothesis -- or have provided compelling evidence in favour of it, or have even provided evidence that it should be tested -- it shall remain "unknown" and therefore not my concern.
- You assert it's fine to use FP because it won't confuse staff. Shops should wait until evidence comes out that it does NOT confuse typical staff before shoving it on their staff. The DEFAULT is not your position and your position is thus no stronger than mine. (I'm fine to LetTheReaderDecide which narrative fits their own shop, by the way.)
- My claim is only that "higher order" programming simplifies code, which has been shown.
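As one hedged illustration of the kind of simplification being claimed (the "Retry" helper and "FetchPage" stub below are made up for this page, not library features): a single higher-order function can hold a retry policy that would otherwise be copied into every caller.

 using System;
 using System.Threading;

 static class Reliability
 {
     // Higher-order function: takes the varying operation as a parameter.
     public static T Retry<T>(Func<T> action, int attempts, int delayMs)
     {
         for (int i = 1; ; i++)
         {
             try { return action(); }
             catch (Exception) when (i < attempts)
             {
                 Thread.Sleep(delayMs);   // wait, then try again
             }
         }
     }
 }

 class Demo
 {
     static void Main()
     {
         // Each caller passes only the operation that varies; the retry
         // policy lives in exactly one place instead of being duplicated.
         string page = Reliability.Retry(() => FetchPage("http://example.com"), attempts: 3, delayMs: 500);
         Console.WriteLine(page.Length);
     }

     static string FetchPage(string url) { return "<html>...</html>"; }   // stand-in for real I/O
 }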
- In certain situations, yes, but 1) most of those situations appear to be where the libraries and/or language are limited or poor. 2) Parsimony is not the only factor involved. If you can scientifically prove that textual code parsimony should be the ONLY factor to consider, then please do it. Further, the sub-topic is staff, not code size.
- No, your alleged issue or "sub-topic" is "staff". That being so, it's up to you to prove your case. Otherwise, we'll ignore it as being at best irrelevant and at worst spurious.
- It is certainly conceivable that a pure OO language could exist where HOFs and lambda expressions are unnecessary because pure OO constructs make them redundant. No such language has been demonstrated. Particularly, in the case of being able to define arbitrary, multiple, independent event handlers for a given event, no reasonable alternative has been shown to adding event handlers -- possibly defined as lambda expressions -- to a collection of event handlers.
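To make that pattern concrete, here is a minimal sketch (all names hypothetical) of multiple independent handlers -- two of them lambda expressions -- attached to one .NET event, which is effectively the "collection of event handlers" described above:

 using System;

 class Button
 {
     public event EventHandler Clicked;   // effectively a list of handlers

     public void SimulateClick()
     {
         Clicked?.Invoke(this, EventArgs.Empty);
     }
 }

 class Program
 {
     static void Main()
     {
         var saveButton = new Button();

         // Multiple, independent handlers for the same event; two are
         // lambda expressions, one is an ordinary named method.
         saveButton.Clicked += (sender, e) => Console.WriteLine("Saving document...");
         saveButton.Clicked += (sender, e) => Console.WriteLine("Logging the save...");
         saveButton.Clicked += RefreshToolbar;

         saveButton.SimulateClick();   // runs all three handlers
     }

     static void RefreshToolbar(object sender, EventArgs e)
     {
         Console.WriteLine("Refreshing toolbar...");
     }
 }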
- I don't know at this time what your criticism is of GUI event handlers, but please leave such criticism in that currently on-going topic(s) rather than scatter it around the wiki. As far as "proof", see below.
- Regarding anecdotes, you appear to be confusing anecdotes with hearsay. An anecdote is a story, like, "I was on a team building a project tracker for the Accounting department. Lisa, the project manager, decided to use Lisp. Here's what happened...", etc. Your example, "On March 7, 2009, Lisa Stevens said ..." is hearsay; you're not telling a story you experienced, but repeating a second-hand report from someone else. Another name for it is "gossip". However, with a little detail even gossip might provide some evidence. So far, you've offered none.
- I'm not seeing your point here. What comes after "here's what happened"? I cannot read your mind. If you mean statistical evidence that language X fails projects more than language Y, that's a different animal, and may be too large a granularity to tease out problems with specific features of languages anyhow (if that's what you are getting at). Try a COMPLETE mock example from head to toe this time.
- What comes after "here's what happened..." is your first-hand, autobiographical description of what happened, as you observed it. My point here is that if you're going to allege anecdotal evidence that language features confuse staff, then you need to provide anecdotes rather than hearsay, gossip, or nothing but claims that such anecdotal evidence exists.
- You appear to be avoiding the issue. What's a full example of an acceptable anecdote?
- Avoiding what issue? These are good: http://www.theregister.co.uk/2006/07/05/it_support_anecdotes/ Here's a whole Web site of them: http://thedailywtf.com/
- No you [redacted harsh insult], not dog's testicle stories, I mean finish the example ABOVE or something similar. Dogs, sheeesh. I don't know what kind of verbiage is acceptable to you and so want an example rather than vague complaints about the "real" meaning of "anecdote".
- Scroll around a bit. The dog's bollocks anecdote isn't the only one (obviously -- I gave two links) not that it matters what the topic is. I'm sure you can write anecdotes, because you've done so before. The first paragraph on LieOrStreet, for example, is a brief anecdote.
- Sorry, but I don't see how those relate to this specific issue. I cannot envision any worthy extrapolation and that's why I wanted an on-topic example. If you don't wish to provide one, that's fine, but this sub-topic is then left unfinished. I don't see why you cannot simply finish the sample you started.
- Really? You want me to write a fictional anecdote so you can see how an anecdote should be written? I'm not going to do that, but here's a reasonable example of what I have in mind: http://thedailywtf.com/Articles/The-Database-Gazes-Also-Into-You.aspx
- See EvidenceDiscussion.
- Regarding "being a dick": I'm not sure why you think my requests for evidence mean I'm "being a dick". Do you think it's rude to ask for evidence, or to point out that failing to provide evidence results in a lack of credibility and the impression that evidence doesn't exist?
- You were rude for accusing me of ulterior motives ("Increasingly, they sound like a rationalisation for..."). Don't be a hypocrite.
- It's hardly an ulterior motive when you appear to be reluctant to provide any evidence that "higher order" programming confuses staff. It makes it look like your primary motive is personal rather than objective.
- Hypocrite, you've provided no evidence that it does NOT confuse typical staff. The reader could likewise conclude that you just want to sell expensive education or consulting services for arcane practices to line your greedy pocket.
- I am not obligated to provide such evidence. As I pointed out above, you can hypothesise all you like; as long as your hypotheses lack evidence, there's no reason for me to consider them beyond criticising their lack of evidence. And I don't know of anyone selling "expensive education or consulting services" just to learn about lambda expressions and higher-order functions, do you?
- So you don't stand by the hypothesis that HOF's don't confuse typical staff? Either way, it's fair for me to state my observations from my experience. That's acceptable evidence on this wiki. If you want to change the wiki rules, then take it up with Ward rather than bitch at me all day. (And most sly greed is not admitted to.)
- The only hypothesis presented for HOFs, lambda expressions, etc., is that they simplify code. Your hypothesis against HOFs, lambda expressions, etc. -- that they cause confusion -- has been slated for lack of evidence. That's not the same thing as presenting a hypothesis that HOFs, lambda expressions, etc., are good because they don't cause confusion.
- This topic is about staffing issues. If you want to ONLY talk about code size, then you are off topic. And I know you are frustrated that no formal studies or proofs exist on FP and staff confusion, but that's the way it is. Complaining about it repeatedly and redundantly, over and over, is quite annoying and a poor writing style. Please end that practice. LetReaderDecideEvidenceAgreement is fair and reasonable. But still, the default is NOT that staff is not confused, but rather "unknown" whether they are or are not confused.
- If this topic is about staffing issues, then perhaps there should be no mention of "excessively abstract programming" (from your first paragraph) or "higher abstractions" (or the ensuing mention of HOFs, etc.) until you've provided evidence that "excessively abstract programming" or "higher abstractions" have an impact on staffing issues.
- See EvidenceDiscussion.
- LetReaderDecideEvidenceAgreement
- Incidentally, a GUI-related discussion can be found at NodeJsAndHofGuiDiscussion.
If you feel HOFs are only "a great band-aid for crappy tools", do you also feel that FOR loops are "a great band-aid for crappy tools"? They're both programming language constructs for which there are alternatives.
There are already discussions on the WetWare of blocks versus GOTO. I don't need to repeat that here. And the competitors to HOF's are pretty good, while the competitors to loops are not (per personal observations related to coder WetWare).
The main competitor to a FOR loop is a WHILE loop. The only competitor to a HigherOrderFunction that comes close to doing what a HigherOrderFunction does is a FunctorObject. A FunctorObject does not implicitly close over its defining environment, and requires that context be explicitly passed to it, usually via its constructor.
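A minimal sketch (hypothetical names) of that contrast: the lambda passed to the higher-order function Where implicitly closes over the local "threshold", while the FunctorObject must receive the same context explicitly through its constructor.

 using System;
 using System.Collections.Generic;
 using System.Linq;

 interface IPriceFilter { bool Matches(decimal price); }

 class MinimumPriceFilter : IPriceFilter
 {
     private readonly decimal minimum;
     public MinimumPriceFilter(decimal minimum) { this.minimum = minimum; }  // explicit context
     public bool Matches(decimal price) { return price >= minimum; }
 }

 class Demo
 {
     // The functor-object style needs a consumer written against the interface.
     static List<decimal> Filter(List<decimal> prices, IPriceFilter filter)
     {
         var kept = new List<decimal>();
         foreach (var p in prices)
             if (filter.Matches(p)) kept.Add(p);
         return kept;
     }

     static void Main()
     {
         var prices = new List<decimal> { 5m, 50m, 500m };
         decimal threshold = 20m;

         // Higher-order function: the lambda captures "threshold" implicitly.
         var viaLambda = prices.Where(p => p >= threshold).ToList();

         // Functor object: "threshold" is passed explicitly to the constructor.
         var viaFunctor = Filter(prices, new MinimumPriceFilter(threshold));

         Console.WriteLine(viaLambda.Count);    // 2
         Console.WriteLine(viaFunctor.Count);   // 2
     }
 }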
I thought you meant the existence of loops in general. (And we could perhaps do away with FOR loops, per another discussion whose location I forget.) As far as HOF's go, their best competitor may depend on the situation rather than being a one-to-one swap-out. That's why I want realistic business examples/scenarios to explore, so that we don't have to dance around general claims anymore.
See above where I wrote, "Above, you wrote, '... that article looks like lab-toy kind of examples ...'"
Another Opinion on "Excess" Abstraction
"Why Ruby on Rails won't become mainstream"
http://beust.com/weblog/2006/04/06/why-ruby-on-rails-wont-become-mainstream/
BEGIN QUOTE
I’d like to take some time to explain why, in spite of all its qualities, Ruby on Rails will never become mainstream.
As you probably guessed, my conviction doesn’t come from technical grounds.
Have you ever come across Smalltalk or Lisp programmers? You know, these people who, no matter what you tell them, will always respond that "Smalltalk did that twenty years ago" or that "Nothing has been invented since Lisp". They listen to you patiently with an amused light in their eyes and when you’re done talking, they will just shrug away your points and kindly recommend that you read up on a thirty-year old technology that was the last thing they ever learned and that has been dictating every single technical judgment they have offered since then.
I believe that in ten years from now, people will look back at Ruby on Rails and will have the same reaction. I’m not sure what Web frameworks we will have by then, but I’m quite convinced that a lot of the current Ruby on Rails fanatics will have the same kind of attitude: "That’s nice, but Ruby on Rails already did this ten years ago, and better".
Interestingly, they might even be right. But by then, it won’t matter because despite its technical excellence, Ruby on Rails will still be a niche technology that only experts know about...
I find [Ruby's] syntax and concepts extremely elegant and powerful at the same time...But it’s a complex language that contains a lot of advanced idioms which will be very hard for PHP and Visual Basic programmers to absorb.
[On Rails] Sometimes, too much magic is too much magic, and it can definitely be the case that the flow of code is too direct or too clever to be understandable by regular developers. Developers were able to do the jump from imperative to object-oriented programming, but it was a hard fight. I don't believe the Web world will ever be ready to embrace the Rails cleverness.
[Emphasis added.]
END QUOTE
That's an interesting article, though now quite old. What appears to have kept Rails from the mainstream is not its "cleverness" -- as much of it is now found in various mainstream languages and frameworks -- but slow performance, awkward integration with enterprise systems, poor and/or unprofessional documentation (how many Fortune 500 IT departments are going to trust their infrastructure to a system with docs written in a hallucinatory style by a guy named "Why the Lucky Stiff"?) and support-communities that seemed entirely populated by socially-inept 15-year-old boys.
Those who are best able to master symbolic languages generally tend to also be "socially-inept" and poor at writing and human interaction. Thus, the two factors may be related in the case of Ruby on Rails. Symbol math and "people math" both tend to consume a good many brain resources to do well, making it difficult to master both.
That's an interesting hypothesis. If generally true, we should expect the Haskell community to display an exceptional degree of socially-inept writing and behaviour, given that Haskell is considerably more abstract than Ruby or RubyOnRails. In actuality, the Haskell community is notably mature, socially supportive, and articulate. By (anecdotal) contrast, the Borland Delphi and Microsoft Access forums I participated on some years ago were rife with immaturity, and nobody would accuse Delphi or Access of being abstract, let alone symbolic. In short, I doubt there is any consistent correlation between a language's level of symbolic-ness or abstraction and its community behaviour.
Is there a TiobeIndex equivalent for people skills?
I'm amazed that a page discussing the merits of abstraction doesn't contain the word "documentation" anywhere on it.
Why do you feel it should?
See also: TopOnAbstraction, RocketAnalogyProblem, ParadigmPotpourriMeansDiminishingReturns, GoalFrameOfReferenceMismatch
CategoryMetrics, CategoryEconomics