(Continued from NodeJsAndHofDiscussion)
Top, you say SeparationOfConcerns is a myth in business software and avoid using most of the language techniques available for abstraction. It seems likely that you actually don't get abstraction. Perhaps you could try some of these features; you might find they do improve SeparationOfConcerns, for instance. -DavidMcLean?
That's a misrepresentation of my position. I suggest one read SeparationAndGroupingAreArchaicConcepts for more on this. (The topic could perhaps use a rename.) Also note that abstraction by itself is not always better. What are needed are good abstractions, not necessarily more abstractions. I've been burned many times by abstraction attempts that went wrong simply because they didn't fit future ChangePattern(s). As Yogi Berra once said, "It's hard to make predictions, especially about the future". See EightyTwentyRule.
Creating good abstractions takes experience and analysis. It's also a matter of knowing the domain (or similar domains) and "office psychology" and developer psychology, not necessarily just a focus on programming language features or technique. As geeks we sometimes focus too much on magic tools or magic languages instead of WetWare. This is often a mistake, I've come to learn the hard way. In the end, ProgrammingIsInTheMind. It's not really about machines.
When I'm designing software, I often come up with roughly 5 different draft designs for each major module in my head or on the back of scratch paper. I then run various change scenarios (based on experience) in my head and see how the draft designs stand up to each scenario. It's a short-form of CodeChangeImpactAnalysis essentially. My main goal is not abstraction for abstraction's sake, but rather to stand up to future changes without over-complicating the current (or new) version. Abstraction is just a means to an end. If it improves readability and change handling, then it's helpful. If it's there just because "it seems cool" or "Fred told me this was good", then you are probably doing it wrong.
Another thing is that an abstraction that makes sense to me may not make sense to others. I've rejected some "nifty" abstractions because other developers (current or future) may not be so appreciative of something "weird". Other times I "go for it" and make sure I gave a hefty comment explaining it.
Over the years I've learned to keep abstractions rather small and semi-disposable: HelpersInsteadOfWrappers. Big, overriding abstractions can fail badly when they fail because you can't back out of them very easily. I will also sometimes accept some amount of concept duplication to keep them small and disposable. I used to try to stomp out almost all duplication, but it made the code really difficult to follow and change.
--top
How exactly is it misrepresenting your position to claim that 'you say SeparationOfConcerns is a myth in business software and avoid using most of the language techniques available for abstraction'? -DavidMcLean?
What's this "most" stuff? I use objects too, just not everywhere. (And I probably wouldn't miss them, but sometimes they have a minor advantage in name-space management. Related: OopNotForDomainModeling)
Oh, you're not still claiming object-orientation is a conspiracy then? Okay, fair enough. You say SeparationOfConcerns is a myth in business software, despite avoiding some language features that afford abstraction. -DavidMcLean?
I'm claiming it was way over-hyped for about a decade. "Conspiracy" is probably not quite the word I'd choose, any more than a salesperson only telling you the upsides of a product but not the downsides is a "conspiracy". The industry has "hype-cycles" where many vendors hype a new concept in order to generate new sales. And sometimes it's just a kind of fashion statement among techies: "I know Foo Oriented Programming and you don't! Therefore, I'll get laid and you won't.".
You actually call OOP a conspiracy on your site, which is what I was talking about. -DavidMcLean?
Not really. It's just worded in kind of a playful speculation. At least that's how I intended it. Anyhow, do you have a point, or are we going to play LaynesLaw over "conspiracy"?
Yep, I had a point. From above: "You say SeparationOfConcerns is a myth in business software, despite avoiding some language features that afford abstraction." -DavidMcLean?
Can you provide the context and quote?
Certainly. From HofPattern: "It may be one of those "assume a spherical cow" moment. SeparationOfConcerns is often a myth or excessively ideal in the biz world.".
I meant "as currently practiced". I cannot re-paste all the context and disclaimers every time I casually mention another topic. I want to discuss strategy, not words. SeparationOfConcerns as currently practiced has resulted in bad or overused things such as the MirrorModel.
I don't doubt that SeparationOfConcerns is a myth as currently practiced by you, because you eschew features that can improve SeparationOfConcerns. -DavidMcLean?
Show them doing such well in realistic environments (code samples), and I'll use such more. Lab and toy examples (ArgumentByLabToy) just don't cut it because they remove some of the lumpiness of the real world in order to make the concept stand out more (hopefully only) for illustrative purposes.
Whenever you're given a code sample you complain it's rigged (e.g., deleteWhere() and the JavaScript example on ArrayDeletionExample). Those are hardly unrealistic examples; the JS one is from an actual custom biz app. -DavidMcLean?
I'll LetTheReaderDecide whether my criticisms are legitimate. Obviously, we have different views on that. If you believe the examples are sufficient, state so and move on.
I do believe they're sufficient. However, you clearly don't, so let's look at another one: Enumerable, from Ruby's standard library. You've described it as the ability to put DatabaseVerbs on arrays, and in fact you're right. Enumerable lets you put DatabaseVerbs on arrays, files, sockets, hashes, recordsets… basically anything for which it makes sense to have such verbs. In short, it gives a common API to all collections and collection-like things; this is obviously useful, and in fact on your own site you discuss such a collection API ( http://www.geocities.com/tablizer/array1.htm ).
Later in ArrayDeletionExample you suggest that different shops will implement a different set of DatabaseVerbs on collections, so there's no consistent API. In Ruby at least this is absolutely false; because Enumerable is part of the standard library, individual shops aren't going to implement it. For all built-in collections, Enumerable's already set up; for a new collection type, all one needs to do is implement a single method (.each) so that Enumerable knows how to get stuff out of the collection, and then one will have access to all the database-y functionality afforded by Enumerable.
Not only is Enumerable clearly useful, you yourself have identified the need for it. -DavidMcLean?
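To sketch the single-method idea outside Ruby, here is a rough JavaScript analogue; the enumerable mixin, its method names, and the NumberBag class below are invented for illustration and are not part of any standard library:

 // Rough sketch of an Enumerable-style mixin in JavaScript (illustrative only).
 // The host collection only has to supply an each(callback) method; the mixin
 // derives the higher-level collection verbs from it.
 var enumerable = {
   map: function (fn) {
     var out = [];
     this.each(function (x) { out.push(fn(x)); });
     return out;
   },
   select: function (pred) {
     var out = [];
     this.each(function (x) { if (pred(x)) out.push(x); });
     return out;
   },
   detect: function (pred) {
     var found = null;
     this.each(function (x) { if (found === null && pred(x)) found = x; });
     return found;
   }
 };

 // A custom collection gains the whole interface by defining each() and
 // copying the mixin's methods onto its prototype.
 function NumberBag(numbers) { this.numbers = numbers; }
 NumberBag.prototype.each = function (fn) {
   for (var i = 0; i < this.numbers.length; i++) fn(this.numbers[i]);
 };
 for (var m in enumerable) { NumberBag.prototype[m] = enumerable[m]; }

 var bag = new NumberBag([1, 2, 3, 4]);
 console.log(bag.select(function (x) { return x % 2 === 0; })); // [2, 4]
 console.log(bag.map(function (x) { return x * 10; }));         // [10, 20, 30, 40]

Ruby's real Enumerable provides far more methods than this sketch, but the shape of the dependency is the same: everything hangs off the one iteration method.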
Essentially you are reinventing a (partial) database in code. I agree that the general idea is a good one. However, we don't want to make it too easy to drift away from the "standard". This is similar to our "herding" discussion at [insert later] and to the one about Lispers reinventing "creative" FOR-loops, etc. (LispIsTooPowerful). In my opinion, it's better to build such into the language to get standardization. (Some of Microsoft's languages allow basic SQL against certain data structures. This allows one to more easily "upgrade" to a full RDBMS when needed. Whether this is better than LINQ or not is another discussion.) It's a great idea for a hobby language, but I don't want to debug Picasso's code at work. -t
In Ruby, one doesn't usually drift away from using the .each method and Enumerable, for two reasons. Firstly, most programmers actually like consistency and don't need to be forced into making stuff consistent. Secondly, there are a lot of methods in Enumerable ( http://ruby-doc.org/core-1.9.3/Enumerable.html ), all of them are useful at one time or another, and it's a lot more work to implement them manually and following different conventions than it is simply to implement .each and include Enumerable. -DavidMcLean?
Well, I'm a bit skeptical. I've been burned by Picassos.
Further, it's tempting to over-use code for collection processing, and if the application grows larger or more complicated, we'll eventually have to move to an RDBMS. I'd rather have a language that fits SQL's idioms from the start to "close the gap" a bit, rather than something very different that can push the app to drift off on its own. Ruby is essentially risking a "Ruby-to-SQL Impedance Mismatch". (Not that SQL is the ideal query language, but it's the current de-facto standard.) Related: EmbraceSql.
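As a rough sketch of the "gap" being described, here is the same query written as JavaScript collection-verb chaining and as the SQL it would roughly become after a move to an RDBMS; the orders data and field names are invented for illustration:

 // Invented sample data, purely for illustration.
 var orders = [
   {customerName: "Acme",    total: 1200},
   {customerName: "Globex",  total: 300},
   {customerName: "Initech", total: 750}
 ];

 // Collection-verb style: filter, sort, and project in application code.
 var bigSpenders = orders
   .filter(function (o) { return o.total > 500; })
   .sort(function (a, b) { return b.total - a.total; })
   .map(function (o) { return o.customerName; });

 console.log(bigSpenders); // ["Acme", "Initech"]

 // Roughly the equivalent query once the data lives in an RDBMS:
 //   SELECT customerName FROM orders WHERE total > 500 ORDER BY total DESC;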
If you really embrace full-out "build your own meta-structures", then why not promote Lisp instead of Ruby? Its syntax is far simpler. (Personally, I'd prefer Masp as a hobby language - MaspBrainstorming.)
I'm focusing on Ruby because Enumerable's a good example. I don't think I've said anything about building your own "meta-structures", and I'm not entirely sure what you mean by it.
Flexibility: Essentially roll-your-own-language without starting from scratch. Your arguments are essentially a rerun of those made during the "Lisp-craze" in the '80s.
Consistency trumps flexibility in CBA tools unless the flexibility advantage is pretty large because companies want PlugCompatibleInterchangeableEngineers. Yes, it can take some of the fun out of programming, but remember the upside is that you are less likely to get some nut-job's Picasso-Code to maintain.
I pointed out a couple of paragraphs ago that Enumerable, as a common interface available on all collections, increases consistency. If you've got a collection in Ruby, you know you can call #each, or #collect, or #detect, or #take_while, or any of the Enumerable methods, using exactly the same syntax and with precisely the same semantics as you'd have with some different collection. Code is therefore much more consistent than it would be in a situation where collections implement similar functionality manually, or a situation where the developer needs higher-level iteration (say, #collect) but is stuck with an ExternalIterator and must manually implement collecting with it. -DavidMcLean?
What do you mean by "available on all collections"? Do you mean built in or add-able? This also gets into the issue of app language design and what would be the "ideal" CBA app language. That's a big topic. I'm not in the mood to dive into that right now.
- By "available on all collections", I mean "available on all collections". Every built-in collection class has #each defined and includes the Enumerable module, so the entire Enumerable interface works on every built-in; if one defines a new collection or collection-like thing, one must merely implement the #each method and include the module in order to gain access to all such functionality. As for getting "into the issue of app language design", you've been debating the utility of a language feature (higher-order functions) for dozens of pages; how exactly is this not already discussing app-language design? -DavidMcLean?
- Okay, if you wish to go down that road, what's wrong with relying on OOP to provide such functionality?
- Read my next paragraph to find out.
A lot of what you describe could perhaps be done with old-fashioned OOP. OOP gives one the option to associate different information with the "function", such as a title and meta data. It's more flexible than HOF's in many regards, even if it is a bit more initial syntax to set up; this is because objects need a class "wrapper", while HOF's don't. However, having that class packaging does allow such additions without modifying the original code: we can associate more info and data with the "function" without disturbing any existing references to it. You cannot "attach new things" to HOF's.
- Yes, nearly everything higher-order functions can do may be emulated with the FunctorObject pattern, or in general through DependencyInjection. However, emulation is exactly what these patterns are: a way to emulate higher-order functions, usually with more syntactic overhead. If the language required the overhead of defining a new FunctorObject class for every single usage of #each, programmers would try to avoid using #each where possible; the fact that #each takes an anonymous function with a minimum of syntactic overhead makes it usable to the point of ubiquity in Ruby coding. -DavidMcLean?
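- A rough JavaScript sketch of the two styles being compared; the PrintDoubled class and the eachWithFunctor helper are invented for illustration:

 // FunctorObject style: a one-off class whose only job is to carry a call() method.
 function PrintDoubled() {}
 PrintDoubled.prototype.call = function (x) {
   console.log(x * 2);
 };

 // A helper that expects a functor object rather than a plain function.
 function eachWithFunctor(array, functor) {
   for (var i = 0; i < array.length; i++) { functor.call(array[i]); }
 }
 eachWithFunctor([1, 2, 3], new PrintDoubled()); // 2, 4, 6

 // Higher-order-function style: the same behaviour passed inline.
 [1, 2, 3].forEach(function (x) {
   console.log(x * 2);
 }); // 2, 4, 6

 Both produce the same result; the difference under discussion is only the amount of ceremony needed to hand the behaviour over.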
- I'm against the idea of adding too many tools that are almost the same to a language unless they are heavily used enough to justify such. If the language comes with a decent collection or set of collections with such "verbs" built in, there will be only an infrequent need to roll your own. "Saving a few lines of code" is not a sufficient justification unless its use is heavy. See also NonOrthogonalLanguageFeatures. -t
- The use of #each is heavy in Ruby, precisely because it's so easily used and so consistent among collections; use of #each alone is common enough to justify streamlining it with higher-order function syntax, even disregarding the myriad other function-taking methods throughout the standard library. The language does "[come] with a decent collection or set of collections with such verbs built in": All the standard-library collections have #each and Enumerable, as I said above. The value of being able to apply those same verbs to custom collections is mostly in domain-specific collections and collection-like things, since the standard library is general rather than domain-specific. -DavidMcLean?
- Isn't it essentially used for "roll your own loop/block" constructs? This gets right back to the Lisp-like flexibility-versus-standardization wars. It may save a token or two of syntax over the OOP or "traditional" alternatives, but at the risk of unleashing Picassos. Nothing is stopping a developer from creating #flib, #sznerg, #waggle, and #fluxcapacitor loops or blocks for job security or "fun". The biz environment doesn't want this, as the Lisp War showed. We've been over this already and probably won't agree yet again.
- Roll-your-own loops are rare in Ruby code for reasons I've already been over, mostly that it's much easier to implement #each and include Enumerable than it is to create #flib, #sznerg, #waggle, and #fluxcapacitor individually. Most programmers do in fact like consistency and will choose to write an #each method rather than implementing bizarre, different methods, without needing the language to force them into it. Also, keep in mind that developers can create #flib, #sznerg, #waggle, and #fluxcapacitor methods or functions in any language for job security or fun; do you really think allowing methods to accept a block will significantly increase the tendency to create poorly-named and inconsistent methods? -DavidMcLean?
- It's difficult to say for sure what "most programmers" will do without thorough studies and observations. At this stage, it's merely an anecdote battle. Sometimes a small percentage of bad apples can ruin it for everyone. I agree that bad apples will probably write bad code no matter what, but if you give them something very flexible, they will bend it more heavily, or at least in more directions, than a lower-brow language. It's The Joker at a vending machine versus The Joker at a candy store. Boredom and job security are powerful forces.
- So… are you saying we should make our programming languages worse because programmers are evil?
- "Evil" is only part of it. See the percent breakdowns below. It's a matter of tuning the language to human behavior.
- Bad programmers can write bad code equally well no matter what language they're using. Weakening the language to curtail their bad code mostly limits your good programmers; it's solving the wrong problem. If you've bad programmers, either educate them or get rid of them. -DavidMcLean?
- Insufficient knowledge is only half the problem. And messes made in "low-brow" or "mid-brow" languages are easier to fix and read, in my opinion. A mess made with just Legos is easier to sift and clean than one made with Legos, Tinkertoys, and an Erector/Meccano set together, because one's brain has to keep switching paradigm context and follow the interweaving of paradigm contexts.
- "educate them" means "educate them to be better programmers", not just "teach them the features they don't know".
- You remind me of my wife: She naively/stubbornly thinks that nagging works. If you know how to "train right", good for you. But that's a WetWare issue, not really a technical one. One thing I've learned in life is that humans suck. Dogs are the better species because most of the bullshit has been bred out of them. If dogs were only brighter.
- A bad programmer's going to be bad for one of two reasons. Either they legitimately don't know about good programming practice---naming stuff well, applying abstraction in layers, and so on---in which case when told about such things they'll accept and learn from them, or they do know those things and are ignoring them out of malice, in which case they shouldn't be working for your company. -DavidMcLean?
- The people who make the final hire and fire decisions are usually not technical and don't evaluate them on the same criteria that you probably would.
- I know that; I do read Dilbert. If there's no way for your company to evaluate programmers based on their programming skill, your company is fundamentally flawed and is the actual problem needing fixing.
- Most organizations are run by Ferengi, not Vulcans. People skills and "relating" to management and customers do play a big role, for good or bad. And some organizations don't want to pay for better talent because it's hard for the payers to measure the difference.
- And therein lies the root problem.
- Human nature. Whaddya expect from a species that recently evolved from poo-flinging apes?
- There are plenty of companies that do evaluate programmers based on programming. It's not a required property of humans to be incompetent; it's just poorly-organised companies that's the issue. -DavidMcLean?
- Until some manager is promoted, moves on, or there is a re-org or merger, then bad practices continue. Dilbertville is the equilibrium state.
- As for "attaching new things", you can indeed attach new things to anonymous functions. In Ruby, you'd do that by defining a new method on the function. In JavaScript or Python, you'd just assign to a slot on the function, as you would for a normal object. However, while it's possible, this pattern often isn't particularly useful. This is because anonymous functions of the sort passed to #each and similar higher-order functions aren't actually very important; naming one of these functions would be like naming a for loop. (Keep in mind that attaching metadata to anonymous functions is still entirely possible.) -DavidMcLean?
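- In JavaScript, that kind of attachment looks roughly like the sketch below; the function and its property names are invented for illustration:

 // Functions are objects in JavaScript, so metadata can be attached to them
 // as ordinary properties without changing any existing callers.
 var formatName = function (person) {
   return person.last + ", " + person.first;
 };
 formatName.title = "Standard name formatter"; // illustrative metadata
 formatName.version = 2;

 console.log(formatName({first: "Ada", last: "Lovelace"})); // "Lovelace, Ada"
 console.log(formatName.title);                             // metadata still accessible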
Anything you want to associate with the function can be in its return value.
What if you are already using the return value for another purpose? I'm looking at maintenance issues here, such as adding new information or behavior down the road.
Return a struct.
So you are saying build all HOF's to return a struct in advance even if we only need one value for initial roll-out? One might as well use objects then because the initial simplicity of HOF's would be gone if we always made structs. Objects *are* structs, pretty much.
Sorry, pressed for time, I read what you wrote -- misinterpreted it -- and responded in haste. I should have written that anything you wish to associate with the function can be put in a struct along with the function. Its return value is not relevant.
But that's essentially a class or object. And it's still a case of changing existing references, which we wouldn't have to do if we started out with an object to begin with. If "simple" HOF's are heavily used in an app, then the overall app may indeed be simpler. However, if only a few are used, then the syntax difference between an object and a HOF is too minuscule to fret about. Objects are plenty sufficient, and more familiar to most CBAD's. We don't want to weigh down our tool-box with 12 kinds of screwdrivers when a few handle the low and medium needs sufficiently. Only if you often do tasks where the subtlety of the screwdrivers makes a significant efficiency difference are 12 screwdrivers justified. Let's balance the tools in CBAD's tool-box based on actual usage patterns. If they do a lot of hammering on a lot of different kinds of materials, then 12 hammers may make sense. Related: ParadigmPotpourriMeansDiminishingReturns.
True, a struct and a class are closely related. However, it's not unusual for developers to, e.g., explicitly create FunctorObjects or use a CommandPattern. In these cases, objects are used to emulate HOFs. HOFs, without the need for artifice, would be syntactically simpler and more direct -- at least for those who appreciate them. Of course, for those who don't, object oriented languages provide sufficient facilities that the shy developer can avoid HOFs if he wants to. But that's no reason to deny developers who would like to use them. I have a screwdriver with 98 interchangeable bits. Some of them I'll never use, and a few I've never seen before, but I'd rather have all of them at my disposal than risk rounding off a screw head because the bit doesn't quite fit. Explicit support for HOFs, in object oriented languages, is the same. You can do everything in conventional OO without HOFs, but sometimes HOFs fit better.
A screwdriver with 98 bits is still going to be a lot of metal to haul around, and woe be to those who drop them all on the floor. Anyhow, again, if you can show that such is common and useful (syntactically more natural or concise, for example) in CBA's via realistic code examples in descriptive scenarios, I'd be happy to endorse them. Heavy use of them may just simply be your personal style preference and/or you are not as skilled at other approaches.
See HofPattern for the quintessential illustration of the concept. By the way, the screwdriver set is in a nice hard case with rubber bumpers, and it weighs at most two pounds. The bits are organised in rows by type, and are labelled to make them easy to find. It's a marvel of elegant capability, just like a good programming language. Why, by the way, do you assume my use of HOFs is indication of some lack of skill? Have you considered the possibility that your rejection of HOFs is an indication that you lack skill in using them? Perhaps you just need some practice. Javascript is an excellent place to start.
I didn't mean to imply you were unskilled, I was merely suggesting possibilities about why you think they are better than the alternatives even if they may not be. I probably could have worded that better. And an "illustration of the concept" is not sufficient. We know "lab toy" examples illustrate concepts. The issue is whether "the concept" can be applied to common and typical CBA's. I don't think it's widely applicable to such. If you claim it is, then please provide something(s) demonstrating a specific business need so that I can see it being applicable with my own eyes (and other curious readers).
I remember I had this same "problem" with "heavy" OOP proponents. They came up with nice SystemsSoftware and engineering/physical simulation examples/scenarios, but not CBA examples (outside of GUI's and system interfaces). I kept asking and asking and they often insisted that I would eventually "get it" for the domain side if I just kept trying and trying long enough. Author Richard Mansfield also asked the industry for a "proof of concept" sample application after seeing many shops foul up OOP attempts. Eventually the industry woke up and mostly realized OopNotForDomainModeling (at least the biz domain). I believe I'm seeing a repeat of history with FP/HOF's. -t
I don't think you can meaningfully compare object-oriented programming with higher-order functions. The former is an entire programming paradigm, and it can radically alter the structure of entire applications. The latter, however, is a much more "local" tool: Calling a higher-order function is a technique on basically the same level as writing a for loop, or calling a method on an object, or simply making any function call.
Of course, because higher-order functions are a very generic tool (much like all functions), they may be used to build up a wide variety of more complex systems. NodeJsAndHofDiscussion looks at one of those. Despite this, however, higher-order functions are capable of much simpler feats than building an entire platform. They work just as well to abstract out collection interfaces, as in Ruby's Enumerable, or to abstract out problem-specific detail from an algorithm, as in HofPattern, for instance.
Personally, I think there's none of the potential "danger", which you foresee through an analogy to overuse of object-oriented programming, in using higher-order functions for these simpler local situations. I agree that it's a little drastic to use a quite different application structure without proof of the technique's value. Thus, I recommend trying out some of the smaller, localised uses of higher-order functions. They don't enforce a radical change in your app structure; they just make some aspects of your code shorter and clearer. As you grow more comfortable with using higher-order functions, you may find that using them on a broader scale isn't that daunting a prospect; even if you do only keep them localised, you can at least enjoy their benefits in factoring out code patterns. -DavidMcLean?
I've been looking for a good CBA use over the years, but so far haven't found any. That's why I'm asking you for realistic sample code/scenarios, etc. You seem to believe that their utility for CBA is common and clear, which implies you'd be the ideal person to produce such samples. But, sadly, they never come, and like above, you expect the reader to just take your word for it. I'm from the MentalStateOfMissouri. The intensity and repetition level itself of such claims should not and will not persuade me, and that should be the case for any reader reviewing any technology (unless they have to make a choice soon such that they're stuck with the lower parts of the EvidenceTotemPole). -t
- I'm not asking you to take my word for it. I'm asking you to try it, using examples such as are discussed on HofPattern, ArrayDeletionExample, and now this page (Enumerable), and to experience the value of higher-order functions firsthand. Code something in a language that supports higher-order functions; JavaScript would be an excellent choice. Use some. Try them out. -DavidMcLean?
- How do I tell if it's "better"? And why the hell am I doing YOUR job of finding evidence for YOUR claim? What's next? Am I to wipe your [bleep] also? Try T.O.P. and Eval's. There are 50 evangelists out there evangelizing their IT wares. It doesn't scale to try all 50, and some of them look far more promising than HOF's for CBA.
- Is your rudeness necessary? I've seen that you regularly dismiss examples of the usefulness of higher-order functions provided by others, often declaring them outside your domain. I've also seen that what you actually consider part of the custom biz domain is extraordinarily vague. So I'm saying you should try them yourself, because surely an app you're working on will be a custom biz application. Apply some of the patterns we've discussed all over this wiki at your next custom business opportunity. See if the code's shorter or clearer (if you do it right, it should be both). I've tried eval() for exactly these sorts of purposes, because there are certain languages that actually require that rather than providing proper anonymous functions. And I know from this that using eval() is a significantly worse solution. -DavidMcLean?
- I've found that "clearer" is highly relative. People "see" code differently.
- Yep, that's why you should try higher-order functions yourself and see if you find the resulting code clearer. -DavidMcLean?
- See above about "50 scaling". You said you tried Eval and it failed. Why did it fail? What specifically did it do poorly?
- It didn't actually fail. It worked. It was just a mess. Scroll down to my Ruby examples to see why it was a mess. As for "50 scaling", you're the only developer I have ever known who doesn't see the value of higher-order functions. -DavidMcLean?
- That's not a realistic CBA example. Enough ArgumentByLabToy already. The others probably agreed with you to make you go away. I've done that myself to annoying quibbling evangelists of fads and "cool" MentalMasturbation.
- Like I said, it is a toy example. That's deliberate. I wanted to elaborate on the many and varied technical issues with using eval() as a replacement for higher-order functions, and I didn't want readers to be distracted by irrelevant domain details---because that's what the domain details would be, in this case: completely irrelevant to the example. -DavidMcLean?
- [Every developer to whom I've shown higher-order functions -- especially as an alternative to eval, which arguably gets over-used in light of its flaws -- has essentially said "cool" and then used them in languages that support them. No drama. It's not a paradigm shift, it's just a nice language feature and about as big a deal as showing 'for' loops to programmers only used to 'while' loops.]
- If you apply the rules/steps I gave, it's clearer than the alternative. You just keep forgetting my rules.
- I must indeed be forgetting your rules, because I have no clue what you're talking about. What exactly is clearer than the alternative? Care to elaborate? -DavidMcLean?
- See CustomBusinessApplicationDefinition.
- Again, if it's TRULY "significantly worse", then it should be easy to illustrate the reasons and context and measurement techniques. "Subtly worse" is hard to demonstrate, but you claim "significantly".
- Oh. I was expecting some rules or conventions for using eval() such that it works more like higher-order functions and less like an ad hoc mess. Alright. Let's take a code example. Let's say we have something like this in Ruby (deliberately kept simple):
[1, 2, 3].each do |x|
  [4, 5, 6].each do |y|
    puts x + y
  end
end
- We call the #each method on two different arrays, passing an anonymous function to each invocation. Now, let's suppose #each doesn't take a function. Instead, it now takes a string to eval(). What happens to the above, fairly simple code sample?
[1, 2, 3].each "|x|
  [4, 5, 6].each \"|y|
    puts x + y
  \"
"
- Firstly, notice there's no obvious way to handle the variables. I've put them at the beginning of the string with much the same syntax as Ruby's blocks already use, but the #each method would need to parse that manually and bind the resulting variables. Next, note that the inner string needs escaped delimiters, \". A third nested string will need more escaping, probably something like \\\". We could reduce that slightly in languages that allow both single and double quotes, but that'll only last us for two strings of depth. Then consider that editors don't know the strings contain code: Syntax highlighting is impossible, for example. The interpreter can't test for syntax errors during parsing, deferring it until actual execution time. Even given all that, there's still the speed hit involved in parsing and evaluating strings at runtime. And this is only a really simple example! -DavidMcLean?
- Could you make it a JavaScript version? I don't know Ruby. Thanks. Further, what real tasks is that doing? One can make a lab toy that exposes or exaggerates the weaknesses of just about any concept or technique.
- Sure, I'll make it JavaScript.
[1, 2, 3].forEach(function (x) {
  [4, 5, 6].forEach(function (y) {
    print(x+y);
  });
});
- Little more verbose, since JS's function syntax isn't quite as concise, but otherwise basically the same. And the string version:
[1, 2, 3].forEach("(x)
  [4, 5, 6].forEach(\"(y)
    print(x+y);
  \");
");
- Still fairly similar. Again, there's no obvious way to bind variables, so .forEach() will need to parse that little parenthetical thingy at the start manually. The same issues exist with this version as with the Ruby one. As for what tasks it's doing, this is indeed a toy example; its purpose is to illustrate what happens whenever you need nested anonymous functions and only have eval(). However, many of the weaknesses identified above apply both to nested and non-nested eval() strings; it's really only the \\\" delimiter-escaping problem that arises solely from nesting. -DavidMcLean?
I think HofPattern is about as clear an example as one can get, but as I mentioned elsewhere, I've dug up an old employee allocation application of mine -- based on a genetic algorithm -- that will make an ideal illustration. It was built in Java using a fairly conventional OO approach (Java doesn't support HigherOrderFunctions), but for illustration I'll convert part of it to Javascript to show how it would look using HOFs. That should provide a reasonable basis for comparison.
"As can get". Heckavity No! You can get clearer by applying it to something realistic. Lab toys clearly do labby things, no doubt, but that's not necessarily a demonstration of practical utility.
Well, it's as clear an example as one can get via an illustrative abstraction. However, I knew you'd once again bristle at its abstractness, which is why I mentioned that I'd present my real -- not just realistic, but real -- employee allocation application.
I cannot see your employee allocation application. There may be a non-HOF way to do it that you haven't thought of. And I do NOT "bristle at abstractions"! I use abstractions every day. But I want good abstractions; bad abstractions can make things worse. See TopOnAbstraction.
You will be able to see the application. Be patient. ;) Also, you may be defining "abstraction" differently to your above correspondent. HofPattern is an abstraction; it shows in the abstract a general pattern that may be applied to code where appropriate. You have indeed "bristle[d]" at the abstract nature of HofPattern, requesting non-abstract examples. -DavidMcLean?
It was worded poorly. The abstract-ness is NOT why I "bristled", but the way you wrote it implies it is.
Sorry? Are you saying you actually don't object to the way HofPattern is presented as a generic, illustrative abstraction? -DavidMcLean?
Your problematic implication was that I objected to HOF's mainly because they are (allegedly) "abstract".
No such implication was intended. It's not higher-order functions that are abstract; it's HofPattern. -DavidMcLean?
This "correction" doesn't change the nature of the problem, only the target.
How so? Is it not true that you objected to the way HofPattern is presented as a generic, illustrative abstraction? -DavidMcLean?
No. It's not. I object that you cannot find significant real CBA use for it, yet claim it "clearly" makes them better. It's this contradiction that is agitating me. I've said this like 50 times, yet you reinvent a different explanation above for my agitation. I will reject non-useful abstraction, if that's what you mean, but it's the "non-useful" that is the problem, NOT the "abstraction". 51 now. You put a giant bolt sticking out in the center of my wall, and then when I complain it's there without a use, you seem to see it as "you hate bolts". No, I hate useless and "distracting" bolts.
[What about the so-called "Brady Bunch" example, i.e., the Javascript-based multi-panel concurrent display at http://shark.armchair.mb.ca/~dave/hofajax that is also part of a real custom business application? Perhaps you could show a better way to implement it that doesn't use higher-order functions? We can happily ignore the fact that the DOM forces you to pass a function to setInterval(), as long as no other higher-order functions are used. Surely, if other approaches are equivalent or superior, you can alter this simple, real, non-abstract example to show it?]
See SummaryOfHofExamples
[By the way, I've changed it slightly to make it more reflective of its actual application and coalesced it into a single block of Javascript source -- I removed ajax.js. It's lost its "Brady Bunch"-ness, I'm sorry to say, but it's a better illustration.]
My comments on the applicability of the example are in SummaryOfHofExamples. By the way, I liked the Brady Bunch version better than the bar charts as an example.
[ToDo: split or move when edit-war has ended.]
Because StandardToolDependency?/StandardToolDependancy is stuck in an EditWar with G.V., here's some related material:
http://beust.com/weblog/2006/04/06/why-ruby-on-rails-wont-become-mainstream/
Some Quotes:
But it’s a complex language that contains a lot of advanced idioms which will be very hard for PHP and Visual Basic programmers to absorb.
But it’s still a very wide gap for corporate developers to cross. Sometimes, too much magic is too much magic, and it can definitely be the case that the flow of code is too direct or too clever to be understandable by regular developers...I don’t believe the Web world will ever be ready to embrace the Rails cleverness.
Have you ever come across Smalltalk or Lisp programmers? You know, these people who, no matter what you tell them, will always respond that "Smalltalk did that twenty years ago" or that "Nothing has been invented since Lisp". They listen to you patiently with an amused light in their eyes and when you’re done talking, they will just shrug away your points and kindly recommend that you read up on a thirty-year old technology that was the last thing they ever learned and that has been dictating every single technical judgment they have offered since then. (Emph. added)
I believe that in ten years from now, people will look back at Ruby on Rails and will have the same reaction. I’m not sure what Web frameworks we will have by then, but I’m quite convinced that a lot of the current Ruby on Rails fanatics will have the same kind of attitude: "That’s nice, but Ruby on Rails already did this ten years ago, and better".
Languages and tools keep reinventing Lisp's "meta" capabilities, but it never goes mainstream because the training needed to handle it is costly without a corresponding increase in documented productivity. Again, I generally agree the Lispers are "right" from a purely technical perspective, but they don't understand the business side of unleashing an excessively powerful tool. It's too big of a chainsaw for the average lumberjack. A handful of lumberjacks may become super-productive, but too many will either not figure out how to use it right, or misuse it out of boredom or job security. The problems caused by the bad apples are equal to or greater than the productivity gains by the really good apples with the super-saw.
PageAnchor: Abuse294
I would roughly estimate that a "high-brow" language will have this impact:
- Noticeable increase in productivity in 35% of developers.
- 25% "don't get it" well enough, and their productivity goes down.
- 20% misuse it for job security ("write-only" code) or resume padding ("I used pattern X 40 times"). For example, there were a lot of complaints about people "forcing" the GOF patterns in the GOF/OOP heyday (GOF is not a language feature, but demonstrates resume buzz-word games.)
- 15% misuse it out of boredom, decreasing the productivity of future readers.
The downsides of the bad apples outweigh the upsides of the good apples.
I also believe "fans" of certain techniques exaggerate the upsides of "meta" techniques. The improvements are relatively minor in practice.
[On what do you base your belief that meta techniques have relatively minor improvements, when by your own admission you haven't used them much? -DavidMcLean?]
I haven't used them much because I haven't found much use for them in CBA, and apparently neither have you guys because you have difficulty showing practical and common scenarios.
We've shown practical and common scenarios -- those on SummaryOfHofExamples, and even gone so far as to show a general pattern in HofPattern -- and you reject them as not being practical or common. I'm curious where you get your figures to "roughly estimate that a 'high-brow' language will have this impact"... It looks like arbitrary speculation, with no basis in evidence other than your personal dislike of "'meta' capabilities", despite the real evidence that "'meta' capabilities" are being adopted by mainstream languages and used without fuss by mainstream developers. A handful of knee-jerk comments from 2006 -- shortly before RubyOnRails pushed Ruby into the awareness of every Web developer and significantly influenced Web framework design on every platform -- do not make your case.
Your counter-arguments are no more than anecdotal either. And I do not "dislike" meta capabilities. The industry "dislikes" them for the reasons given. R-on-R may carve out a nice little niche, but I don't think it will go/stay mainstream. Something with the impact of the original Visual Basic and/or HTML will eventually come along and make web GUI's more of a commodity rather than the twiddly "dark art" they currently are. You speculate that I "hate" meta capabilities, so I can speculate that you want to protect your GUI dark-arts from the proletariat programmers to justify your higher wages. I can play the motivation-speculation game also. GUI's can and should be commodity technology. The web has simply temporarily got in the way. Commodity stuff usually doesn't need HOF's. HOF's are for the early days before the patterns/standards figure themselves out and settle. If JQuery becomes the de-facto client GUI standard, it too will be re-packaged into something medium- or low-brow to make it more digestible to the programmer masses, especially for internal apps that don't need to keep up with the style Joneses. And OOP can do mostly the same thing. -t
This is all speculation, so I have nothing to offer except "we'll see".
And you still cannot find non-UI and non-performance-oriented CBA scenarios for HOF's. (And the UI and performance claims are still rigged in my opinion. I haven't conceded those.) Why must you keep turning to GUI's and performance for (allegedly) realistic scenarios? Is there a reason for this pattern?
The "Brady Bunch" example is UI oriented because it's real, uses JavaScript which supports HOFs and is well-known, and because modern business applications are often Web-based and therefore use UIs based on JavaScript. In short, it's realistic and relevant.
- We have to use HOF's because the only timer in JS forces us to use HOF's. It doesn't have to be that way any more than QWERTY is some universal constant. We've been over this already and I don't want to re-bicker it. If you want to get client-specific, then yours has a memory leak.
- Please suggest an alternative syntax that would accomplish the same thing without HOFs, but without requiring the run-time compilation of eval(). By the way, my example doesn't have a memory leak; Firefox 18 has a memory leak. Note that it doesn't leak under Chrome or IE.
- Accomplish what specifically? Timers? Chrome doesn't click on refresh either.
- Accomplish the same thing as the current application, but without HOFs.
- If JS had a legitimate sleep() function, we could use plain-jane loops. It's conceptually simpler, and probably less likely to cause memory leaks by re-invoking a function over and over.
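- For what it's worth, later versions of JavaScript can approximate such a sleep() with a Promise plus async/await (neither existed at the time of this discussion), though the approximation itself still hands a callback to setTimeout; the panel-refresh names below are invented for illustration:

 // sleep() approximated with a Promise; resolve is handed to setTimeout.
 function sleep(ms) {
   return new Promise(function (resolve) { setTimeout(resolve, ms); });
 }

 // A "plain-jane loop" that refreshes a panel a few times without blocking
 // other timers, because await yields control between iterations.
 async function pollPanel(iterations, intervalMs) {
   for (var i = 0; i < iterations; i++) {
     console.log("refresh panel, iteration " + i); // stand-in for the real fetch/redraw
     await sleep(intervalMs);
   }
 }

 pollPanel(3, 1000);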
- To work equivalently, using sleep() would imply multithreading along with all its attendant concurrency issues.
- JS's existing timer by itself doesn't provide multithreading.
- True, but it provides concurrency.
- Maybe not. Have you actually tested that assumption?
- Yes. Note that the graph bars on the "Brady Bunch" example behave independently. Some are considerably delayed, others change rapidly. Concurrency is thus demonstrated.
- Appearing independent to the eye is not a sufficient test. It could be doing it sequentially in a round-robin-like fashion, and the human eye couldn't tell the difference. Another thing, if it gets data from an actual server(s) instead of a filler demo function, the number of http connections a desktop supports is generally limited to roughly a half-dozen IIRC. You may see some odd blocking problems under production, at least at the speed of the demo. (One may be able to change the number of connections in the OS settings, but over-doing it is not recommended.)
- You will observe that some of the bars delay for over ten seconds whilst the others operate freely. Thus, it is clearly concurrent. It gets data from an actual server, only the raw data itself is generated. The number of connections is realistic and reflective of the production application.
- I am not understanding how that "proves" concurrency. In fact, I don't believe the human eye is sufficient to detect such. Non-concurrency can potentially emulate concurrency beyond visual detection (and/or beyond the refresh rate of the monitor). The "slow rate" refresh div's will simply jump into the queue less frequently than the quick ones. Also, in a typical environment, some servers will respond quickly and others slowly. I don't believe your example sufficiently tests such diversity because it uses a single server with a dummy plug that doesn't DB query on a busy server etc., but that's a secondary issue.
- Look at lines 4, 7 and 15 as the display starts -- particularly line 15. On these lines, the server responses are intentionally delayed in order to demonstrate that each bar updates concurrently. As for having a database backend, that's irrelevant to the illustration but any modern DBMS on that platform could easily handle the load.
- I don't think so. JS is not concurrent in most browsers. It appears that when an HTTP GET is requested, JS can "go off" and process other pending function calls. Its "timer" doesn't necessarily have to be running all instances, it just has to put an "expiry" time-stamp for the next cycle in a central queue after starting the current instance. When another GET is requested, then JS puts a marker in the stack and goes to the expiry queue to see if any expiry stamps are the same or older than the current time and execute those, oldest first. While GET's are potentially concurrent, JS itself is not. If you replaced the statement that triggers GET with say a call to calculate pi to N decimal places, I bet it would block the other DIV's (unless maybe a given timer instance is only allowed to run for the timer duration). It's an approximation of round-robin where longer cycles may skip a round-robin cycle. It kind of reminds me of VB classic's "DoEvents" command, which basically said, "go process the event queue and return here when the queue is empty". In this case, the GET call may approximate something like this pseudo-code internally:
processHttpGet(url, timeout) {   // internal "system" function pseudo-code
  h = new httpGet(url, timeout);
  while (! h.finished) {
    doEvents();   // process any pending events
  }
  return(h.results);
}
- It is most certainly concurrent. You appear to be conflating "multi-threaded" with "concurrent". The former can be used to achieve the latter, but the latter can be achieved without the former. As per the above exchange, you wrote "JS's existing timer by itself doesn't provide multithreading." I acknowledged this when I replied "true, but it provides concurrency." The example is certainly concurrent. Hit refresh whilst it's running, and note that bar 15 will -- as intended -- take a long time to refresh whilst the other bars refresh relatively quickly. If the example did not support concurrency, updating bar 15 would block the others from updating.
- I have an issue with your use of the term "concurrency". JS is not providing the concurrency, but rather it's the GET process.
- The "GET process" has nothing to do with it. See http://computing.derby.ac.uk/~dave/hofajax/timertest.html which does not use AJAX or any GET actions. Concurrency is provided by the setInterval and setTimeout methods of window, and is based on a single-threaded, non-blocking model. When a setInterval or setTimeout event finishes running, setInterval or setTimeout blocks until another setInterval or setTimeout event should be run. If a setInterval or setTimeout event never returns (due to an infinite loop) or waits a long time, it will prevent or delay subsequent setInterval or setTimeout events. This is a popular approach for achieving concurrency in GUIs, due to the complexity of multithreading.
- How is that proof that it's providing concurrency? It's just quick sequential queue-induced round-robin, which is visually indistinguishable from concurrency for most human eyes. Events are just queuing up and processed sequentially if their "number is up". If I have time, I'll try the "pi test" above. In this case each event finishes fast because it's a dummy local process, and so control is soon returned to the event queue manager (or equiv.). If it has an HTTP call associated with it, then the HTTP process appears to do something like the doEvents sample above. My model still reflects the externally observable behavior (output), and the model does NOT depend on (general) JS concurrency, only HTTP request concurrency. It does seem similar to VB-classic, except that only certain system operations (such as GET) can issue DoEvents. You can perhaps call it "simulated concurrency" or "pseudo-concurrency", but it's not concurrency. Being able to suspend a given event to go work on another event by itself does not make "concurrency". One can make stuff look concurrent as long as any "long" request/algorithm issues sufficient do-events while waiting for completion or delivery. Such apps require programmer discipline to use do-events properly to avoid unnecessary sluggishness. (Note that there may also be internal "system events", such as checking on the HTTP queues/processes, which may themselves be threaded, but written in C/C++.)
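- A minimal sketch of the proposed "pi test", runnable in a browser console; the loop bound and log messages are invented for illustration. If timer callbacks ran with full statement-level concurrency, the fast timer would keep ticking every 100ms; in practice it stalls while the long computation runs:

 // Fast timer: just logs a timestamp every 100ms.
 setInterval(function () {
   console.log("fast tick at " + new Date().getTime());
 }, 100);

 // Slow timer: a long synchronous computation standing in for
 // "calculate pi to N decimal places". While it runs, no other
 // timer callback can execute.
 setInterval(function () {
   var x = 0;
   for (var i = 0; i < 100000000; i++) {
     x += Math.sqrt(i);
   }
   console.log("slow tick finished, x = " + x);
 }, 1000);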
- Again, I think you're conflating the term "concurrency" with "multithreaded" or "preemptive multithreaded". Yet, even preemptive multithreading on a single-core, non-hyperthreaded CPU isn't really executing multiple threads simultaneously, it just looks like it. It is, however, concurrency by definition.
- "Concurrency" can be relative to machine language, app language, etc. A single core cannot provide machine-language concurrency, but can provide app-language-level concurrency, or at least a sufficiently close approximation of it. (Yes, it is a UsefulLie.) We can say that two lines of app-language code are executing "at the same time" because the granularity of execution is below that of an app language statement. Let's say the machine code of a standard app-code PRINT statement is comprised of 5 machine code statements under the hood, which we'll label with the letters "a" thru "e". Let's assume two PRINT statements, p1 and p2 are being processed concurrently. The single-core CPU may be executing the following machine code instructions: p1a, p2a, p1b, p1c, p2b, p2c, p1d, p2d, p1e, p2e. (Note that it's not entirely symmetrical, intentionally, to make it more realistic.) We are allowed to say that p1 and p2 are both executing "at the same time" here because the sub-components overlap for the duration of the full execution of any of the two print statements from the app code's perspective. However, we cannot say the same about the machine code. JS in browsers provides no app-level concurrency (although certain system services it uses may be).
- As a thought experiment, note that it could be possible that our universe "runs" on a single core (or equiv.) and that only ONE particle can move at any given time. However, if the time-slices are small enough, our instruments couldn't tell the difference so that from our perspective the particles move concurrently even though technically it may not be "under the hood". -t
- You're still conflating the term "concurrency" with "preemptive multithreaded" or "preemptive multitasking". These are established terms with recognised definitions. The former applies to what Javascript does with timers -- it's single-threaded, non-blocking concurrency. It's what a variety of GUIs do, and what Mac OS did before OS X, and what Windows 3.x did.
- It appears we are entering into a definition debate. Wikipedia says, "In computer science, concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other." [Emphasis added] What source do you choose to use?
- That's a rather weak article. Google for apparent concurrency vs real concurrency.
- It seems to reiterate the above.
- You must be defining "simultaneously" differently than I do. In the example, without concurrency, the slow retrieval of bar 15's data would block the other bars from being updated. It doesn't, because they update simultaneously. However, I fully accept that "simultaneous" is subject to considerations of granularity. Because of this, concurrency is sometimes described as having levels.
- [Top, you may be confusing "concurrency" with "parallelism". When two tasks run concurrently, it only means that they might run and complete in overlapping time periods; it doesn't necessarily mean they get processed at the same time. Any sort of multitasking on a single-core processor is concurrent. Parallelism, however, refers to tasks literally processing at the same time. To achieve that, you need multiple cores or multiple processors. JavaScript's setTimeout and setInterval functions are concurrent, but not parallel. (I'm a little surprised you don't already identify JS's concurrency support as actually-concurrent, when we've spent a ridiculously long NodeJsAndHofDiscussion going over it, though.) -DavidMcLean?]
- You guys haven't proved that any JS statement set can "run and complete in overlapping time periods". Perhaps events (HOF's) can, but that only means concurrency at the event level, NOT at the statement level. Certain select statements, such as HTTP calls perhaps can, as already discussed, but that's limited concurrency. Like I said, concurrency is relative, and the observed behavior of the bar graph demo can be explained via limited concurrency. In other words, the model of limited-JS-statement-level concurrency can explain the observed behavior of the demo. Full-statement-level concurrency can also explain it, but is not the only model and thus doesn't prove full-statement concurrency. It's only one of two valid candidate models.
- I'm curious -- what is the relevance or significance of the level of concurrency exhibited by the example? Does it have something to do with my challenge to "suggest an alternative syntax that would accomplish the same thing without HOFs, but without requiring the run-time compilation of eval()"?
- Isn't that what BradyBunchGridDiscussion was originally about? And existing HTML can do it with frames or iframes and META refresh, it's just kind of verbose.
- Show us. In particular, show us there's a reason to choose frames or iframes and META refresh over HOFs.
- Are we talking about what actually exists or what potentially exists? Like I keep saying repeatedly multiple times already again redundantly, the "best" solution for one specific client type/brand may not be for another type/brand. I want to deal more with general concepts, not what works best with IBM version 48.3928923 or MS or Tim Berners-Lee's client/browser 7.0482.3A. The HtmlStack's JS has certain built-in tools/API's, and HOF's happen to be a better fit for that tool interface, just like SQL will work better on Oracle than Rel would, and Rel would work better on a Rel DB than SQL would on a Rel DB: it's the "native" language/tool-set. In summary, I could demonstrate that frames/iframes work, but I won't claim them better nor "less code" than your HOF solution for existing browsers.
- But HOFs eliminate the need to worry about different clients, don't they? In other words, you don't need to employ iframes or frames and META refresh, no need (in general) to consider specific browser quirks, in order to define a general solution to this category of problem. In other words, HOFs are more general than the capabilities of a particular browser. Using your Rel/Oracle+SQL example, HOFs are like using VIEWs -- which are a general capability of relational DBMSs, and available (with different syntax, obviously) in both Oracle and Rel -- rather than relying on something specific to one or the other.
- Again, the industry doesn't necessarily want "general" as the primary factor. The GreatLispWar, remember? The industry wants common idioms pre-packaged into paint-by-numbers programming: just fill in the scalar attributes and Walaah!
- [It's spelt "voilà", and it's pointed out below that the industry doesn't want that. -DavidMcLean?]
- Walaah is the low-brow version :-) The real issue is not what staff they want, but what tools work best with the staff they actually get when all is said and done. Maintainability by different and future staff with potentially unpredictable skill levels is probably more important than language flexibility to most orgs who have more than a few years of experience with custom projects.
- [More flexibility in languages can increase code's maintainability, though. The most trivial example is that, by using higher-level techniques, you can end up with less code solving the problem, which means there's less code available to maintain. -DavidMcLean?]
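- (A rough sketch of the "less code" point, using nothing beyond ES5's Array.prototype.filter: the HOF version states the selection rule once, while the loop version adds several more lines where a maintenance edit could land. The data here is made up for illustration.)

  var orders = [ {id: 1, open: true}, {id: 2, open: false}, {id: 3, open: true} ];

  // Loop version: bookkeeping code mixed in with the actual rule.
  var openOrders = [];
  for (var i = 0; i < orders.length; i++) {
    if (orders[i].open) {
      openOrders.push(orders[i]);
    }
  }

  // HOF version: only the rule itself is written down.
  var openOrders2 = orders.filter(function (o) { return o.open; });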
- Finding and knowing WHAT to change (without breaking something) is often the hard part. Typing a dozen more characters on the keyboard is a pittance compared to that.
- [When there's less code to maintain, there are fewer possible places at which the code can be changed. Therefore, statistically speaking, it should be easier to locate the correct changepoint. -DavidMcLean?]
- That's one factor, but it's not the only factor to successfully reading code. I've seen repetitious bloated code that was fairly easy to read and compact code that did a lot, but took a long time to decipher its purpose and/or change for new requirements.
- [Sure, but we're not discussing reading code, but maintaining it. Repetitious bloated code takes more work to maintain simply because it's repetitious. -DavidMcLean?]
- The extremes at both ends are not ideal for a typical programmer. There's a happy medium. Too un-factored and you have to dig around in repetition and make repetitious changes; too much abstraction and it's hard to understand what's going on and/or the scope of impact, such that it's easy to make the wrong change and turn it all into a tangled ball of yarn and bugs.
- [Don't be silly. All code is improved by OneMoreLevelOfIndirection. ;) -DavidMcLean?]
- And I don't see the views analogy as apt. Views are more comparable to subroutines: multiple "statements" condensed into a single name.
- Both HOFs and VIEWs are multiple statements condensed, but that wasn't the point. I meant only that HOFs are a general concept. So are VIEWs. Relying on a concept that is available across implementations results in more generic, re-usable code than creating specific solutions for each client. As for what the industry wants, I'm in the higher education business and regularly speak to employers. They want graduates who understand both current technology and possible future technology. The buzz around possible future technology is increasingly about functional programming and higher level programming skills in general; I hear no interest in "paint-by-numbers programming". That appears to run counter to your claim.
- What employers want and what employers get are two different things. Of course they want Mozarts. Everybody wants Mozarts. If my pipes leak, I want a Mozart plumber, not a Salieri plumber. Whether Mozarts are "affordable" and whether HR/owners will approve is another matter. (Plus, Mozarts often have crappy people skills.) You may be in the higher-order education business, but I'm actually in the field and have been for more than 2 decades, both as a contractor and an employee. I don't have to ask employers what they want verbally; I see it in the office directly, and in job ads and salaries offered during interviews, etc. It's hard to tell what kind of trade-offs people make simply by asking them what they want. It's when you contrast factors that you start to find out, especially during actual decisions, not merely paper surveys by a 3rd party.
Which of the examples is performance oriented? I thought none of them were, unless you're referring to HOFs demonstrating better performance than eval and the like. In that case, it's true, but eval demonstrates no benefits whatsoever over HOFs. The only justification for using eval is having to use a language without HOFs.
Which means we can do almost the same thing without having to add new linguistic constructs.
I'm not clear on your point -- are you saying eval() is as good as HOFs? One of the strongest justifications for HOFs is to avoid all the downsides of eval.
Eval's, objects, case statements, and a 4th one I forgot (hats off to Rick Perry) are candidate alternatives, depending on the situation. I've said this already.
All have significant downsides compared to HOFs, which has already been demonstrated in considerable detail. Using case statements is almost ludicrously impractical compared to HOFs. Eval offers no benefits over HOFs, and HOFs offer considerable benefit over eval. Only object-oriented techniques come close, at the expense of some syntactic overhead. The only downside of HOFs is that some programmers might not be familiar with them. This is typically addressed by spending the hour or two of study needed to understand them, and in most cases even weak programmers find learning about HOFs after using functions no more difficult than learning 'for' loops after using 'while' loops.
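(A minimal sketch of the eval-versus-HOF trade-off being described; names like makeDiscounter are illustrative only. The eval version keeps the logic in a string that is parsed and evaluated at run time, so typos only surface when the string finally runs; the HOF version is ordinary code, checked when the script is parsed, with the rate captured by a closure instead of pasted into a string.)

  // eval: behaviour as a string, evaluated at run time on every call.
  var discountExpr = "total * 0.95";
  function applyDiscountEval(total) {
    return eval(discountExpr);
  }

  // HOF: behaviour as a returned function; no run-time compilation.
  function makeDiscounter(rate) {
    return function (total) { return total * rate; };
  }
  var applyDiscountHof = makeDiscounter(0.95);

  applyDiscountEval(100);   // 95
  applyDiscountHof(100);    // 95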
That's your summary opinion, and I disagreed.
Why do you disagree?
I'm not going to repeat my points here. That's unnecessary duplication of text, something you don't seem particularly concerned about on this wiki.
[Top, if you've made any points which counter the fact that higher-order functions have exactly one downside, as mentioned above, please do repeat them here (or link to them). I had no idea they existed. -DavidMcLean?]
SummaryOfHofExamples. The bottom line is that HOF's confuse too many developers such that they create maintenance risk, and the alternatives are not that bad.
[Do you have evidence that higher-order functions confuse developers? -DavidMcLean?]
It's anecdotal based on experience. Your counter claims are also anecdotal.
[As was asked over on SummaryOfHofExamples: Have you asked your colleagues about HOFs? I'd be curious to know their specific responses. -DavidMcLean?]
Directly? Mostly not, but their code doesn't reflect any personal love for them and it's not what they talk about when we discuss code design strategies.
[Then you don't have any anecdotal evidence that they're confused by higher-order functions, now, do you? -DavidMcLean?]
Yes, "This JavaScript is weird. I miss VB's events."
[Of course it's "weird". It's different to VB. Anything unfamiliar starts out being "weird". It's pretty much the BlubParadox, really. However, in general familiarity with features like higher-order functions can be garnered with just a few hours of practice, at which point it will no longer be weird and its obvious superiority over VB's model may become apparent. -DavidMcLean?]
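(For reference, the conceptual step involved is fairly small. A minimal sketch using only built-in Array.prototype.sort and console.log: a comparator is just a function handed to sort, and a callback is just a function handed to some other routine, which decides when to call it. whenDone is a hypothetical routine, not part of any real API.)

  var prices = [19.99, 4.50, 9.00];

  // The comparator is a function value; sort calls it as needed.
  prices.sort(function (a, b) { return a - b; });

  // The same idea as an "event": whenDone takes a function and calls it
  // when its work is finished.
  function whenDone(callback) {
    // ... do some work, then:
    callback("finished");
  }
  whenDone(function (status) { console.log("Status: " + status); });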
When they are kept simple. But if they get intermingled into a complex tool-kit, or fall into the hands of somebody devious or abusive with code, then it's not so easy to comprehend what's going on.
[Any language feature may be abused. VB events, for example, are notorious for being misused (see EventsCallMethods). Do you have real examples of higher-order functions being abused to the level of non-comprehension of code? -DavidMcLean?]
Yes, but high abstraction is generally the most powerful form of abuse. A nuke in the hands of a madman is far more dangerous than a rifle. Excess repetition is annoying, but usually not a show-stopper. Something that is just too convoluted to figure out is a show-stopper. Telling your manager that it will take approximately 2 weeks to fix/add something generally goes over better than, "I don't have a damn clue; I still haven't figured out what's wrong yet." The path ahead is clearer with low- and mid-brow tools. Organizations like programmer-herding mechanisms: the sheep can wander, but not too far. There is a reason that citizens are hesitant about giving deputies automatic weapons even though they have far more potential power than handguns.
- [What about the complete rigid inflexibility that can result from abusing stuff like VB events, as noted on the previously-linked EventsCallMethods? If the codebase has abused those to a significant extent, you may be stuck unable to overhaul the codebase, at least without throwing out large portions of code. -DavidMcLean?]
- I'll leave comments about EventsCallMethods in that topic. Most orgs do not overhaul code bases unless some new approach to computing comes along and they decide to move into the next generation. Examples include CUI to GUI, and GUI to Web.
- [Rigid, inflexible code produced through abuse of features still needs to be maintained. To maintain it properly and with affordance for changes, it'll need to be overhauled into actually-flexible code. -DavidMcLean?]
- That's not the way most organizations work, for good or bad. They simply don't value the most flexible abstractions to the extent you do, as explained in GreatLispWar. Plus, one can't know if one generation's assumptions will fit the next generation's. For example, client-centric applications in VB, Delphi, etc. had a lot of free rein as far as user activity flow goes. Then the web came along and initially had limited UI flow capabilities. A framework that assumed flexible interaction flows wouldn't work in that environment, and the UIs would all have to be redone. The future of IT has been hard to predict. Plus, hosters may not support your prior tool stack. See also YagNi.
- [Yes, it's true that later developments in software can disrupt one's assumptions and require code changes. However, that's exactly why we build abstractions in software (well, one of the reasons): to insulate against outside change. A well-designed abstraction means that, when your environment and scenario change, you can often isolate code updates to the abstraction's own implementation without touching the code using it. Code not using good abstractions is going to be much less likely to work when the environment changes and require more changes to function in an altered environment. -DavidMcLean?]
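- (A small sketch of that insulation idea, not specific to HOFs; the module name prefs and the use of localStorage are assumptions for illustration. Calling code only ever uses prefs.save and prefs.load, so if the storage mechanism later changes -- cookies, a server call, whatever the next environment demands -- only the module body is edited.)

  // Hypothetical abstraction over a storage detail that may change later.
  var prefs = (function () {
    function savePref(key, value) {
      window.localStorage.setItem(key, value);   // current implementation detail
    }
    function loadPref(key) {
      return window.localStorage.getItem(key);
    }
    return { save: savePref, load: loadPref };
  })();

  prefs.save("theme", "dark");
  prefs.load("theme");   // "dark"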
- If you could demonstrate how HOF's do that noticeably better than the alternatives in realistic situations, not just lab toys, that would be just peachy.
- I'm still working on my sample employee allocation system. Time, alas, is currently at a premium. I will get to it.
- If it worked right, the task would be allocated to multiple employees and be done already ;-P
Worst-case productivity often affects decisions, such that organizations may lower average productivity to reduce risk. You seem to focus on average productivity.
[I focus on programmers with actual programming ability. I don't think designing programming languages to cater to organisations with bad hiring processes is valuable. -DavidMcLean?]
You can focus on whatever specialty you want and complain about organizational decision making, but ultimately you don't make those decisions for the majority of the marketplace. We can focus on the ideal, or on WhenInRome. I make tool design decisions based on how a tool will be received in the actual marketplace, not under the assumption that somebody will solve the problem that HumansSuck. The Borg probably use Lisp. You are free to join them when they pass by.
[The Borg obviously only code in raw machine language. Programming languages should, primarily, be designed based on the needs and wants of programmers, because programmers are who have to use them. -DavidMcLean?]
Lisp is machine language on their hardware.
[The Borg is a LispMachine? I guess that makes sense -- if I was going to construct a ruthlessly intelligent distributed hive-species, I'd use LISP too.]
It's actually a giant ball of parentheses, not a cube. They only used a cube on the show because George Lucas would sue over similarity to the Death Star.
CategoryFunctionalProgramming
JanuaryThirteen