Array Deletion Example

A FunctionalProgramming show-down, which originated in HowCanSomethingBeSuperGreatWithoutProducingExternalEvidence:


Removing items from an array the procedural way; note that you can't modify a list while iterating over it. [Is this really procedural? Looks like there are objects in here.]
 var aList = getStuffFromWhereEver();
 var toRemove = new Array();
 for(var index = 0; index < aList.length; index++) {
   // someCondition is a function that you use to decide what to remove
   if(aList[index].someCondition()) {
     toRemove.push(index);
   }
 }
 // Walk the marked indexes backwards so earlier indexes stay valid.
 for(var index2 = toRemove.length - 1; index2 >= 0; index2--) {
   aList.splice(toRemove[index2], 1);
 }
[NOTE: Simply reverse the order of the loop. Decrement through the loop and items can be removed (and added!) while iterating through it.]

Not true - because you're removing random items, you don't know which ones will match, so they aren't coming off the end.
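For what it's worth, the "decrement through the loop" technique from the note can be sketched in Ruby (the list contents and the even? test are invented stand-ins for someCondition):

```ruby
# Sketch of reverse-order deletion: iterating from the end means a
# removal never shifts the indexes we have not yet visited, so no
# separate toRemove list is needed.
list = [1, 2, 3, 4, 5, 6]

(list.length - 1).downto(0) do |index|
  list.delete_at(index) if list[index].even?  # even? stands in for someCondition
end
```

After the loop, list holds only the odd entries, regardless of where the matches sat in the array.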

Now, the same thing but with higher order functions and first class functions...

 var aList = remove(getStuffFromWhereEver(), function(each){
   return each.someCondition();
 });
OK, so remove is a HigherOrderFunction that takes a list as its first argument and, as its second argument, a FirstClassFunction that is a UnaryPredicate. With remove, you'll never repeat the loops from above again; it's OnceAndOnlyOnce at a language level. There are many common things you do to collections, and all of them can be put into HigherOrderFunctions that take functions as arguments to specialize them. There is no argument: this approach is objectively better than the procedural approach, and languages that don't support it are crippled, period.
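A minimal Ruby sketch of such a remove function (the name remove is an assumption for illustration, not a standard library method):

```ruby
# remove: a HigherOrderFunction taking a list and a unary predicate,
# returning a new list without the items the predicate matches.
def remove(list, &predicate)
  kept = []
  list.each { |each| kept << each unless predicate.call(each) }
  kept
end

a_list = remove([1, 2, 3, 4]) { |each| each > 2 }
```

Every specialized deletion loop collapses into one call site plus a predicate.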

Just "mark" the items to be deleted, and then delete them as a second step, after the loop. If you have to use a more powerful data structure in order to have marks, such as say.......tables...... instead of arrays, then so be it. Showing how something better works with primitive structures is not a selling point to somebody who likes more modern structures. Even without tables, there are probably better implementations. Here is a FoxPro version:

  use myTable
  scan for <someCondition>
     delete
  endscan
  pack
It is about the same number of tokens as your "improved" version. Thus, your "leads to less code" claim has been proven false, and I am confident the other claims will prove false as well (if you define them better). I don't claim FoxPro is the ideal language, but when it comes to "structure chomping", it can often take very little code. It is sometimes mistaken for pseudo-code.
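The "mark, then delete as a second step" idea from above can be sketched in Ruby, with a hash per record standing in for a table row and an invented :deleted field playing the role of the built-in deletion flag:

```ruby
# Step 1: mark records that match the condition.
rows = [{ name: "a", qty: 0 }, { name: "b", qty: 5 }, { name: "c", qty: 0 }]
rows.each { |row| row[:deleted] = true if row[:qty].zero? }

# Step 2: "pack" -- physically remove the marked records after the loop.
rows.reject! { |row| row[:deleted] }
```

The loop never mutates the structure it is iterating; the actual removal happens afterward, mirroring FoxPro's delete-flag-plus-pack behavior.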

Under some conditions, the above can even be shortened to:

  use myTable
  delete for <someCondition>
We don't need the "pack" command because if the config is set right, other commands automatically "respect" the built-in record deletion flag. However, eventually "pack" may be needed to clean up the space taken by deleted records. For temporary tables, such is usually not a concern, for the entire table eventually goes away (some dialects do it automatically if a table is declared temporary).

Note that if I want to make my example handle any table, any function, and any condition, I could do this:

  procedure foo
  parameters tableName, condition, processFunc
  use &tableName
  scan
     if &condition
        do &processFunc
     endif
  endscan
The "&" does run-time code substitution. In FoxPro, the called routine "inherits" the parent's scope, which is roughly closure-like behavior. (Ideally, one could optionally switch this behavior off when it is not being used.) Pascal has a "static" version of this using "nested functions".

One of the advantages of using string substitution (eval()-like) for "higher order functions" is that the function name can be stored outside the program, such as in a database. String substitution is part of what makes languages such as TCL great "glue" languages. Function names and all kinds of "meta stuff" can be passed around between different programs in different languages (and databases).
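The "function name stored outside the program" idea can be sketched in Ruby with send, which looks a method up by its string name at run time (the method names and the id value here are invented for illustration):

```ruby
# The routine to run is chosen by a string that could just as well
# have been read from a database column or a config file.
def archive_record(id)
  "archived #{id}"
end

def purge_record(id)
  "purged #{id}"
end

func_name = "archive_record"   # imagine this string came from a table
result = send(func_name, 42)
```

This gives the "meta stuff passed around as strings" flavor without full textual eval.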

If you understood how closures maintain their environment, and how that is taken advantage of when using recursion, then you'd understand why eval() isn't a substitute for a closure and why HigherOrderFunctions can't simply be replaced with an eval call to a function. Your statement that "inherits the parent's scope" is "sort of closure-like" just shows you don't understand closures.
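The point about closures maintaining their environment can be illustrated in Ruby (make_counter is an invented example): each lambda carries the variables of the scope where it was created, something a string handed to eval later does not automatically do.

```ruby
# make_counter returns a lambda that closes over its own `count`
# variable; the environment travels with the function value.
def make_counter
  count = 0
  lambda { count += 1 }
end

a = make_counter
b = make_counter
a.call
a.call
first  = a.call  # a's captured environment has been bumped three times
second = b.call  # b has a completely separate environment
```

Two counters made from the same definition keep independent state, because each closure owns its own copy of the enclosing environment.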

It solves the problem at hand. If you are talking about other problems or mass genericity, then that is another sub-topic.

This entire digression about tables and FoxPro is moot. It misses the point badly. The idea here was to demonstrate that a simple task, like deletion from a list/table/structure/doohicky could be made smaller and easier to understand using HigherOrderFunctions. A showcase of their obvious strengths (and a case for why every modern language should include them).

The FoxPro version of table deletion might have a slightly shorter definition, terrific. Yay for FoxPro. The issue at hand is that HOF constructs shorten code drastically and allow for reuse.

Even without closures, they're powerful. For example, most structures in Ruby support a "collect" method. Collect takes a structure and returns another structure with all the entries that match a given criterion. The definition for such a method is simple; here is the trivial Array example:

 class Array
   def example_collect( &proc )
     temp = []
     self.each do |x|
       if proc.call( x )
         temp << x
       end
     end
     temp
   end
 end
This example is slightly longer, symbolwise, than your FoxPro definition, but it is a great proof-of-concept of how HOFs make powerful and generic code. We can use this (trivially longer) definition a million times over, for many different uses. For instance, we could define our array rejection in terms of collect and another HOF:

 class Array
   def example_reject( &proc )
     self.example_collect { |item| !proc.call( item ) }
   end
 end
The magic of HOFs makes this possible. This is why we are ranting and raving about them. It's why we claim they're so obviously good as to be almost a no-brainer for modern language design. When we invoke them, we get such a huge savings in code (and brainspace, it's way easier to think of structure rejection atomically) that the feature speaks for itself.

Nor are these concepts limited to simple structure control. New conditionals can be made. New looping constructs are trivial. Note that we used self.each in the example_collect code. It's iteration across an arbitrary structure, and to us it's free. It's easy to imagine making something more complex, like a framework for cryptographically secure network communications, which uses HigherOrderFunctions as a way to place your own protocol into the secure framework.
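A sketch of "new looping constructs are trivial": a user-defined loop built from a block in Ruby (repeat_with_index is an invented name, not a built-in):

```ruby
# A home-made looping construct: runs the block n times, passing the
# index each time, just like a language-level counted loop would.
def repeat_with_index(n)
  i = 0
  while i < n
    yield i
    i += 1
  end
end

collected = []
repeat_with_index(4) { |i| collected << i * i }
```

From the caller's point of view, repeat_with_index is indistinguishable from a control structure the language shipped with.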

Does this make the original statements and code clearer?

I think we are going to need a specific UseCase to settle this. It is not hard to make something like this with eval:

  newStructure = filteredCopy(oldStructure, filterCriteria)
Why do we need such a use case? HOFs are similar to eval in the sense that code gets conceptually moved around, but instead of moving strings you're moving actual functional units (which can entail a performance savings). HOFs are just a little cleaner (you don't need to worry as much about scope issues, either losing scope or gaining scope). In Lisp they play nicely together, and are used for different purposes. They really aren't in competition in most languages. Why exactly do we need to have some kind of showdown to compare them, since they're really meant for different things?

Re: "HOFs are just a little cleaner".

Note the word "little". This topic was motivated by claims such as "clearly better" and "great, not just good". If we are splitting hairs, then my original assertion that the bragging is way beyond the level of benefits is true. String-based meta-ing is simple because the language does not have to have HOF built-in, and string substitution can be used for ANYTHING, not just function substitution. Thus, it is more generic and wider reaching. It is a simpler concept that has more uses. Dedicated HOF may simplify a few things, but I find the need for them is not common enough to justify dedicated language features, especially when one uses tables instead of arrays. These array examples here are just reinventing queries from scratch. Rather than brag about making it easier to reinvent the wheel, reuse the goddam wheel instead. GreencoddsTenthRuleOfProgramming. -- top

Well, it's really hard to say eval is "simpler" than anonymous functions, since anonymous functions will follow your language's scoping rules (which are very similar across most languages) and eval scoping rules vary based on implementation (Lisp loses context, you say FoxPro keeps context, Ruby lets you keep OR discard context). Text-based substitution has a spotty history and is actually quite a bit more difficult, since you can accidentally create malformed programs (as in syntactically invalid). In a language with real HigherOrderFunctions, you don't have to worry about that.

Sounds like this is degenerating into another compiler-versus-interpreter type-checking battle. No need to repeat these kinds of arguments here. I don't need Eval-like stuff often enough to worry about those so far. The boundary of abstraction difference often does not fall on the function level anyhow in practice. There are often orthogonal factors such that a simple list of functions is an insufficient characterization of the repetition pattern that is an abstraction factoring candidate. See EmployeeTypes.

Further, HigherOrderFunctions can be combined more safely. You saw my example with the Ruby reject. Doing that in eval would be trickier. Ruby has both a powerful Eval and simple HOFs, and people use HOFs for that sort of thing every time because it's less confusing and safer. Going even further, higher-order functions allow you to compose logic at runtime. This is safer and easier than trying to nest code lexically and then calling eval. Eval is a kind of last-resort sort of thing in most languages, it is usually slow, inefficient, dangerous because of inadvertent text substitutions, and conceptually harder to deal with. HigherOrderFunctions offer nearly all the benefits (and then some) with none of those penalties.

Hence, we say they're superior, obviously good, and highly desirable.

Throw recursion into the mix, and HigherOrderFunctions and AnonymousFunctions/Closures are clearly enormously superior, because they carry their execution environment with them, something string-based substitution can't do at all. Evaling a string is really no match for executing a HigherOrderFunction and passing it a closure.

Again, I would like to see specific UseCases of "enormously superior", not just BrochureTalk.

This isn't brochure talk. This is the real deal, Top. I gave you Ruby code showing a nice piece of code that used a previously defined, HOF-dependent function to do something different. Key to realizing its superiority over your own examples: notice that the original example_collect was written without example_reject in mind. To properly make those in an eval scheme, you would need to adopt a specific style (in nearly every eval case I know of) and meticulously avoid name conflicts. It would be difficult.

I am not fully sure what your Ruby code is doing, but can't one get a "reject" by simply putting a "NOT" before an "accept" criteria?

Also, consider the extended uses I suggested. New conditionals and looping constructs. This isn't brochure talk either, a simple look at the Ruby standard library will give you a wealth of examples. The Lisp standard libraries are more difficult to read (for most people), but they're rife with it.

TCL can make new block constructs using string substitution.

Think about how you would make an extensible cryptographically secure networking service (two components, client and server) that merely create a secure corridor for another protocol. You could use anonymous functions (or named functions called by anonymous functions) to process the data according to an arbitrary protocol without ever even thinking about that protocol, or coding in a specific scope-aware style. Oh, and make it fast enough to be used for real-time, low-latency communications.

I am not a systems programmer, and thus don't relate to network protocol examples very well. May I ask for custom business application examples? Thank You. Perhaps the characteristics of our domains are different enough to warrant different approaches. Network protocols and interfaces are less subject to the crazy whims of PointyHairedBosses, for example.

The performance, syntax, and manipulation benefits of HOFs are obvious if you consider them, and we've given you several examples complete with reasoning. Past this point, you're on your own. We can't make you realize anything, Top. We can only give you the knowledge to make such realization possible.

Your reasoning is often circular and your examples assume change needs which may be artificial. I cannot tell if they are artificial without looking at specific situations.

If you're so dead set on a UseCase, please name a case in which your table-oriented-eval-coupled example is superior, because thus far you have not (your High Level language example vs. C-like example pseudocode isn't really valid or compelling). I'm pretty sure the HOF version of any such case will be easier, faster, and cleaner. There are VERY few situations in which the use of Eval is necessary, outside of runtime code generation (which is what eval is for, specifically, so it's not really fair for HOFs to have to meet on that ground).

I never claimed they were objectively "superior". And TOP is not the topic. Further, as I already stated, I don't use Eval that often in practice, for reasons already given. It is more useful for debunking toy dispatching examples than for practical stuff. I already listed some toy examples I would like to see the OO and FP people take on.

[Give up dude... top's not listening at all.]

Try something new, such as realistic UseCases outside of SystemsSoftware. I'm listening, but you are only making "trust me" claims. Do show-me, not trust-me.

[Those are realistic to any real programmer.]

Always argument-by-intimidation instead of argument-by-use-case. Here is a quote from one of my webpages that describes you guys:

Why not some use-cases outside of systems programming? What the hell are you afraid of? Are you some kind of realism-coward who wants to hide in a corner and perform MentalMasturbation all day?

Who's mentally masturbating? Surely not HOF-users. You probably aren't used to using them because the languages you work with don't provide them, but lots of people use those patterns. How about sorting? I sort things all the live-long-freakin-day, because sorting is a common problem (and a common way to reduce other problems). Instead of having to write a sort for each data type and set, I can write ONE sort that uses a HOF predicate (at least, in a dynamically typed language or a language with C++-style templates). This is exactly what many standard libraries do.
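The "ONE sort" claim can be sketched in Ruby, where the comparison is passed as a block and the sort routine itself never changes (the record data here is made up):

```ruby
# One generic sort, specialized at each call site by a comparison block.
people = [{ name: "Ann", age: 34 },
          { name: "Bo",  age: 19 },
          { name: "Cy",  age: 50 }]

by_age  = people.sort { |x, y| x[:age]  <=> y[:age]  }
by_name = people.sort { |x, y| x[:name] <=> y[:name] }
```

Same sort machinery, two different orderings, no per-type sort routines written.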

Please don't pretend that HOFs are some kind of eggheaded conehead feature. They're not. They're everywhere, if you have the wherewithal to see them.

[Wait wait wait, I can just see it now.... top's reply... "why not just say 'order by fieldName', what do I need to sort arrays for". Never mind the massive overhead of a db call, or the fact that all the data is already in memory, let's make the db do everything, shit, let's just throw out programming all together. LOL, Bwahahahahahaha.]

Noooow you're gettin' it. You are solving problems that often I don't have because I avoid fricken arrays and use a real state and attribute management paradigm instead of 1950's Fortran leftovers known as Arrays. Higher abstractions, such as CollectionOrientedProgramming, often require more horsepower. Databases can use RAM cache too, BTW. Not every RDBMS has to be bloated and formal like Oracle.

Err, people still need to solve the problem, Top. Just because you get to play hooky and get sorting as a free keyword doesn't mean sorting class problems don't exist anymore. We're not ignoring existing technology, we just can't leverage a database in most of our coding.

I agree that one has to use more primitive techniques to build higher-power tools.

[You don't agree with me. That's not what I said. Twisting other people's words is a child's game, Top. If you'd like to be treated as such, instead of as a peer, then please continue.]

{I swear to God, Allah, and Dr. Codd that I have no intention of "twisting words". If there was a misunderstanding, it was purely accidental. Why am I always guilty-until-proven-innocent?}

Nor have you shown that such an abstraction is truly desirable or universal (or even really workable).

Didn't you agree that you spend a lot of time worrying about sorting?

[ No, actually I don't, since with HOFs in my language of choice, all I have to do is define my sort criterion. Just like your paradigm of choice, incidentally. What's funny about your resistance to HOFs is that they're really quite useful in the context of table-oriented programming. If you'd stop trying to evangelize and just admit that maybe someone knows a neat trick that you don't, you might realize some of the uses with your chosen style of programming. ]

Show me. I'm all eyes.

Already did, but because the word "Array" was found within 60m of my example, you dismissed it out-of-hand. Adapt some of our examples. Try applying the idea to something you're working on right now. Take your favorite language, and pretend it has HigherOrderFunctions too, then solve something. I gave you an example from my line of work, easing calculation of weather data. That's about as real life and application oriented as you're gonna get, and a very practical use-case.

Remember the BlubParadox. We all have to work hard to think beyond the language we work in and consider things in a more conceptual way, otherwise we can't adapt to new (potentially timesaving) tools.

Maybe you guys are Blubbed by being stuck with arrays and only BigIron databases all your life. Until I see scenarios relevant to my domain, I have little choice but to conclude you are blowing smoke. The consistent pattern in your past examples is that you use your gizmos to work around the limitations of primitive data structures. You have cried Wolf too many times.

It's not our job to do your homework, Top. We've said, "This is good. We do lots of neat things with it." We've listened to your TableOrientedProgramming evangelism for a long time (most of us with a surprisingly temperate degree of patience). Some of us have even gleaned insight from what you say because (keep with me, Top) we try to apply this to our domain. You could at least try to extend that professional courtesy back. Take a moment and try to take it forward yourself.

I always look for better ways to do something. But apparently I can't apply your claims with significant improvement to my domain, and neither can you, it seems. Thus, their application to my domain is still an unknown. If you want to sell your GoldenHammer to everybody and their dog, not just systems programmers, you need to find better scenarios.

Let's see if I can summarize the disagreement here properly: One is saying that the array deletion examples are generic enough to apply to many typical problems, but the other side is saying that they are not really tested until a sampling of real-world situations are "plugged into" them, such as train scheduling or accounts payable scenarios. One is arguing that a template-like solution is a sufficient demonstration, while the other is saying that the devil is in the details and it needs to be tested against such details.

I do indeed believe in RaceTheDamnedCar. "Lab" examples are often not good enough because they often make assumptions that don't pan out in the real world. I've seen a lot of nifty stuff in toy/lab examples that either can't bend to the real world, or nobody has shown how to bend them. WaterbedTheory is probably the most common reason the lab samples fail to scale, but not the only one. -t

Let's head that off then. Someone needs to write the code that lets you sort. HOFs in these languages make it easy. Incidentally, for a more application-level use of HOF-objects: I have an application that needs to process long blocks of weather data and calculate things like wind shear, atmospheric pressure, and interpolations between points. Instead of writing one function per task, I wrote one function that iterated across the data (two points at a time) and called a function object on each one. Now, I can write new operations (like calculating the Rise Rate of the weather balloon, a brand new requirement) in just a few simple lines of code without thinking about the iteration. That's the kind of savings we're talking about.
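The weather iteration described above can be sketched in Ruby (with_pairs, the sample readings, and the rise-rate math are invented stand-ins for the real LMCO code):

```ruby
# Iterate across the data two points at a time, calling the given
# block on each consecutive pair -- the caller never sees the loop.
def with_pairs(points)
  points.each_cons(2) { |prev, curr| yield prev, curr }
end

# readings: [height_ft, seconds] samples from an imaginary balloon.
readings = [[0, 0], [300, 100], [800, 200]]

# A brand new operation (rise rate) is just a few lines, with no
# iteration bookkeeping in sight.
rise_rates = []
with_pairs(readings) do |prev, curr|
  rise_rates << (curr[0] - prev[0]).to_f / (curr[1] - prev[1])  # ft per second
end
```

Wind shear, pressure, or interpolation would each be another short block handed to the same with_pairs.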

[EditHint: Perhaps the weather example needs to be moved to its own section or topic. It is scattered about.]

I would like to see a sample schema and some pseudo-code, if you don't mind. I don't see where weather balloons relate to interpolation. And, what are your "tasks"?

Okay, I'll be super clear, but this isn't a problem we used tables to solve (nor do I believe it would benefit from such a solution), so no schema, and the pseudo-code would not be helpful, so none for you.

A Weather Balloon sends back data every 2 seconds (or so). This is a lot of data, and the client doesn't want to edit in this space. We interpolate between points that span a 100' boundary (dividing the data into 5000' boundaries). That's the first interpolation link. We are playing with a scheme, but right now it's linear. I might switch it to a Hermite curve if that's proven inadequate.

Next, the data the balloon sends back is sometimes bogus. These things are a product of the NAFTA treaty; they fail a bit all the time. So the guy who is going to say if the rocket-launch is okay needs a meteorologist to "correct" the data. They fill in holes (via linear interpolation across gaps) and linearly or bi-linearly interpolate across arbitrary ranges.

Also, we need to calculate Wind Shear, Rise Rate, and Atmospheric Pressure from this data.

I wrote one iteration function that takes a functor (HOF simulated in C++, common stuff found in the STL) and iterates across the set two points at a time, calling the functor. The functor can choose to alter the data in any way it desires as it iterates, or not.

Then I wrote Functor Objects for each operation I've described. The longest was 10 lines, because I could program without worrying about any kind of iteration state (the iteration function provided most of it) or anything else. Just the task at hand.

And I've been able to non-trivially extend the system to calculate new data as I go, without cluttering the code.

Is that detailed enough?

I am still not fully relating to the example, but it seems like something that EVAL can handle:

  function Iterate(funcName) {
    while (complexIteration) {
      Eval(funcName . stdParams);
    }
  }
The only complaint I can see is perhaps speed for super-large datasets.

That is something like it. It looks more like (please forgive the C++, it's what I get paid to work in, and this has been altered so I don't get sued for giving away LMCO code):

 void DataBlock::with_range( double start,
                             double stop,
                             PermuteFunctor& functor )
 {
   map<double,Record>::iterator point = _bData.lower_bound( start );
   map<double,Record>::iterator end = _bData.upper_bound( stop );

   if( point == _bData.end() ) { throw no_such_record_exception(); }

   // One-argument version of functor invocation, for the first point.
   Record prev = functor( point->second );
   ++point;

   // Two-argument version, for all subsequent points.
   while( point != end ) {
     prev = functor( point->second, prev );
     ++point;
   }
 }
Functors are just objects with the operator() method overloaded for one and two argument versions. This version is actually pretty fast too. These datasets involve several million records (two seconds, for 150,000ft, it adds up fast).

With this framework, I can write (and have written) diverse operations on the data without worrying about the ranges, the iteration, or even what's going on with the dataset.

A faster-executing solution is perhaps to use a case statement:

  function Iterate(funcName) {
    while (complexIteration) {
      process(funcName, otherParams);
    }
  }

  function process(funcName, otherParams) {
    select on funcName {
      case 'foo': {........}
      case 'bar': {........}
      case 'blah': {........}
    }
  }
Some people balk at long case statements, but frankly if they are well organized, I see no real problem with them. Blocks is blocks, but that is another battle.

Personally, I would probably first try to create a "clean" data set that has all the interpolation and cleaning applied, and then process it. Otherwise the process of interpolating and cleaning the data is mixed together with processing, which I would rather separate. See SeparateIoFromCalculation.

Well, at least you're trying now.

The processing and the "cleaning" are separate processes. They occur in separate places, as different functors. I get the side benefit of using the same iteration function for both, shrinking my code.

As for the rest, I thought of that too. I didn't do it because of composition. The user decides what calculations are done and which are not, this is a series of preferences. When they decide, I compose several functors into one larger functor. I sort the functors (sorting again!) based on a simple priority system (generate new points, interpolate old points, perform data calculations) so that I do things in the right order.
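The compose-and-prioritize step just described can be sketched in Ruby: each user-selected operation is a lambda tagged with a priority, and the composite simply runs them in sorted order (all names and data here are invented for illustration):

```ruby
# Each checkbox the user ticks contributes one operation with a
# priority: 1 = generate/fill points, 2 = interpolate, 3 = calculations.
Operation = Struct.new(:priority, :func)

selected = [
  Operation.new(3, lambda { |data| data.map { |x| x * 2 } }),   # data calculation
  Operation.new(1, lambda { |data| data.map { |x| x || 0 } })   # fill gaps
]

# Compose the chosen operations into one: sort by priority, then
# thread the dataset through each function in series.
composite = lambda do |data|
  selected.sort_by(&:priority).reduce(data) { |d, op| op.func.call(d) }
end

result = composite.call([1, nil, 3])
```

Adding a fourteenth operation is one more entry in the list, not ten more lines in a case statement.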

Since this composition can change on a whim (and it will evolve even more once our automatic data correction algorithms are complete), a more flexible solution than a case statement has to be put in to allow the code to grow. Already I'd have 13+ items in that case statement, which is roughly 130 lines of code for one case statement. That's bad.

Further, since these compositions are nearly arbitrary (it's a bunch of checkboxes), I didn't want to have to write one of the following:

The anonymity of the functions works strongly to my advantage here.

At this point I could not make a specific recommendation without knowing your users and requirements and change patterns. Eval may perhaps be too slow for some domains. I won't dispute that. Not everything is good everywhere. There are 3 approaches I have at my disposal:

The chances of none of them working satisfactorily are small. (See "perfect storm" below.)

It is not small. It's 100%.

Have I done it, have I actually stumped Top? :D

[Nah, you said photoshop, so he'll call it a plugin... then a device driver, therefore not applicable to his domain, thus you've proved nothing, then he'll want to AgreeToDisagree because he doesn't have any real argument, since he doesn't really understand everything you're saying. But I give you an A+ for effort!]

A+? If I had an underling that gave me roundabout requirements documents like this, I would send them back to school before they got their first paycheck.

Another approach that I have not seen explored is to combine them before compiling. For example, building a big case statement from snippets stored in files, then putting it into the compile script.

PageAnchor "perfect_storm"

Further, I don't think it is that likely to have several dozen functions in that Weather Example that all happen to be mutually exclusive. After a bunch, it would be more likely that some need/want execution in the same loop, requiring something more sophisticated, or at least changing the very nature of the problem altogether.

Thus, four conditions have to be in place at the same time for my suggestions not to work:

I will agree that there may be an occasional PerfectStorm where my suggestions don't work sufficiently, but the claim seems to be everyday improvements from your suggestions, not just for perfect storms. No language or paradigm does everything well. A perfect storm can probably be found for any approach. (But not being there, I cannot verify that it really is a perfect storm in the weather case, no pun intended.)

I believe the original author of the weather example somewhere admitted that more details could not be provided due to a Non-Disclosure Agreement (a "shut-up" clause in an employment or project contract). Thus, it is insufficient for detailed probing and should thus not be used for further exploration or debate beyond those benefits normally allowed for a toy example.

A Non-Disclosure Agreement is another way of saying, "I can't provide thorough evidence here because I'd get sued if I did." Good, it's now clear that you CANNOT give thorough evidence here. You are essentially admitting you cannot present a full case and complete the job. QED.

-- top

Other than possible speed complaints, I don't see what the problem is with Eval. As far as case statements being too long, split each case block off into a separate function.

Ahh top. Speed IS a consideration (even after reducing the record space to 100' boundaries, 120000 ft of data reduces to 1200 large records, and so far we've only talked about Wind). I also have to do stuff for the temperature domain, which is even more complex!

But let's ignore speed considerations. Eval would mean more code! In order to use Eval, we would either have to take our constants and regex them into contextless code or (in an eval system that preserves context, an oddity) we would have to munge together these contexts. Each new context would have to be aware of and avoid the previous ones. The functional semantic (passing in args) implicitly provides this for us, and makes composition as simple as a list of functions which are executed in series, without caring about string manipulation or context management. Less code is good, right?

I need a far more detailed description of your problem space here. I cannot make recommendations for code organization without knowing what is going on. If your context is that different (a new requirement not mentioned above), then perhaps an entirely different approach is needed. Maybe associative arrays or tables would serve context management better. It hints that you are doing everything backasswards, but I cannot identify where without having more specifics in front of me. You are using code to define requirements instead of words, and that is just wrong if clean communication is desired.

[ I already use associative arrays heavily. I can look up any record by height and time. You can see that in my sample code's iteration above (note the _bData.lower_bound() call, which is a dictionary lookup). Basically, all data is already tabled! I'm already leveraging that aspect pretty heavily.

As for the "Everything Bassackwards", all I can say is that my peers can read my code and say it is the cleanest design they've ever seen for such a "murderous" problem. This coming from someone who worked for years exclusively with databases. I rather like the design, since it leads me to a very declarative approach to coding the system. The entire app is currently under 4k lines of code! This includes the Qt gui, the graphing code, and code for managing multiple flights at once. And it runs pretty snappy even on a P4 1GHz (which is the lowest-quality machine we have).

That's the best I can offer. Don't you think maybe you're just being stubborn now? ]

{No. You are asking me to accept AuthoritativeEvidence. Top does not accept authoritative evidence. That is a universal constant.}

As for your case idea, remember that I need to compose these options! A case statement wouldn't let me do this (unless I had an enum for every possible combination of operations). If I had something like Lisp's "cond" construct I could do this, but it would still involve keeping a bool for each. Instead of that, I can just construct and save objects which represent user-defined operations. They manage themselves cleanly and allow for extensibility.

What do you mean by "compose"? Not mutually exclusive? [ Correct, they may be combined in any arbitrary grouping as defined by the operator. They might want to recalculate shear while interpolating across data gaps. They might want to mark a region of data bad while setting all wind values to 0. They might want to generate missing points, interpolate all points in a range, recompute shear and rise rate, then recompute pressure. ]

You might also suggest this domain is overly complex. Sadly, these are the requirements of the US Air Force. We said we'd do it, so we need to meet their requirements. Your tools do not address such a problem in any reasonable way. May I offer you this neat new tool, HigherOrderFunctions? They solve this problem neatly and cleanly, with a minimum of code (and fuss) and with a great deal of extensibility. They model this kind of variable, complex BusinessLogic beautifully. Significantly better than any solution you have offered. Could I be any more real-world, practical, and firmly rooted in the domain you work in?

I don't know, you keep flip flopping your requirements. I have not seen a specific requirement that flunks Eval nor case statements (from a code perspective). [ The composition requirement throws case statements out the window. Please read above where I address you directly. As for Eval, both speed and code-bloat make it impractical (although possible) at best. This is Extreme Business Application Programming, Top. No more minor leagues. We work on huge chunks of data, we have to work fast and our code needs to sparkle. ]

[Are you kidding... the guy just gave you a great description of his domain, the problems faced, and how he solved them.]

The "context" thing is brand new. Just whip up some sample data and show Eval choking, okay?

It's not new. We touched on it earlier, but tabled it for a bit. I mentioned that Eval-ing code requires that the code be placed in the "Context" of the executing program. Languages handle this differently. Most remove ALL context, like Lisp and Ruby. You suggest FoxPro keeps existing context (which is both good and bad, I think). Either way, you need to be concerned about: first, the speed cost of parsing and compiling code at run time; and second, how values from the program get placed into the eval string.

I am sorry, but I have not seen any problems or "bloat" with an Eval solution outside of speed (which I cannot test). I will give you credit for speed just to move beyond that annoying topic. But Moore's law is on my side in the long run. I am not sure what you mean by the second point of placing values in the eval string. Generally one just uses variables. The eval's and exec's I am familiar with are very similar to a run-time "include" statement. You can stick them just about anywhere. Maybe even the repeated loop statements can be executed that way rather than try to put the function inside the loop (I have never tried that, though). They "see" the context of where they are stuck just like the code actually stuck there. (FoxPro does not have an Exec-like statement, but gets around that by having functions inherit caller scope. TCL has more powerful such tools, but FoxPro's is usually good-enough.) If Lisp and Ruby lack contexted-based Eval's or Exec's, then they are crippled in that respect. At least make it an option. If we continue with this example, I would have to play "20 questions" to ascertain the specific requirements and relationships, which would probably end up violating your NDA, getting us both in the slammer.
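For what it's worth, at least one of the languages named here does offer a context-carrying Eval: Ruby's eval accepts an explicit Binding argument. A minimal sketch (make_context is a hypothetical name, not from the discussion above):

```ruby
# Ruby's eval can be handed an explicit Binding object, which carries
# the local variables of the scope where the Binding was captured.
def make_context
  greeting = "hello"   # a local variable in this method's scope
  binding              # capture that scope as a first-class value
end

ctx = make_context

# The eval'd string sees `greeting` because we passed its context along.
puts eval("greeting.upcase", ctx)   # prints "HELLO"
```

So "contexted" Eval is available there too; whether it is a good idea is exactly what is being argued below.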

You are so bound up in FoxPro that you can't see outside it. That's too bad. Hopefully you can grow a bit as a programmer and see what we're talking about at some later date. Until you're willing to take your head out of the sand, this discussion will continue to SpinLock. Your argument about case statements lies broken at your feet. Your eval reasoning is so hopelessly coiled around an outdated single-paradigm language that you can barely have a rational conversation about it. You're unwilling to even consider new tools, merely because of mental inertia.

Yeah, that's right. I am closed-minded and evil, and you are open-minded and good. You did NOT identify any specific, documented problems with eval(). In fact, it appears you don't even understand how evals outside of Lisp and Ruby work. How do you know they won't work if you have never used that kind?

You're not concerned with making great software. You're concerned with making TopMind right. And that is a profound waste of my time, because all I wanted to do was share a design that used HigherOrderFunctions to a significant degree with great success. Great software. You're so busy trying to attack non-table-oriented programming that you can't even extend the professional courtesy of thinking about something critically on your own. We have to walk you through it like a truculent college freshman, or you start screaming about how unrealistic and academic it all is.

The only people you're hurting with such an attitude are the people who have to use or maintain your software. And me, because you wasted my time, and a fair amount of it too.

If you can identify SPECIFIC problems with my code, I will mend my ways. Vagueness is Evil!

[ How about this? You're refusing to examine a new tool. You have a small number of tools you evaluate as a GoldenHammer, and you ruthlessly push square pegs into round holes with them. While doing this, you loudly decry the nobility of this action. You're doing this for our own damn good, right? Until we all use the paradigm you understand, we're not really working. Never mind the fact that none of your proffered techniques can even scratch the surface of my problem (and all but eval was considered, because eval is not available in C++).

Now that you've run into this problem, your first step, before even considering the tool I used to solve the problem cleanly, is to try and redefine the problem into an easier form. Because, well, these requirements sound too exacting and complex! Barring that, you say they're unclear. Guess what? They are, but I'm not giving you any of the unclear ones. Unclear requirements are part of Business Applications; your code has to be flexible so that you can adapt to what your customer wants, even before they're done deciding about it. You didn't even seem to really read what I wrote about the order of the operations (nor did you reread it carefully; I told you the order outright!).

I'm done with this, TopMind. I did my best, and I've been a hell of a lot more patient and detailed than anyone should expect me to be, but your hardheadedness has defeated even my optimism. You're not trying to learn, you're trying to fight. I don't have enough to gain by continuing to hammer the same points into your skull until I finally break through. Good luck TopMind. You need it. ]

You did NOT "do [your] best" if you exclude specific scenarios and requirements that bust Eval. You are expecting me to accept anecdotal evidence at face value. If that is "hardheadedness", then I am fricken guilty as charged. --top


Now let's try to bring this discussion to some closure. What was this discussion supposed to prove? That higher order functions are a language feature pretty much unrelated to the relational model?

Or was it intended to prove that adding function values to FoxPro would be an improvement to FoxPro? I don't understand your objections at all, TOP. Why not have function values in your FoxPro or whatever? Would it ruin FoxPro's purity?

It was claimed that such features were a *significant* improvement. I don't see anything significant.

[Of course you don't, because you've never worked with them, you have no experience in the matter, and don't appreciate the advantages offered over eval, due to that complete lack of experience. It apparently never occurs to you that sometimes you have to use something for a while to truly grok it.]

Compile-time checking and perhaps speed advantages are about the only thing I see, and those are incremental things. Eval is simpler conceptually and applies to more than just functions. Why clutter a language with gimmicks unless a strong justification is shown?

[That statement just proves you don't know what you're talking about, how about not offering up opinions on things that you've never used. How about listening to people who actually have experience in the matter and taking their word that it's a huge advantage?]

I am taking your word for nothing. Show us code and CLEAR requirements. Why have this discussion if you are not going to make a case by demonstrating something concrete, rather than "you would JUST agree with me if you had my experience"? That's anti-scientific. You don't need discussions to make claims about your grand experience. That's a resume, not a discussion nor a demonstration. -t

Iteration can be improved upon. Here's a Common Lisp example using a brief form of iteration.

 (coerce (loop for i across #(1 2 3 4 5) when (oddp i) collect i)
         'simple-vector)

answer> #(1 3 5)

Here, #(1 2 3 4 5) is syntax which denotes an array containing 1...5. It says that when an element is odd, collect it into some unnamed list, which will then be returned when loop is done. That list is then coerced into a "simple vector". (A kind of array without wizzy features like fill pointers.)
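For comparison, the same odd-element filter in Ruby is a single call to the higher-order method select (a sketch, not part of the original discussion):

```ruby
# select takes a block (a predicate) and returns the elements
# for which the block yields true.
odds = [1, 2, 3, 4, 5].select { |i| i.odd? }
puts odds.inspect   # prints "[1, 3, 5]"
```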

I personally think this means two things.

A meta lesson is that macros are good, since loop is just a macro some guy worked on. This means a language which supports macros well (with debugging features, etc) offers a way to extend the language, kind of like a patchless opensource. But of course that is a controversial conclusion. (Some people are worried what their coworkers will do with the power of macros. And I don't have the experience to know whether this is a true concern.)

[I admit I wasn't too clear about the purpose of this example either. If it's just about how HigherOrderFunctions are better than conventional functions, nothing about iteration, then please feel free to delete my above CL example. -- TayssirJohnGabbour]

For thoroughness, here is an SQL-based example. I am assuming that complex processing or analysis of the "nodes" needs to take place before we know what is to be deleted. Otherwise we could just use:

  DELETE FROM foo WHERE blah blah
Complex processing version in PHP-like syntax:

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Foo");
  while (getNext($rs)) {
    $someCondition = [complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Foo where primaryID in ($toBeDeleted)");
This assumes numeric ID's, but some dialects can handle quotes around numbers also if we wish to make it more generic. The default string delimiter is a comma. If you want a different delimiter, then use a call such as:

listAppend($toBeDeleted, $rs['primaryID'],'|');
(I am not necessarily using PHP's list function conventions here.) -- AnonymousDonor

And then consider what happens when you need to delete based on some other condition:

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Foo");
  while (getNext($rs)) {
    $someOtherCondition = [more complex processing here]....
    if ($someOtherCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Foo where primaryID in ($toBeDeleted)");
And then if you have to delete on another table on a different condition:

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");
You can see where this is going. DuplicatedCode up the wazoo. CutAndPasteProgramming?. Massive OnceAndOnlyOnce violations.

Now compare the version using PHP's approximation of HigherOrderFunctions (it's really more like eval, it doesn't close over the lexical environment, but it's usually GoodEnough):

  function deleteWhere($function, $tableName) {
    $toBeDeleted = '';  // empty string list
    $rs = query("select * from $tableName");
    while (getNext($rs)) {
      $someCondition = $function($rs);
      if ($someCondition) {
        listAppend($toBeDeleted, $rs['primaryID']);
      }
    }
    query("delete from $tableName where primaryID in ($toBeDeleted)");
  }

function someCondition() { [complex processing here].... }

function someOtherCondition() { [more complex processing here].... }

function someThirdCondition() { [still more complex processing here].... }

  deleteWhere('someCondition', 'Foo');
  deleteWhere('someOtherCondition', 'Foo');
  deleteWhere('someThirdCondition', 'Bar');
Each new usage is only a function definition and a function call. Compare that to writing out a half dozen lines of QueryAndLoop code each time. And if you change the database (PHP requires different API calls for each database vendor), you only need to change one function instead of every single query. -- JonathanTang

That is not really much different from an Eval()

Wrong, that's a lot different, and a lot cleaner.

, except maybe runs faster. However, note that the function setup takes just about as much code as the loop setup, so there is not really much, if any, code savings either way.

Wrong again, it's a huge difference in the amount of duplicated code, and function setup is nowhere near as much code as that loop setup; plus it's much simpler.

Like I mentioned in the above weather example, usually there are ways to simplify commonly used loops. If I had more domain details, I would bet money I could do it there too.

Apparently you just don't know what good code looks like at all,

Knock it off with the jabs, will ya? It ain't helping, and I am twitching to fight back. I can say mean things about your fad dogma shit also.

even when it's shown to you. Jonathan's php above is far far better than yours. deleteWhere used as a HigherOrderFunction makes much more sense than repeating that same query loop code every time you need it.

Either you repeat function setup code or you repeat loop setup code. I don't see any way around it. I suppose you are assuming functions are always created no matter what out of dogma (see LongFunctions). But I am assuming that something will be in-lined if it is not used more than once. Otherwise, we may have to create a complex parameter list. One-size-fits-all parameter signatures are not that common in my experience.

[As I've said before, your experience is quite limited, because having common function signatures for all this stuff is trivial: if you have no-arg, single-arg, and double-arg Predicates, Functions, and Procedures, that covers any scenario you can dream up, and allows functional composition and complete, easy reuse of HigherOrderFunctions like the one above. There's never a need for a complex parameter list; parameters are passed via lexical scope, i.e. closures, and don't need to be named, another one of those fancy features that you don't think we need. And the lack of lexical scope is why eval simply isn't a sufficient replacement for closures, because without lexical scope, you'd have to pass a bunch of parameters.]

The loop code is written once. Post that, predicate functions are the only thing required. All of these will be short and simple, and typically less code than even the tightest of the tightly written loops.
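As an illustration of that claim, here is a minimal Ruby sketch (delete_where and the sample rows are hypothetical names, not part of the examples above): the loop plumbing is written once, and each new deletion rule costs only a one-line predicate.

```ruby
# The loop/delete plumbing, written once and only once.
def delete_where(rows, &predicate)
  rows.reject! { |row| predicate.call(row) }
  rows
end

rows = [
  { id: 1, status: "stale" },
  { id: 2, status: "fresh" },
  { id: 3, status: "stale" }
]

# Each new rule is a short predicate; no loop code is repeated.
delete_where(rows) { |r| r[:status] == "stale" }
puts rows.length   # prints "1"
```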

Not by much.

[By tons!]

(alleged) Top's version of 10 loops

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");

  $toBeDeleted = '';  // empty string list
  $rs = query("select * from Bar");
  while (getNext($rs)) {
    $someCondition = [more complex processing here]....
    if ($someCondition) {
      listAppend($toBeDeleted, $rs['primaryID']);
    }
  }
  query("delete from Bar where primaryID in ($toBeDeleted)");
Jonathan's same 10 loops assuming anonymous functions

  function deleteWhere($function, $tableName) {
    $toBeDeleted = '';  // empty string list
    $rs = query("select * from $tableName");
    while (getNext($rs)) {
      $someCondition = $function($rs);
      if ($someCondition) {
        listAppend($toBeDeleted, $rs['primaryID']);
      }
    }
    query("delete from $tableName where primaryID in ($toBeDeleted)");
  }

  deleteWhere(function($each){[more complex processing here]....}, 'Foo');
  deleteWhere(function($each){[more complex processing here]....}, 'Foo');
  deleteWhere(function($each){[more complex processing here]....}, 'Foo');
  deleteWhere(function($each){[more complex processing here]....}, 'Foo');
  deleteWhere(function($each){[more complex processing here]....}, 'Foo');
  deleteWhere(function($each){[more complex processing here]....}, 'Foo');
  deleteWhere(function($each){[more complex processing here]....}, 'Foo');
  deleteWhere(function($each){[more complex processing here]....}, 'Foo');
  deleteWhere(function($each){[more complex processing here]....}, 'Foo');
  deleteWhere(function($each){[more complex processing here]....}, 'Foo');
It's quite obvious that Jonathan's is far better. Top, if you can't see that, you have problems. If you think that's a 3-token savings, then you'd better recount, buddy.

RIGGED! You excluded parameter management.

I told you, we don't need to pass parameters, the lexical environment does that for us.
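A small Ruby sketch of what "the lexical environment does that for us" means here (threshold is a hypothetical variable): the block refers to threshold directly, so the HigherOrderFunction itself never needs an extra parameter slot for it.

```ruby
threshold = 10

# The block closes over `threshold` from the surrounding scope;
# select only ever sees a one-argument predicate.
big = [4, 8, 15, 16, 23, 42].select { |n| n > threshold }
puts big.inspect   # prints "[15, 16, 23, 42]"
```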

Those things most likely will not work in isolation. They will interact with their code environment. And, you made yours look smaller by putting it all on the same line. Further, likely one will not need every piece of the loop versions. Some may not have complex processing, some may not have deletion, some will have different WHERE clauses etc. You assume far too much uniformity to be realistic. MutuallyExclusiveCategoriesDontScale is true of "lists" of function alternatives also. I see such M.E. lists break down in both classifications/taxonomies *and* function decomposition. I call it as I see it in the real world. If you observe more regularity (M.E. lists), then we will just have to AgreeToDisagree. --top

No, you assume that I can't make it that uniform, I'm telling you you're wrong, I can.

Don't "tell" me, show me with a realistic scenario. I used to try those kinds of things with Eval, and they don't really help when real-world variations come in. When you start dealing with all those variations, and the parameters, yours will be just as large. You will possibly have AttributesInNameSmell or massive parameter overloading also.

Again, wrong. Don't assume that because you can't figure out how to factor out those variations, someone else can't. I do it daily. Those examples are quite realistic and aren't rigged. You just don't understand functional programming and implicit parameter passing via closures using lexical scope; if you did, you'd realize it's quite easy to rid yourself of passing parameters manually for those kinds of operations.

Okay, I'll give you the scope. But, using a language that allows a routine to inherit parent's scope, such as FoxPro (see examples above), one can do the same thing without explicit HOF:

  =deleteWhere("myFunc1", "complex processing 1", "foo")
  =deleteWhere("myFunc2", "complex processing 2", "foo")
  =deleteWhere("myFunc3", "complex processing 3", "foo")
But even with such ability, my claim that sufficient uniformity probably lacks in the real world stands. (I should point out that FoxPro can only execute one statement in an Eval clause. However, one can still execute a function that still inherits caller scope if they need more than one statement.)

FoxPro is a very limited language; you're still missing the fact that lexical scope is key. Inheriting the parent's scope doesn't cut it, because many HigherOrderFunctions, after being called, curry your function around to other HigherOrderFunctions, or make recursive calls to themselves, and each call creates a brand new scope to prevent conflicts with variables from previous scopes. There is no shortcut; you must have lexical scope to realize the full power. Simply inheriting the parent's scope won't work. Your claim that sufficient uniformity lacks in the real world comes from your lack of experience with functional programming, and is flat out wrong. Functional programming makes these things trivial; see the other dude's lisp example below, and give him some time to fix it up a bit, it was a first cut.

But here's where the major reuse comes in. Those predicate functions could be reused for a different predicate-taking function on a structure, like one that collects. The parameter list for these is always the same, they take one argument which is the object to act upon, returning a boolean. --AnonymousDonor
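That reuse can be sketched in Ruby (the predicate and data are hypothetical): one single-argument predicate plugs unchanged into both a rejecting and a collecting HigherOrderFunction.

```ruby
stale = ->(r) { r[:age] > 30 }   # one predicate, reused twice below

rows = [{ age: 10 }, { age: 40 }, { age: 50 }]

kept    = rows.reject(&stale)   # drop the records that match
matches = rows.select(&stale)   # collect the records that match

puts kept.length      # prints "1"
puts matches.length   # prints "2"
```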

If you can show it significantly simplifying typical biz-app code, I will give you credence. Until that, it is only anecdotal claims, of which I am highly skeptical. You claim that I am doing it all wrong and dumb, then show me the right and smart way with your actual shorter code.

Let's see it shorten: [NoteAboutGeocities] (ChallengeSixVersusFpDiscussion)

That's a report writer, not much of a program, has no real behavior to begin with. It just presents the results of sql queries, strawman.

Actually, I see a way with HigherOrderFunctions to make that process quite trivial. If I have some time later, I'll sketch out a lisp version that does it. Whenever we talk about moving and collecting logic, HOFs usually provide an elegant solution.

One has to see the entire thing to see if there is a net reduction. Simplifying one part may complicate the unshown parts.

There is a reporting example in SeparateIoFromCalculation that may relate to the issue of HOF's and loops. See also VariationsTendTowardCartesianProduct.

Here is a first cut at the link above in lisp, leveraging HOFs. This was a 30 min attempt, does not meet all the requirements, and in retrospect I shouldn't have made a list of functions; I should build a new one that does both. But it is a start. On Monday I'll revisit and refine; I am on vacation in San Francisco now!

 (defvar *queries* nil)
 (defvar *criterion* nil)
 (defvar *db* (list '(:reporter "Jango" :title "I rule!")
                    '(:reporter "Mace" :title "No you don't!")
                    '(:reporter "Boba" :title "Daddy! No! Why??")))

 (defun run-queries (queries)
   (loop for (string func) in queries
         do (format t "~%~a: " string)
         collect (funcall func (read-line))))

 (defun passes-criterion? (item)
   (loop for func in *criterion*
         unless (funcall func item) return nil
         finally (return 't)))

 (defun get-matches ()
   (loop for record in *db*
         when (passes-criterion? record)
         collect record))

 (defmacro make-query (name-str &rest body)
   `(progn (push (list ,name-str (lambda (value) ,@body))
                 *queries*)))

 ;; Example query. Keep in mind we could shorten this further.
 (make-query "Reporter's name?"
   (lambda (x) (equal (getf x :reporter) value)))

Please read the whole thing before working up a bunch of code. Please read my comment above before getting worked up. This is a trivial start to what could be a very real implementation. It uses HigherOrderFunctions. It doesn't have to be perfect, it took 30 minutes.

Looping over Eval's done on lists/tables could achieve something similar, but by itself is only a small portion of the sample app. It's also missing the UI. Your code appears to also be reinventing a very limited competitor to SQL. The sample is not really about how to roll-your-own query language, but rather more about gluing together different services and "interface" languages, such as HTML and SQL. In biz apps that's often what one does: glue together existing tools, services, DSL's, and API's into a usable unit for a specific purpose. -t

Pending Summary:

Powerful tools available that can do most of what HOF's do: No they can't, please wait till the discussion is over before summarizing it wrongly.

The only thing I have not seen them do is create custom control structures. However, languages like TCL seem to do it by treating blocks as strings to be executed by the command call. The uplevel command allows it to execute a string (block) in the caller's scope. Yes, they are probably slower than HOF; I will give HOF the speed benefit. However, those kinds of things are not usually a bottleneck in my experience and domain. If they are too slow for massive weather data processing, so be it. Don't spoil the simplicity for the rest of us.
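For the record, creating a custom control structure with blocks is routine in languages with HigherOrderFunctions. A Ruby sketch of a hypothetical repeat_until_stable construct, built as an ordinary method:

```ruby
# A user-defined "control structure": keep yielding until the
# block returns the same value twice in a row.
def repeat_until_stable(start)
  prev = start
  loop do
    curr = yield(prev)
    return curr if curr == prev
    prev = curr
  end
end

# Example: integer-halve repeatedly until the value stops changing.
puts repeat_until_stable(40) { |n| n > 1 ? n / 2 : n }   # prints "1"
```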

You win on speed, I win on simplicity.

Wrong, you lose on both counts. HigherOrderFunctions are simpler, more elegant, faster, and safer. You don't seem to know what simplicity is.

How is:

calling a function that reads in a string, parses it, establishes appropriate name bindings to the current environment, then finally compiles/interprets and executes the resulting code. [eval/execute]

'simpler and more elegant' than:

calling a function. [higher-order-functions]


50 years from now I doubt they will give a fudge about it being slower. The bottlenecks are more and more in the I/O, not processing. -- top

I have suggested that functional "tricks" often don't save that much code because the vast majority of repeated loop setup code can be reduced to 2 lines or less, and my code averages about one loop every 15 lines or so. Thus, any savings in code from functional-based loop simplification will only be a few percent's worth most of the time, and this is only if there is a lot of regularity. Again, this is "good", but not "great", as some have claimed.

The general pattern for repeated loop setup is:

  myMap1 = loopSetupFoo([local vars as parameters])
  while (myMap2 = getNextFoo(myMap1)) {
    ....
  }

{ And for comparison, three HOF lines can do many times more work:

  myMap.each { |x, y| puts "#{x} => #{y}" }  # print
  myMap.sort { |x, y| x > y }                # sort
  myMap.reject! { |x, y| x.isNotSpecial }    # filter

There goes your "Loop code is short" suggestion out the window. Of course, it totally missed the point to begin with... }

("Eval" techniques may reduce it further, but are not considered here.)

This pattern follows the observation that we need one "initialization" call and one repeated call for loop iteration. Sometimes they can be combined, but in many interpreters/compilers this results in unnecessarily repeated execution per loop iteration.

I have not seen many situations where this two-line technique cannot reduce repeatedly-used looping patterns from several lines per instance to 2. In all likelihood, the above weather example can also be likewise reduced.

This solution is a common external iterator pattern. We see these all the time in the C++ StandardTemplateLibrary. They are falling out of style because, even with shortened loop setups, they are a pain for anything more complex than simple iteration. Your method would provide no savings, in the long run, with operations like collection or filtration. It is far easier to make "verb" methods like "collect", "filter", "reject", "sort" that accept logic and handle the gritty details themselves. Since these operations are called commonly, the net code savings over time is substantial.
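A Ruby sketch of those "verb" methods in action (the sample data is hypothetical); each verb accepts plug-in logic and hides the iteration details itself:

```ruby
names = ["carol", "alice", "dave", "bob"]

# reject filters, sort orders, map transforms; no explicit loop setup anywhere.
result = names.reject { |n| n.length > 4 }
              .sort
              .map { |n| n.capitalize }

puts result.inspect   # prints "[\"Bob\", \"Dave\"]"
```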

Your other eval solutions suggest it's safe to modify the local environment to set things up. This is really a rather scary operation. What if someone changes your environment? Implicit reassignments happen, things that someone else may not realize. It's all well and good to say, "Eval gets access to the enclosing scope, so I can create variables as needed," but it's dangerous. Lispers can do this too, with macros, but it's dangerous and people usually try to avoid such "VariableCapture?" because implicit behavior is unsafe and confusing in the long run.

HOFs can do a lot of things. We've been focusing on common uses, such as simplifying and abstracting iterative tasks (like doing something for each item in a collection), for providing plug-and-play logic for common tasks (criterion for sorting, methods for printing), for providing dynamic callbacks for more complex systems (secure network protocol wrapper), or for providing a generic framework for arbitrary operations (such as the weather example above).

While it is true that Eval can do many of these things, eval does it slower, with less overall safety, and with somewhat more complexity when it comes time to compose or swap data. Using Eval for many of these tasks is the application of a GoldenHammer, because you can get exactly the same end result faster, with less code, with more genericity, and with more safety. In languages that do static type checking (such as C++ or Haskell), HigherOrderFunctions provide this as well.

Eval can be a better solution, but its "SweetSpot?" of ideal applications is much narrower due to speed, syntax and safety. In languages that provide both (and differentiate between them), HigherOrderFunctions are usually the preferred method for this reason (see Ruby's and Python's standard libraries). Still, people use Eval when they need to generate or dynamically load code; that's what it's there for, and there it excels. A HigherOrderFunction doesn't even address that issue. Common idioms in many languages eval code which defines (and if possible, compiles to native/bytecode representation) functions for further use.
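The trade-off can be seen side by side in Ruby (a sketch with hypothetical data): both versions compute the same result, but the lambda is parsed once and closes over limit, while the eval string must splice the value in as text and is re-evaluated from source on every call.

```ruby
limit = 3

# HOF version: checked at parse time, captures `limit` lexically.
over = ->(n) { n > limit }

# Eval version: the value has to be pasted into the code string.
code = "n > #{limit}"

puts [1, 2, 3, 4, 5].select { |n| over.call(n) }.inspect   # prints "[4, 5]"
puts [1, 2, 3, 4, 5].select { |n| eval(code) }.inspect     # prints "[4, 5]"
```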

TCL, which has come up several times now, is somewhat of a degenerate case for this debate (and is rather unsuitable for it), because in TCL programs are strings which are programs (you can check out HomoiconicLanguages for more detail on this). As such, in TCL Eval is much closer to a HigherOrderFunction-like meme than in other languages like Lisp, Ruby or Python.

When we talk about loops and functional tricks, this is often done as much for reusability as it is for explicit code savings. Examine the following Ruby code:

  myarray = ["Header1", "Header2", "Header3", "Header4"]
  myfile = File.open( "somefile.txt" )
  myresult = [] # Blank array

  my_hof = lambda { |x| myresult << x } # A HOF to append items to an array.

  myarray.each( &my_hof ) # Append each array item to myresult via the HOF
  myfile.each( &my_hof )  # Append each file line to the array

  # print out every item in the array on its own line
  myresult.each { |x| puts x }
Note how the interface for the File iteration is identical to the array iteration; we use the same HOF for both, too. Pretty cool, huh? If we were defining functions (or classes, or what-have-you) dynamically, at run time, based on unknown input (trust your input? I hope so), then Eval would be the way to go.

{First of all, I tend to use RDBMS, or at least MinimalTables for such "structures" and thus don't care much about added array power. You are essentially bragging about putting DatabaseVerbs on arrays. Second, "filtering" is often not very generic because we cannot filter in just one spot. Rough example:}

PageAnchor Mandy (just to give it a unique name)

Procedural version:

 while (foo = getNext(bar)) {
   filterThis = false
   if (condition1) {
     doSomething()
     filterThis = true
     if (condition2) {
       doSomethingElse()
       filterThis = glob()
     }
   }
   if (! filterThis) {
     process(foo)
   }
 } // end-while

Functional version of that, exactly half the lines of code, same logic:

 bar.applyMatches(function(each){
   if(!condition1) return true;
   doSomething();
   if(!condition2) return false;
   doSomethingElse();
   return glob();
 },
 function(foo){ process(foo) });

I would, however, consider learning CommandQuerySeparation; decision making and processing do not have to be intermixed, there's always a cleaner way, and this sample is nasty.

[It might be half as much code, but using embedded returns can be problematic at times, and considered a "smell" by some (although I am not adamant about it). See below (PageAnchor Mandy2) for a shortened example.]

Those aren't embedded returns, they're GuardClauses, and they're a common pattern to clean up that rat's nest of nested conditionals you used. They return from the anonymous function, and most consider it far cleaner than nested conditionals (for small functions) because they clearly signal "done", without forcing me to continue reading code.
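The guard-clause point can be seen in a small Ruby sketch (the conditions are invented stand-ins):

```ruby
# Nested-conditional style: the reader must track accumulated state
# across three levels of nesting before learning the answer.
def keep_nested?(n)
  result = false
  if n > 0
    if n.even?
      result = true
    end
  end
  result
end

# GuardClause style: each line disposes of one case and signals "done".
def keep_guarded?(n)
  return false unless n > 0
  return false unless n.even?
  true
end

puts (1..6).select { |n| keep_guarded?(n) }.inspect # => [2, 4, 6]
```

The two predicates compute the same thing; the guard version just lets you stop reading as soon as a case is settled.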

[I could do the same (embedded returns) to reduce code size, and keep the scope using Pascal-style nested functions or FoxPro-style caller inheritance. For example, if we wanted to print out a list of rejected ones for debugging or logging, the multi-return points would have to be reworked; I would have less rework. And, your "half" count is an exaggeration because your example is a bit wider code-wise. I copied this example to LinesOfCode where it is shown with alternative metrics that raw line-counts are misleading. Here I reworked it using your "return" technique. This assumes that function scope inheritance is the default:]

PageAnchor Mandy2

 while (foo = getNext(bar)) {if (can()) process();}
 function can() {
   if(!condition1) return true;
   if(!condition2) return false;
   return glob();
 }

This appears to me to be within about 5% of the right-hand version above code-size-wise.

{Decision making and processing is often intermixed in the real world. We cannot know all filtering conditions up front, and thus often must make them in the course of doing something.}

Actually Top, I'm bragging about being able to put database verbs on ANYTHING, even things that it might not normally make sense to do so on (each_line for a network socket, for instance). By building up such verbs, we can treat low-level structures at a higher level, without losing any control when we want it. It gives us more than that, but it's a start.

As for your attempt at an example, it seems to be very poorly formed, like you're trying to do more than one thing at a time. Maybe you should think it through? You seem to want one loop to do more than one thing at a time. I assume you'd like to filter across a precomputed range. We could do this more elegantly if I had a bit more time, but here's a version that does the same thing:

 shouldFilter = false
 my_map.reject! do |key,val|
   # ... do something to shouldFilter here, based on key or val, conditionally.
   shouldFilter # Returns shouldFilter
 end

You tried to make it look very complex, but all you're really doing is some additional logic to determine when to filter (and possibly taking other actions, which is bad style in any paradigm, but is not ruled out here). Do you have any more spurious examples for us to debunk?

I am just trying to point out that in my observation, it is often tough to separate things because they tend to be inherently coupled to each other. It would indeed make life simpler if we knew all the filter conditions before entering a loop or block, but often we don't. (Often if the filtering is simple, the WHERE clause does the filtering.) And, often they interact such that there may be filter A and filter B, each of which is finally applied based on other information we collect in the process. If you disagree and claim there is more regularity than I see, then so be it. It is anecdote-versus-anecdote. -- top

Can you give a real-world example of this? -- dlf

I had to parse a funky log file once and rejected lines as I parsed further down and progressively obtained more information. For example, it might reject a line because it is missing a date field(s), but later reject lines that have an invalid date range.
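That kind of progressive rejection still fits the HOF style; here is a hedged Ruby sketch (the log format, field layout, and cutoff date are all invented for illustration):

```ruby
require 'date'

lines = [
  "2004-01-15|login|alice",
  "|login|bob",             # missing date field -> rejected early
  "1990-01-01|login|carol"  # date out of range  -> rejected later
]

# First pass: reject lines with no date field at all.
dated = lines.reject { |ln| ln.split("|")[0].to_s.empty? }

# Later pass, using information we only have after parsing further:
earliest = Date.new(2000, 1, 1)
valid = dated.reject { |ln| Date.parse(ln.split("|")[0]) < earliest }

puts valid.inspect # => ["2004-01-15|login|alice"]
```

Each stage of rejection is its own `reject` call with its own predicate, so later stages can use knowledge the earlier stages produced.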

What's really interesting is that HigherOrderFunctions allow programming in a more declarative fashion using verbs like foreach, select, reject, detect, remove, inject, and collect, exactly the kind of thing you'd think top would actually prefer over hand rolled loops. Odd that he's arguing so much for the code heavy hand written loops.
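In Ruby those verbs look like the following (the employee data is invented for illustration):

```ruby
emps = [
  { name: "Ann", dept: "IT",    salary: 50_000 },
  { name: "Bob", dept: "Sales", salary: 40_000 },
  { name: "Cal", dept: "IT",    salary: 60_000 }
]

it_names = emps.select { |e| e[:dept] == "IT" }  # filter, like WHERE...
               .collect { |e| e[:name] }         # ...then project, like SELECT
total    = emps.inject(0) { |sum, e| sum + e[:salary] } # aggregate, like SUM

puts it_names.inspect # => ["Ann", "Cal"]
puts total            # => 150000
```

No hand-rolled loops appear anywhere; the iteration lives inside the verbs.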

Indeed. And since he's such a huge fan of declarative programming, it's an odd position to take. It reinforces my perception that TopMind is rejecting these ideas mostly on the grounds of inertia, not by any rational reasoning.

Like I keep saying, most of those are trying to sell reinventing-the-database using arrays. Rational thinking is not reinventing something that already exists and is more scalable than arrays.

[Get off the array thing top, it's a straw man, HigherOrderFunctions are used for tons of things besides looping over stuff, see BlocksInRuby for a few examples of common use.]

Tell me it is more logical to reinvent non-scalable wheels. Vulcan that! Nor am I "rejecting" them. I am only suggesting their benefits are overrated by some FP zealots, who apparently never mastered imperative programming and databases because they use crappy non-FP examples to compare. If you have a good database system, you don't need to use files and arrays that often. Thus, bragging about how your techniques make files and arrays interchangeable is falling on deaf ears. You are solving the wrong problem correctly. You claim I repeat myself too often, but you still cannot predict my responses and point of view. -- top

You sure wish we were re-inventing, but we're not. We're making sublanguages, control structures, and powerful libraries and applications, extending into many realms. You're busy worrying about making reports.

DomainPissingMatch-based intimidation again?

You claim the benefits are overrated, but your examples are poor and deliberately worded to give yourself a non-existent advantage. Whenever one is discredited, you run to a new tack on the topic, only to face the same slap.

Odd, that is just about how I perceive your "tactics". See "knee in ear" analogy.

Now, as predicted above, you're returning to, "All this is unnecessary if you work with tables." This position, tenuous as it is, is tantamount to concession by you, TopMind, because you can't back it up.

At this point it is anecdote-versus-anecdote on both sides. You say my examples are unrealistic, and I say yours are unrealistic. Only a field investigation team can settle such.

Whenever people point out application domains that these techniques don't allow you to handle with ease, you claim they're "too low level," even when it's clearly in the business application domain. Or you claim your approach is easier, although that's sheerly a matter of your perception (we've shown you how we made these things easy, and you say, "No no no! I'm just sure a case statement will save the day!").

I presented the case statement as one possible alternative above. Nobody has found any objective problem with it. The claimed problems were based on anecdotes, not shown code. That weather example is not very good anyhow because the reader cannot verify most of the claims made and doesn't know the users' actual needs. When I ask questions, I get the equivalent of "RTFM". You assume your writing is clear, but it sometimes is not. I can point out specific examples if you wish to explore RTFM complaints more.

Your pride is blinding you to a valuable tool. A tool that is just as applicable in programming central to databases as to network sockets.

Perhaps if you used databases more, you wouldn't need network sockets and files as much. Like I have said many times, CompaniesHireLikeMinded. In a DB-centric shop, there are fewer data files and less socket programming to deal with. Maybe I live a sheltered life among RDBMS shops; the "Hunchback of Codd". Why not? DB's are wonderful tools that abstract a lot of "structure and attribute handling" stuff that others appear to reinvent from scratch over and over, and inconsistently at that. Showing how your techniques allow you to tack DatabaseVerbs onto arrays, files, and sockets is an example of this. The next shop will tack on a whole *different* set of verbs/operations (and still not relational). You guys keep performing the same dance to different music without realizing it. -- AnonymousDonor (TopMind?)

[If you like the database so much, then you should love the fact that HigherOrderFunctions will let you wrap a database api around anything, even when it's not database related, and can't be. You work on recordsets all the time, top; why don't you have a set of common higher order functions that apply to recordsets? If you did, you wouldn't have any patterns in your code, because you'd be able to do just about anything with a one liner. You should at least wrap this up...]

 myMap1 = loopSetupFoo([local vars as parameters])
 while (myMap2 = getNextFoo(myMap1)) { ... }

[...as...]

 function forEach(rs, aFunc){
   while(row = getNextFoo(rs))
     aFunc(row);
 }

[so that you can do... ]

 forEach(loopSetupFoo([local vars as parameters]), function(each){ ... });

[and avoid creating variables and having to pump out the same while loop a thousand times. It may not look like much, but over time it saves tons of code, especially when you get all your basic tricks and idioms wrapped up into HigherOrderFunctions so you don't have any patterns left in your code. Given the way you program... I'd do this...]

 function forEach(sqlString, aFunc){
   var rs = runSql(sqlString);
   while(row = getNextFoo(rs))
     aFunc(row);
 }

[so you could just do...]

 forEach("Select * from someTable", function(each){ ... });

[at the very least, it makes working with resultsets much simpler and cleaner, no setup code at all.]
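The suggestion above might be sketched in Ruby as follows; run_sql is a hypothetical stand-in for whatever database API is in use, and the canned rows are invented for illustration:

```ruby
# Hypothetical stand-in for a real database call; a real version
# would execute the SQL and return a recordset.
def run_sql(sql)
  [{ id: 1, name: "widget" }, { id: 2, name: "gadget" }]
end

# The wrapper: all cursor setup and loop boilerplate lives here, once.
def for_each_row(sql)
  run_sql(sql).each { |row| yield row }
end

# Call sites shrink to one line each, with no setup code at all.
for_each_row("Select * from someTable") { |row| puts row[:name] }
```

Every new query-and-loop in the program becomes a single call to `for_each_row` instead of a fresh copy of the while loop.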

(Perhaps above should be moved to QueryAndLoop.)

Huh? Tell me how a database would help with creating:

You work in an RDBMS shop; fine, if it puts bread on the table, good for you. But don't assume that everyone else faces the same problem domain that you do. Those are all projects I've worked on, either at various employers or in school courses. Only 2 others (FictionPublishingExample and an intranet calendar application) benefitted from an RDBMS. Only 1 of the above (the ray tracer) did not benefit from objects.

Most of us like objects and HigherOrderFunctions because we've found them widely applicable to the kind of work that we do. If they're not useful for you, fine, don't use them. But that doesn't mean that the rest of us are automatically ReinventingTheDatabaseInApplication. -- JonathanTang

[His pride has kept him from learning anything in the two years I've seen him posting.]

I am surprised you used the word "pride" instead of "arrogance". Your diplomacy skills are improving (slightly). The bottom line is that your claims and examples don't ring very applicable to the IT world as I observe it. They might save a line or two of code here and there, but nothing revolutionary. You keep using words such as "clearly better" and "major improvement".

[Any language that uses HigherOrderFunctions IS clearly a major improvement over the exact same language without them; it's that simple, top, and it has nothing to do with what domain you work in. It's an extra tool in the box that has a huge effect on how you program and how much code you write; anyone who's actually used them will agree with that. You are the one that doesn't have experience with them. You can't really speak intelligently about something you don't ever use, now can you?]

I am sorry, but I have yet to see a "killer example" from you. I will grant you speed and compile-time checking over string-centric techniques for things such as creating one's own control blocks, but I have not seen any convincing code-reduction examples from you guys. Further, you seem to be solving problems that I don't encounter very often. In summary:


Not Shown:

Now, where do you disagree with this summary? -- top

There were clear examples given of how it creates less code, you just didn't understand them.

One does not have to understand code to know whether it is shorter. I can do the same thing with Eval and other dynamic language tricks. If the advantage is something other than code size, then you are not being clear. Your reuse claims are for things that I don't use very often in the first place. You are trying to sell a refrigerator to an Eskimo.

There were also examples given on how it would help "your" database patterns, making it very relevant; you also didn't understand them, so your summary isn't accurate because they were shown. You don't see a killer example because you don't want to, it's that simple; you don't want to learn anything, you just want to defend what you know. I'm not explaining it anymore; you don't seem to have the willingness to learn anything new.

Either it is less code or it is not. Are you implying that I am hallucinating code size???? You appear to be trying to change the subject away from code-size as a metric. -t

I will consider reuse claims, but only in the context of a semi-realistic biz example where I can make some informed assessment of probabilities of the re-use scenario. Some of the scenarios given here are about making networking tools and device drivers, etc. I'm not challenging those domains because I have no experience on how likely one is to need to reuse X for B after using it for A. (And I can usually explain the reason why something is unlikely in the biz domain, such as describing the usual alternatives or other factors that complicate reusing something as-is.) -t

To clarify it for me, does the HigherOrderFunction enable the second example?

 Keys = Select.Data(Entity.Name)
 Key = Remove.Field(Keys); * Removes Key from Keys
 Until Key = EOF

 Keys = Create.Array.Iterator(Select.Data(Entity.Name))
 Iterate(Keys, Get.Object.Code("** Code for each key"))

 Iterate(Create.Array.Iterator(Select.Data(Entity.Name)), Get.Object.Code("** Code for each key"))
I have taken a copy of the above question to HigherOrderFunction wiki - it's not really a question about SQL --PeterLynch

Close; see if I can clarify with some pseudocode showing both sides of the issue. First, we want reusable iterations over lists of anything, even recordsets that come from databases. So forEach takes any list and a one-arg function, and applies it to each item in the list; that's the HigherOrderFunction.

  function goobleGorp(aList, aFunction) <-- just the method signature for simplicity

  function boobleGoobles(aGooble)  <-- specifies the amount, notice aGooble as input
    goobleGorp(gobbleList,         <-- call to goobleGorp with list from anywhere, even db
      function(each)               <-- anonymous one arg function, required by goobleGorp, applied to each gobble
        each.wobble += aGooble;    <-- notice aGooble is from the parent's scope
      end)                         -- creating a closure, preventing us from needing to pass aGooble
  end                              -- to the anonymous function; it is captured for free, allowing
                                   -- goobleGorp to always use the same method signature for its function
                                   -- argument, making it totally generic and reusable, while still
                                   -- completely flexible and simple

this would commonly be formatted as a one liner
  function boobleGoobles(aGooble)
Although not applicable in all cases, the above can be entirely done in SQL
  query("update Emps set salary = salary + $amt");
[It can't if salary is a complex computation that involves much business logic, don't try and apply sql, it isn't applicable to the sample, and is rather silly to think I don't know how to do a simple update statement, or that an update statement is in any way comparable to the above sample.]

Every time I suggest that your examples "assume too much regularity (simplicity)" to fit real problems, you get on my case. Now you are doing the same thing. -- top

[No, the whole point of that sample is to show you how to achieve regularity by avoiding parameter passing with implicit closures, enabling generic reuse of functions. I'm showing you that your objection, "assume too much regularity (simplicity)", is incorrect. It was not about doing a simple update to salary.]

I mostly just see query language reinvention up there (ExpressionApiComplaints). You seem to have difficulty finding an example that does not rely on arrays and does not reinvent query languages or databases. There seems to be a pattern to your choice of examples. -- top

[Here, I fixed the context so you can grasp it without thinking about employees and salaries]

I think the change will confuse readers because replies and code comments are based on the original example. Besides, examples are easier to relate to when they have something specific instead of Gloobs and Foos.

[Now read it really slowly, until you figure out that the example is about how to pass parameters to functions, without explicitly passing them.]

I can do the same thing with languages that allow functions to inherit scope from the caller.

[You seem to have difficulty reading code and understanding clear explanations. Look at the aGooble variable; see how it's used by the goobleGorp HigherOrderFunction, without being passed to it. You always complain about not being able to achieve uniformity; well, this is how we do it. It's not about arrays, it's not about query languages; it's about how to do something that saves you a lot of work when programming: "Not passing variables".]
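A minimal Ruby illustration of that point; for_each never receives an_amount as a parameter, yet the block carries it along (the employee data is invented):

```ruby
# A generic HOF: it knows nothing about raises, amounts, or employees.
def for_each(list)
  list.each { |item| yield item }
end

def give_all_raises(employees, an_amount)
  # an_amount is never passed to for_each; the block closes over it,
  # so for_each keeps one generic signature for every caller.
  for_each(employees) { |emp| emp[:salary] += an_amount }
end

staff = [{ salary: 100 }, { salary: 200 }]
give_all_raises(staff, 10)
puts staff.map { |e| e[:salary] }.inspect # => [110, 210]
```

Because the closure captures an_amount for free, for_each stays totally generic while the call site stays one line.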

Note that one can do something similar with Eval:

  function looper(theList, theExpression) {
    for i = getNext(theList) {
      eval(theExpression)
    }
  }

Remember that some languages let a routine inherit the parent's scope, while others (TCL) have multiple kinds of Eval such that one can optionally execute the expression in the caller's scope. But again, in practice I don't find a practical need for such very often. The regularity needed to take advantage of it does not exist often enough. And it is often possible to simplify the loop if the loop is used often; that way one can put the loop inside the functions instead of the other way around. Your code-saving claims seem to assume wordy loops. Here is another pseudocode version that makes scope inheritance optional:

  function looper(theList, theExpression, scopeLevel) {
    for i = getNext(theList) {
      evalOnStack(scopeLevel, theExpression)
    }
  }

Here the scopeLevel is the stack level, similar to the "upVar" TCL parameter. Zero is the current stack (inside of "looper"), 1 is one level up.
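Ruby can approximate the upVar trick with Binding objects; a rough sketch (the looper name and the expression string are invented for illustration):

```ruby
# The caller hands looper an explicit Binding, so the evaled
# expression runs in the caller's scope, much like TCL's upvar/uplevel.
def looper(the_list, the_expression, scope)
  the_list.each do |i|
    scope.local_variable_set(:i, i)  # publish the loop variable upward
    eval(the_expression, scope)      # evaluate in the caller's scope
  end
end

total = 0
i = nil # declared so the caller's binding contains it
looper([1, 2, 3], "total += i", binding)
puts total # => 6
```

Passing `binding` explicitly plays the role of the scopeLevel parameter: the callee decides nothing about scope; the caller does.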

Top, you're not entirely incorrect. We have been giving you lots of examples involving arrays and query languages. That's only because you demand such examples, though. You say all others are unrelated to your sphere of interest. You can't complain about the playing field when you're the one who chose it.

But, that aside, it's still worthwhile to reiterate our point. Using HigherOrderFunctions allows us to extend a language in ways that the designers have not foreseen. Kind of like macros, but safer and applicable in different areas. By using them, we can extend our languages via libraries in very interesting ways. There's a real advantage to this. The ability to safely extend the language and compose logic is a powerful and desirable one.

Imagine if you could do it in SQL. You could extend your language in new directions to specifically address your database domain. I know the idea sounds attractive to me. -- dlf

I will believe it when I actually SEE it. I am from the MentalStateOfMissouri. -- top

[You have seen it... and you screamed RIGGED, yet it wasn't rigged, that 10 sample function above clearly demonstrates that you are wrong.]

Then stop wasting time and test them out. Free test-drive, Top. JustDoIt. -- dlf

So you are admitting that you cannot document in text your claimed benefits? If I ever figure out what the hell is so great about what you are bragging about, but have such a hard time articulating the benefits, then spank my sorry tongue-tied ass.

Top, we've shown you some pretty damn compelling features of HOFs. You just refuse to see them as such.

You are not even clear on why they are better. You would make a shitty trial lawyer. You would fail in a court of law (barring use of anecdotal evidence).

Whenever we give a specific example, you seem to lack the knowledge to give it context. Whenever we enter into a domain you're more familiar with, you shout that we should just use tables and SQL. It's frustrating; you are the ultimate HostileStudent.

Yeah, that's right. I am the bad guy and you are the good guy, with brilliant, airtight evidence.

We're at a point where we're unsure what you would consider "compelling." I know what I consider compelling.

  Step 1: Declare exactly what you are about to demonstrate
  Step 2: Demonstrate with code or code examples.

My weather app was compelling to my peers.

[As far as I can tell, there's nothing that top will find compelling to prove to him, anything that he doesn't already know. If he doesn't currently use the technique, then as far as he's concerned, it's useless, or so it appears.]

I have had code complimented also. We don't need yet another anecdote battle.

However, you seem to lack the context/experience to read the code and you won't listen to anecdotal evidence (for good or for ill), so we're rather out of options. The best thing to do is to AnswerTheQuestionThroughExperience?.

Quite frankly, this is best anyways. We can tell you they're great all day, but until you really experience them you may not be able to identify with the benefits. Computer Science's history is filled with people being unable to immediately grasp the benefits of new features, but time showing they were indeed a good idea. You yourself claim you're ahead of the industry by being TableOriented?. Is it not possible people are ahead of you in other aspects of the discipline?

Just because you do not see it does not mean evidence is not there.

There is very little to distinguish between a religious fanatic and a GoldenHammer fanatic.

By taking a closer look, on your own terms, I am certain you will find your take on what's so great about HigherOrderFunctions. -- dlf

Top, for all your whining about incomplete evidence, your argument thus far has been weak, at best.

The burden of evidence is on you, not me. Everything you presented can either be done with eval or dynamic function scope, or is something I don't need in my domain. (For some I did not bother with a string/eval solution because they demonstrate needs I don't encounter, but one is probably available.)

We have shown you code that gives a demonstrable savings and increases flexibility.

Show me more realistic "flexibility" scenarios, not foobar ("lab") examples. Foobar examples only show what is possible, not what is practical.

Your response has been to say, "No, it doesn't." Even when we show how your approach would scale, versus the HOF approach (which turned out very bad for your approach), you still said, "No it doesn't."

By "scale", are you talking about speed? I gave you machine speed credit already. If you mean the case list, you did not give enough details.

At some point, Top, we've exhausted due diligence. We've told you. We've shown you. You demanded code for that "Example 6" and I provided a skeleton that did what it asked for. You never even responded to it. I've shown you requirements that your techniques cannot cleanly address and you've ignored that too.

I made a complete, runnable example. I cannot compare a small skeleton with a complete running example.

Code reuse on a grand scale. Modular design. Language extension. Pattern Simplification. HigherOrderFunctions give you these benefits. We've shown you examples. You don't even address the issue directly, you keep reverting to tables.

If tables avert the need for them, then tables avert the need for them. What else can I say?

The issue is not about TableOrientedProgramming; it's about whether HigherOrderFunctions are an improvement on languages without them. Tables are orthogonal. Heck, even closures are orthogonal.

Maybe not. I originally thought so also, but your examples seem to be either reinventing databases or automating things that databases already provide, re your "array query language".

You've given me Example 6 and I've met it with code. What more do I have to do? How many examples can we lay out side-by-side to show that HOFs result in clearer, shorter, faster code? -- dlf

I would like to see code of a full, runnable example. If you don't have time for challenge #6, just say so and we will leave it at that.

Maybe we should make a little competition. My version vs. yours. We'd have to talk about ground rules, but it would be interesting to see who could make a smaller, faster, clearer solution. I'll pit my toolbox against yours any day of the week Top. -- dlf

What kind of ground rules do you have in mind? It already runs. The screen-shots on that page are of real output. If you have any questions about it, feel free to ask. And, I will try not to slip into "RTFM" mode, unlike Mr. Weather Example. (Note that I did not use any Eval tricks in that example, although maybe doing such could reduce the code-size a few percent.) One decision you may have to make early is whether you will use a database of some kind, or say s-expressions.

I am pasting back a copy of the original "employee raise" code examples, if you don't mind. -- top

  function giveAllEmployeesRaise(anAmount) <-- specifies amount to raise employees, notice anAmount as input
    forEach(employeeList,                  <-- call to forEach with list from anywhere, even db
      function(anEmployee)                 <-- anonymous one arg function, required by forEach, applied to each employee
        anEmployee.salary += anAmount;     <-- notice anAmount is from the parent's scope
      end)                                 -- creating a closure, preventing us from needing to pass anAmount
  end                                      -- to the anonymous function; it is captured for free, allowing
                                           -- forEach to always use the same method signature for its function
                                           -- argument, making it totally generic and reusable, while still
                                           -- completely flexible and simple
this would commonly be formatted as a one liner
  function giveAllEmployeesRaise(anAmount)

ExBase version:

 USE empl
 REPLACE ALL salary WITH salary + raiseAmt

SQL version:

 UPDATE empl SET salary = salary + @@raiseAmt   // parameter syntax varies per vendor

This is probably showing my ignorance, but how close are StoredProcedures to being HigherOrderFunctions for databases?

[Stored procedures are the database version of functions. If you could pass a stored procedure as a parameter to another stored procedure, including its execution environment (in other words, not its name as a string), then it'd be a HigherOrderStoredProcedure?; or if a stored procedure could return another stored procedure as its result (not its name, but an actual executable procedure), it'd be a HigherOrderStoredProcedure?. But as far as I know, that doesn't exist.]

Re: Why do my enemies always talk about "real behavior"? [I hope you realize how humorous this statement is, Top]

I don't see the humor. You guys keep talking about the "power of behavior", but never demonstrate anything real. I only see reinventing DatabaseVerbs for each and every class, or MentalMasturbation that does not apply to real problems. Related: IsDeclarativeLessExpressive

Some people are claiming that one or more of the FP code examples above are "significantly smaller" code-wise. Ignoring speed and "elegance" (cough), as far as I know, no FP example shrank the code size more than about 5%. If there is one, I missed it and would like it pointed out please. Note that some have been reworked since the original. I keep thinking we have closed out the size issue only to have it pop back up (such as ChallengeSixVersusFpDiscussion). I want to nip it in the bud. -- top

One more point in favor of HOFs vs. Eval: code executed by eval is a string, and as such isn't part of the semantic structure of the program. I know that Top is interested in exposing program structure and abstract ("syntaxless") language structures; evaled code can't participate in these techniques. HOFs, as a language construct, are certainly superior in this regard.

HOF's are just references. One can reference functions by name or by some internal pointer. Using an internal pointer is a violation of OnceAndOnlyOnce if the name will suffice. Yeah I know, sometimes ugly pointers just run faster. -- top

[A name is also just a reference. And if you think OnceAndOnlyOnce means you can't use references, then... well, I don't know what to say. It's like arguing with the Flat Earth Society.]

They are most certainly not just references. Many HigherOrderFunctions are curried (partially-bound) functions, with some elements bound to the function and others left free. Many others, even if not curried, are defined only once and used only once at their place of definition; introducing names for these can be a big headache for programming in functional style. One could argue just as easily that use of a name violates OnceAndOnlyOnce... or one could sidestep the whole silly argument by decoupling lookup from dereference. (The latter model is counter to the RelationalModel, I suppose - but tough.)
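Ruby's Proc#curry shows the partial-binding point; the curried result is a real function that was never given a name at its definition site:

```ruby
add = lambda { |a, b| a + b }

# Partially bind the first argument; add_ten is a new one-argument
# function, defined and used without any standalone named definition.
add_ten = add.curry[10]

puts add_ten[5]                      # => 15
puts [1, 2, 3].map(&add_ten).inspect # => [11, 12, 13]
```

Forcing a top-level name onto every such throwaway function would add clutter, which is the headache described above.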

Generally functions that are only used in one place are fairly easy to replicate, replace with other techniques, or are unnecessary.

If "if-else" is also a function, then the "if clause" is only used in one place; how do you replace it with another technique? Or how is "if" unnecessary?

Why would you want to roll-your-own control structure for a one-time use?

About using an SQL query for deletion.

Has anyone realized that the SQL DELETE, UPDATE, and SELECT statements use a form of HigherOrderFunction?

 DELETE from person
 where name = "john";

is actually equivalent to the lisp

 (DELETE :from 'person
   :where (lambda (person) (= (name person) "john")))

where one could use a macro to make it

 (DELETE :from person
   :where (= (name person) "john"))
Now Top, I want you to realize that the benefit of using a DB is there because SQL syntax tries to mimic the usage of higher order functions. However, SQL does not have the full power of HOFs, so it just provides you with a fixed capability. That's why you cannot extend SQL to have

 INCREMENT age IN person WHERE name = "john"

You can use UPDATE for it, but thank Codd for that, because Insert, Update, and Delete are so universal (like the words "Do" and "Act").

Seeing this similarity, I don't know why top rails so much against higher order functions, when he himself uses them all the time.

Imagine you don't have an SQL DELETE statement. What would you do to get the same terseness that SQL gives you now?

Please clarify. I am not understanding your point. Use a stored procedure or function if you want an "increment" function. Add, change, and delete are fundamental building blocks, and that is why they are included. Arguing that we need HOF in case the SQL inventor skipped a beat is an odd argument. DrCodd did not create SQL, by the way. Nor is SQL the pinnacle of potential query languages. -- top

SQL is actually a functional language, in that a statement can't have side effects while it is processing. (An UPDATE statement modifies tables only after all other processing has completed.) Joining another table into a query is very much like creating an anonymous function. For example (please forgive my syntax mistakes, as I am neither a SQL nor a Lisp expert):

 UPDATE employee
 SET salary = basewage + 0.05*SUM(sales)
 FROM employee JOIN orders ON employee.emp_id = orders.emp_id
 WHERE orders.qty > 1000

This might be expressed in Lisp like:

      (UPDATE :from 'employee :key 'salary
              (+ (base-wage employee)
                 ((lambda (emp-id-val)
                    (SELECT (* 0.05 (sum (sales orders)))
                            :from 'orders
                            :where (and (= emp-id-val (emp-id orders))
                                        (> (qty orders) 1000))))
                  (emp-id employee))))

The point here is that the join and SUM are effectively creating an anonymous function.
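A JavaScript sketch of the same idea (toy data and names, not a real DB API): the correlated subquery behaves like an anonymous function of the employee's id, invoked once per employee row.

```javascript
// Toy "orders" table.
var orders = [ {emp_id: 1, qty: 1500, sales: 200},
               {emp_id: 1, qty: 500,  sales: 50},
               {emp_id: 2, qty: 2000, sales: 300} ];

// The correlated subquery as a function of emp_id: 5% of that
// employee's sales on qualifying orders.
function bonusFor(empId) {
    return orders
        .filter(function(o) { return o.emp_id === empId && o.qty > 1000; })
        .reduce(function(sum, o) { return sum + 0.05 * o.sales; }, 0);
}

// The UPDATE: map each employee row through the "subquery" function.
var employees = [ {emp_id: 1, basewage: 1000}, {emp_id: 2, basewage: 1200} ];
var updated = employees.map(function(e) {
    return {emp_id: e.emp_id, salary: e.basewage + bonusFor(e.emp_id)};
});
```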

Perhaps one should distinguish between saying a query language *is* a functional language and saying a query language can be implemented (or implemented better) in a functional language. I am not sure the debate here is about how best to implement a query language. However, things may change if query languages allow user-defined or DBA-defined functions. MS-Access will allow such when defined in VBA, IIRC, but they are not recognized in aggregate operations.

It's not about me

If you want HOF to be better supported and used more in languages and applications that you may one day be maintaining, then it benefits you to promote the value of HOF to one of the largest domains: custom business applications. If you can demonstrate to other readers besides me in a semi-realistic coded example (and perhaps realistic change scenarios) how the code and/or maintenance steps are shorter/fewer, then the world will accept HOF's more and languages and tools and training materials will better support them such that HOF's will be more common in YOUR work world and the work of your colleagues. Selling the idea to meteorologists isn't going to go very far. -t

If I want to promote HigherOrderFunctions, I'll do it where the audience is genuinely interested in learning new techniques, and not operating under a closed-minded agenda that automatically and categorically deprecates everything that isn't some form of TableOrientedProgramming.

I've many times tried to envision how they can improve custom biz apps, but so far haven't seen them significantly improving things. SystemsSoftware examples don't extrapolate across that line because certain regularities in that domain are rarer in biz apps: the biz rules are shaped and managed not by engineers (for good or bad) but by those less interested in clean rules and logic. I'd like to see a coded example in the domain. If you are unable to provide such, just politely say so instead of calling me "closed minded". It just degenerates the discussion. I am not closed minded. Given good evidence, I will change my mind.

I am implementing AnonymousFunctions and HigherOrderFunctions in the RelProject, which is an alternative to SQL. The role of SQL in "custom biz apps" should be self-evident. One reason for doing this is to provide a simple, efficient, and effective way to support user-defined aggregate operators (in addition to the built-in SUM, COUNT, AVG, etc.) for use by the SUMMARIZE relational operator. This is roughly equivalent to being able to define custom aggregate operators (in addition to the built-in SUM, COUNT, AVG, etc.) for GROUP BY queries in SQL. A second reason is to provide an alternative to the EVALUATE mechanism -- which is a typical string-based "EVAL" implementation -- that is considerably more type-safe. The value of these in "custom biz apps" should also be self-evident. If it isn't, there's no point in continuing this debate.
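A minimal JavaScript sketch of the idea (illustrative names only, not the RelProject's actual API): a SUMMARIZE-like operator that accepts a user-defined aggregate function alongside the built-ins.

```javascript
// SUMMARIZE sketch: group rows by a key function, then reduce each
// group with a user-supplied aggregate operator.
function summarize(rows, keyFn, aggFn) {
    var groups = {};
    rows.forEach(function(row) {
        var k = keyFn(row);
        (groups[k] = groups[k] || []).push(row);
    });
    return Object.keys(groups).map(function(k) {
        return {key: k, value: aggFn(groups[k])};
    });
}

// A custom aggregate the DBMS may not ship with: population
// standard deviation of the "amount" attribute.
function stdDev(rows) {
    var vals = rows.map(function(r) { return r.amount; });
    var mean = vals.reduce(function(a, b) { return a + b; }, 0) / vals.length;
    var variance = vals.reduce(function(a, v) {
        return a + (v - mean) * (v - mean);
    }, 0) / vals.length;
    return Math.sqrt(variance);
}
```

`summarize` never changes; SUM, COUNT, AVG, and `stdDev` are all just function arguments to it.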

That said, maybe the particular domain of "custom biz apps" you're developing does not require the level of customisation, re-use, and type safety typically provided by HigherOrderFunctions. If so, that's fine -- if you don't need them, you don't need them!

That is SystemsSoftware. I don't dispute the value of HOF there. And it's not that biz apps don't need re-use, it's that the re-use is more irregular. The kinds of influences on each variation are greater such that the boundaries are rougher and more unpredictable. It's hard to define stable GateKeeper interfaces up-front that work in new, unexpected situations without overhauling. Abstractions have to be loosey goosey to flex sufficiently if you want reuse (or at least I haven't found a way to get tight abstractions to flex in the biz environment). -t

I'm not sure I'm following you here. When you write "that is SystemsSoftware", do you mean all the SQL in your "custom biz apps" is also SystemsSoftware? Or do you mean that if you need to write a custom aggregate operator -- e.g., a statistical operation like standard deviation that might not be provided by your SQL DBMS -- that the custom aggregate operator is SystemsSoftware?

My classification would depend on the details. If one had to know specifics of the innards of the database engine to write a custom aggregator, then I would call it SystemsSoftware. If it's designed such that it's more like a DBA defining a view, then I wouldn't. In the latter case, the "objects" being referenced are the usual "visible" objects, such as tables, views, columns, indexes, etc. Of course, it may be in-between, and each DB engine may do it differently.

There's no reason why a custom aggregator should need to know the specifics of the innards of the database engine. The objects being referenced would be tuples/rows, attributes/values, and relations/RelVars/tables.

And I'm not convinced that a HOF is the best way to define such, but would like to analyze a specific scenario if you can provide one. It could be defined something like this:

  // Sample fizBin
  func aggr_fizBin(theValue, &collector) {
    if (! typeOf(theValue) in ["number","integer"]) {
       sys.throwStdError("this aggregator operator must be numeric");
    } else {
       collector = collector + sqrRoot(theValue);
    }
  }
That's a reasonable definition. Now imagine that some significant aspect of your aggregation function needs to change depending on row values, so that what you effectively need is n different functions, where n could vary between 1 and the quantity of rows in a given group of rows. I'm sure you could do it without HOFs, but HOFs generally make it easier to express.
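A small JavaScript sketch of that scenario (the rule names and discount are made up for illustration): when the aggregation rule varies per row, a HOF can select the right function for each row instead of hard-coding n branches inside one monolithic aggregator.

```javascript
// Per-row aggregation rules as first-class functions. The "wholesale"
// discount is a made-up example value.
var rules = {
    retail:    function(v) { return v; },
    wholesale: function(v) { return v * 0.9; }
};

// The aggregator itself stays generic: it just looks up and applies
// whichever rule each row calls for.
function aggregate(rows) {
    return rows.reduce(function(total, row) {
        return total + rules[row.kind](row.amount);
    }, 0);
}
```

Adding a new row kind means adding one function to `rules`; `aggregate` never changes.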

      function updateInnerHTML(id, str) {
          element = document.getElementById(id)
          if (element != null)
             element.innerHTML = str
      }

      function makeLoader(url, element) {
          return function() {
              xmlhttpPost(url, "", function(str) { updateInnerHTML(element, str) })
          }
      }

      for (i = 0; i < numberOfDivisions; i++) {
          loader = makeLoader("" + i, "pagediv" + i)  // loader now contains a function() {...}
          setInterval(loader, 250)
      }

      function update0() {
          xmlhttpPost("", "", function(str) { updateInnerHTML("pagediv0", str) })
      }
      setInterval(update0, 250)

      function update1() {
          xmlhttpPost("", "", function(str) { updateInnerHTML("pagediv1", str) })
      }
      setInterval(update1, 250)

      function update2() {
          xmlhttpPost("", "", function(str) { updateInnerHTML("pagediv2", str) })
      }
      setInterval(update2, 250)

      function update3() {
          xmlhttpPost("", "", function(str) { updateInnerHTML("pagediv3", str) })
      }
      setInterval(update3, 250)
Also, generally for something custom, one doesn't have to worry about making it overly generic such that it can accept any type etc. It's for a specific purpose and type and the person paying you usually isn't asking for mass reuse-ability, but to solve a specific request.

Note that in my many years in "the industry", I have never been asked for something like a custom aggregator. There are usually ways to compute such with existing tools, and they are probably too industry-specific to qualify as something generic. SQL's overly-large granularity often complicates such, but that's a debate over query languages, not HOF's.

Whether you're a developer who's never needed a custom aggregator, or a developer who creates them on a daily basis, could come down to the difference between (say) being a developer of financial reporting applications that mainly present summaries of historical data, vs being a developer of sales forecasting applications that frequently need one-off regression techniques. In either case, the job is almost the same -- "custom biz app" developer -- but the requirements and what you focus on might be dramatically different.

Of course, nothing should force you to use HOFs if you don't want to, but isn't it better to have a language with a specialised capability that you don't need than to want that capability and not have it?

Somewhere there is a topic on the trade-offs of this. The more ways there are to do something, the more techniques a developer has to learn to work with in existing code. It increases recruiting and training costs, and is thus not a free lunch. Anyhow, I don't think this topic is really about what features to have in a programming language, but rather whether HOF's can "significantly simplify" typical custom biz apps, which is the primary claim being addressed if I am not mistaken. I don't dispute that once in a blue moon there may be a biz app they can simplify, but I just don't see evidence of ubiquitous improvement. It may exist, I just haven't seen it via inspect-able source code. -t

I don't know why providing HOFs in a language would increase recruiting or training costs. Hiring and training Java developers didn't become more expensive when Java introduced anonymous inner classes. Hiring and training C# developers didn't become more expensive when C# introduced lambdas. These are language features that are handy, but not necessary -- at least, compared to (say) being able to evaluate expressions, which is absolutely necessary. You can effectively use these languages without even being aware of such specialised features, and neither language had them in their earlier versions.

I haven't seen any claim here that HOFs can "significantly simplify typical custom biz apps" or that they result in "ubiquitous improvement" in that domain. If that's what you're looking for, you won't find it, but it's not what HOF proponents have claimed. Custom biz apps tend to be CRUD screens and reports; i.e., data in, data out, minimal transformation. If that's what you're developing, HOFs may be rarely desired, though the tools you're using may internally employ them extensively. As noted above, however, for the times when you do want them -- such as custom aggregation -- they're handy and effective and arguably simpler than the alternatives.

See the opening "clearly better" quotation and the related parent topic.

I've read that. I see no mention of "typical custom biz apps". Do you?

I have noticed that when someone essentially says "feature <x> is cool!" you frequently jump in with (essentially, and sometimes eventually) a demand that the claimant prove that feature <x> results in simpler, easier-to-understand, and/or less code in "typical custom biz apps". I find this odd, because I don't think I've yet seen a claimant that "feature <x> is cool" explicitly advocate its use in "typical custom biz apps". In most cases, though it's rarely clear what domain the claimant works in, it does appear to be clear that it's not "typical custom biz apps".

I don't know if biz apps have "minimal transformation". It depends on how you define "transformation". They do a lot of "routing" of information based on various business rules. Group "A" may see (or receive) data group X, group B see data group X and Y, group C see something else, etc. The trick is not only to manage and define all this "routing", but allow users to see, understand, and/or control much of the routing also. ("Routing" here includes both virtual viewpoint alteration and actual "movement" of data. As far as power-users "controlling" it, see CompilingVersusMetaDataAid.)

Routing is not transformation.

Recombining is not transformation? Anyhow, let's try not to get bogged down over this term.

Discussion migrated to NodeJsAndHofDiscussion because it was getting TooBigToEdit.

See also: DynamicStringsVsFunctional, ChallengeSixVersusFpDiscussion, QueryAndLoop, HowToSellGoldenHammers

CategoryExample, CategoryFunctionalProgramming, CategoryBusinessDomain, CategoryJavaScript


EditText of this page (last edited September 30, 2014) or FindPage with title or text search