Functional Programming Is Not a Paradigm

There are a lot of pages discussing how FunctionalProgramming compares to other programming paradigms such as OO. These pages have begun to piss me off a little. Even though I am an advocate of FunctionalProgramming and sympathise with most of the arguments presented for the functional side therein, there is this little problem:

Nobody knows what functional programming is.

This is not like the fuzziness of ObjectOrientedProgramming, because that boils down to arguing whether inheritance is important or not. Inheritance, though, is a language feature, and only indirectly relates to how problems are structured. Proponents of FunctionalProgramming, on the other hand, can show without effort how easy it is to program in OO, FlowBasedProgramming, LogicProgramming, and other styles in functional languages. Thus, functional programming is here understood as a means of structuring programs in many different ways, not as one particular way to structure programs, which is what a paradigm would be.
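
For example, here is a minimal sketch (in Haskell, chosen only for concreteness; the names are invented for illustration) of how an OO-style "object" with hidden state can be recovered with nothing but values and functions:

  -- A purely functional "object": a record whose fields act as methods and
  -- close over a count that is only reachable through those methods.
  data Counter = Counter { increment :: Counter, current :: Int }

  -- makeCounter plays the role of a constructor.
  makeCounter :: Int -> Counter
  makeCounter n = Counter { increment = makeCounter (n + 1), current = n }

  main :: IO ()
  main = print (current (increment (increment (makeCounter 0))))  -- prints 2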

But even when FunctionalProgramming is seen as a paradigm, there is little consensus about how it structures programs. Some emphasise laziness and dataflow-style programming. Some emphasise recursion and program proving. Some emphasise absence of side-effects (often the same ones as the former group), or heavy use of higher order functions, or MonadicProgramming. Some emphasise DeclarativeProgramming, whatever that means.
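
Two of the facets listed above can be shown in a few lines (again a Haskell sketch, purely illustrative and not tied to any one definition):

  -- Laziness: an infinite list is fine as long as only a finite prefix is demanded.
  naturals :: [Integer]
  naturals = [0 ..]

  -- Higher-order functions: map and filter take other functions as arguments.
  firstTenEvenSquares :: [Integer]
  firstTenEvenSquares = take 10 (map (^ 2) (filter even naturals))

  main :: IO ()
  main = print firstTenEvenSquares  -- [0,4,16,36,64,100,144,196,256,324]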

In effect, I have come to the conclusion that whenever people talk about FunctionalProgramming here, they usually talk about methodology, not paradigms. And when they do talk about paradigms, they are each talking about different paradigms.

-- PanuKalliokoski


Before we ask whether foo is a paradigm, we should first ask what a "paradigm" is. I smell a long terminology debate ahead.... see LaynesLaw.


The history of "functional programming" seems interesting. I believe it started off meaning a style of programming where one can conveniently pass around functions as useful objects, just as you could say "integer programming" lets you pass around integers. Perhaps you could also inspect a function for its documentation string, or inquire as to what code composes it.
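
In that older sense, the point is simply that a function travels through a program the way an integer does (sketched in Haskell here only for brevity; the names are made up):

  -- A function is passed as an argument exactly as an integer would be.
  twice :: (Int -> Int) -> Int -> Int
  twice f x = f (f x)

  main :: IO ()
  main = print (twice (+ 3) 10)  -- prints 16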

In the same way, I'm told that recursion did not initially mean a function calling itself (where by "itself" I include the indirect case of fn-a calling fn-b calling fn-a). It simply meant a function calling another function. I prefer that old definition, because when you use the newer, "less trivial" definition, people have a hard time understanding it. Self-recursion is just one interesting kind of recursion, not something to fetishize (as SiCp does with eval/apply).
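
To make the distinction concrete, here is a small illustrative sketch of both kinds (Haskell, invented names):

  -- Self-recursion: factorial calls itself.
  factorial :: Integer -> Integer
  factorial 0 = 1
  factorial n = n * factorial (n - 1)

  -- Mutual recursion: isEven calls isOdd, which calls isEven again
  -- (assumes non-negative input; this is only an illustration).
  isEven, isOdd :: Integer -> Bool
  isEven 0 = True
  isEven n = isOdd (n - 1)
  isOdd 0 = False
  isOdd n = isEven (n - 1)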

I like these old definitions. I think our desire for specialization makes our scopes narrower. -- TayssirJohnGabbour


The notion of "functional programming" has changed quite a bit over time. LispLanguage is oft acknowledged as the first FunctionalProgrammingLanguage, having first class functions and in many ways being little more than syntactic sugar for the LambdaCalculus (I'm speaking of early Lisp dialects, not CommonLisp or SchemeLanguage or anything else modern). However, since then the "functional" community has diverged quite a bit from the "Lisp" community in numerous ways:

(Of course, then there are languages like OzLanguage and ErlangLanguage, which are dynamically typed but are also SingleAssignmentLanguages; though Oz lets you use state if you want to).

The one key commonality between these different camps within functional programming is--of course--first-class functions. And this remains a key differentiator between any of these and mainstream OO languages like C#, C++, or Java--though all three are getting more and more "functional" as time progresses. (See MainstreamInfluenceOfFunctionalLanguages).

Actually, early Lisp programs did not avoid side-effects at all, at all. Go look up the MIT AiMemos for some examples of really early Lisp code. The notion of referential transparency as a virtue came out of the LambdaTheUltimate papers, IIRC.
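
As an aside, a small Haskell sketch of what "referential transparency as a virtue" is getting at (the names are invented for illustration):

  import Data.IORef

  -- Referentially transparent: any call to pureDouble 21 can be replaced by 42
  -- without changing the meaning of the program.
  pureDouble :: Int -> Int
  pureDouble x = x * 2

  -- Not referentially transparent in its observable behaviour: running it twice
  -- on the same reference yields different numbers, because hidden state changes.
  nextNumber :: IORef Int -> IO Int
  nextNumber ref = do
    n <- readIORef ref
    writeIORef ref (n + 1)
    return n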


I have always understood that the main purpose of FP is to write program functions that behave as functions in the lambda calculus do. That is, FP programs should be written so that they define a relationship between the function's domain and its range, with the actual desired computation being treated as a secondary concern - hence the term DeclarativeProgramming. The remaining 'definitions' are really techniques for accomplishing this goal. Certainly, this was the thrust of Backus' Turing Award speech, which is usually seen as the beginning of FP as a formal method (though of course many programmers using Lisp and related languages had been using what today are considered 'functional' techniques for years). Am I mistaken in this? - JayOsako

Actually, I would not call them functions in the lambda calculus. Technically they are lambda expressions, and a lambda expression is a distinct (albeit related) mathematical concept from a function.

But anyway, you are right that computation is a secondary concern and that FP programs should be written to describe the relation between input and output. In particular, the relation should be described by means of functional composition. This distinction matters because we also have LogicProgramming (or ConstraintProgramming), which has almost the same definition, except that the relation between input and output is described by means of specifying constraints. This is already referenced on wiki; see NygaardClassification. --Costin
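
A minimal sketch of "describing the relation by functional composition" (Haskell, names invented for illustration):

  import Data.Char (toUpper)

  -- The input/output relation is stated as a composition of smaller functions;
  -- how the computation actually proceeds is left to the language.
  shout :: String -> String
  shout = (++ "!") . map toUpper . filter (/= ' ')

  main :: IO ()
  main = putStrLn (shout "functional programming")  -- FUNCTIONALPROGRAMMING!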


Isn't the above, so far, merely a classic illustration of LaynesLaw? Definitely!

If there is a point to be gleaned from this page, perhaps it is that ThereAreNoParadigms.


NovemberZeroFive (I knew from the first post that this would become topical; I just had to wait)

CategoryFunctionalProgramming CategoryRant

