Oo Vs Functional

[MergeMe with FpVsOo somehow.]

Everything Object Oriented Programming can do can be done better in functional programming - the code is easier to write, runs faster, and uses less memory.

Many features and design patterns in Object Oriented languages are imitations (poor by necessity) of those in functional programming, and compiled object-oriented code cannot be optimized as well.

The CeePlusPlus StandardTemplateLibrary and some of the collections in most OO languages (like SmalltalkLanguage) are obvious (and valuable) knock-offs of functional techniques, but OO programmers are not aware of the history or what they are missing.

Patterns like VisitorPattern are really second-tier to using AlgebrasInFunctionalProgramming?. Map, fold, zip, and custom folds for structures allow programmers to spend a lot less time building and operating on data structures and worrying about memory use, especially in LazyFunctionalLanguages. Programmers in functional languages don't have to write loops over and over again. It seems I am always writing loops in C++ and C#, each one a chance to make a mistake in iteration. In addition, code gets bigger because the loop cannot be reused, so we experience poor pipelining and caching by the CPU.

In functional programming, the loop is usually written once per data structure, and you just call it with the logic you wish to perform. It really is simpler.
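For instance, here is a minimal Haskell sketch of that reuse (the function names are just illustrative): the "loop" is written once, as map, filter and foldr, and the per-call logic is passed in.

 -- The loop is written once (map, filter, foldr) and reused with whatever
 -- logic you hand it; no per-use iteration code to get wrong.
 squares :: [Int] -> [Int]
 squares = map (\n -> n * n)

 total :: [Int] -> Int
 total = foldr (+) 0

 evensSquaredSum :: [Int] -> Int
 evensSquaredSum = foldr (+) 0 . map (\n -> n * n) . filter even

 main :: IO ()
 main = print (evensSquaredSum [1 .. 10])   -- prints 220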

Functional programming is a better way to program and an easier way to think, especially for new programmers. But it's hard to unlearn the OO/imperative way of thinking once you already do it.

See OoIsPragmatic [By Doug Ransom - doug.ransom@alumni.uvic.ca]


See http://www.uni-koblenz.de/~laemmel/expression/ for a paper that discusses (and formally proves) the equivalence of object oriented and functional programming. With this equivalence formally proven, it is potentially in a compiler's best interest to automatically translate an imperative, object oriented program to functional form first, then compile it functionally. This would give the benefits of FP to the OO programmer, making the arguments raised above utterly moot.

However, from a purely syntactic point of view, I prefer functional programming over OO. The programs are smaller (more concise) and much easier to keep straight in your head. Nonetheless, I also agree that this style of coding isn't universal; there are some things that OO is better at than FP. --SamuelFalvo?


Fp and Oo are good for different kinds of tasks. I hope the statement above was written as a subtle troll, rather than an honest reflection of the poster's views! If I am programming a UI, I use OO. If I am doing mathematical programming, I use FP. In the real world, does anyone really do anything different?

It's such a shame when someone finds technology that works in one particular situation and then infers it should be used everywhere for everything. Why are so many analytically-oriented people so inflexible? People should apply paradigms in appropriate areas, rather than having a ParadigmPissingMatch. -- anon


I can't agree with this. The big win of doing functional programming is in higher-order functions, which let you easily build up new behaviors compositionally and at runtime. This is tremendously useful for programming in a modular, easily-tested manner, and lets you push the Extreme Programming notion of OnceAndOnlyOnce much farther than most people are used to. It's tremendously useful for both exploratory programming and good design. The big win of the OO style, however, is that you can use subclassing to extend existing data types. In ML, this is like being able to add new branches to an existing data type. This is a major modularity win, and lets programs grow cleanly. When you need to do something like this in ML, life gets moby difficult.

But if you look at the actual world's change patterns, it often does not change in a tree-wise fashion, it changes in a graph-wise fashion. If you have deep trees, then you have the FragileParentProblem? where changing a node may break siblings, and shallow trees don't factor repetition well. The DeltaIsolation philosophy is nice on paper, but does not work well in the real world. I would like to see set-based DeltaIsolation implemented somewhere instead of tree-based. Functional and relational come closer to that ideal than inheritance trees IMO. MultipleInheritance is insufficient to emulate this also. Plus, the granularity of difference is often not on class or method boundaries. It is tough to override 1/3 of a method without mass refactoring. You have to pick the right chunking size up front.

Isn't that what DelegationInsteadOfInheritance? is for?

I think it's pretty clear that the advantages of OO and FP are orthogonal. This isn't just speculation: ObjectFunctional languages like CLOS or Dylan are *ludicrously* fun to program in, because your program can grow just as it needs, without having to go through any contortions like the VisitorProgramming? (fie on OO), or funky existential-type-encoding tricks (fie on ML).

I completely agree. Just note that it's not enough to have an ObjectFunctional language, where both styles are available but where you have to choose between them, as is the case in ObjectiveCaml. The ability to grow your program in every direction as needed comes with MultiMethods. -- DanielBonniot


What is FunctionalProgramming?

In functional programming, 'functions' are "first class citizens"; like data, they can be manipulated. Functions can work on functions and return a resulting function. Functions that manipulate functions are called higher-order functions (sometimes also "functionals" or combinators).

Secondly, functions are expressions; they return a value and have no side effects. This is generally true of functional languages; functions and languages that satisfy it are said to be pure, and impure otherwise.
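A small Haskell illustration of both points (the names are illustrative): functions are values that can be passed around, and higher-order functions build new functions from old ones without any side effects.

 -- Functions are values; higher-order functions take functions as
 -- arguments and return new functions.
 twice :: (a -> a) -> (a -> a)
 twice f = f . f

 addN :: Int -> (Int -> Int)          -- returns a brand new function
 addN n = \x -> x + n

 main :: IO ()
 main = print (twice (addN 3) 10)     -- 16: 'addN 3' is handed around like data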

Functional languages are derived from a calculus of functions called the Lambda Calculus, invented by Alonzo Church in the 1930s (see e.g. his 1941 monograph TheCalculiOfLambdaConversion).

Later, in his 1977 Turing Award lecture, JohnBackus proposed a language called FP; I am not sure how widely it was ever implemented.

Earlier, in the late 50's, JohnMcCarthy invented Lisp [LispLanguage], standing for LISt Processor. It has a strange syntax of parenthesized function calls within yet more parenthesized calls, lists within lists; this mirrors the underlying data structure in which the functions themselves are represented. Its strangely named functions CAR and CDR, used for traversing these structures, were named after the address and decrement parts of the IBM 704 machine word (Contents of the Address part of the Register, Contents of the Decrement part of the Register). There were plans for a "higher" level language with a more PascalLanguage-like syntax implemented on top of it, but it was never implemented in the proposed form, apparently for lack of interest rather than performance. [This may or may not be true.] (For a later development, see CGOL.) Anyway, it was a strange but very powerful language, especially for the time it was first implemented. See his web site http://www-formal.stanford.edu/jmc/ for his original paper.

Modern functional languages like ML [MlLanguage] are still based very much on list and data structure programming. The way functions are called/applied is based on pattern matching, of both structure and value. ML is pure in this respect, whereas other functional languages use what is termed overloading, where multiple functions may be used in matching (the overall domain not necessarily being complete or determinate). Matching gives tremendous power to traverse data structures; it brings algorithmic power, simplicity and beauty.
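For example, in Haskell (ML's pattern matching is very similar; the names here are illustrative), functions match on both the shape of the data and on particular values:

 -- Pattern matching on both structure and value.
 data Tree a = Leaf | Node (Tree a) a (Tree a)

 size :: Tree a -> Int
 size Leaf         = 0                        -- match on structure
 size (Node l _ r) = 1 + size l + size r

 describe :: Int -> String
 describe 0 = "zero"                          -- match on value
 describe n = "nonzero: " ++ show n

 main :: IO ()
 main = do
   print (size (Node Leaf 1 (Node Leaf 2 Leaf)))   -- 2
   putStrLn (describe 0)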

[By Aaron Gray - A view of the other side - by an OO'er - please feel free to correct me. -- aarongray@beeb.net]


The title of this page makes no sense until one clarifies what degree of "functional" is meant. If they mean "pure functional" as in "no language-native concept of procedural operations", then I would have to disagree - wrapping your head around the pure-functional notions of monads and call-with-CC is tricky, and they are really just AbstractionInversions? compensating for the lack of any way to say "do-X-then-Y". If you mean "procedural/OOP languages that support functional-programming concepts", then these days most languages allow that. C# has gotten better syntax for inline, anonymous functions, and Java isn't too painful either. In C++ it's possible, but missing a lot of syntactic sugar. If you mean Lisp-like code-is-data, I would disagree that that's a requirement for functional languages. It's a useful linguistic feature, but its association with functional programming is due to its presence in the prototypical functional languages.


What I would like to see

I would like to see a proper marriage of the structure of Object Orientation with the data structure and algorithmic manipulation of functional languages. ObjectiveCaml, a newish language, does this to some extent, but it is very odd: it stays true to functional purity and its syntax, and the result is quite confusing, not really suitable for a beginner in functional programming. What about RubyLanguage? Ruby is good, but lacks a good library, and suffers from some line-noise and Perlisms. Strings are an abomination, and the namespaces are quite horrible. Other than that, Ruby is actually a great language (no sarcasm intended).

What I would like to see is something that is incremental in implementation, where language features can be brought in or imported just like libraries. It would take an object-oriented framework, but one where all program constructs are data, just like Lisp, yet with conventional syntactic constructs. This I think is key to "stepping out" as Lisp did early in its day.

This is what is called a Meta Language, where language constructs, native or foreign, can be manipulated as data. An example:

 import C;

 aStatement : C.Statement = if (aBool) then trueStatement else falseStatement;

 aProgram : C.Program = { ... aStatement ... };

Note: This statement must contain a local symbol table and allow for unbound elements, a la lambda ;) This gives us a very powerful basis on which to construct a true morphosis of object orientism and functionalism.

[Please delete this if it's too off the mark. -- AaronGray]

Try Scala http://scala.epfl.ch/docu/ -- ChanningWalton


There is a sense in which functional programming can be seen as "dual" to object-oriented programming. Here the term dual has a specific technical meaning, which is approximately that there is a correspondence between each feature in a functional language and a feature in an object-oriented language, the two being in a sense opposite in meaning. A simple example of Duality is the connection between unions and records. Duality provides a mechanism for comparing object-oriented languages with functional languages in terms of features offered. Based on this kind of comparison, it seems that OO languages have certain facilities that have not been implemented in many functional languages (subtyping and inheritance), and vice versa (first-class functions, referential transparency). However, in most languages, this kind of connection has not been made, and the set of features offered seems quite arbitrarily chosen from this point of view.

However, dual language features are to a certain extent incompatible with each other (try inheritance from a union). This explains the technical difficulties that are inherent in unifying functional programming and object-oriented programming in a single language.

The connection between dual features can be thought of as similar to the relation between "lower bounds" and "upper bounds" of a set in mathematics. Functional programming focuses on lower bounds [such as unions and constructors] and object-oriented programming focuses on upper bounds [such as records and methods].

The difference is best visible in the kinds of interfaces that occur in those languages. OO languages are full of functions from integer to integer, and often interfaces have these as well. Interfaces in OO therefore describe an upper limit on the operations that may be performed on an object. In functional languages such as Haskell, in contrast, these are much less common. Instead, functional interfaces usually describe a minimum set of properties of numbers that are needed, and build on those.
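A small Haskell sketch of the contrast, with illustrative names: the "lower bound" side is a union of constructors that operations pattern-match over, while the "upper bound" side is a record of methods that each implementation must supply.

 -- Union/constructor ("lower bound") style: adding operations is easy,
 -- adding new cases means touching every operation.
 data Expr = Lit Int | Add Expr Expr

 eval :: Expr -> Int
 eval (Lit n)   = n
 eval (Add a b) = eval a + eval b

 -- Record/method ("upper bound") style: adding implementations is easy,
 -- adding new methods means touching every implementation.
 data ExprObj = ExprObj { value :: Int, render :: String }

 lit :: Int -> ExprObj
 lit n = ExprObj { value = n, render = show n }

 add :: ExprObj -> ExprObj -> ExprObj
 add a b = ExprObj { value  = value a + value b
                   , render = render a ++ " + " ++ render b }

 main :: IO ()
 main = do
   print (eval (Add (Lit 1) (Lit 2)))        -- 3
   putStrLn (render (add (lit 1) (lit 2)))   -- 1 + 2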

To understand this correspondence better, I suggest everyone reads the articles by Erik Poll, which attack this question directly.

This explains the connection between OO and functional programming. More general information about duality can be found elsewhere.

Esa Pulkkinen <esa.pulkkinen@kotiposti.net>

I'm not quite sure what kind of duality you're hinting at (FP and OOP manifest several, depending on the point of view), but inheritance from a union should not be a problem. Actually, any set of mutually exclusive (in whatever way) implementations of an interface forms a union, and a derivative of this interface should be a more generic interface, thus allowing at least those implementations which the original interface allows.

Answer: I agree, this is possible. I think an extension of the implementation you refer to is called coinheritance in one of the articles referenced above, and the corresponding relationship for the interfaces is called supertyping. Coinheritance is dual to inheritance and supertyping is dual to subtyping. Note that coinheritance is not inheritance, because coinheritance generates more generic operations, whereas inheritance generates more specific data. -- EsaPulkkinen

Casting this rather theoretical issue aside, the duality I most commonly see between OO and FP is that when we have data structures and routines that work on those, OO hides the data structure and thus allows the data to be extended, while FP hides the set of operations from the data structure and thus allows the operations to be extended.

Answer: I think of it like this: both OO and FP have routines, which can be seen to have four interfaces to the external world: input data, output data, incoming methods, and outgoing methods.

Here 'input data' is dual to 'incoming method' and 'output data' is dual to 'outgoing method'. Incoming methods control what processing is performed by the routine. The routine can use outgoing methods to change the behaviour of its environment. Input and output data is the information manipulated by the routine. Input and output data is hidden by OO as the state of the object. FP hides incoming and outgoing methods in functions (this makes functions pure since all decisions what the function does are then made by the function itself). The methods can be seen to describe modifications to the input/output data. So, the dual of "pure function" (FP sense) is a stateful object that strictly owns its data [where the interface to the object does not at all depend on the data structure.]. In general, it is not necessary to hide any of the four interfaces (but this is often a good idea). -- EsaPulkkinen


A formal basis would be great, but a practical edge should also exist.

A formal base, i.e. some typed extension to lambda calculus with a proper semantic construction like Milner's ML has.

A practical framework and implementation that non-functional programmers can understand and work with.

I say this as it is probably the best and only way to create a mainstream Functional Object Oriented Language (FOOL), that can be used by teams of programmers from different disciplines, from the functional school, and from the more mainstream software engineering background.

-- AaronGray


What is Object Oriented Programming, or OOP?

[Perhaps definition should be moved to a different topic]

Object Orientation has many faces or paradigms.

Types of object oriented languages or models are: prototype-based (object-based), dynamic class-based, and static class-based.

Languages:

See Jonathan Rees' list of OO properties: http://www.paulgraham.com/reesoo.html


On inheritance of overloaded pure functions

If the language in question, a new functional language or object-oriented extension of an existing one, has functional inheritance (either on a per-function basis or on a pure function included in a class), then overloading is the answer. This allows a function's overloaded "fragments" to be pieced together via inheritance. -- AaronGray


Classes can bag sets of pure functions for usage in an inheriting context.

A set of related pure functions can be bagged together by a class construct. This allows a set of functions to be made available by inheritance within a class that applies them. -- AaronGray
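One hedged way to picture this in an existing language is a Haskell type class: the class bags a set of related pure functions, defaults included, and instances pick them up much as an inheriting context would (the names here are illustrative).

 -- A class bags related pure functions; instances "inherit" the default
 -- definitions they do not override.
 class Container f where
   empty   :: f a
   insert  :: a -> f a -> f a
   toListC :: f a -> [a]
   sizeC   :: f a -> Int
   sizeC   = length . toListC      -- default, available to every instance

 instance Container [] where
   empty   = []
   insert  = (:)
   toListC = id

 main :: IO ()
 main = print (sizeC (insert 'a' (insert 'b' (empty :: String))))   -- 2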


Compare C++ after 20 years to Smalltalk. We need a new OOL that offers a continuum of implementation possibilities, from prototypical object based to dynamic class based to static class based, where static classes can be cloned into dynamic ones or into prototypical objects, and prototypical objects can be turned back into objects of a dynamic or static class. C++ has stood in the way of this.


"Everything Object Oriented Programming can do can be done better in functional programming. By better I mean easier to write code that runs faster and with less memory."

How about simulation? Where OOP started :)

-- AaronGray


Each generation of programmers brings new waves of principles into the programming mainstream. What was once complex becomes seemingly simple. This will happen to FunctionalProgramming: it will become simplified, blackboxed, compartmentalized, and "tacit" knowledge for most programmers.

Since functional programming predates generations of 'principles' that have been adopted by the mainstream, how does your hypothesis fit with history?


A big deal has been made recently about the "Pull model" vs "Push Model" for processing XML in the Microsoft DotNet runtime. In functional programming, these techniques are one and the same, and very easy. This type of processing is where FunctionalProgramming really shines.

Similarly, there have been efforts made to reduce memory use in DotNet by avoiding the XML DOM and using an XML Reader and XML Writer, and perhaps chaining them together. The problem is, the developer has to chain all these things together.

In FunctionalProgramming, this would come for free, at least with a lazy language. The implementation is so much simpler - just use the tree as if it were there, and it will be parsed as you navigate through it, and bits you don't reference anymore are simply discarded.

Similarly, trees from one compiler phase to the next can be processed in a pipeline, so that the intermediate trees never fully come into existence, much like the way a filter reads from a pipe and writes its output. This is very hard to do in OO - nobody bothers because it's hard. Instead, OO programmers work at a lower level of abstraction and spend a bunch more time getting the same result (i.e. fooling around with XML readers instead of using a DOM).
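A minimal Haskell sketch of such a pipeline, using lazy lists in place of XML or parse trees (the phase names are made up): no intermediate stage is ever fully built, so even an unbounded input is fine.

 -- Lazy pipeline: each phase is consumed on demand by the next, so the
 -- intermediate results never fully come into existence.
 phase1 :: [Int] -> [Int]
 phase1 = map (* 2)

 phase2 :: [Int] -> [Int]
 phase2 = filter (> 10)

 main :: IO ()
 main = print (take 5 (phase2 (phase1 [1 ..])))   -- [12,14,16,18,20]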

LazyEvaluation makes life a lot simpler.

-- DougRansom


Philips' ElegantLanguage does LazyEvaluation for parsing, with a choice of Recursive Descent, LL(1) or LALR(1) parser generators; it is also a functional language with type inheritance.


Everything Object Oriented Programming can do can be done better in functional programming--the code is easier to write, runs faster, and uses less memory.

That just plain isn't true. (I'm a functional programmer, BTW.)

In many ways, OO and functional languages are trying to solve the same problem, namely that global state is a bad idea. The OO solution is to use lots of little local states. The functional solution is to remove state altogether.

Many features and design patterns in Object Oriented languages are imitations (poor by necessity) of those in functional programming, and compiled object-oriented code cannot be optimized as well.

Similarly, many design patterns in functional languages are imitations of those in OO programming. Those patterns designed to simulate state (because sometimes you need state) or enforce evaluation order (because the performance model of LazyEvaluation is so obscure, despite the static semantics being much simpler) are two of the most glaring examples.

-- AndrewBromage


For me, the big problem with functional programming is that functions don't scale. It just doesn't make sense for anyone but a mathematician or logician to think of large systems as compositions of functions. Always remember that mathematics is a highly selective field of study. Traditional mathematics chooses its problems with care, since the goal is usually to come up with a general proof of some fact. If a mathematician comes to believe it's not possible to write a proof, they will abandon the problem. There is no such luxury in programming. Functions work best with mathematics, and that they can be usefully (depending upon your perspective) applied to so many non-mathematical problems is an unexpected and significant result. Simply put, while functional programming has many benefits, its costs are far higher than those of object-oriented programming. The difficulty that proponents of functional programming have in convincing others of the merits of their ways often stems from this fact.

Give me some examples/proofs, I've been using Haskell for more than a year. I like composing functions. The GHC Haskell compiler is written in Haskell, so I think the "abandon a proof" reasoning is flawed. What are the costs of FP and OO? I don't think FP is any more expensive than OO. -- ShaeErisson

The cost of functional programming is mathematical sophistication, something that, in my experience, many programmers don't have, or even actively avoid. The vast majority of programmers I've met just don't find functions an intuitive or natural way to model many of the sorts of things that arise in the real world of programming. But they do claim objects seem more natural and intuitive. The functional programming view treats programming as a branch of mathematics, but as anyone who has ever written a program outside the cloistered world of academia knows, there's a lot more to programming than just mathematics. Programming is much closer to engineering than mathematics, and this is ultimately why functional programming fails: it does not embrace engineering. Objects, for all their faults and inelegancies (what an inelegant word!), do.

''I actually see the reverse in practice. Functional programming does see the benefits of software engineering. It is unlikely that it will be adopted, because unlike Object Oriented programming, it is substantially different from procedural programming. I think one of the virtues of object oriented programming is that it was able to appear accessible to programmers who were used to very machine oriented computing. It is Object Oriented and Procedural programming that refuse to embrace mathematics, even decades-old research, at any formal level. Object Oriented programs seem to me to have extremely weak methods of formal semantics, and when they do, it is usually some incredibly ad hoc imitation (in the form of a pattern) of something that was formally proved in a research paper.

I think the problem with functional programming is that it is perceived as being only for brilliant mathematicians, when average hobbyists like me can pick up Haskell and start using it for fun hobby projects, not just theorem provers or the like. Saying that you need to be a graduate student in category theory to understand monads is not far from saying you need to understand set theory to use addition and subtraction. Just because you don't fully understand a formal method doesn't mean it won't be useful to you.

And what is with this prejudice against theory? I think one of the virtues of Haskell is that it is beautiful in theory (which you don't even need to understand well to benefit from), and is an incredibly practical and expressive language for large scale hobby programs and, assuming a substantially more open-minded programmer culture, large scale proprietary programs. ''

None of this is inconsistent with the fact that many people like functional programming a lot, or that functional programming is often a rich source of ideas for new programming techniques, or even that functional programming may in the end be better in every way than object-oriented programming. That right now, even given a big head start, there are so few large functional programming applications compared to large object-oriented applications seems to be a plain fact, and it is clear that programmers have voted with their feet. The only question is why.

The why, for me, is the inability to clearly answer the simple question put forward in FunctionalModeling. As someone actively interested in a BetterWay?, I can't really say functional is better for the engineering things I do. -- AnonymousDonor

One argument I have against the 'academics only' argument is that I only had cs101 (write BASIC using the ROM on this IBM-XT) and business calculus 1 before I dropped out of college. Yet, I find Haskell to be productive and enjoyable. I still don't understand most of the math behind monads, but I sure can do some cool stuff with them. --ShaeErisson


Well, for starters, you can do proper OO in a completely statically managed memory store. This means, you can pre-allocate all your objects up front, and do useful work in a real, honest to goodness OO way, without further memory allocations. Objects may even be allocated statically in the program image if the language permits it.

FP can't do this. Literally everything in an FP language is dynamic in some manner or other. There's tons of memory allocations, frees, and garbage collection going on. As a consequence, FP is pretty doggone slow in comparison to an OO equivalent program.

Note that this doesn't make FP a bad solution to a problem. There has been research in optimizing ML for use in embedded environments, for example, using memory management tricks borrowed, of all languages, from Forth! Yes, memory is still managed dynamically, but the overwhelming majority of allocations and frees are arranged so that they're contiguous in memory, so that they can be bulk released by just recycling the free memory pointer (what would be called the "dictionary pointer" in Forth). Multiple such "dictionaries" exist, of course, each representing a different average lifetime of object.

Still, despite this, this embedded ML still uses dynamic memory management and still occasionally requires the use of a garbage collector. So it's still not as fast as OO, though it does come close.

That being said, I'm very much interested in functional programming and object oriented programming alike. I really don't see the two as being mutually exclusive. Even with my C, C++, Oberon, and Forth programming, I tend to program in a functional style (this is especially true with Forth, considering that most Forth implementations do not support local variables. And good riddance too!).


While I agree with where you're headed with this conversation (that for big projects OO is better), I disagree with the way you argue for it. Saying FP is always GCed and OO is never is a REALLY bad argument. I'm sure anyone could make an implementation (that might be limiting) such that it didn't use GC (just like you made your application by allocating the objects beforehand). This, however, is NOT a good argument for choosing OO over FP. I personally think FP is very interesting for problems in the mathematical domain. However I fail to see how one can model an environment (with different objects) in it and end up with a nice design. Humans are most adept at recognizing objects. It's only in stricter domains that we start to see the different functionalities. -- ChristophePoucet?

Not only is it a bad argument, it's factually incorrect. For example the Stalin compiler for Scheme will statically allocate data. If you only have a fixed set of data then your Scheme program compiled with Stalin will only use statically allocated memory. As for being not as fast as OO, the Stalin implementation of the CoyoteGulchBenchmark? is *faster* than the C++ implementation. Check comp.lang.scheme for details. As for OO scaling better than FP, it's a tough claim to make. There are big FP programs out there (like the ErlangLanguage products) and they claim they scale better than OO products. -- NoelWelsh


There's tons of memory allocations, frees, and garbage collection going on. As a consequence, FP is pretty doggone slow in comparison to an OO equivalent program.

This is a common misconception. Today's best garbage collectors are as fast or faster than equivalent malloc/free. The best ML implementations are pretty fast. For example, the ObjectiveCaml language is, on average, faster than C++. However, in the vast majority of applications, speed issues are much overemphasized, and very often a non-issue.

It is much more important to have a (high-level) expressive and safe language than a (low-level) supposedly fast language, in order to have the task at hand completed in due time with the least bugs. Optimizing by hand the 5% of slow functions in C is a trivial task when the whole system works.

Using a low-level language based only on performance issues is one of the most common errors of programmers. One should always remember the PrematureOptimization rule (also RulesOfOptimization rule 1) when choosing a programming language.

For me, the big problem with functional programming is that functions don't scale. It just doesn't make sense for anyone but a mathematician or logician to think of large systems as compositions of functions.

Another misconception. There are many large software systems written in Lisp that have been developed over decades, the most famous one being the Emacs editor, or the Macsyma computer algebra system (see maxima.sourceforge.net), or the CLIPS expert system. Some software written in FP languages just couldn't be done in imperative languages, because of too great a complexity.

FP and OO are two ways of seeing the same problem, one in a process manner (FP), the other in a descriptive manner (OO). Neither is better; they are complementary. When the application can be described easily as a set of complex processes, FP will be much better adapted. When the application can be described easily in a hierarchical fashion, OO is likely to be simpler.

Many OO practitioners reject the notion that OO is about trees.

The problem is, in many cases, it is intellectually simpler to describe a system as a set of objects than as a set of processes. However, sometimes the latter is better suited, and leads to much more scalable code than OO. It is the experience of the programmer that will lead him to choose one paradigm or the other.


It seems I am always writing loops in C++ and C#, each one a chance to make a mistake in iteration. In addition, code gets bigger because the loop cannot be reused, so we experience poor pipelining and caching by the CPU.

In functional programming, the loop is usually written once per data structure, and you just call it with the logic you wish to perform. It really is simpler.

In C++ and C#, looping isn't object-oriented, so this is kind of a straw man.

In Smalltalk, for example, you pass a block to an iterator method, and many people seem to agree that it's a better way of doing things. OK, so this technique was stolen from FP, but that doesn't mean that FP is unilaterally superior...just that it has some good ideas to borrow.

-- TimMoore (who went from being an FP zealot to being an object zealot and isn't turning back ;-) )

You can write applies in C++ just as easily as an iterator. Without the ease of passing a block, people don't use this option very much. Personally, I like having an apply base class because it documents the interface. A block can be anything.


This is the worst example of a FalseDichotomy I've ever seen. -- DanielBrockman

It makes some sense to compare ObjectOrientation and FunctionalProgramming since you might be in the situation to have to choose between languages in either camp. But you're right, those two paradigms are not incompatible: just look at ObjectFunctional languages. -- DanielBonniot


Actually, I think functional programming is overrated. The fact is that object oriented programming is more elegant and much easier to program in than is functional programming.

The few really nice things in functional programming - like not having to write loops over and over again - have already been stolen by pure object-oriented languages using blocks (e.g. Ruby, Smalltalk with foreach and other looping constructs). In fact, since Smalltalk was really the first modern object-oriented language (I know about Simula), it could be said that object oriented languages had all the important functional programming methods right from the beginning.

Furthermore, higher order methods of function composition are not really used that often other than in a few specific situations; for example, generalized looping constructs and callbacks (foreach and map). Purely functional programming is almost impossible, even in "pure" languages like Haskell, because you have to use print statements and get input from the user, which is never purely functional unless you start using world variables. Languages like Haskell have gone to great lengths to minimize imperative programming (e.g. Monads) while still simulating imperative features (like exceptions and state data). Why go through all this work of avoiding imperative features? There is no good reason.

[This paragraph misrepresents the purity of Haskell. Haskell does have a (conceptual) world value, but it is wrapped in a state monad so you can't actually mess with it. Input/output is not fundamentally non-functional. And as for the "why": because that's the only way to have lazy evaluation + I/O with value semantics.]
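[To illustrate that last point, here is a minimal sketch of everyday Haskell I/O; the 'shout' helper is illustrative. The effects are sequenced in the IO monad, while ordinary functions stay pure and reusable.]

 import Data.Char (toUpper)

 -- 'shout' is an ordinary pure function; only 'main' performs I/O.
 shout :: String -> String
 shout s = map toUpper s ++ "!"

 main :: IO ()
 main = do
   name <- getLine          -- effect: read a line of input
   putStrLn (shout name)    -- effect: print; the logic itself stays pure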

One reason might be that people like to think programming can be thought of mathematically using function composition. This makes mathematical analysis much easier. You can prove things about programs more easily. But who cares? Proving program correctness is very difficult even if you use functional programming and is practically unnecessary in all but a minority of situations (e.g. software for X-ray equipment, airplanes etc). People have invested so much time and energy in functional programming that they refuse to realize that it is not that great in the vast majority of situations, because most things cannot be easily broken down as a composition of functions. The only place functional programming really shines is in very mathematical problems.

In real-world situations, explicit state is important and widely used. Most things - including human beings - can much more easily be thought of as having explicit state rather than being functions (e.g. behaviourism (functional approach) vs the cognitive paradigm (the brain has state data that affects decisions)). Even in engineering, state data is widely used to model things, as in control theory with its state space approach. Functional programming by its very nature is static, since it is grounded in the idea of things never changing their values. Imperative programming is more dynamic because it allows things to change value.

Statements such as "more elegant and much easier to program" and "can much more easily be thought of as having..." suggest MostHolyWarsTiedToPsychology.

The AnonymousDonor takes a very philosophical point of view, but the argumentation is quite moot. A dynamic system that changes value in time is semantically equivalent to a static list of the states of the system at different moments of time. ... It's true that behaviourists took a functional approach... However, a state-changing system, while it cannot be modeled as a function from an input stimulus to output reaction, can be modeled as a function from a list (of inputs) to a list (of outputs).
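A tiny Haskell sketch of that modeling trick (names illustrative): a running counter, normally thought of as stateful, expressed as a pure function from the list of inputs to the list of outputs.

 -- A pure function from inputs to outputs that behaves like a stateful
 -- accumulator: each output is the running total so far.
 counter :: [Int] -> [Int]
 counter = go 0
   where
     go _   []       = []
     go acc (x : xs) = let acc' = acc + x in acc' : go acc' xs

 main :: IO ()
 main = print (counter [1, 2, 3, 4])   -- [1,3,6,10]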


I would like to see a proper marriage of the structure of Object Orientation with the data structure and algorithmic manipulation of functional languages.

That's easy. Just have a look at any functional language, you get OO for free. This is how it works (using HaskellLanguage as an example, because that's what I'm most familiar with):

The only way to interact with an object is to send it a message (aka invoke its methods), right? So it doesn't matter if there are any other parts of the object; only the methods count. Therefore an object is the record of its methods. A constructor then is just an ordinary function that builds the record, constructing appropriate methods on the fly. Here are the classic Ellipses and Rectangles:

 data Shape = Shape { area :: Float, draw :: IO () }

 -- draw_rectangle and draw_ellipse (Float -> Float -> IO ()) are assumed to be defined elsewhere
 new_rectangle width height = Shape { area = width * height, draw = draw_rectangle width height }
 new_ellipse   major minor  = Shape { area = major * minor * pi / 4, draw = draw_ellipse major minor }
Done! True objects in a functional language. You can also have local state (just allocate the mutable cells in the constructor, your methods end up in the ST monad) and you can have inheritance using extensible records (using HList or Trex or whatever another FP language provides). Single inheritance can even be encoded using parametric polymorphism.
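A hedged sketch of the local-state remark, using IORef in IO for brevity (the ST version mentioned above works the same way); the names are illustrative.

 import Data.IORef

 -- An "object" whose state is a mutable cell allocated by its constructor;
 -- the methods are closures over that cell.
 data Counter = Counter { incr :: IO (), current :: IO Int }

 newCounter :: IO Counter
 newCounter = do
   ref <- newIORef 0
   return (Counter { incr    = modifyIORef ref (+ 1)
                   , current = readIORef ref })

 main :: IO ()
 main = do
   c <- newCounter
   incr c
   incr c
   current c >>= print   -- 2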

In summary: first class functions give you methods for free. Therefore, FP subsumes OO.

This is just like saying that C supports object oriented programming as well as C++ does. There has always been a convention in C of using structs with function pointers to emulate objects, but that convention, along with cute examples like this Haskell code, always falls apart when you try to use it for anything other than a toy example. Each person who does OOP this way in C or Haskell is going to re-invent it in a slightly different way, so nobody's code will interoperate with anybody else's code. Even if you went all out and formalized some particular object-functional mapping, and developed a library to help with its use, it still doesn't mean that Haskell supports OOP, any more than having an MP3 library means that Haskell supports MP3. -- MichaelSparks

That's true to some extent, but doesn't matter all that much. Functional programs compose at the abstraction level of functions, even if they contain some OO patterns. Objects are rare in functional programs, so no "object standard" is needed.

The point of this embedding is just to show that it can be done. It's also surprisingly simple: just put functions into a record. You cannot embed FP into any object oriented language the same way. You don't get first class functions and closures, unless you already had them before (in the form of Smalltalk's blocks). Therefore, FP gives me all of OO and then some.

Embedding is the closest to an objective argument we can hope to achieve. The retort "but that's not the same!!" is just politics.

Well of course objects are rare in functional programs, for the same reason they are rare in C programs. The language doesn't support them directly. It's really just that simple. You don't seem to appreciate the difference between a language supporting a feature directly, and the ability of a programmer to pretend that it does via some convention or library. You claim that embedding is the closest to an objective argument we can hope to achieve. If that is your standard, then I claim that I can implement any FP feature in any turing-complete language, using some form of embedding. Do you disagree with that claim? If not, then do you agree that the only argument is over the level of complexity of the mapping? -- MichaelSparks

Please show me how to get lexical closures in C++ without writing a whole interpreter. See the difference? [... try boost lambdas] ( BoostLambdaLibrary, BoostFusion, BoostPhoenixLibrary )

Also, please tell me how a collection of methods is not an object. If the record of functions is an object, then HaskellLanguage directly supports objects. If not, there has to be a difference. What is different?

And lastly, objects are rare in FP because they are inconvenient. Not because of language limitations, but because of their nature. The FP'ers argument is simply: I already have first class functions. There is no point sticking them into records because some other sect says so!

I haven't claimed that I could use lexical closures in C++ with any sort of simplicity or elegance. You're making judgement calls about what is and is not a reasonable solution. You refer to this distinction as the "difference" in your sentence "See the difference?". What I would like you to acknowledge and/or understand is that there is no clear cut difference. A language either has direct and explicit support for some construct, or it does not. If it does not, then you can pretend it does by using some form of mapping. Your way of representing objects is a mapping. If I were to use lexical closures in C++ by some crazy convoluted scheme, then it would also be a mapping. Put aside the complexity of the mappings. They are both mappings. I would like you to either acknowledge or argue that point before I continue to build my argument. -- MichaelSparks


I have been wrestling over the difference between OO and FP for two days now (from F-Bounded polymorphism to Bertrand Meyer's claim that OO subsumes FP -- the traditional claim that FP subsumes OO being pretty much taken for granted in academia). One thing I will say is this though. The differences between modeling with OO and FP are isomorphic with the differences between math and the world. Between a priori and a posteriori. Even between analytic and continental philosophy. One side approaches from the mathematical side of relationships and mappings, another side approaches from the physical side of concrete objects in space. --TomHarada


One thing I've noticed, now that functional programming is (somewhat) more popular given the release by MS of F#, from which Linq and generics originally evolved, is that OO and FP tend to attract different personality types, cognitively speaking. Some people are very concrete, and creating objects that manage state fits very well with how they think about almost everything. Things are what they are, and just because someone decides to color her hair blue, she's still the same person, just a different state/property. Other people think more adeptly in terms of functions, where their individual pieces of thought regard everything as a function. To such people, using the hair color example again, the focus isn't on the fact that the person is the same person, but on the process that makes the hair blue: you can take any person (argument), apply the function (make hair blue), and end up with a person with blue hair. Which person (argument) doesn't matter, only whether they are part of the domain of the function. Such people aren't ignorant of the change of state, per se, but it isn't how they think about the world.

To each type of person, different truths are "obvious", and communicating between paradigms can be difficult. Hence the intriguing discussion on this page.


Clearly, since most languages are TuringComplete, they can express exactly the same semantics. They do so by providing some form of abstraction. These abstractions are (generally speaking) leaky. They don't fit all problems equally well. Some concepts are easy to explain in English but not Finnish and vice versa. That said, the OP is clearly a pointless provocation, because not all problems are easily solved functionally and not all problems are easily solved with objects/classes. Some problems are best solved with other paradigms altogether. Stating that your "pet" paradigm is always best just marks you as an inexperienced one-trick-pony.

RE: since most languages are TuringComplete, they can express exactly the same semantics.

Wrong. TuringComplete languages can express the same functions, but that is very different from expressing the same semantics.


Statelessness is becoming more and more important for me in my day to day work, because I'm working in a scalable/cloud environment. I need to be able to kill instances at a moment's notice and spin up new environments quickly. State is becoming an external concern, something that I can't tie to my code's deployment environment at all. In this case, thinking "functionally" becomes more natural; state is more obviously side-effect oriented, and I actually began to understand the concept of a Monad because of the necessary treatment of infrastructural configuration in this ephemeral environment.


See also ObjectFunctionalImplementation.


CategoryFunctionalProgramming CategoryObjectFunctionalPatterns FunctionalModeling ParadigmPissingMatch

