Closures And Objects Are Equivalent

The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."

Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.

On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.

From: http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html


Poll: Are objects and closures equivalent?

Ayes:

Nays:


It is well-known that objects and closures over lexical scopes (LexicalClosures) are equivalent things; which one you prefer is a matter of taste and background. (Actors are also equivalent to both; see ActorsModel.) Those with a strong functional background (in particular, Lisp/Scheme) tend to prefer closures; and many object systems in LispLanguage and SchemeLanguage (but not including CommonLispObjectSystem, which uses method polymorphism on pure data objects) use closures to implement objects.

StructureAndInterpretationOfComputerPrograms gives a [http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-20.html#%_sec_3.1 nice sample for this type of object system].

It is not well-known that objects and closures are equivalent. On the contrary, what is "well-known" is what LucaCardelli presents in the 18th chapter of TheoryOfObjects: that encoding object-oriented features in lambda calculi is unwieldy and cumbersome, especially when it comes to typed versions of those calculi. This page is further evidence, as it presents only objects with one method and no inheritance.

Using PatternMatching, supporting more than one method is no more difficult than supporting a single method. DelegationIsInheritance, and delegation can be supported straightforwardly by adding a self parameter to calls.
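
To make the "adding a self parameter" remark concrete, here is a minimal sketch in plain Scheme; the names send, make-point and make-labelled-point are invented for the illustration and do not come from any particular object system. Each method receives the object actually in use as self, so a delegating closure can override some messages and forward the rest:

 ;; send passes the object to itself, so the method that finally runs can
 ;; call back into the (possibly overriding) object it belongs to.
 (define (send obj msg . args)
   (apply obj obj msg args))

 ;; A "class" is just a constructor returning a dispatching closure.
 (define (make-point x y)
   (lambda (self msg . args)
     (case msg
       ((x) x)
       ((y) y)
       ((describe) (list 'point (send self 'x) (send self 'y)))
       (else (error "point: unknown message" msg)))))

 ;; "Inheritance" by delegation: handle what we override, forward the rest.
 (define (make-labelled-point label x y)
   (let ((parent (make-point x y)))
     (lambda (self msg . args)
       (case msg
         ((label) label)
         ((describe) (list label (send self 'x) (send self 'y)))
         (else (apply parent self msg args))))))

 (define p (make-labelled-point "origin" 0 0))
 (send p 'x)          ; => 0, answered by the parent closure
 (send p 'describe)   ; => ("origin" 0 0), the overriding method

Because send always passes the original object along, any method the parent runs on self will see the child's overrides; that is the late binding that DelegationIsInheritance relies on.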

In general, emulating object features in a statically typed functional language with closures (and even with more advanced features like modules), for example StandardMl?, is at least cumbersome if not impossible. There are object systems written for Lisp and Scheme, some of them using macros (CLOS), some in functional style without macros; however, I don't think there's any particular object system with all the major features (polymorphism, inheritance, etc.) written without macros. Nor should there be, since object systems with macros look easier to write and comprehend.

What exactly is the problem with macros? They are used to give you a nicer syntax, which is a Good Idea IMHO.

[Well, it is well known that an object is one particular form of a closure, and that a closure can be used as an object, which it often is; that is what the above paragraph is trying to convey. Sure, some languages give us a nice clean little package that wraps up polymorphism and inheritance into a thing called a class, but one doesn't need either polymorphism or inheritance to have an object. Polymorphism and inheritance are used to specialize objects; closures are used to create anonymous one-shot objects that don't need specialization; they are complementary abstractions. You are setting up a straw man here: polymorphism and inheritance are not required to build objects. They may be required to call the result object-oriented, but not object-based. No claim was made that closures = OO programming, just that objects and closures are equivalent, which is true.]

[You can have both; don't be so closed-minded. Objects wrap up polymorphism and inheritance in an easy-to-use fashion that would be more verbose with closures; however, they don't give you anything special, just a shorter syntax. Objects are necessary for domain entities and such, well-defined objects so to speak. Closures are necessary for one-shot objects that aren't well defined and are different just about every time, but are essential nonetheless. Polymorphism and inheritance are not essential to build an object, that is, if you define an object as a thing with state and identity, which closures have.]

More precisely, equivalence means that two concepts can simulate each other without requiring a global transformation on programs. Otherwise we get caught in the TuringTrap.

The fact that you can use one to implement the other does not an equivalence make. It's the same good old argument that says Java and assembly are not equivalent programming languages, in spite of the fact that all computations that can be implemented in one can be implemented in the other, and that you can use Java to implement an assembly interpreter or use assembly to implement a Java (bytecode) interpreter.

The page as it stands now gives false advice, in the sense that if the claim were true, then it would be easier to program in ObjectOriented style in a language like StandardMl?, but it isn't. Functional programming does not subsume object-oriented programming, nor the other way around. It is beneficial for most programmers to know both styles.

Many modern OO languages, however, don't have full closures. Some languages have something similar (Java InnerClasses); others don't bother at all. It has been argued (GlobalVariablesConsideredHarmful) that full lexical closures aren't always a good thing - they dramatically increase coupling between modules (as the closure has hooks into the guts of the enclosing scope), and they complicate a language's implementation. (See also WhatIsClosure for more on that).

For many of the purposes (not all) for which a Lisp user might use a closure, an OO programmer would use an object - and an object is often a perfectly good substitute.

[[Wait. Hold on. What exactly here makes a "closure" have to be functional? You can have a "closure" also be an object. Java's anonymous inner classes close over the final variables in scope. EeLanguage's object definitions are closures, which allows it to avoid the concept of a "class" without using prototypes and yet still having a central module used to spawn new objects of a given type. Yet, neither of those are exactly "lambdas". Yes, they could arguably be defined as HashMaps of lambdas, but the point still stands: closures can be defined as anything which can capture lexical variables and take them into other stack frames. They don't actually have to be lambdas.

Of course, most people think of them as lambdas. I know what a "closure" implies. Still, FoodForThought?.]]


[Moved from LexicalScoping]

In many cases, a bunch of related functions defined within a single outer function (and accessing - incestuously - its variables) are parts of a single concept - the functions are all tightly coupled, and using explicit parameters would become a nightmare (giving you a bunch of functions with dozens of parameters to pass around). Whenever I see this, I see an object waiting to be born. Refactor the whole mess into a class. If you are in a non-OO language, find the union of all the shared state and make it a struct/record - and pass that around (by reference). In both cases, you get the ability to make lots of instances of your abstraction (and to access the inner methods without calling the outermost method first). In the OO case, you get the additional benefits of OO - PolyMorphism and such.

This is (or is very similar to) a MethodObject.

Umm, this is not really the common case for closures. You see examples like that, but they're largely academic. Indeed, in 90% of the cases where you'd see that pattern, it'd be wiser to use an object in Lisp, especially since CLOS provides such an incredible system. Most of the time we're using closures in lambda-passing functions, a style far superior to Java inner classes or C++ functors. As an example:

  (defun bizzare-example (list)
    (let ((ctr 0))
      (mapc (lambda (x) (incf ctr)) list)
      ctr))

While trivial and academic, it demonstrates why Lisp and Scheme programmers like function-passing style with closures so much. The standard combinators like the mapping functions and sorting functions can take the surrounding context into account. Also, closures allow for awesome control-flow constructs like the Lisp condition handling system. Heck, people have made a limited kind of continuation just with "goto" and closures (but I am not saying that I don't envy Scheme's real continuations).

Closures and objects are closely related, but closures are more general. You can't really use objects to implement closures unless your object system is actually doing closures under the covers. You can implement objects with closures by linking methods across a set of variables (note that in Lisp this practice is considered archaic; generic functions are used instead).

I was unclear. You can make objects (without any kind of dispatch) by creating a closure over a set of variables (the data portion) and defining a list of lambdas (the code portion) to create rudimentary objects. Adding inheritance and polymorphism from there really isn't that hard. As I said, though, these kinds of object systems are unpopular in Lisp and Scheme today because of the prevailing style in Lisp. Also, some people claim the message-passing approach (which people also note is "specialization on only one argument"), on which most such systems are based, is inherently inferior to the pure multimethods approach that CLOS favors. I personally am on the fence on this issue.

My point though, is that closures and objects aren't really equivalent. Objects are a subset of closures, so they carry some similar traits.


Note: objects and closures over lexical scopes are equivalent. Idiomatic style of programming in a language that allows nested function definitions takes advantage of this fact, rather than being confounded by it. E.g. a function in the SchemeLanguage can return one or more local functions that can use variables in the defining scope, and that are therefore fully encapsulated.

 (define (make-counter count)
   (define (inc)
     (define old-count count)
     (set! count (+ 1 count))
     old-count)
   inc)

 (define the-counter (make-counter 1))
 (the-counter)   ==> returns 1
 (the-counter)   ==> returns 2
 ... etc. ...


I'm not sure if I'm grokking the level of equivalence of the above example. It shows me how a single-method object is equivalent to a closure, but I'm not seeing how this extends to a multiple-method object (does it?). Could somebody provide a translation of the following into closures:

  class Multiplier
    def initialize(a, b)
      @a, @b = a, b
    end
    def addA(value)
      @a = @a + value
    end
    def addB(value)
      @b = @b + value
    end
    def result
      @a * @b
    end
  end

  mult = Multiplier.new(5, 4)
  puts mult.result   # returns 5 * 4 = 20
  mult.addA(3)
  puts mult.result   # returns 8 * 4 = 32
  mult.addB(5)
  puts mult.result   # returns 8 * 9 = 72

My scheme skills are a bit weak, but try this:
  (define (make-multiplier a b)
    (lambda (method-name)
      (case method-name
        ((add-a)
         (lambda (value)
           (set! a (+ a value))))
        ((add-b)
         (lambda (value)
           (set! b (+ b value))))
        ((result)
         (lambda ()
           (* a b))))))

 (define mult (make-multiplier 5 4))
 (print ((mult 'result)))   ; prints 20
 ((mult 'add-a) 3)
 (print ((mult 'result)))   ; prints 32
 ((mult 'add-b) 5)
 (print ((mult 'result)))   ; prints 72

make-multiplier returns a function that takes a method selector, and then returns the appropriate method. The method can then be called normally.

This is probably not idiomatic Scheme though, which would look more like this:

  (define (make-multiplier a b)
    (list
      (lambda (value)
        (set! a (+ a value)))
      (lambda (value)
        (set! b (+ b value)))
      (lambda ()
        (* a b))))

(define (add-a mult) (car mult))

(define (add-b mult) (cadr mult))

(define (result mult) (caddr mult))

 (define mult (make-multiplier 5 4))
 (print ((result mult)))
 ((add-a mult) 3)
 (print ((result mult)))
 ((add-b mult) 5)
 (print ((result mult)))

This time, make-multiplier returns a list of 3 functions: the methods of the class. A production Scheme system would probably use a hashtable, alist, or record - some data structure where the methods are not positionally dependent. In any case, there are accessor functions that return the correct lambda, so that the client programmer doesn't need to worry about the actual data structure returned.

You can get polymorphism too, by replacing the functions within the data structure returned by the constructor, but subclass methods won't be able to access superclass data unless the subclass is lexically enclosed by the superclass. This is annoying, because any moderately deep inheritance hierarchy becomes virtually impossible to read. Granted, most moderately deep inheritance hierarchies in traditional languages aren't very easy to follow either, if the subclasses use large amounts of data in the superclass.
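
For the curious, here is one possible sketch of "replacing the functions within the data structure": an association list instead of a positional list, with the helper names method and make-loud-multiplier invented for this example. Note how the overriding method can reach a and b only through the parent's own closure, which is exactly the limitation described above:

 ;; An object is an alist of (name . procedure); method looks one up by name.
 (define (method obj name)
   (cdr (assq name obj)))

 (define (make-multiplier a b)
   (list
     (cons 'add-a  (lambda (value) (set! a (+ a value))))
     (cons 'result (lambda () (* a b)))))

 ;; "Subclass" by replacement: shadow the parent's 'result with a new one.
 ;; The new method cannot see a and b directly; it can only call the
 ;; captured parent method, because a and b are closed over elsewhere.
 (define (make-loud-multiplier a b)
   (let* ((parent (make-multiplier a b))
          (parent-result (method parent 'result)))
     (cons (cons 'result
                 (lambda ()
                   (display "multiplying...") (newline)
                   (parent-result)))
           parent)))

 (define m (make-loud-multiplier 5 4))
 ((method m 'add-a) 3)
 ((method m 'result))   ; prints "multiplying..." then returns 32

If the overriding method needed to reach a and b directly, it would have to be written inside make-multiplier's lexical scope, which is where the readability complaint above comes from.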

An approach which might match the original request better:

 (define (multiplier . x)
   (case (car x)
     ('new (let ((a (cadr x))
                 (b (caddr x)))
             (lambda message
               (case (car message)
                 ('add-A (set! a (+ a (cadr message))))
                 ('add-B (set! b (+ b (cadr message))))
                 ('result (* a b))
                 (else (error "Not a valid instance method of class multiplier."))))))
     (else (error "Not a valid class method of class multiplier."))))

 (define mult (multiplier 'new 5 4))
 (display (mult 'result)) (newline)
 (mult 'add-A 3)
 (display (mult 'result)) (newline)
 (mult 'add-B 5)
 (display (mult 'result))
This code is a variant of the closure-object idiom demonstrated in StructureAndInterpretationOfComputerPrograms, chapter 3, which is widely known among Schemers; it has been tested under Dr Scheme and works as designed. While objects could be built manually this way, it would make more sense to define a set of forms to automatically generate the class form, which IIUC is how SOS works. HTH. - JayOsako


It's been claimed, on SmalltalkInsteadOfPython, that smalltalk is preferable because it has real closures, where Python only has one-shot lambda functions. But if you want to encapsulate a structured state, why not use an object? Perl gives you closures ... but the only use I've seen for them is to hide the data members of an object ... what else can you do with 'em?

Well, for one you can write an object system with them. This is a popular exercise in Scheme and at least one of the object systems in SLIB requires no macros.

A class is simply one form of a more general concept, a closure. In other words, classes are closures... with names, but not all closures need names. The use of closures goes beyond the class concept, and is more general and more powerful. Object systems are written with closures. Closures allow one-shot specialized objects, and are far more general than classes.

In Smalltalk they are an enormous convenience. You could create a class with a value method that does just that one computation, but this is a lot of work for little or no gain, and you can't do ifTrue: and while: and such very easily at all. In Python they are unnecessary but still an enormous convenience, and their lack is unfortunate.

Can you give an ur-python snippet that would illustrate what could be done with them if they were in the language?

If closures were added to Python tomorrow, I could do this:

 class Foo:
     def iterator(self):
         i = 0
         def temp():
             value = self[i]
             i = i + 1
             return value
         return Iterator(temp)

for node in Foo().iterator(): node.process()

This is a very easy xrange equivalent, if Iterator is a simple class that just handles __getitem__ and the usual Python iteration protocol. It is harder to do in stock Python and as a result people don't do it; I know of no equivalent in the standard libraries save xrange.

Maybe the above message is old, but closures exist in Python today. It's just that the example given above doesn't work, because Python doesn't distinguish between "let" and "setf": it creates a new "let" binding for i within temp, making it different from the i in iterator. The way around this is to uglify the code slightly with an array, which changes "i = i + 1" from an assignment into a modification of an object.

 class Foo:
     def iterator(self):
         i = [0]
         def temp():
             value = self[i[0]]
             i[0] += 1
             return value
         return Iterator(temp)

for node in Foo().iterator(): node.process()

You don't need closures for this:

 def my_xrange(start, stop=None, step=1):
     if stop is None:
         start, stop = 0, start
     while start < stop:
         yield start
         start += step

for i in my_xrange(7): print i

for i in my_xrange(4, 100, 3): print i

Locking can also be implemented without closures:

 >>> def Lock(func):
 ...     print "Locking..."       # your self.lock()
 ...     try:
 ...         return func()
 ...     finally:
 ...         print "Unlocking!"   # your self.unlock()

>>> def func(): print "... Now in func!"

 >>> Lock(func)
 Locking...
 ... Now in func!
 Unlocking!


Though this still doesn't seem like something I'd use, I see no difficulty. It can be done readily with python's builtin compile and eval:

 class Closure:
     def __init__(self, statements, locals=None):
         import copy
         self.stuff = compile(statements, "<string>", "exec")
         self.locals = copy.deepcopy(locals)
         self.globals = copy.deepcopy(globals())
     def __call__(self):
         return eval(self.stuff, self.locals, self.globals)

 Lock.with_lock(Closure(""" statement1 ... statementN """, locals()))
You might leave locals() out for the most part. I haven't tried to make this work, but it seems straightforward. Do Smalltalk closures do something more magical than this?

Yes. For one thing, they don't make deep copies of the variables referred to:

 | i sum |
 i := 0.  sum := 0.
 aCollection do: [ :each |
     i := i + 1.
     sum := sum + each
 ].

or, in the hypothetical Python-with-closures:

 i, sum = 0, 0
 def act(number):
     i += 1
     sum += number
 aList.do(act)

actually works. For another thing they're much more efficient and transparent; they only refer to some locals, and you needn't do the full setup for so called downward-funargs, etc. The current Python Way of doing this is to create an object:

 class Counter:
     def __init__(self):
         self.i, self.sum = 0, 0
     def actOn(self, number):
         self.i += 1
         self.sum += number

 c = Counter()
 aList.do(c.actOn)

While certainly feasible this is by no stretch of the imagination convenient, particularly for short iteration methods.


Okay, I tried the Closure class above and had to ditch the copying because modules wouldn't let themselves be copied. No matter, any relevant state can be stuck inside the statements string.

 class Closure:
     def __init__(self, statements):
         self.stuff = compile(statements, "<string>", "exec")
     def __call__(self, *arg, **kword):
         return eval(self.stuff, locals(), globals())
So then you can do all this with map:
 sum, prod = 0, 1
 act = Closure("""
 sum = sum + arg[0]
 prod = prod * arg[0]
 """)
 map(act, aList)
The trick is that arg and kword get stuck into locals(). It seems possible to go into all kinds of conniptions over this - see http://starship.python.net/crew/mwh/bch/module-bytecodehacks.closure.html - but I'm still not getting what smalltalk does better than the little idiom here.

Well, Smalltalk works for one :) But try this:

 def foo(list):
     sum, prod = 0, 1
     act = Closure("sum = sum + arg[0]; prod = prod * arg[0]")
     map(act, list)
     return sum, prod

Although the original now works, you're right, this foo(list) doesn't work. sum and prod were going into globals, not locals. Unfortunately (?) locals() returns a read-only dictionary, which means Closure can't affect it. You can still do it in a horrible way with globals ...
 def foo(list):
     global sum, prod
     sum, prod = 0, 1
     act = Closure("sum = sum + arg[0]; prod = prod * arg[0]")
     map(act, list)
     return sum, prod

Don't lose hope entirely, though. locals() is mutable in JavaPython, so foo(list) should work fine there.


The problems with Closure:

That one's not a big deal once the idiom is established.

So what? CodeUnitTestFirst.

Actually, there is. locals(). To make this look cute we should capture locals() without explicitly passing it in ... maybe we need to choose between the lesser of two evals? Ugh.

Bytecodehacks attempts closures, but has problems; you have to wrap all the local variables in lists if you expect to change them, it's very low level and in fact contradicts the reference, if not the reference implementation, and probably won't work at all for JPython.

Yes, agreed, the bytecodehacks stuff is more a suggestion to GuidoVanRossum and his crew, not intended useful to regular python programmers except under very special circumstances.

Since you can dodge the issue by using an auxiliary object, this is a flaw, not an immense gaping hole, but it's still irritating.


Just curious: using this idiom, can you do a non-local return?

Can you be a little more explicit?

Sure. Non-local return is basically the equivalent of longjmp in C. Take, for example, this Smalltalk block:

[^12]

The semantics of non-local return (the ^ symbol) is that instead of returning from the method the block is evaluated in, it returns from the method the block is defined in.

Here's an example:

BlockExample? class>>foo

Dictionary new at: 12 ifAbsent: [^'no such element'].

The implementation of Dictionary at:ifAbsent: looks like this:

at: key ifAbsent: aBlock
^self contents lookUpValue: key for: self ifAbsent: aBlock

the contents of the dictionary is actually a hash table, and its implementation of #lookUpValue:for:ifAbsent: looks like this:

 lookUpValue: key for: client ifAbsent: aBlock
     | assoc |
     assoc := self at: (self findKeyIndex: key for: client).
     ^assoc == nil
         ifTrue: [aBlock value]
         ifFalse: [assoc value].

The ifTrue: [aBlock value] line will evaluate the block from waaaay back up in BlockExample? class>>foo and cause control to return to the sender of #foo, not the sender of #lookUpValue:for:ifAbsent:.

Can you do this in Python with closures?

The trick above is merely an attempt to do closures by glomming the local and global dictionaries. Stack frames don't seem so easy to pick up ... though maybe rooting around in the IDE traceback could show how to grab them. I seem to recall, however, that reusing them is very unsafe.

So instead you really ought to check out the new StacklessPython with its continuations, which I think are exactly what you're looking for. These should be standard pretty soon (1.6?). Or use python's try/throw exception handling. Does smalltalk have try/throw?

Smalltalk does have exceptions. Smalltalk also has non-local returns. They are useful for different things, and in fact quite a few languages have both: CommonLisp has RETURN-FROM, which uses lexical scoping to decide where to return from, as well as the condition system. It is arguable that non-local returns are easier to understand than using exceptions for non-exceptional purposes, just because both sides must necessarily be in the same textual location.
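
Since Scheme's real continuations were mentioned earlier on this page, here is a rough sketch (with the invented names lookup and foo) of how the at:ifAbsent: example translates into the call/cc idiom; it is just an illustration of the technique, not a claim about any particular Scheme library:

 ;; lookup calls the absent-thunk if the key is missing, just as at:ifAbsent:
 ;; does; it neither knows nor cares that the thunk may never return to it.
 (define (lookup key alist if-absent)
   (let ((entry (assq key alist)))
     (if entry (cdr entry) (if-absent))))

 ;; foo grabs its own return point with call/cc; the thunk performs a
 ;; non-local return from foo, skipping the rest of foo's body entirely.
 (define (foo)
   (call-with-current-continuation
     (lambda (return)
       (lookup 'missing '((a . 1) (b . 2))
               (lambda () (return "no such element")))
       'never-reached)))

 (foo)   ; => "no such element"

This has the same shape as the Smalltalk version: the code that invokes the block (lookup) needs no knowledge of exceptions or return points; the block simply carries foo's continuation with it.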


Combined with map, closures would provide a powerful tool that augments but doesn't compete with objects. The Python Closure class we got above is syntactically and semantically crippled by the current python implementation. The Continuations in StacklessPython, however, seem like they could provide the equivalent of Smalltalk closures. They already do CoRoutines and much else.

Pardon a doubting Thomas, but how does StacklessPython help the scope issues?

Good question. Dang, now I gotta go actually understand these continuation things ...

I wrote what I understand of ContinuationsAndCoroutines on the StacklessPython page -- ShaeErisson

Good job, Shae. Now to use a continuation as a closure we need a way to get data into it ... if those CoRoutines could message each other maybe ... ?


Tcl is actually quite suitable for Smalltalk-style blocks; uplevel and upvar can be used to get the scoping right, and non-local returns work as well. (This makes a certain amount of sense; Tcl and Smalltalk both use unevaluated blocks to generate control structures.) I don't believe Python is as plastic, but some of the examples above are just plain silly. [I am only an egg. -- PM] Here are some rewrites in a style that doesn't go against the GrainOfTheLanguage:

 class Foo:
     def __init__(self, seq):
         self.seq = seq

 foo = Foo("asdf")
 for c in foo.seq: print c,
 print ''

Note that the above code can be more simply written as:
 for c in "asdf":
     print c,
 print

 def act(state, number):
     (i, mysum) = state
     return (i + 1, mysum + number)

 aList = range(4)
 print reduce(act, aList, (0, 0))

or, ideally, you would go for absolute simplicity:
 mylist = range(4)
 print (len(mylist), sum(mylist))


Opinion: Closure vs. Class is not an issue for large code blocks; the size/effort difference in defining them pales in comparison to what you're really trying to accomplish. If you need a process/algorithm object for some nontrivial purpose, it really won't hurt to go ahead and write it. It goes with the grain of, e.g., PythonLanguage.

On the other hand, for trivial bits you have LambdaForms?, ListComprehensions, and functions/methods as first-class, accessible objects that you can pass around and call directly.

Lastly, you should implement the iterator protocol for interesting collections.

Here's the classic example of sum-and-product, with an internal iterator:

 def InternalIterator(func, state):
     for x in range(1, 7):
         state = func(state, x)
     return state

(sum, prod) = InternalIterator((lambda (s, p), x: (s+x, p*x)), (0, 1))

Tested on python 2.2 by IanKjos


BlocksInJava shows how one can fake closures in a language that doesn't have them. The idea is to construct an "expression object" that acts like the desired closure. Effectively, the constructors for expression objects form a small sub-language to write closures in. In principle, a pre-processor to the real compiler could translate "real" closures into such an expression sub-language. This shows how closures can be implemented without back-end support.

-- StephanHouben


The PizzaLanguage extends JavaLanguage by adding FirstClass functions, and so encourages a functional style of programming. Much more so than the anonymous classes of Java.

I just don't understand why first-class functions have to be unrelated to objects. A function is just a stateless ValueObject, after all. Why would melding functions and objects be problematic? I don't see why they could not participate in inheritance, etc.

-- MichaelFeathers (or am I missing something?)

You're absolutely right. Furthermore, if a function is a closure, it may have a state. Closures thus completely implement objects, things with an identity and state that can reply to messages. In 1992 KenDickey wrote a fascinating paper SchemingWithObjects, which starts as follows:

There is a saying - attributed to NormanAdams? - that 'Objects are a poor man's closures.' In this article we discuss what closures are and how objects and closures are related, show code samples to make these abstract ideas concrete, and implement a Scheme Object System which solves the problems we uncover along the way.

One might think that ReferentialTransparency (e.g., lack of mutation) may preclude objects with a mutable state. Fortunately, this is not the case. For one thing, we can use general, proven techniques - monads and unique types. There are simpler approaches, for example OlegKiselyov's purely functional object-oriented system (http://pobox.com/~oleg/ftp/Scheme/index.html#pure-oo). It shows a fully referentially transparent OO system: objects with a distinct state and identity, inheritance and polymorphism -- and not a single assignment. This system shares the drawbacks of OOP as well.
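
To give a taste of how state without assignment can work, here is a toy sketch in Scheme (only an illustration of the idea, not OlegKiselyov's actual system): a "mutator" returns a fresh closure carrying the new state, so earlier versions of the object remain perfectly good values.

 ;; A counter with no set!: "incrementing" returns a brand-new closure
 ;; holding the new count, instead of assigning to the old one.
 (define (make-pure-counter count)
   (lambda (msg)
     (case msg
       ((value) count)
       ((inc)   (make-pure-counter (+ count 1)))
       (else (error "counter: unknown message" msg)))))

 (define c0 (make-pure-counter 0))
 (define c2 ((c0 'inc) 'inc))
 (c2 'value)   ; => 2
 (c0 'value)   ; => 0, untouched

The basic trick is threading the state through return values rather than mutating it; the system linked above handles identity, inheritance and polymorphism on top of the same idea.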


I have used objects to implement closures in languages where they were unavailable. I have used closures to implement objects in languages where they were unavailable. QED. -- BottomMind


CategoryClosure

