Greenspuns Tenth Rule Of Programming

This is a humorous observation once made by PhilipGreenspun:

Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of CommonLisp.

http://philip.greenspun.com/research/

RobertMorris?'s addendum: "Including Common Lisp."

... except if you use a well-written Lisp interpreter library :)

Something like the "CommonLispCredo?" comes to mind here: "All programs have been written, and that program is Lisp. With perfection already attained, implementation is left to the ignorant who do not already know the truth of this." But see also SocialProblemsOfLisp.

More general variation: "Every sufficiently complex application/language/tool will either have to use Lisp or reinvent it the hard way."


What this refers to is that programmers write code to make up for the lack of useful features in their programming language. CommonLisp has an amazingly unified set of features that cover most of the bases. It seems that whenever a compromise could have been made between programmer convenience and implementor convenience, programmer convenience won. Throughout the language, ItJustWorks the way it should, almost always. Objects endure as long as they can still be referenced. Values carry their type with them in case you need it. Methods are dispatched on every parameter. The syntax can be extended arbitrarily if you need it. Execution can be restarted after a condition is handled. And so on and so on.

When features are missing from a programming language, programmers re-invent them for themselves. But these re-inventions are secondary to their real goals, and eat into their deadlines. Moreover, they are performed in isolation. So the quality of the results is no match for real programming language features that are brewed for decades by people dedicated to making a great programming language, and by developers who provide highly optimized implementations of ever-increasing maturity. Compared to the mature implementations of a real language, the in-program imitations are indeed "ad-hoc, informally-specified, bug-ridden and slow".

(Contrast the above with the argument from RubyOnRails developer DavidHeinemeierHansson that frameworks need to be extracted, not created in a vacuum - otherwise, you end up with an overcomplicated, super-general, abstract-to-the-moon monstrosity. Indeed, many framework developers argue this.)

I don't see a contrast, I see agreement. I think Hansson is clearly correct; one famous example is VRML, which was created in a vacuum, and an academic paper eventually contrasted that with the 3D infrastructure of Quake, which was very successful in achieving sharply delineated goals, unlike VRML.

Common Lisp itself was not created in a vacuum; its features were each extracted from existing solutions, with quite a bit of political battling when a choice between conflicting solutions needed to be made. It suffers a little from kitchen-sink syndrome in sometimes adopting multiple prior solutions rather than picking just one (e.g. in its model of filesystems, much of which unnecessarily addresses now-extinct OSes). On the other hand, some other things turned into a clever meta-synthesis of previously competing solutions, such as the CLOS metaobject protocol.

Perhaps you meant something regarding the lack of a GUI framework in Common Lisp? That's not usually regarded as a "programming language feature", though, which is what is being discussed here. -- DougMerritt

Do you think Lisp's propensity to promote patterns to language features is aided (enabled? encouraged?) by its ... um ... oh ... what's that word ... homoiconicity? -- EricHodges

Oh no, not the H word again! :-) In a way, no doubt it does in some cases, since it's much less frequent to need to change the compiler/interpreter than it tends to be for other languages. This is tightly interrelated with the use of Lisp macros to implement such things, so it's not just pure, uncut "H" by itself. -- DougMerritt


There is a similar RelationalWeenie version that goes: "Every sufficiently complex application will either have to use a database or reinvent one." I forgot what it is called. Tentatively called GreencoddsTenthRuleOfProgramming.


With apologies to all the SmugLispWeenies... perhaps the law should be refactored? To:

"Any sufficiently complicated program written in a low-level language contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of some higher-level language". Often times, CommonLisp is not the target, and the HalfAssedMetaLanguage (or meta-programming facility implemented) is loosely based on some other language.

It's a quote, you can't refactor it.

[To some extent, you can.] I agree. The problem is when you pass on the altered quote without the context (in which case trimming the quote is less useful), such as what happened to MurphysLaw. It got broadened and diluted from a specific comment (with the overtone: PAY ATTENTION DAMN IT) into general pessimism.

No you can't. Greenspun said what he said, you can't change it or it won't be Greenspun's 10th law.

[Indeed, but it wouldn't be the first time someone altered a "law" quote because they liked their version better. A perfect example is SturgeonsLaw... he never said anything was "crap"].

[OK, we'll call it something else then... but the observation stands]

He said Lisp because he understood that no matter the language, the best you could hope for would be to re-implement Lisp, and he said that because he understands that there isn't a more powerful language.

[In terms of sheer number of features, perhaps not. OTOH, for many projects, a far simpler language than CommonLisp suffices - there are many of these.]

Like what? Almost all languages are far more complex than Lisp. Don't confuse Lisp's libraries with the language. Lisp and Smalltalk are two of the simplest languages around. Java, Perl, C++, C, C#, etc. are far more complex.

[The same can be said about Java, C++, C: don't confuse the libraries with the language. Java has a small number of keywords - and a huge standard library.]

No, it can't; that statement is absurd. Java has 64 reserved words, C++ 48, C 28, Smalltalk 6. Java is quite clearly very complex in comparison. Smalltalk and Lisp are small languages that believe in putting stuff in the library, where it can be changed. The others put it in the language, where it can't be. Java, Perl, C++, C, C#, etc. are far more complex.

The count of reserved words in Java is inflated: it includes many that would almost never be used (when would you need volatile for application programming? And who does systems programming in Java?), that are used for thread synchronization which would normally come from a library (like synchronized), or that are reserved specifically to ensure that future versions of Java don't mistakenly implement things the designers don't want (goto is reserved even though there is no such construct available, in order to make sure it stays that way).

A more accurate measure of a language's complexity might come from the size of its parsing grammar; Java's has far fewer rules than C++'s (I don't recall the exact numbers, but they're floating around elsewhere on the wiki).

[C has both a simple structure and a relatively small library - too small, many claim. Likewise with C++, most of the interesting stuff is in the library rather than the language proper. However, this distinction between "language" and "library", while interesting to LanguageLawyers?, is of less use to application programmers. And CommonLisp, if you consider all the stuff defined by the standard (including CLOS), is also large. (SchemeLanguage, I'll agree, is simple and small. CommonLisp, not.)]

C++ is a big language for its age, with a moderate-sized standard library. It has, what, 50 or so keywords? Some with overloaded meanings, plus all the idioms that the compiler doesn't enforce but that must be used to avoid problems.

Lisp is the king of all languages, thus if you continue to add features to any language, eventually it will become Lisp, which already has all those features.

[There are many features that aren't in CommonLisp - at least not as defined by the standard. Many implementations of CommonLisp do add these, either as a nonstandard extension to the language or as a library/macro package implemented in the language - but at what point does a feature become part of the language?]

Since in Lisp usage of library and language looks the same, it's really up to you to decide how you see it.

[Hmm. In C/C++, the "special forms" (keywords) have a different syntax than function calls/macro invocations, whereas in Lisp they are the same (EssExpressions), give or take a few random ampersands, quotes, and the like. Does this imply that extensions to Lisp are a holistic part of the language, whereas extensions to C/C++/whatever are bolt-ons? If that is the crux of your argument, it's not very convincing.]

Are you kidding? The difference is huge. In C and C++, the keywords have special syntax, and you can't add new keywords because of this.

[Nor can you add new keywords (more specifically, new special forms) in Lisp. Macros can do wonderful things - but ultimately, they have to boil down to a set of primitives. Anything not expressible in terms of those primitives is not expressible. The fact that macros look like special forms (both are lists) is nice and convenient - but it's syntax.]

You're wrong; you can add new special forms in Lisp - they're called macros. Do you not understand that all languages are just syntax? In fact, that's what languages are in their entirety: syntax for machine code, 0s and 1s, the only true primitives. The fact that macros are Lisp's special forms is everything; it allows you to redefine the language, or extend it. I don't think you quite grasp just how important that is.

[The fact that a C macro call has to be in the form foo(arguments) rather than something else isn't the issue in C/C++. It's that the preprocessor is so limited.]

The CeePreprocessor has indeed been, more or less intentionally, designed to be limited. For example, the CeePreprocessor has an elaborate NameDisabling? scheme to prevent recursion. However, it is probably less limited than a cursory examination might suggest. It is, in fact, as TuringComplete as any finite computer. More on this on the CeePreprocessor page. -- VesaKarvonen

Macro's "are" a language extension. Everything is expressable in terms of primitives, it's about what you choose to express in them. C/C++ macro's don't compare to Lisp macro's because Lisp's macro language is Lisp. C/C++ macro's are much more limited, they just do token replacement.

The language is what it is and you can't really change it. In Lisp, any method or macro you write is identical to the built in features.

[In appearance, yes. But again, it ultimately must be implementable in terms of said built-in features.]

Again, appearance is EVERYTHING; since built-in primitives are capable of expressing any computation, macros let you make any computation into a primitive.

This is where Lisp gets its power: you can extend the language, you can rewrite the language, you can write new languages (see MetaProgramming). C and C++ can't, for that very reason.

[Of course you can extend C/C++. It ain't as convenient to do MetaProgramming, that's for sure. But where do you draw the line between an extension of a language, and a program implemented in a language? Macro vs function? Am I "extending" C++ if I write

 #define triple(x) ((x) * 3)

but merely implementing a generic function if I write

 template <class T> T triple (T x) { return x * 3; }

??? I thought not.]

#define is allowing you to extend the language. The fact that it's a preprocessor in C/C++ is an implementation detail, so I'd say you thought wrong.

Syntax makes all the difference in the world. In C/C++ you write your code in C/C++, in Lisp you write your code in terms of the problem, after extending Lisp to be suitable for that problem.

[More precisely, you write your code in a combination of Lisp and your macro-language. It's not like Lisp goes away after the meta-language is complete, and you then code exclusively to the meta-language. That would be a silly thing to do, after all. And even if it did, you still would be writing EssExpressions, for better or worse.]

And what exactly do you think languages are? A compiler does nothing more than expand into assembly; it's a glorified macro expander. You're totally wrong about that, by the way; it's not a silly thing to do at all. DomainSpecificLanguages are built expressly for that purpose, and you can code entirely in the new meta-language if you so choose. There's also no restriction that says you have to write EssExpressions; that's done for convenience, because they are easy to parse. Macros "are" a part of the language. You seem to miss the fact that much of Lisp itself is implemented with macros. CLOS is implemented with macros. Any macro language you build is as much a part of Lisp as its own macros. C/C++, in combination with a preprocessor, gives you a limited ability to redefine the language, but nowhere near the capability of Lisp's macros. C/C++ macros expand at preprocessing time and are just token substitution on text. Lisp's macros expand at macro-expansion time, a concept C/C++ doesn't even have; that expansion can happen at run time (and reader macros extend the syntax at read time), and Lisp macros aren't string substitutions - they have the full power of Lisp available to them while expanding.

That's why it takes about a tenth of the code to solve the problem in Lisp, or as PaulGraham would say, OnLisp.

One of the examples given on ExampleOfGreenspunsTenthAtWork seems to be "dynamic typing" (really, a tagged variant record) library developed by MicroSoft. It is claimed that this is inspired by CommonLisp. More likely, it's inspired by the numerous OO languages out there that do support some sort of DynamicTyping (even something as simple as DynamicCast in C++) - the presence of such in Lisp was likely not considered.

A similar example is GeeLib? (GLib), one of the low-level libraries included in the Gnome environment. This library (written in CeeLanguage) provides platform-independent APIs for lots of low-level OS services (not abstracted by the C standard library), and more importantly for our discussion, a half-assed implementation of half of...CeePlusPlus. (In other words, a crude OO framework based on PointerCastPolymorphism, held together by a spate of macros which try to do what any C++ compiler will do for you better). No automatic memory management, no generics (templates), no multiple inheritance, let alone things like HigherOrderFunctions - just a quick-and-dirty way to allow applications to roll their own objects. Why was this done? Partly because much of the Gnome crew has a strong dislike for C++ (though coming from a C perspective - the old CeePlusPlusSlowerThanCee myth), partly because when the project got started, C++ still didn't have a stable ABI in many Unix environments (and thus you couldn't easily link together object files produced by different C++ compilers.)
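For readers who haven't seen the technique, here is a minimal sketch of PointerCastPolymorphism in plain C. This is not GLib's actual API - the Shape/Rect names are invented - it just shows the general shape of what such frameworks maintain by hand:

 /* A "base" struct is embedded as the first member of each "derived" struct,
    so a pointer to the derived type can be cast to the base type and
    dispatched through a hand-rolled table of function pointers. */
 #include <stdio.h>
 
 typedef struct Shape Shape;
 struct Shape {
     double (*area)(const Shape *self);    /* crude one-slot vtable */
 };
 
 typedef struct {
     Shape base;                            /* must be first: enables the cast */
     double w, h;
 } Rect;
 
 static double rect_area(const Shape *self) {
     const Rect *r = (const Rect *)self;    /* PointerCastPolymorphism */
     return r->w * r->h;
 }
 
 int main(void) {
     Rect r = { { rect_area }, 3.0, 4.0 };
     Shape *s = (Shape *)&r;                /* treat a Rect as a Shape */
     printf("area = %g\n", s->area(s));     /* "virtual" dispatch by hand */
     return 0;
 }

Everything a C++ compiler would generate and check for you - the vtable, the layout rule, the cast - is maintained by convention here, which is exactly where the "bug-ridden" part of the rule tends to come from.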

In my career, I've also seen lots of attempts to give C++ automatic memory management - usually done through some half-baked ReferenceCounting framework (which requires that ref-counted objects inherit a particular base class, and only be accessed via SmartPointers).

Other than the YahooStores? example (which apparently consisted of an application being ported from Lisp to C++ as a result of a truly horrible management edict - in that case, I'd consider writing a HalfAssedVersionOfLisp? myself that compiled to/was implemented in C++, just to make the PointyHairedBoss happy), I'm not aware of too many examples of software written in SomethingOtherThanLisp? seeking to emulate Lisp. In most cases of this sort of (domain independent) MetaProgramming, a specific feature from a higher-level language is wanted in a lower-level one.

Any aspirations towards Lisp are purely coincidental.

Which actually strengthens Greenspun's point - it's not that everybody out there is consciously copying Lisp; it's that when programmers go about solving problems in the ways that seem natural and right, they end up recreating a bunch of features that are already present in Common Lisp. At some point - it seems - it's worth asking, why are we reimplementing all these features ourselves instead of using a language that has them built in?

[For most of the features that are emulated in such fashion in lower-level languages, there are many other languages that have 'em besides Lisp. Granted, Lisp/Scheme have unrivalled macro capability - but I have yet to encounter a hacked-up, half-assed version of DefineSyntax. The things that are most often hacked into lower-level languages - some form of memory management, dynamic dispatch/typing in languages that don't support 'em natively, etc. - can be found in lots of other languages besides Lisp, including several that are more likely to get management buy-in, and which have an ArmyOfProgrammers available. In other words, for the vast majority of the cases where one might ask "why not use Lisp instead?", one could replace Lisp with Python, Ruby, Perl, TclTk, etc. and come up with a satisfactory solution. Whether one should choose one of these over Lisp depends on the circumstances of the project and its team, I suppose.]

You could also argue that many of these languages (python, ruby, etc.) are following a meta-greenspuns. By trying to address limitations in some other language (usually C/C++, Java) they construct a new language that is *closer to lisp*. Iterate this enough times, and the result is lisp.

No, the result of such iteration is not Lisp. The reason for improving other languages instead of using Lisp is simple: Lisp is a mixture of good ideas and bad ideas, and it's easier to adapt good ideas to some other languages than to fix the old Lisp which resists any incompatible changes. Of course whether a given idea is good or bad is subjective, and thus different people will start from different languages, and some people would even think that Lisp is the ultimate answer to everything. Other people have the right to dislike EssExpressions, Lisp rules of choosing between compile time, load time and run time, conflating non-empty lists with pairs, tying variable rebinding rules (lexical vs. dynamic) to variable scope (global vs. local) and to variable kind (arbitrary value vs. function), five equality functions, case conversion during reading, a separate namespace for directly callable functions, "generic" operations like sequence operations not being specializable generic functions, underspecified representation of characters, or lack of guarantees about tail calls. No matter how many iterations people make, they will not introduce the other half of Lisp. Choosing only the good half, and implementing features which originated at Lisp but are not unique to it, sounds reasonable and not as ironic as the Rule, doesn't it? -- MarcinKowalczyk?

It's controversial whether the result is Lisp if the syntax isn't sexprs, but the rest of that I believe is uncontroversial. The odd thing is that Python and Ruby and others did such a half-assed job of the borrowing from Lisp. They tend to actually be nicer and more well thought out in the areas where they're not borrowing consciously from Lisp. Weird. Unless it's just that they heard of a Lisp feature and borrowed from the rumor, rather than actually being in any sense familiar with it.

[On the other hand, a need for proper macros will drive you to simplify syntax, probably to something morally equivalent to sexprs. One can imagine a language that introduces a 'simplified syntax' for macros, which ends up replacing the 'normal' syntax in idiomatic usage - once programmers of the language realize the benefits of code-as-data. I agree with you about the weirdness in Python and Ruby et al. In Python's case at least, the creator has claimed no influence from Lisp (which is silly; there is at least massive indirect influence); perhaps this is why some things are done poorly.]


By the way, what are Greenspun's Laws, one through nine? (Do they even exist? I can't find any reference to any law other than the tenth...)

They do not exist; calling it the "tenth" rule apparently was a joke. But the original source of the "tenth rule" is also lost, so it's unclear. It does however go back to at least 1996.


[Moved from AreLispersTakingOverThisWiki]

But to suggest that one language or another is technically limiting to the extent that a technical problem can't be solved in that language is unreasonable.

No, it is not. There are things you can do in, say, Lisp, which you simply cannot do in (say) C. By this I mean that to solve those same problems in C, you have to essentially write a Lisp (or equivalently powerful different language) system in it and program in that (a process known as Greenspunning). If you don't believe this, I'm sorry, you simply may never have had to solve those difficult problems.

[Y'all need to be careful about phrasing, or risk an inadvertent argument over mere differences in terminology. Any problem that can be solved in one Turing complete language can also be solved in another. On the other hand, the precise means of solving it may need to change. Lisp has higher-order functions; a solution involving them will need to actually be redesigned when ported to C. ImplementingLisp in C, "Greenspunning", is usually the least effective way to move such algorithms to C, but it has the sole advantage of allowing the original algorithm to remain relatively untouched, important for organizations that are afraid they can't understand the original algorithm, such as one that will remain unnamed but that begins with Y and ends with exclamation and has an H in the middle. :-) ]

Yeah, gotta say, that Wikizen sure sounded like a SmugLispWeenie to me. To suggest that Lisp "or equivalently powerful different language" is the only solution to some software engineering problem tells me that the Lisper is the one who is technically limited. Granted, I've only been doing this stuff for 28 years, but maybe Lisp solves problems I don't know exist. I've never encountered one that made me want to reach for Lisp, though. Guess I've just had all the softballs for almost three decades.

Or perhaps you don't know Lisp well enough to realize how it could have more elegantly solved hard problems that you've worked on. :-) Seriously, I've been at this almost as long as you have (23 years). I'm not a regular Lisp user, but did some extensive work with it in the past, and I'm currently trying to bring my knowledge of it back up to date. I'm driven by the fact that I've recently worked on many problems that were hard (or at least tedious) to solve in languages like C++, and I can see several features in Lisp that would have allowed me to create much more elegant solutions with much less effort. The expressiveness of the language really is amazing, and the language's annoyances are mostly minor and easily overcome. The biggest problems I've been having with it are related to quality or cost of development environments on Windows, which is the main platform that I need it on. -- DanMuller

What most people seem to miss is that any TuringComplete language can solve any problem any other language can. That's not the issue; the issue is how easily your language can do it. Exactly so. The Universal Turing Machine is a beautiful theoretical concept, but how many of us have written anything much above the level of "add two numbers" for a Turing machine? Anyone? I didn't think so. :-) It's clear that being TuringComplete is far from the only factor involved in selecting a programming language. Lisp's expressiveness literally lets you rewrite the language to suit any problem, allowing simple solutions to just about anything with little code in comparison to other languages. Sure, you could do it in C++, but you'd literally be writing a Lisp interpreter first, just to get started on the real problem, because you'd have to reinvent many of Lisp's features just to get started. At some point, it's just easier to say only Lisp can solve a certain problem, because even your C++ solution would be running a slow, hacked-up, custom version of Lisp. Oh, and I'm not a Lisper; I've just studied it enough to understand why Lispers are smug, and rightfully so.

[Agreed to a point, but only to a point. The thing that I think you are missing (in this conversation, that is...doubtless you do realize this operationally in real life) is that sometimes the appropriate thing to do is to create an inelegant solution in the language at hand, not reimplement Lisp in C++ so as to be able to write the elegant solution. The former approach is overall much more elegant than the latter approach, despite needing to bite the bullet on the language's shortcomings. Lispers have reason to be smug, but one should be able to cope with whatever language has been chosen for a project, if one can't influence the LanguageOfChoice. -- dm]

I agree, but having to write 10x the code necessary to solve a problem because of language restrictions tends to make one smug about the elegant language.

Yes, certainly.

And sometimes, the only way to solve the problem is to write the interpreter.

No, phrased badly, Turing-complete, remember? You don't really mean "only way", because that's incorrect. Maybe you mean "best way"? But then you have a half-assed Lisp interpreter in C; why not just switch to Lisp altogether at that point? But besides, I was attempting to point out that sometimes it is best for the project at hand to just go with the inelegant, 10x solution, not to switch to Lisp in either a good or bad/half-assed way.

Turing-complete's degenerate case is writing an interpreter for the language... I would take "The only way" as meaning, the only way to do it with any reasonable amount of code.

You'd be correct; that's what I meant. Writing the interpreter is the 10x way in some cases. TuringEquivalent just means it can be done; the doing may still require the interpreter. Not all problems can be solved natively. TuringEquivalent simply allows any language to be an interpreter for any other, thus allowing all to be equal in power. TuringEquivalent does not mean all languages have the same power natively. C++, for example, may not be able to solve a certain class of problems natively, but can via an interpreter for a custom language. No language, to the best of my knowledge, natively has the power of Lisp. Thus Lisp can natively solve problems that would require other languages to write an interpreter to do the same.

I'd like to note that doing something "natively" is a very fuzzy concept, as is "interpreter" - I noticed the simplest way of implementing brainfuck in ocaml was to build native ocaml functions out of brainfuck code... does that mean that I didn't have to write an interpreter for brainfuck at all? Of course, this does not disprove the existence of problems that might need to implement language Y in some language X. However, whenever I make an abstraction a in language L, I'm implementing a new language L U {a}, in a way. BTW: I think the correct term is TuringEquivalent.

I don't think it's that fuzzy. If you write an abstraction for cons cells and make lists out of them in any language, no matter the technique used, you are writing a Lisp interpreter. I don't know OCaml, so I can't comment on your example, but there are some problems only Lisp or a Lisp interpreter can solve, at least until someone invents something more powerful than Lisp, but that doesn't seem likely. Lisp macros enable stuff that other languages simply can't do without being a Lisp interpreter.

The only reason to call those abstractions lisp is that they were first widely deployed in lisp. But believe me, the idea of consed lists predates Lisp by many years.

Interesting point (although I never heard someone claim that Lisp outright invented them). When were they first used?

Note, I said "the idea of consed lists", which goes back to mathematic theory about the pairing function, and which can be seen as a building block of a computational model in lambda calculus, even though in lambda calculus it's not primitive.

I'm not comfortable with this. By the same logic, I could point out that the concept of pairs has been around since at least the late neolithic, at the dawn of the Sumerian and Egyptian civilizations, and cite that as prior art against Lisp. (Those languages have "dual" as a distinct obligatory grammatical inflection, so there's unambiguous evidence.)

I see. But the relation between (cons, car, cdr) and (\x\y\z(z x y), \p(p\x\y x), \p(p\x\y y)) is behavioral equivalence, nothing like the relation between dual and pairs as a mathematical model.

If the math made it obvious that it should be done in computer languages, then it wouldn't be so nearly-unique a characteristic of Lisp. Algol-68 would have had it, Snobol would have had it, etc.

I think it is always fair to note which computer language was first with a feature, and make it only a footnote to mention prior art in areas other than computer languages. On another page here the other day I gave Fortran credit for infix algebraic arithmetic expressions, contrasting it with e.g. Autocoder. This is worth mentioning even though obviously Fortran didn't invent the notation, math did (slowly, over hundreds of years). -- dm

I agree that deployment is important. But you cannot dismiss lambda calculus as a non-programming language: it has been implemented many, many times, and its formalisation goes back to the early 40's.

If that is your argument for why lambda calculus is a programming language, then (A) you'd have to call the combinatory logic being done in 1930 a programming language too, since it's Turing Complete, and (B) I disagree, and if there isn't more to the argument, we'll just have to agree to disagree. I think it is useful to distinguish math from programming languages, and that you are making an ontological category error. There are areas that are shades of grey, but this doesn't so far look like one to me.

Eep. Maybe you haven't programmed in lambda calculus; I have. But you might have a point that even though both combinatory logic (which, I think, came after lambda calculus, so I must be remembering the dates wrong) and lambda calculus are clearly programming languages, they were not programming language implementations prior to Lisp.

BTW, the SK-calculus (or SWBCI-calculus, whatever) of combinatory logic is clearly a programming language, even though it's not clear if the whole of combinatory logic (with its "is so" primitives) is. UnLambda? is an example of a side-effectful SKI-calculus.

Yeah, I've seen UnLambda?, and Joy, and a couple other SK/SKI/etc thingies. And I'm 98% sure that combinatory logic came before lambda calculus. And I've already said "yes, these things are all Turing Equivalent". The place where we seem to disagree is that you seem to want to call Turing Equivalent mathematical notations "programming languages", and I don't agree. The former is an abstraction for which the latter is a concrete realization.

[A formal language does not need to have a concrete realization in order to be a programming language. Nor does it need to be TuringEquivalent. It only needs to be able to express computations, IOW to have an operational semantics.]

Abstract model of A is not equal to representation of abstract model of A. Say, have you studied Group Representation Theory or Model Theory? Maybe there'd be some interesting points to drag in from that direction.

Now you're making a distinction which does not make sense to me. I'd gladly go with the distinction between a language and its realisation if "realisation" meant implementation. In that way, C would be as abstract as lambda calculus, which you seem not to be comfortable with. But if you think lambda calculus is something "more" abstract than a programming language, you should tell me what you think lambda calculus is. It has a notation, even a standard one; so its programs are symbol strings (or character strings, if you like); it has implementations; it's not the same thing as a Turing machine (but if you strip it of its syntax, it might as well be); just what would be this abstract entity that does not have any specific syntax, is not a programming language, is a TuringEquivalent system, but is not the same as the class of TuringEquivalent languages? Or, in other words, what is the difference that makes, say, Lisp a programming language and lambda calculus a mathematical notation?

I haven't studied group representation theory (or don't know what it is in my native tongue); I have studied model theory (if you mean the one from logic) and I don't think it supports your view in any way.

"Programming languages" are abstract entities: they're not dependent of any implementing medium. Actually, no TuringEquivalent programming language even can have an implementing medium because of the unbound storage requirement. What could be the difference between a mathematical notation and a programming language that uses the exact same mathematical notation (and has the same semantics)?

Maybe I have been unclear and digressing too much, too. Let me try again. I think there are cases where there's no point in making a sharp distinction between math and programming, but other cases where a sharp distinction should be made.

I think usually if an area of math is about algorithms or about programming, then perhaps a distinction doesn't need to be made. However I don't see that that means that all areas of mathematics are therefore isomorphic to programming or to programming languages.

In regard to lambda calculus in particular, it is an area that perhaps could be viewed either way. Perhaps it would be fair to say that, as a pure area of math, it doesn't necessarily have anything to do with programming, but as we know, it has also been an area of applied math, and as an applied math, it does have a lot to do with programming and programming languages.

ALL of this seems like too much of a digression, however. Even if we say, oh, ok, all of lambda calculus, even pure rather than applied, really is just a programming language, it STILL doesn't necessarily follow that you can appeal to it as prior art on the subject of cons lists.

First you would have to tell me that, not just pairs, but actual lists built out of pairs have nontrivial prior use in lambda calculus, which I have been thinking is not the case. But you sound like you're more experienced than I am in the subject - I have studied it, and combinatory logic, and of course predicate and propositional logic, and I detest all of these as overly low-level mathematical notations. They're the assembly language of math - powerful but absolutely not something I want to spend my life doing.

Joy and unlambda are perfect examples...this stuff should be emitted by a compiler, not programmed by humans (beyond playing around recreationally or making small examples).

Hmm. I like lambda calculus and sometimes regard side-effectless Scheme as a thin wrapper on it...

I agree with you, this is much a matter of differing points of view. The pairing function, usually implemented as \x\y\z(z x y), was used in lambda calculus as a primitive building block for almost any complex data structure. (The other primitive block for data structures, choice, seems not to have had similar impact in practice.) As for how widely it was used, that's hard to say, because before lambda calculus was implemented, "programming" in it was the business of researchers.


I acknowledge the importance of Lisp and love it, but it's an exaggeration to claim every lisp-ish feature you implement is because you're writing a lisp interpreter. My point was that abstractions don't belong to any specific languages.

Wait a minute...obviously Lisp has some powerful stuff that can't be represented natively in C/C++. But my experience has been that, nonetheless, there aren't any real-world problems that cannot be solved natively in C without needing to write the equivalent of a Lisp interpreter. So for example, some algorithms can be implemented in a very powerful, elegant way in Lisp using higher order functions. But they're just being used, ultimately, as a means to solve some real world problem - and I thought that there was always some way to solve that same problem natively in C, even if only in a clumsy way. So what's a counter-example of something for which that is not true?

Note that an answer to this would not be "higher order functions", it would be "e.g. real world problem X, which we would solve in Lisp using higher order functions."

Um... I'm not the right person to answer this, but let's take objects/closures. Let's say you are building an operating system that is to support many filesystems. It is natural to model those filesystems as objects: they support certain operations, should be mutually substitutable, and you should be able to add them to the system afterwards without changing the filesystem dispatcher. Now, you would implement them in C with function pointers. There would be filesystem structures (in Linux, a filesystem is represented by its superblock structure) and they would contain a vtab of operations. Every time you perform an operation on the filesystem, you dispatch via the vtab and pass the structure itself as the first argument (otherwise the function would not know which data to operate on). Doing so, you have effectively implemented objects/closures in C.
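A minimal sketch of that dispatch style in C (illustration only; the names are invented, not actual Linux kernel structures):

 /* Each filesystem supplies a table of function pointers, and every call
    goes through that table with the filesystem object itself passed as
    the first argument - objects and dynamic dispatch, done by hand. */
 #include <stdio.h>
 
 struct filesystem;
 
 struct fs_ops {
     int (*mount)(struct filesystem *fs, const char *device);
     int (*read)(struct filesystem *fs, const char *path, char *buf, int len);
 };
 
 struct filesystem {
     const char *name;
     const struct fs_ops *ops;   /* the hand-rolled vtab */
     void *private_data;         /* per-filesystem state, like a superblock */
 };
 
 static int myfs_mount(struct filesystem *fs, const char *device) {
     printf("%s: mounting %s\n", fs->name, device);
     return 0;
 }
 
 static int myfs_read(struct filesystem *fs, const char *path, char *buf, int len) {
     (void)fs; (void)path; (void)buf; (void)len;   /* stub */
     return 0;
 }
 
 static const struct fs_ops myfs_ops = { myfs_mount, myfs_read };
 
 int main(void) {
     struct filesystem fs = { "myfs", &myfs_ops, NULL };
     fs.ops->mount(&fs, "/dev/sda1");   /* dispatch via the vtab */
     return 0;
 }

New filesystems can be added without touching the dispatcher, which is the point - and also precisely the machinery an object system or closures would have given you for free.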

Or take dynamic typing. You want to represent a tree structure in C/C++. Let's say you're dealing with a parsed representation of XML, for example. To do this, you make a union record type, with one "type tag" (presumably an integer) and a varying-type data part: string + contents list for tags, two strings for attributes, string for contained data. Doing so, you have effectively implemented dynamic typing in C/C++.
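Such a node type might look something like this minimal sketch (the node kinds and field names are invented for illustration):

 /* A hand-rolled "dynamically typed" value: an explicit type tag plus a
    union of the possible representations. */
 #include <stddef.h>
 
 enum node_kind { NODE_ELEMENT, NODE_ATTRIBUTE, NODE_TEXT };
 
 struct xml_node {
     enum node_kind kind;                /* the type tag */
     union {
         struct {                        /* element: name plus child list */
             char *name;
             struct xml_node **children;
             size_t child_count;
         } element;
         struct {                        /* attribute: name/value pair */
             char *name;
             char *value;
         } attribute;
         char *text;                     /* contained character data */
     } u;
 };
 
 /* Every consumer must switch on 'kind' before touching 'u' - exactly the
    check that a dynamically typed runtime would have performed for you. */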

Very interesting examples. However, remember that I said "I thought that there was always some way to solve that same problem natively in C, even if only in a clumsy way. So what's a counter-example of something for which that is not true?"... and your examples of virtual file systems and tagged variant unions seem to me to be actually an illustration of my point: the solution in C is more clumsy than it would be in Lisp, but most definitely not clumsy enough to be an argument for switching languages just to solve those problems. And certainly not approaching the level of clumsiness of needing to actually write a (half of a half-assed) Lisp eval. (In trying to be a moderate, I seem to be alternately taking each side of the argument on this page! Weird.) -- dm

<shrug> I can't really say if you solved the problem "natively" if you ended up implementing dynamic typing or objects. In the TuringEquivalence? sense, yes, they are native C; however, if the notion that they can be found in other languages as native facilities suffices to qualify them as "non-native", then you didn't do them natively in C.

Yes, you can say, because the turning point is when one has to implement eval in order to get the new features. That wasn't necessary here, so it wasn't greenspunning.

What is enough reason for switching languages, of course, depends on the situation.

Of course. (Well, "of course" between us two reasonable people, anyway. :-)


I agree that some structurings of problems cannot be achieved without e.g. garbage collection, first-class functions, tail-call optimisation (which not every lisp has), first-class expressions (that can then be fed to eval), or dynamic typing. These are synergetic, but implementing one does not require implementing the others. In that way, they are orthogonal.

Some of them more clearly belong to Lisp; some less clearly. For example, first-class functions are very much "Lisp", even though they predate Lisp by a very long time and the first Lisps did not have them. They belong to Lisp because Lisp built the programmer culture that evolves around higher-order functions (except that Lisp is quite distinct from the programmer culture of monads, which are one important technique one really can't do without higher-order functions). First-class expressions, on the other hand, exist in machine language: most random-access environments let you write self-modifying code.

Then, the concept of "interpreter". In my experience, a Lisp interpreter consists of three parts: reader, evaler and printer. Of these, I've never seen a reader written in any C/C++ program except those that really are Lisp interpreters (i.e. that's their primary purpose). An evaluator can be found in many C/C++ programs, but it often has domain-specific primitives, not Lisp ones, and calling any program that has eval() a Lisp interpreter would be quite arbitrary (any interpreter is likely to have eval()). A printer, then, can be found in many C/C++ programs, except that it seldom prints Lisp.

So I would change the claim to be: any sufficiently complex C/C++ program will have implemented approximately half of mechanisms natively supported in Lisp but not in C/C++. This is not surprising, because the mechanisms natively supported in Lisp are very useful. I've made a similar observation about Lisp and other languages: sufficiently complicated Lisp programs tend to have about 50% of the following implemented: lazy evaluation, automatic coercion of types, data-driven dispatching (objects), pattern matching, marshalling, data streams, ...


A question that puzzles me... what, pray tell, is the difference between "greenspunning" (writing an interpreter for a Lisp-like language, or LispLanguage itself, in an allegedly less capable language), and using LispMacros (or DefineSyntax in SchemeLanguage) to implement some higher-level facility in Lisp/Scheme? To hear the SmugLispWeenies tell it, Lisp is so gosh-darn powerful because you can design "application-specific languages" in it, and then solve your problem in these application-specific languages - in short, Lisp is being used as a sort of meta-language.

Other than the fact that Lisp evaluates its macros at compile time, and a hypothetical Lisp interpreter written in (for example) CeeLanguage would doubtless not execute until runtime - this appears to be a distinction without a difference.

Or to put it another way - is the CommonLispObjectSystem (an OO system implemented on top of CommonLisp via macros, essentially) itself an example of greenspunning - defining, via the macro facility, an "interpreter" for an OO language? If not, why not? (The fact that CLOS is part of the Lisp standard doesn't, by itself, strike me as a particularly compelling answer to the question).

The key thing about GreenspunsTenthRuleOfProgramming that you're neglecting is that it's about writing a "half-assed" version of "half" of Lisp, so when people talk about greenspunning, they're saying "why do a half-assed reimplementation of half of Lisp? Why not just switch to Lisp?" ...which is a reasonable question, and one which sometimes has a good answer and sometimes does not.

In most cases, folks doing Greenspunning aren't even considering Lisp; they're just trying to exceed the constraints imposed on them by the LanguageOfChoice (quite possibly not their choice, but that MainstreamLanguage of management.) Most practical examples I'm aware of consist of augmenting a language with a key feature or two - something as ambitious as implementing a HalfAssedVersionOfLisp? is (thankfully) seldom done. In many cases, if a greenspunner were to consider switching implementation languages, probably they would switch to SomethingOtherThanLisp?. (If I were the architect, I'd likely consider Lisp only if I needed all of its facilities; otherwise I'd go with Python or something like that. Depending on what I needed, of course, and what I could sell to the PointyHairedBoss...)

But at any rate... the question above still stands. When MetaProgramming is done in Lisp (any dialect, including SchemeLanguage), it gets trumpeted by the SmugLispWeenies as Yet Another Example Of The Power And Convenience Of Lisp (TM). When MetaProgramming is done in SomethingOtherThanLisp? - it's declared to be "greenspunning", and is trumpeted by the SmugLispWeenies as Yet Another Example of the Limitations of SomethingOtherThanLisp? (that such hackery was resorted to). The question "why didn't you just use Lisp?" is bandied about. Even if the totality of Lisp's feature set was completely off the radar screen.

Seems to me to be a bit of a double standard. I'll grant that Lisp and Scheme are well-suited to such MetaProgramming (more so than a language with a piss-poor preprocessor like C/C++, or none at all for that matter), but you can't have it both ways. MetaProgramming cannot simultaneously be an elegant design technique in one language and an amateurish hack in another.

Some people are unreasonable in exactly the way you describe, but please don't tar us all with that same brush. I personally think that whether it is elegant or an ugly amateurish hack is indeed the key issue, and that such things can in fact be done well in languages other than Lisp.

Agreed... if it isn't apparent, the set of SmugLispWeenies is a (proper) subset of the set of Lisp programmers. :) I certainly don't consider the two to be synonymous.

But if one such subsystem keeps growing out of control until you've essentially redefined the language, well...it's rarely a good idea to do an ad-hoc language implementation or re-implementation. That's part of Greenspun's point. Fiddling with a feature here or there is one thing, but large scale language design is a tricky business, and can turn into an inconsistent mess, and can suck up a huge amount of development effort unnecessarily.

Agreed 100%.

The other problem is that even small, controlled subsystems to implement higher levels of abstraction than a language offers natively are so often done badly, so you hear a lot of sturm und drang over that.

(But ok, yes, sometimes you just hear the sound of SmugLispWeenies being annoyingly smug, granted. :-)

I've seen quite a few in my career (and even have to admit to implementing one such beast, back when I was foolish enough to do so). The MetaProgramming page contains an analysis of when MetaProgramming is appropriate and when it isn't - if you are doing domain-specific, simple stuff it can be a powerful tool (implemented on top of any language). When you're re-implementing features from another domain-independent programming language, it comes time to ask the question: Would it be better to switch? Sometimes, of course, the answer is "no".

True. And in those cases, if the subsystem is done well, accusations of "greenspunning" would be groundless. In other cases, of course...


BjarneStroustrup has a similar observation, made in an interview in LinuxWorld?:

"I think that any language that aspires to mainstream use must provide a broad base for a variety of techniques -- including object-oriented programming (class hierarchies) and generic programming (parameterized types and algorithms). In particular, it must provide good facilities for composing programs out of separate parts (possibly writing in several different languages). I also think that exceptions are necessary for managing the complexity of error handling. A language that lacks such facilities forces its users to laboriously simulate them." (emphasis added)


Example of metaprogramming in C: Lex and Yacc. In Lisp you'd presumably write a macro that converts a Lispy-looking equivalent of a Lex or Yacc source file into Lisp primitives at compile time. In C, you set up your makefile to convert a Lex or Yacc source file into C at build time. The main differences are a) how convenient it is to write your own domain-specific language, and b) whether someone's already done it for you.

This technique is very common in the FunctionalProgramming community, where it is known as EmbeddedDomainSpecificLanguage.

If somebody made LALR(1) parser generation part of a programming language, where it was as easy to drop in an LALR(1) parser specification as a C switch statement, then the authorship of LALR(1) parser generators would be considered GreenSpunning.

I don't think so, and can't imagine why you say so. The widespread practice of doing half-assed hand-crafted recursive descent parsers is, rather, Greenspunning, in that domain.


Why not define a simpler grammar for your DSL - perhaps an SLR grammar - reducing (eliminating) the number of shift-reduce conflicts and limiting the lookahead needed? When you define a simple (to parse) grammar, a recursive descent parser for an interpret(ed/able) DSL is quite acceptable. And since Lisp has such a simple grammar (OK, the closing ']' adds complexity; skip that), it really isn't hard to include a Lisp parser.

Frankly, an XML parser is more complex than a Lisp parser (more symbols, attributes, close tags, et al).
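To make that concrete, here is a minimal sketch of an s-expression reader in C - atoms and lists only, no quoting, no error recovery, names invented for the example:

 #include <ctype.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 
 typedef struct Obj {
     enum { ATOM, PAIR, NIL } tag;
     char *atom;                 /* used when tag == ATOM */
     struct Obj *car, *cdr;      /* used when tag == PAIR */
 } Obj;
 
 static Obj *mk(int tag) { Obj *o = calloc(1, sizeof *o); o->tag = tag; return o; }
 
 static Obj *read_expr(const char **s);
 
 static Obj *read_list(const char **s) {
     while (isspace((unsigned char)**s)) (*s)++;
     if (**s == ')') { (*s)++; return mk(NIL); }
     Obj *p = mk(PAIR);
     p->car = read_expr(s);
     p->cdr = read_list(s);
     return p;
 }
 
 static Obj *read_expr(const char **s) {
     while (isspace((unsigned char)**s)) (*s)++;
     if (**s == '(') { (*s)++; return read_list(s); }
     const char *start = *s;
     while (**s && !isspace((unsigned char)**s) && **s != '(' && **s != ')') (*s)++;
     size_t n = (size_t)(*s - start);
     Obj *a = mk(ATOM);
     a->atom = malloc(n + 1);
     memcpy(a->atom, start, n);
     a->atom[n] = '\0';
     return a;
 }
 
 int main(void) {
     const char *src = "(defun triple (x) (* x 3))";
     Obj *e = read_expr(&src);
     printf("head of list: %s\n", e->car->atom);   /* prints "defun" */
     return 0;
 }

That is roughly the whole reader; compare it with what a conforming XML parser has to handle. (A full CommonLisp reader, with read macros, packages, and so on, is of course considerably bigger.)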

Note that I said parser.

The real question is one of language convergence. It is not about the syntax of Lisp, but about the features: they get added to languages that do not have them, and newer languages accrete Lisp features. Look at languages created in the past 20(ish) years.

Python - OO, Functional, Imperative, GC, interpreted/compiled, introspection, eval, dynamic dispatch, no macros, no pointers

Ruby - OO, Functional, Imperative, GC, interpreted/compiled, introspection, eval, dynamic dispatch, lambdas/closures, no pointers

Java - OO, GC, compiled to JVM, introspection, dynamic dispatch, can construct closure-like behaviours, can approximate Functional, no pointers

I must quote, "Take a Lisp program, indent it properly, and delete the opening parens at the start of lines and their matching close parens, and you end up with something that looks rather like a Python program." [http://norvig.com/python-lisp.html]. Yes, but with different verbs and spellings.

--ChuckCottrill


See also GreencoddsTenthRuleOfProgramming (note spelling differences from this topic), ExampleOfGreenspunsTenthAtWork, all the Lisp hate pages.


CategoryRant, CategoryLisp

