LucaCardelli wrote a famous paper entitled BadEngineeringPropertiesOfOoLanguages, which discusses some (perceived) shortcomings of ObjectOriented programming languages. Recently, on ManifestTypingConsideredGood, Cardelli was paraphrased by a well-known Wiki user who suggested the existence of BadEngineeringPropertiesOfFunctionalLanguages.
ManifestTypingConsideredGood advances one argument in that direction: in many instances, explicit type annotations (as opposed to LatentTyping, whether via dynamic type-checking a la SmalltalkLanguage, or TypeInference as found in HaskellLanguage and MlLanguage) may be a GoodThing. I won't argue with this, though I will note that the latter languages at least do allow explicit type annotations to be inserted anywhere in the code. You can write ML with every label declaration, object component, etc. containing a type declaration if you want to.
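For illustration, a minimal Haskell sketch of that point (the same can be done in ML's syntax); the names are invented and every annotation here is redundant - the inference engine would have found these types anyway:

 -- explicit annotations on the bindings and even on sub-expressions,
 -- in a language that would happily infer all of them
 area :: Double -> Double -> Double
 area w h = (w :: Double) * (h :: Double)

 labels :: [(String, Int)]
 labels = [("x", 1 :: Int), ("y" :: String, 2)]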
This page is for other arguments against FP to be made, given that it's implied they exist. As a suggestion, arguments focusing on the properties of languages themselves should be kept separate from arguments concerning the maturity of toolsets or arguments concerning the acceptability of a language to the PointyHairedBoss.
I'd suggest that the most grievous property of functional languages is that they don't allow one to perform computation. Don't get me wrong - they allow you to describe transformations over values, such that a real computational system that understands the language (the interpreter) may perform them. They even allow you to describe arbitrary computations (with monads). But a pure functional system cannot actually perform computations that you describe. Even Haskell must embed a computation description with an IOMonad into a non-functional main program.
I can vaguely see what this is pointing at, but you're using words imprecisely - every program, regardless of the language, is trivially a description of a computation, and the computation is the act of processing that description. Turning a description into a series of actions might have some differences between functional and imperative languages, but this is in no way some clear-cut distinction.
- A functional language cannot be used to write a program, because it cannot directly describe computation. By definition, a (pure) functional language may only describe calculation, and another language entirely is needed to describe a computation that will meet the requirements of the calculation. It is true that any description of computation is trivially a value, and a functional language can describe values (and thus may describe descriptions of computations). What separates imperative and functional languages is that thin (but very precise) line between directly describing computation and describing descriptions of computations. If the language treats a description of computation as a description of communications to be performed, rather than a mere 'value' to be observed, then you have an imperative language. Such is the case when the Monad attached to main in Haskell is given a special meaning. The language as a whole is no longer functional (though it may certainly qualify as an imperative language with a pure functional subset). Upon moving to the imperative language, you'll start banging your head against all the normal problems with imperative languages: dealing with modifications to state as a result of communications, concurrent operations, etc. These problems aren't significant if you don't have multiple readers and writers operating in a concurrent environment, but the same is true for imperative languages in the same situation. None of this diminishes the distinction between imperative and functional. And before you accuse others of using words imprecisely and then claim there is "no clear-cut distinction", you should note the inconsistency. Perhaps you meant inaccurately? Even so, you'd be wrong by most definitions I possess.
This is because computation, itself, is a process - a 'doing', involving the sending and receiving of messages (e.g. between the CPU and memory, from human -> keyboard & mouse, or from CPU -> VGA adaptor -> monitor -> human, etc.)
Functional languages are excellent at describing descriptions of computations, since they are excellent at describing values. However, if one were to speak a functional language, one could never give an actual command - an imperative; one cannot describe actions to be performed. A functional language lacks, by definition, the three fundamental computational primitives (send - deliver a message, receive - accept a message, and value representation - representing a value in some sort of external, addressable, physical state or wave-form for delivery or receipt). While one can describe values that describe computations, the most one can ever do is describe calculations that emulate the execution of those computations...
... unless you decide to go Imperative.
IOMonads, in Haskell, are values that describe computations - sending and receiving. While, for the most part, Haskell is functional, the special meaning given to an IOMonad when used in the context of a Main function turns it into an imperative: 'Do this IOMonad.' This allows the basic human -> keyboard -> CPU -> monitor -> human signal system.
Of course, adding those sorts of primitives means you no longer have a pure functional language or system. The special meaning given to 'Main' by Haskell is not fundamentally different from the special meaning given to any particular description of computation in a widely imperative language. Common protestations from those in the Haskell community don't change this. Assuming a keyboard-video-human system is functional is the same as assuming that humans are functional and that the same outputs to a human would necessarily result in the same inputs to the keyboard. (As an aside, it's quite rude to turn humans into a primary storage device!)
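To make the distinction concrete, here is a minimal Haskell sketch (the names are invented): an IO action is just a value - a description of communication - until the implementation is told to perform whatever is bound to 'main'.

 greet :: IO ()
 greet = putStrLn "hello"          -- a value of type IO (): a description; nothing happens

 shelved :: [IO ()]
 shelved = [greet, greet, greet]   -- three descriptions in a list; still nothing happens

 main :: IO ()
 main = greet                      -- only the description bound to main is actually performed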
Pure functional languages are largely useless.
... But starting from a PureFunctionalLanguage and adding the minimal necessary set of computational primitives (send, receive, cells, and value representation) is probably the way of the LanguageOfTheFuture - one may produce a true ComputationalLanguage rather than a crippled half-form (a PureFunctionalLanguage like Haskell or PureImperativeLanguage? like Fortran). Ready access to functional primitives is incredibly valuable. The content of any meaningful message or cell is always a value held by a ValueRepresentation?, so a language good at describing values is good for calculating what should be placed in cells or communicated to other processes. At the moment, only PureFunctionalLanguages? fill that role; those that mix the two without care cannot be sure they will deliver a value. It's best if one can send '~(f 1 2 3)' as a message, and have it understood meaningfully as 'the result of lazily applying some <commonly known function f> to 1 2 3', as this allows arbitrary abstraction... a true language with sentences. ImpureFunctionalLanguages? that mix IO with functions, like Scheme or OCaml, are broken for this purpose because 'function' isn't synonymous with 'value'.
What is really needed is a ComputationalLanguage. This needs a pure functional language subset, with which it can create arbitrary values, including those values that describe computations (things to do) and computational services (objects or actors that may receive a message and respond). To make such descriptions usable, a ComputationalLanguage must describe computations in terms of a common set of computational primitives that the underlying system knows how to 'do'. Finally, it needs a minimum of one primitive operator that, given a description of a computation, essentially, says 'do it!'
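As a rough illustration of that last requirement, a hedged Haskell sketch (all names invented): a pure data type that merely describes computations in terms of send/receive, plus a single 'do it!' operator that steps outside the functional subset to perform them.

 data Comp
   = Send String String Comp           -- channel, payload, rest of the computation
   | Receive String (String -> Comp)   -- channel, continuation applied to the reply
   | Halt

 -- describing a computation is purely functional: echoOnce is just a value
 echoOnce :: Comp
 echoOnce = Receive "stdin" (\msg -> Send "stdout" msg Halt)

 -- acting on a description is not; this 'do it!' operator lives outside the pure subset
 perform :: Comp -> IO ()
 perform Halt            = return ()
 perform (Send _ msg k)  = putStrLn msg >> perform k
 perform (Receive _ k)   = getLine >>= (perform . k)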
Pure functional languages are fundamentally crippled when it comes to Computation. Of course, pure OO languages, where such things as integers are passed around as 'objects', are just as bad - the content of a message is NOT the object by which it is delivered. (See MutableState).
You say that "the underlying interpreter will only ever be emulating a computational environment". As far as I can tell, that statement is devoid of any real meaning. First, functional languages don't require an interpreter any more than imperative languages. Both typically require an operating system of some sort, and a CPU of some sort. The OS and CPU actively conspire to "emulate a computational environment". So, by your argument, imperative languages must have the same flaw that you claim for functional languages (whatever that flaw may be; you never really state it). -- MichaelSparks
Perhaps I'm not making myself entirely clear. The CPU and memory systems (including the registers) on a hardware unit constitute a physical environment in which messages are delivered and received with certain protocols. Messages are delivered along a hardware bus, represented (physically) by signals in the form of electrical impulses (in the typical electronic computer, anyway). Further, memory (by definition) provides some sort of cell by which to ground messages, and an addressing mechanism for retrieval. Thus a CPU, by itself, fulfills all three basic requirements for a computational environment above and beyond the functional requirements: the ability to send, receive, and capture messages. No emulation is necessary. To request the contents of a particular memory cell, the CPU must first deliver a message (e.g. get:(0x..addr..)) to the memory subsystem, then must await a reply. The reply will be another message (e.g. contents:(0x..addr..,0x..value..)). Use of transistors or whatnot as the physical medium for the storage is arbitrary.
Computation is a process. To perform any computation will involve all three of those steps (send, receive, capture), usually in copious amounts for any non-trivial task. It doesn't matter whether it is a human or a computer performing the task. However, there are also two functional requirements: values, and transformations on values - the ability to take one or more input values (e.g. operators and numbers) and produce an equivalent output value. I.e. math. '3 + 5' = '8', so '8' => '3 + 5' and '3 + 5' => '8' are both valid transformations (though the transformation we're most regularly interested in is the one that reduces the size of the represented value). Of course, there is a physical plane where these values must be both represented and transformed... a CPU does so through physical manipulations (of transistors and electrical impulses), while a brain does so through its own slightly wetter versions.
Here's the issue: A pure functional language can only ever describe values and transformations. A functional language cannot directly describe the computational process because it intentionally excludes all the necessary components (send/receive/capture). Thus every pure functional language needs an interpreter - something or someone that can look at it and produce the steps necessary to transform the values as described in the language. Whether said interpreter is a human, a procedure implemented in a computer, or a compiler isn't relevant. It's still necessary.
The values described by a pure functional language, quite trivially, can describe computations. E.g. one could store the meaningful contents of an ASM-language file in an addressed list. It's even legal for a functional system to describe all the transformations and events a compiler would perform, given such a list, through use of monads and such. However, it is illegal for an interpreter of the pure functional language to actually act on these descriptions - e.g. to compile and run said ASM-code directly upon the CPU. It's legal to describe, but illegal to act. If acting ever became part of the language definition, then the language is no longer purely functional. With regard to Haskell, one may describe IOMonads and the transformations performed all one wishes, but once interpreters start taking liberties when they see that a main exists and decide to turn IOMonads into real IO, the language isn't functional anymore. I suppose if you weren't expecting that to happen, you could make a counter-case, but the fact is that once you start describing things in your language with the expectation that certain actions will be performed upon it, it's no less imperative than any other Turing-complete imperative language. (And by reflection, no Turing-complete, imperative language is any less functional than it.)
Now, here's the issue you claim I'm circumlocuting: If you have a functional language, you can't do anything. The moment you change things so you can do something (e.g. attach IOMonads to Main), you no longer have a functional language. It's rather fundamental. Functional languages don't allow you to perform computations. And being useless, to me, seems to qualify as a BadEngineeringPropertyOfFunctionalLanguages?.
Imperative languages skirt this problem by simply allowing one to describe computations. If the language is fully specified (like Assembler), then these do not require interpreters in the sense above - the described actions would speak for themselves. However, such a language might need to be transformed a few times prior to use.
Isn't that a roundabout way of saying pure functional languages aren't practical? There are numerous academic illustrations that are similarly impractical, but this doesn't deprecate the value of products derived from or inspired by them. What does it say about non-pure functional languages? -- DaveVoorhis
- Uh... yes, I suppose it is roundabout. Of course, no impure functional language is truly a functional language. (It's that "one drop of blood//Aryan race" type thing.) And what did this say about impure functional languages? That they are dead useful, for many of the reasons I listed above. A proper understanding of values and value transformations should be part of any language, and an imperative language lacking convenient support for said features is surely hobbled (though not truly crippled). That said, most modern impure functional languages go about it the wrong way by mixing communications with functions. (That prevents you from describing and typing values as separate from procedures.) The LanguageOfTheFuture will need all five fundamental things (send, receive, capture, describe, and transform) as primitives.
- Then perhaps this is essentially a debate about paradigms, to which I suggest ThereAreNoParadigms. -- DV
- Why are paradigms coming up at all? This is about BadEngineeringPropertiesOfFunctionalLanguages, not BadEngineeringPropertiesOfTheFunctionalParadigm?. If the language isn't functional, it shouldn't be discussed here.
I'm not quite sure why you think that the IO monad leads to impurity. The best guess I have is that you are confusing determinism with functional purity. Is that it? Also, what if I don't use the IO monad and declare my program as "main :: String -> String"? Here, I can take input from stdin and give output to stdout, but I'm not using monads. Would you consider that program to be impure as well? -- MichaelSparks
- It isn't the IOMonad that leads to impurity. An IOMonad is simply a description of communication behavior - a pure value, as all descriptions are. In fact, it's just as pure as "sscanf(...)" from C. Both are merely descriptions. It's when you attach special meaning to a computation description by having it stand for commands to a real (not emulated) executional environment that these things become imperative. Thus, attaching an IOMonad to "main" is the culprit, as would be using "fscanf(...)" and "fprintf(...)" to interact with a user. The moment this description of communication is utilized to shape actual communication, you have an imperative language, no longer a functional one (in the pure sense).
- On the latter point, supposing you made main :: String -> String, then it can be a pure function. That does depend, however, on the implementation environment. If you are required to enter your entire string prior to receiving any feedback, then yes, it is functional. A 'value' goes in, a closed computational process is created and executed over time, then a 'value' comes back out. No communications to or from the closed process occur in the interim. Thus, it's functional by definition. In fact, any closed process (with no inputs or outputs) is a value and, therefore, functional (since it is timeless - describing such a process at any point is sufficient to describe it at all points). However, in the event that you are allowed to 'respond' to fractions of the output string by inputting more of the input string, then it is no longer functional. You, the human, are acting as a non-functional component in that system. You really don't have a choice. You're not timeless.
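- For reference, a minimal sketch of that style using the standard Prelude function 'interact' (which applies a pure String -> String function to the whole input stream); because of laziness, output can begin before all input has arrived, which is exactly the caveat above:

 import Data.Char (toUpper)

 shout :: String -> String
 shout = map toUpper        -- a pure transformation over the entire input stream

 main :: IO ()
 main = interact shout      -- laziness may interleave reading input and producing output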
OK, I think I am starting to understand what you mean. Let me run another hypothetical by you. Suppose I define my program as a function of type "Main = String -> (String, Main)". When I give the program to a user, I instruct him that he should run a function called "main" with his input string, and that it will return an output string and a new function, and that if he is so inclined, he may then run the new function in the same manner. Would you consider this program to be pure? -- MichaelSparks
- Yep. 'Main' would be pure. The imperative parts of this particular system are you (who commandeth the user to run "main" with his input string) and the user (who interpreteth your command and Main, and provideth a computational environment subject to whim, time, etc.). And, in deference to EwDijkstra, you didn't even tell the poor user to use a computer. :-) While Main is pure, the computational environment as a whole is imperative, and the language you utilized upon the user is imperative.
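- As an aside, a hedged Haskell sketch of that hypothetical (the recursive type needs a newtype, since a plain type synonym of this shape would be an infinite type; the user is expected to apply 'step' by hand each round):

 newtype Main = Main { step :: String -> (String, Main) }

 -- a pure, stateless example: echo each input back and offer another round
 echo :: Main
 echo = Main (\s -> ("you said: " ++ s, echo))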
In this system, to use your terminology, the user is playing the part of the "interpreter". So it is the interpreter that is impure, not the program being interpreted. Do you agree? -- MichaelSparks
- Not entirely. The program being interpreted above is: "You should run a function called 'main' with your input string, and that will return an output string and a new function. If you are so inclined, you may then run the new function in the same manner." That program is in a language that is very much imperative. While you describe my program as a function named 'main' of type... Main, such a description is technically incorrect - main is a function, not a program. However, I can quite reasonably assume that what you meant is: main is a program that will perform the computation necessary to calculate the result of <my pure function here> when given a <my input type here> as input. A 'program' is a scheduled list of activities/instructions/events. Further, one may describe main as functional because it is a program that computes the result of a function, and has no non-computational side-effects (where computational side-effect is loosely defined as: space, time, money, power, energy, etc. costs associated with the algorithm, and effectively encompasses all side-effects except purposeful or accidental communication). I do agree that the program you call "main" can qualify as pure functional. It's just not the program being directly interpreted... or, more accurately, it's only interpreted as a requirement for interpreting the other program (a 'do that' imperative).
The function I ask the user to run is called "main", not "Main". "Main" is the type of "main". Your latest reply is pretty confusing to me. I take it that you are trying to redefine the symbol "Main" to refer to the instructions I gave the user. I have no idea why you would do that instead of just using a new symbol. But I should have expected as much, since this whole debate, as DaveVoorhis pointed out, is almost entirely about catering to your peculiar definitions. I'd like to get back to the part of the discussion that isn't about definitions. Can you restate your last message in simpler language so that I can understand it? -- MichaelSparks
- Sorry; I reworded it a bit. All I really paid attention to is that "main" is a function of type: "fix \arbitrary_name -> (String -> (String,arbitrary_name))", and I didn't really care much that you were distinguishing between "Main" and "main". To make it clear, I do not wish to redefine the symbol "Main." In fact, I really don't want to redefine any symbols at all, though I did feel the need to correct your use of "I define my program as a function..." because programs aren't functions, and programs can't be defined as functions. Programs already have another definition. Programs are defined as a type, and that type is (in Haskell language): [Instruction] - a list of instructions.
- Since you said that "main is a function", there is only one program in your example. That program is the list of instructions you gave to the user: ["(You should) run main.","To do so, input your string."," This will return a string and another function; If (you are so inclined) then (you may) run the new function in a similar manner."] This is a list of instructions meant for someone that understands such things as obligations (you should) and rights (you may).
- However, I note that what you probably meant by saying you have a 'program that is a function called main' is that you have some sort of: "program_main :: [Instructions]" such that, with an <inputstring> in hand, one can perform that list of instructions and get, as the only non-computational output, a result that is equivalent to calculating (apply main <inputstring>). I'd agree that program_main qualifies as a purely functional program. I agree to this because program_main should have no side-effects outside time, space, power, etc. costs associated with the computation necessary for the calculation.
- ... Of course, it's still logically impossible to tell anyone to actually use program_main without embedding a 'use it' command in an imperative language.
- Anyhow, technically the "program being interpreted" is the one you gave the user, not program_main, so I must ultimately answer: "the program being interpreted is imperative, not pure functional." Nonetheless, program_main, which may be what you actually meant, is purely functional. Thus my original, somewhat evasive answer: "Not Entirely".
EwDijkstra once said: Computing science is no more about computers than astronomy is about telescopes. Functional languages make a point of not being about computers.
- Unfortunately, functional languages aren't even about computing. Computation involves both calculation and communication, and functional languages only describe half the job. While such a language is useful for mathematicians, it's not a redeeming feature for forays into computing science.
- You have a peculiar notion of "computation" - you speak above about CPUs and electrical impulses, as if they matter (as per Dijkstra, they don't). Also a peculiar notion of computing science. So what are your sources?
- I had an issue with MichaelSparks calling a CPU an emulated computational environment. The electrical impulses, CPU, hardware etc. create a real computational environment. It has all the necessary components - communication, storage, transformation, values, and value representation. If it were emulated, the CPU would need to be running on top of something else. CPUs, Human Brains, and many Human Systems (e.g. paper-pushing bureaucracies, ChineseRoom), all qualify as true computational environments. As far as my sources: look to a dictionary. Computing is the process of calculation. A process is a series of actions, and the only actions any closed process may ever perform are communications with an environment that said process can't even deductively prove actually exists (read up a little on Solipsism). Calculation, itself, boils down to its natural mathematical foundation - simple transformation of value to value, substituting equals for equals. Computing science, naturally, is the empirical study of computing: the study of the process of substituting equals for equals. As such, it is a study of both calculation and communication. This is all obvious, natural, and fundamental. I expect that your own notion of computing science isn't in conflict with mine... but it might be incomplete and/or from a different perspective.
When a mathematician writes "let x = 1" on the chalk board, this is assigning a value to variable X (or constant). This is a side effect and there is state, isn't there?
- No. `x` here is not mutable or stateful, just a name for a value. There ARE side-effects: consuming space on the chalk board, consuming chalk, taking time and energy to write, and borrowing hand and brain of the mathematician. But all of these effects fall well below the semantics of the expression, and thus should not be observable from within the language.
Math is a lot like imperative programming but math doesn't make much distinction between variables and constants. Lots of people write x=1 on the chalk board and forget about saying "let x=1". Notice the "let" in front. It could be argued that X is a constant and doesn't need assignment. I think this needs to be cleared up and math needs to be merged with programming, as I find programming a far more powerful "math" than regular math. Math also deals with numbers, but doesn't really care much about string concatenation and text (parsing, substrings, etc.).
- You argue from a position of ignorance. I'm guessing you never got far past undergraduate maths.
How do you set x to a value of 1 without having state?
- The concept of `setting` a variable to a value does imply mutable variables and state. But if you instead `declare` x to be a name equivalent to the value 1 for some region of code, that would be stateless.
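- A minimal Haskell sketch of that distinction (the binding is a declaration, not an assignment):

 -- declare x as a name for the value 1 within this scope; nothing is "set" or mutated,
 -- and every occurrence of x can be replaced by 1 without changing the meaning
 chalkboard :: Int
 chalkboard = let x = 1 in x + x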
I am very skeptical about the whole purity argument of functional programming, as I think math can also be done using state. Does not a calculator that you use store state? Can you not store things in your calculator? I consider functional purity to be sort of MentalMasturbation. I could change my mind in the future, but it seems odd to massage everything into a final result rather than do it in sections with state like imperative programming. If you were going to clean the windows on a building, would you break the job up into pieces and clean each window (foreach window do Clean();) or would you think about it functionally? I think imperative is more natural (foreach person GiveOneNapkin?(); at the dinner table).
- There are many who share such skepticism about expressing effects in a functional style. I have such doubts myself with regard to open systems and functional programming. But I think the reasons you express for your skepticism are not very cogent. Among other things, you should distinguish between `state` (a semantic property of a language) and `space` (a property of the implementation).
Having just re-read this whole debate, there's something I'm not clear on: If functional languages require imperative programming - i.e., must be "impure" - in order to meet real-world requirements, why is this a problem? Doesn't the expressiveness of the pure functional aspects, used in conjunction with imperative facilities, overcome the limitations of using purely imperative languages to meet the same requirements? If so, this is hardly "bad engineering," as used in the title of this page. At worst, it is perhaps inaccurate to call such languages "functional," but it in no way deprecates the value of languages generally (though informally) regarded as functional. This page strikes me as a problem of definitions, or of theoretical ideals getting in the way of practical issues, or even of a preference for imperative programming over functional programming; it is not identifying a real problem with practical functional languages. If there is a problem using theoretical "pure" functional languages to meet "impure" real-world requirements, then so what? -- DaveVoorhis
- It's only a problem in that it's a paradox to say you are using a functional language to do anything at all. Thus, they are 'bad' for 'engineering'. ;) I agree that once you have an imperative language, including so-called impure functional languages, that particular problem goes away. But imperative languages aren't functional, at least not in any pure sense, and thus the paradox exists. Saying that the functional languages are somehow overcoming the limitations of using purely imperative languages is in error. While there are limits to imperative languages, there is no means by which functional languages may overcome them, since any use of a functional language first requires creating a program in an imperative language that computes the calculation described by the functional language.
- But I'd readily agree that pure functional languages are useful as a strict subset of a truer ComputationalLanguage. In fact, I have several times already. Their expressiveness regarding value descriptions is very convenient. The impure ones that people regard as largely functional, though... I'd disagree heartily to a claim that they are any less 'pure imperative' than any other. Your so-called 'practical functional languages' aren't functional.
Thank you. I believe you've confirmed my hypothesis that this page is a problem of definitions rather than of languages themselves. I.e., a LaynesLaw debate. Essentially, you are objecting to the use of the term "functional" - which appears to imply "pure functional" to you - to refer to languages that are (like most languages!) hybrids. Most programmers, to my knowledge, simply use the term "functional" to refer to those hybrid languages that are significantly inspired by - and typically implement - first class functions, higher order functions, and/or LambdaCalculus.
Regarding your claim that my "so-called 'practical functional languages' aren't functional..." I would actually prefer to simply call them languages with a given set of features, and leave the buzzword labels like "functional" in the marketing department where they belong, because they too often lead to debates over definition like this one. If you have a different, and arguably better way of mixing functional and imperative approaches, then you may be onto something. I'm not sure introducing such ideas benefits from deprecating the existing ones, however, at least not without showing how your way is better. -- DaveVoorhis
- Functional is only appropriate for describing either pure functional languages... or (where it has real value) pure functional subsets of a non-functional language or program (i.e. "this procedure is functional", "that piece of state is functional"). Otherwise, the word has no meaning at all... and loses even its mathematical value. If one were to describe a fundamental ComputationalLanguage, it could not mix its functional components (that may be values stored to cells and delivered as messages) with its non-functional components (that perform communications, including writing to cells) in an arbitrary manner - evaluation must be separate from execution. Languages that mix execution and evaluation are just as purely imperative as Assembler, though higher-level imperative languages may provide convenient support for higher-order values.
Ah good, a new buzzword for the marketing department! ;-) Of course, the marketing department would like to know what ModelOfComputation a ComputationalLanguage is based on. Imperative programming is ultimately based on the TuringMachine and VonNeumannArchitecture, and functional programming is based on LambdaCalculus. What model is the theoretical basis for a ComputationalLanguage? From a cursory reading of it, I sense aspects of FlowBasedProgramming and ActorsModel. -- DV
I think physics would be the best model. Why arbitrarily limit yourself? ActorsModel is based on physics, too, which is why any ComputationalLanguage will share considerable aspects with it. If some core aspect of Computation is not fundamentally blocked by physics, then one should not arbitrarily limit it.
Physics and information theory are fields of study, not models. -- DV
In that case, no model at all should be used. One only needs to implement the necessary primitives for general computation (which occurs in a physical environment) in a convenient manner. Base it on the physics of computation, and you won't run into problems until physics changes. A ComputationalLanguage should be designed to work just as easily in the ChineseRoom as it does on a VonNeumannArchitecture or TuringMachine.
That being so, a ComputationalLanguage is a bag of arbitrary language features, i.e., a MultiParadigmLanguage. Is there a characteristic that makes it distinct from other MultiParadigmLanguages? -- DV
Paradigms such as OO, Functional, etc., attempt to explain computation from a certain viewpoint; in doing so, they create computational primitives that aren't... well... primitive, at least in the sense of computation and physics. Through libraries, a ComputationalLanguage can be used to implement any computational paradigm that is physically viable. In a sense, that makes it a MultiParadigmLanguage, since one can expect the standard libraries to come equipped for several such paradigms. However, a MultiParadigmLanguage does not necessarily qualify as a ComputationalLanguage. It would not qualify, for example, if it did not implement every single one of those multiple paradigms with the same set of underlying language primitives. It'd also be disqualified as such if either the Typechecker or Lexer/Parser needs different 'modes' depending on the paradigm you are currently using, as then it isn't really a unified language. Finally, it'd be disqualified as such if there are any paradigms that are physically feasible that it cannot implement.
You've provided a basis for, perhaps, distinguishing a flexible MultiParadigmLanguage from an inflexible MultiParadigmLanguage, but I still don't see what characteristics make a ComputationalLanguage distinct from a MultiParadigmLanguage. Unless your intent is that a ComputationalLanguage refer to a MultiParadigmLanguage in which every (!) paradigm can be implemented via the language primitives? If that is so, are C or C++ examples of ComputationalLanguages, since these are effectively generalised assembly languages sufficiently powerful, and with adequate primitives, to implement any language in any paradigm? And if that is true, then would not almost any language qualify as a ComputationalLanguage, since any language can be implemented in any other general purpose language, subject to performance constraints and programmer effort? -- DV
- It seems like you need a paradigm shift, too. ;) A paradigm is merely a way of looking at things. A multi-paradigm language is simply a language that supports many different ways of looking at things. English, for example, is a multi-paradigm language. In the context of languages that are intended to direct computation, multi-paradigm adopts a more specific meaning: supporting many computational approaches. To describe a language as multi-paradigm does not require that these different paradigms be unified by more than a grab-bag collection of syntax. There may be different primitives associated with different paradigms. The type-checker might need to do different things for each paradigm (a la C++, which supports procedural and object-oriented paradigms, but introduces such things as class as a special primitive for the latter).
- ComputationalLanguages are considerably more constrained than MultiParadigmLanguages. However, they must be capable of implementing any computationally valid paradigm. It's a fundamentally different concept.
- A ComputationalLanguage is any language that provides exactly one way of looking at things: the provably necessary logics and physics of computation. And it does so with only one syntax, one semantics, one minimal set of computational primitives (such that nothing can be taken away without removing the fundamental ability to describe computation), one correctness prover//type checker... one ring to rule them all. If there is any extended syntax to support various other paradigms, it must be in the form of macros provided within the language, as opposed to additional language primitives.
- It is because a ComputationalLanguage must provably be limited only by physics and the nature of computation that it can provably implement any implementable paradigm for any other computational language. A good macro system would go a long way toward making use of other paradigms pretty, but isn't fundamentally necessary.
- Languages like Assembler cannot be ComputationalLanguages because they don't describe things in terms of primitives of computation. They describe things in terms of instructions to a processor.
- The primitives of computation are quite fundamental. I wouldn't be surprised if some Chinese or Greek guy came up with them thousands of years ago. They are values, value transformations, state and signal (which carry values and require value representation by some sort of digital, analog, or quantum means that may be added to as physics opens new avenues), behaviors (which, for any closed process, consists only of delivering and accepting signals - this is proven in Philosophy and inherent to Physics), procedures (activities that may be described by the composition of behaviors), and computational services (foundationally: systems that physically perform value transformations, such as CPUs and Brains; above that one might describe services that rely on such services).
- It'd be somewhat ridiculous to claim any modern, regularly used programming language is a ComputationalLanguage, since they don't use ComputationalPrimitives. Those that take the functional approach are missing ValueRepresentation? and Behavior. Those that take the imperative approach are missing a solid concept of values as separate from their underlying representations. (A value is a value, no matter how it's represented.) Those that take MultiParadigm? approaches manage to fly circles around ComputationalPrimitives, but don't quite hit the mark. Any purely imperative language will fail because it can't properly represent values as separate from their representation, while any purely functional language will fail because it cannot describe behaviors (which are a necessary part of computation). Further, any impure language that mixes values with behaviors will also fail as a ComputationalLanguage because signal and state cannot carry behaviors, and behaviors cannot be values. Both pure and impure fail. One needs both, but must keep them separate.
- Actually there is a modern, regularly used programming language that is a ComputationalLanguage: VhdlLanguage. At least by the requirements you outlined. Particularly, it has value representation (though this is restricted to a few digital values including tristate, open and unknown) and behavior (the corresponding syntax element actually is called "behavior"!). I agree that this language is special purpose, but it does include loops, conditions and memory and can do everything a normal CPU can do - no wonder it is used to implement them! On the other hand its support for functional abstractions is admittedly limited. -- GunnarZarncke
- VHDL's support for values, as you say, isn't very broad or high level, and yet values are primitives to computation. It also fails on the calculation primitives. As such I wouldn't qualify it as a full ComputationalLanguage. VHDL does manage several of the lower level components, though. It is certainly cleaner than the generic imperative languages regarding computational primitives such as signals and messages.
It'd be somewhat ridiculous to claim any modern, regularly used programming language is a ComputationalLanguage, since they don't use ComputationalPrimitives.
After the arm-waving about ways of looking at things, mixing values and behaviours, signals and state, and so on - all of which, I'm afraid, conveys little meaning without illustrative examples - in the comment I've quoted above it appears we have the beginnings of a theoretical (hopefully) and distinguishing basis for a ComputationalLanguage. If I understand correctly, then, a ComputationalLanguage is a language built from ComputationalPrimitives. Yes?
- That's true, though that isn't the only constraint I described above.
If so:
In answering this, by the way, I would hope for presentation of, and/or reference to, a formalism a bit more rigorous than passing mention of philosophy, physics, or Greek and Chinese guys. -- DaveVoorhis
I took the liberty of annotating your requirements for a ComputationalLanguage with the corresponding implementation by VhdlLanguage. I have to point out that I like your systematic treatment of the matter, but fear that you developed a PrivateLanguage for something that is quite well-known - even if not in the usual programming curriculum. But maybe you know VHDL already and just propose an integrated treatment of it and "normal" programming languages. -- .gz
Computation is the process of calculation. As such, it is a series of actions. Therefore, you need to support at least a set of action primitives. It is provable that sending and receiving messages constitute a sufficient set of action primitives.
- VHDL has these two. Receive looks like variable access; send is "<=" and looks like assignment (which it isn't).
- Messages carry values. But messages, in any physical system, cannot carry values directly - the value is not paper, ink, electrical pulses, etc. Values are not signal or state. This is because values are not physical things. Therefore, for a message to exist, value representations must exist - a means of encoding a semantic value into a physical system - into signal or state. This makes value representation another computational primitive. Most imperative languages support at least rudimentary value representation (in the form of allowing the user to describe the exact form of the digital, binary strings into which a value may be encoded). One should also consider signal or state to be primitives as they are what messages are carried or held by in the physical plane. I.e. A Message cannot exist without signal/state, and has no value without a value-representation.
- VHDL has representations for signals. There are predefined representations for bits, numbers (signed, unsigned, length can be specified), bit vectors etc., but you can roll your own. There is no state, see remark about memory below.
- This value representation must necessarily be joined by a means of decoding a semantic value from a physical encoding, or anything you encode is useless. If you join a system that can encode and decode to the same value representations, you have a codec, by common terminology. Therefore, it is necessary to conclude that codecs are computational primitives.
- VHDL doesn't distinguish between encoding and decoding.
- A description of a codec, itself, constitutes a value. Thus, to communicate a codec requires that the codecs themselves be encoded... and same with the codecs that encode the codecs, etcetera, ad nauseam. If there is no base codec, this problem is unsolvable. Therefore, another computational primitive is the base codec - something that is fully self-descriptive, and doesn't need another codec to understand. To be of greatest value and avoid AbstractionInversion, base codecs should be coupled directly to the underlying physical systems - digital//binary, quantum, analog, etc. - and must be primitive enough that higher level codecs can be composed from them. Higher level codecs could conceivably be representing values as other values (as other values, as other values... all the way down until one reaches some sort of base codec). (A small sketch of the idea follows below.)
- VHDL cannot represent a representation. In fact its reflective capabilities are effectively nil. No wonder for a language that developed from descriptions of hardware.
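- For concreteness, a hedged Haskell sketch of the codec idea above (the names and representations are invented): a codec pairs an encode function with a decode function, and higher-level codecs compose from lower-level ones.

 data Codec a rep = Codec
   { encode :: a -> rep
   , decode :: rep -> Maybe a
   }

 -- a toy "base codec": booleans represented by the characters '1' and '0'
 bitChar :: Codec Bool Char
 bitChar = Codec
   { encode = \b -> if b then '1' else '0'
   , decode = \c -> lookup c [('1', True), ('0', False)]
   }

 -- values represented as other values, as other values... down to a base codec
 composeCodec :: Codec a b -> Codec b c -> Codec a c
 composeCodec ab bc = Codec
   { encode = encode bc . encode ab
   , decode = \c -> decode bc c >>= decode ab
   }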
- Calculation may be formulated as the substitution of equal values for equal values. Continuing with the physical side of things, though, you should note that the physical world contains no values... only value representations. Therefore, to perform calculation, at least one more physical, computational primitive is needed: a means of substituting equal value representation for equal value representation. Lacking another name for it, I'll call this primitive the foundational computational service, which is necessarily some sort of physical mechanism. Examples of this service can be seen in the modern CPU, the human brain, punchcard machines, etc. One may build other computational services atop these through composition and abstraction, but you can't survive without the foundation.
- I'm not sure on this one. The fundamental services are essentially value tables associating output values with input values. And obviously there are predefined operations for AND, OR, NOT, PLUS, MINUS, TIMES and so on (note: these tables usually explicitly include tristate and unknown values and are thus really closer to the physical level than normal prog langs).
- Not quite so fundamental to computation, but a very key feature in the sense that if you take it away there is no convenient means to add it again, is a remote memory service: a computational service that fills a contract by accepting and storing values (by some arbitrary but known representation) so they are accessible again upon request. Memory physically consists of placements of objects in the physical realm - neurons, neurotransmitters, electrical charge in transistors, waveforms, etc. One should not consider memory to be a computational primitive (because computation can happen on an analog machine, without any memory)... but, like the compiler, it should be considered a key feature of any generalized computational environment.
- VHDL has NO MEMORY primitive! Memory is realized as feedback signals that change on a clock signal. Obviously, to implement this efficiently, the compiler has some means to discover common memory patterns and replace them with flipflops or even memory cells and route the clock signal efficiently. Thus your treatment of memory unnecessarily complicates matters.
- Memory realized as a computational service doesn't really complicate matters in a system where the fundamental operations are to send and receive messages containing values. The computational process may draw a conceptual black box around how that memory is actually stored. Use of memory on a feedback loop, however, implies that you're delivering a message (that carries a value via a signal and value representation) out to the blue yonder, and there exists some sort of service contract with whomever you delivered the message that this message will be echoed back on an input line after some delta-time. While this may be wonderful when implementing hardware lines (to which it corresponds naturally), it doesn't seem to be a natural approach to use of memory within computation as a whole. Memory's use in algorithms is mostly to store stuff away so you can come back to it later. Of course, memory isn't a computational primitive... and either mechanism would fulfill the need for memory. The black-box approach will be more efficient for use in most computations, though... whether they are being computed by humans or machine. (Using a feedback signal is a bit like shouting in the Grand Canyon, doing some computations and, upon hearing the echo, using that message to move the computation forward. Imagine doing that instead of using pen and paper. However, it'd probably work well as an approach to memory in analog machines.)
- Regarding correctness: Messages containing value payloads must be delivered from one system to another as part of computation. However, even assuming the communications are flawless, a computation will fail if the receiver cannot accept the message. Such an occurrence may happen if the message falls outside of protocol. Protocols are between two individuals, but may be composed into wider contracts for communications behaviors (i.e. I talk to you, then you talk to him). Therefore, to any correctness checker on a ComputationalLanguage, proving adherence to protocols (at the least) is part of proving computational correctness.
- Because VHDL is about generating hardware, infinite embedding is obviously not possible and generates compiler errors. Otherwise I know of no checking that matches your description.
- Computation is the process of calculation. Calculation is, itself, the substitution of value for value, equals for equals - transforming one or more input values into one or more output values (though words like 'transformation' aren't entirely appropriate for values, as values aren't physical things. It's better described as 'finding' values that may be substituted for other values). Thus exactly one sufficiently powerful approach to substitution (or transformation, or value identification) must exist as a primitive, as must values.
- VHDL has macros and subelement descriptions (usually in separately checkable files); otherwise there are no transformations other than the operations mentioned above.
- However, there is a fundamental limit on the sort of substitution primitive one may use in any physically occurring computation. The underlying computation must be able to occur in finite time and space, or it is physically impossible to perform. Therefore, if the substitution system itself allows one to describe interminable substitutions (like the lambda calculus does), it falls to a correctness checker, as a primitive of the language, to ensure that all calculations actually requested (or that might be requested) will terminate. Whether that correctness checker is a human or a machine is not relevant. If the substitution primitive does not allow this, then it isn't a problem... so long as it can describe all necessary calculations. (Because calculation description is not the whole of computation, there is no need for the calculational primitives to be Turing complete. They only need to be capable of describing any transformation over values that can be described.)
- By construction (generating hardware descriptions), the result of VHDL compilation runs and fits in finite time and space. There may be problems to map-place-and-route the design on a specific piece of silicon (FPGA, ASIC, whatnot), but that's another story. There are complex means to specify timing constraints.
- Similarly, if it is possible in the language to describe invalid substitutions (as with typed lambda calculi), then any correctness checker must ensure that there are no errors in substitutions in those programs it passes. That's not a primitive of ComputationalLanguages in general, though.
- Such specs are not possible in VHDL.
- More on physical limits: there are some fundamental limits on the set of all possible values that one may use in any physically occurring computation. The most basic of these limits is that the values one may use are limited to those that may be represented in finite space and time. It is physically impossible to breach that requirement. Therefore, it is reasonable to understand that all useful values are concepts described in finite spacetime. This is mitigated by the fact that coinductive values (including lazy values) are capable of representing many (but not all) infinite concepts within finite space, and that many (but not all) analog values may be represented as functions of time described in finite space, or functions of space described in finite time. (Unfortunately, most modern computation machines, including humans, are largely unable to directly handle values of time. Of course, they're also bad at quantum values. Computers are largely digital, not quantum or analog.) Anyhow, those concepts that cannot be described in finite spacetime simply aren't physically usable in computations.
- VHDL: see above I guess.
- As an aside, pure functional languages are hands down the best at describing values of all the languages currently available. (Imperative languages generally, and unfortunately, equate value-representations to values, while impure languages allow value descriptions but fail to separate values from behaviors.)
- Languages are, fundamentally, capable of describing things in order to communicate them. (No communication or no description -> not a real language.) The 'representation' of those values within the language is described in its syntax, semantics, and grammar, which make those primitive parts of any language (including ComputationalLanguages). The inherent connection between syntax and semantics is what allows languages to communicate ideas.
- As above: VHDL has no reflection on these elements, but then neither do many mainstream prog langs.
- Further, one may reasonably assume that any particular use of a language in our universe will occur over finite space and time. While it is technically possible to never stop describing some particular value, such values are unusable for actual computation. Thus every component and every use of the language is, itself, a finite description - a value. In line with simplicity, a proper ComputationalLanguage should treat language components as values because that's what they are. For convenience in this task, any reasonable ComputationalLanguage should be a HomoiconicLanguage. When the language descriptions are properly treated as values, especially as homoiconic values, it is suddenly easy to perform transformations over the language itself. For convenience, once again, a proper Macro system should exist in any ComputationalLanguage to perform these substitutions at compile time. Macros are not primitive (as taking them away doesn't eliminate the ability to specify any parts of a computation), but they're very nice to have. Among other things, they can be used to implement many other paradigms above the minimal set created for all computations.
- Here we really leave VHDL behind.
- Finally, a means must exist to attach those primitives of calculation to the primitives needed to process those calculations. Among these, one needs an executive primitive that takes as input a description of a procedure, and says do it. There must also be a means of attaching value representations to values, at least when it comes time to deliver messages - even if this isn't used much by a human programmer, it will be used a lot for lower level programming and especially for hardware drivers that have very fixed value representations that they expect. And, primitive to the environment in which the language is utilized (but not to the language itself), there must be a computational service that can accept a description of calculation and identify and return a procedure that performs to the need... i.e. a compiler. (Calculations are not executable, despite being fundamental to the entire concept of computation; of course, computation is a bigger concept than "execution", which is why imperative languages don't qualify.)
- There is a "process" syntax element, that I guess best matches your "do it" statement, but I'm not sure.
{Note: some damn WikiPuppy injected all the VHDL comments. This section wasn't even about VHDL.}
I'm trying very hard not to be impolite, but could you please try to understand the technology you are ranting against? First off, a thing like 'getChar' is not an "IOMonad". It's a value of type 'IO Char', commonly called an 'IO action'. The monad is IO. This may seem like nitpicking, but actually statements like these show clearly that you lack understanding.
Secondly, a CPU is a purely functional device. It has type '[Response] -> [Request]'. You explained it yourself: CPUs communicate by way of messages. They receive a stream of responses and they produce a stream of requests, interleaving them of course. This is exactly the way IO worked in Haskell 1.2 and before. The type of main was exactly this. Given this setting, I can implement the IO monad in a purely functional way, using nothing but Haskell 1.2. There goes your argument about embedding another language.
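For readers unfamiliar with that style, a simplified, hedged sketch of the pre-monadic approach (the constructor names are invented and abridged - the real Haskell 1.2 Request/Response types had many more alternatives - and the dialogue function, which was itself the program's main, is renamed here so it isn't confused with a modern IO main):

 data Request  = GetLine | PutLine String
 data Response = Line String | Ok

 type Dialogue = [Response] -> [Request]

 -- a pure stream transformer: it only maps responses to requests; something
 -- outside the language still has to act on the requests and supply the responses
 dialogueMain :: Dialogue
 dialogueMain resps = GetLine :
   case resps of
     (Line s : _) -> [PutLine ("you said: " ++ s)]
     _            -> []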
Even considering a more practical implementation (more practical on a common register machine of course), the IO monad still has clean denotational semantics. Sure, stateful computation gets ugly really fast, but that's just life. That's why you separate pure computation and stateful computation. That causes far less headache than giving up from the outset and enduring an imperative language even if not needed.
Strictly, I never said anything about 'getChar' being an IOMonad. However, you're absolutely correct that I should have been more careful to separate the use of an IOMonad (description of IO action, accepting an IOMonad and returning a character and a world) from the nature of the IOMonad (which describes the existence of input and output message ports as part of the world, making them accessible for IO actions). I thank you for pointing this failure out, and I'll take more care in the future. However, a failure to utilize your precise terminology does not indicate a lack of understanding; it is merely a failure to communicate the understanding I do possess.
In response to another of your remarks: I'm not "ranting against" the use of IOMonads as a technology. On the contrary, I think they're a fine idea. But it's important to recognize that when attaching these sorts of monads to 'main' with the sort of special meaning Haskell imparts, you no longer have a (purely) functional language. You have an imperative one. Just as 'getChar' is a word in a language that is interpreted as a command to go fetch a character from a commonly known input stream, so too is attaching an IOMonad to 'main' interpreted as a command to connect certain real IO to the functions so the internal 'IO actions' can, themselves, become just as imperative. The nature of imperative vs purely descriptive sentences, in languages as a whole, exists only in how one expects for them to be interpreted. After all, every sentence in any language is merely a value - a string of words, held together with a shared syntax. That includes the use of 'getChar'. Yet I'm not saying it is bad that Haskell gives special meaning to an action description (an IOMonad and associated IO actions) attached to 'main'. Overall, that's a good idea; it overcomes one of the rather BadEngineeringPropertiesOfFunctionalLanguages. But, in doing so, the language isn't truly functional anymore (at least not in the 'pure' sense)... and so it doesn't qualify as a contrary example to the BadEngineeringPropertiesOfFunctionalLanguages that I've presented.
Finally, you are quite mistaken in your statement that "a CPU is a purely functional device. It has type '[Response] -> [Request]'". A CPU is a physical device, in the real world. Inputs to a CPU are signals that carry messages, and energy; outputs are signals that carry messages after some delta-time, and heat. The transformations that determine the output signals occur by physical means, and involve physical alteration of the real world. CPUs are fundamental computational units... and, as such, are just as far from 'functional' (in the mathematics sense) as are you and I. You need to add quite a lot to that little '[Response] -> [Request]' description before you even get remotely close to a valid abstraction of a CPU. But even if you did, you're still mistaken about "receiv[ing] a stream of responses and produc[ing] a stream of requests" being even remotely functional. Any boxed process that is receiving messages from and sending messages to external processes (services, etc.) is interacting with the world around it in a very real manner. That is not functional. Period. The closest you can say is that the computational process is stateless and timeless, such that the response you receive to a message is independent of all other messages the process has received. Functions cannot send messages because messages are inherently founded in time and space, signal and medium, while functions may only abstract and describe values. The moment messages are involved, you don't have functional. That's fundamental.
With regard to "separate pure computation and stateful computation" - you should say separate conceptual 'calculation' from stateful or message-oriented 'computation'. There is no such thing as computation that doesn't, in some way, involve signal and state. There is no "pure computation" in that sense.
Reading some of the above descriptions, in comparing functional or imperative software to CPUs in their effort to dissociate computation from communication, I just had to chuckle.
See, computers are composed entirely of these things we call gates, and there are a handful of them. AND, NAND, OR, NOR, XOR, XNOR, pass-thru (aka "buffer"), and inverter. All of these gates are pure functional in nature; given the same set of input values, they will always produce the same output value(s). None of these gates are stateful.
Yet, using two NAND or NOR gates, we may implement state: the R/S flip-flop. This flip-flop is the basic unit of static RAM -- it literally is a 1-bit SRAM. Other gates may be added for things like output enables, write strobes, et al., but again, we're adding pure functional elements to arrive at a stateful system.
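As a hedged illustration of that point, a small Haskell sketch in the common stream-based style of functional hardware modelling: pure NAND gates over streams of samples, cross-coupled with a feedback wire to form an active-low R/S latch. The 'state' appears only in the feedback wiring plus an assumed initial output level (a simplification of real gate delays).

 nandGate :: [Bool] -> [Bool] -> [Bool]
 nandGate = zipWith (\a b -> not (a && b))   -- a pure, stateless gate

 -- two cross-coupled NAND gates: an active-low set/reset latch
 rsLatch :: [Bool] -> [Bool] -> [Bool]
 rsLatch setN resetN = q
   where
     q    = nandGate setN   (True : qBar)    -- one sample of delay on the feedback path
     qBar = nandGate resetN q

 -- e.g. take 4 (rsLatch (False : repeat True) (repeat True)) == [True,True,True,True]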
Since it's established that a computer both computes and communicates its values to the external world, and we see that those components responsible for directing the communication are built entirely from pure functional elements, we therefore have no choice but to conclude that being pure functional is not a bad engineering property -- on the contrary, it is the very feature that enables us to argue this on wiki in the first place. -- OctoberZeroSix
Why don't you move this page to PurelyFunctionalLanguagesDontExist? Nothing here is about BadEngineeringPropertiesOfFunctionalLanguages, unless you claim 1. Functional is equivalent to PurelyFunctional, 2. The convention of "'running the program' means evaluating main and executing the returned side effects" makes a language impure (and thus, by 1, not Functional), and 3. Nonexistence is a BadEngineeringProperty?.
Why don't you?
CategoryFunctionalProgramming