Denotational Semantics

The field of DenotationalSemantics was founded by DanaScott and ChristopherStrachey.

This discipline is alluded to on HoareTriple.

See the FreeOnLineDictionaryOfComputing at http://www.foldoc.org/foldoc/foldoc.cgi?denotational+semantics for a more precise explanation.


As far as I understand, DenotationalSemantics works like this: a function is defined that maps each valid expression onto some mathematical object. For example, if I have the expression 2+2, then the denotational semantics of this expression might be the natural number 4. (At least, that would be the case in any conventional programming language.) The construct fun x -> x + 2 might be mapped to the function from N to N that adds 2. This works most naturally for FunctionalProgrammingLanguages, but you can also do it for languages with assignment. (Although no-one has ever written a DenotationalSemantics for CeePlusPlus.) -- StephanHouben
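
Here is a minimal sketch, in HaskellLanguage, of the kind of valuation function Stephan describes: a toy expression language mapped onto mathematical objects (numbers, and functions on numbers). All the names here (Expr, Value, denote, ...) are illustrative inventions, not taken from any standard text:

  -- A toy expression language.
  data Expr = Lit Integer        -- e.g. 2
            | Add Expr Expr      -- e.g. 2 + 2
            | Var String
            | Lam String Expr    -- e.g. fun x -> x + 2
            | App Expr Expr

  -- The mathematical objects expressions denote: numbers, or functions.
  data Value = Num Integer | Fun (Value -> Value)

  type Env = [(String, Value)]

  -- The valuation ("denotation") function.
  denote :: Expr -> Env -> Value
  denote (Lit n)   _   = Num n
  denote (Var x)   env = maybe (error "unbound variable") id (lookup x env)
  denote (Add a b) env = case (denote a env, denote b env) of
                           (Num m, Num n) -> Num (m + n)
                           _              -> error "not a number"
  denote (Lam x e) env = Fun (\v -> denote e ((x, v) : env))
  denote (App f a) env = case denote f env of
                           Fun g -> g (denote a env)
                           _     -> error "not a function"

  -- denote (Add (Lit 2) (Lit 2)) []             denotes the number 4
  -- denote (Lam "x" (Add (Var "x") (Lit 2))) [] denotes the "add 2" function

The point is that the right-hand sides are ordinary mathematical objects, so programs can then be reasoned about with ordinary mathematics.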

In denotational semantics, jumps are modeled using continuations (see ContinuationExplanation). Also see "Continuations: A mathematical semantics for handling full jumps" (1974) by ChristopherStrachey and Christopher P. Wadsworth (reprinted in Higher-Order and Symbolic Computation, 13(1/2):135-152, April 2000).
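
To make the continuation idea concrete, here is a minimal sketch in HaskellLanguage (not the notation of the Strachey/Wadsworth paper); Cmd, Store, Cont and the rest are made-up names for illustration only. A command denotes a function that takes the continuation for "the rest of the program" and decides whether to invoke it:

  import qualified Data.Map as Map

  type Store = Map.Map String Integer   -- variable -> value
  type Cont  = Store -> Store           -- "the rest of the program"

  data Cmd = Assign String Integer      -- x := n
           | Seq Cmd Cmd                -- c1 ; c2
           | Halt                       -- a full jump out of the program

  -- The denotation of a command, given the continuation for what follows it.
  cmd :: Cmd -> Cont -> Cont
  cmd (Assign x n) k = \s -> k (Map.insert x n s)
  cmd (Seq c1 c2)  k = cmd c1 (cmd c2 k)
  cmd Halt         _ = \s -> s          -- discard k: what follows never runs

  -- cmd (Seq Halt (Assign "x" 1)) id Map.empty  ==>  fromList []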


Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory, by Joseph E. Stoy

ISBN 0262690764 Review from Amazon: "You can read this for pleasure or personal edification. It's probably quite hard to get hold of now. It's a classic, and it's completely out of date. A bit like Plutarch."

He was my undergraduate college tutor. Seems to know what he's talking about. LambdaCalculus is very big in Oxford University's computing department, if that helps your websearches.

-- MattBiddulph


ChristianQueinnec's "LispInSmallPieces" has a chapter that lays out DenotationalSemantics for SchemeLanguage. Queinnec also wrote a conversion program, LiSP2TeX, that allows you to describe a denotational interpreter in Lisp syntax, execute it in Scheme, and convert the description to the Greek-heavy format that LambdaCalculus junkies prefer. -- SethGordon

Gadzooks! Seth, you are a star. Thanks for the hint. Google reveals that LiSP2TeX lives at http://youpou.lip6.fr/queinnec/WWW/l2t.html -- KeithBraithwaite

Now at: http://pagesperso-systeme.lip6.fr/Christian.Queinnec/WWW/l2t.html -- VolkerStolz?


DavidHarland? wrote a couple of books, in 1984 and 1986 respectively, in which I first read about DenotationalSemantics - highly underrated or overlooked books in the history of programming language design, I would say, from my admittedly limited experience of the genre.

-- RichardDrake


Linguists also use DenotationalSemantics for natural languages such as English; see, for example, http://cl.snu.ac.kr/nam/papers/abstract.PDF


DenotationalSemantics is similar to the "{}" actions in Bison, for example in a definition of expressions:

  exp:    NUM                { $$ = $1;         }
        | exp '+' exp        { $$ = $1 + $3;    }
        | exp '-' exp        { $$ = $1 - $3;    }
        | exp '*' exp        { $$ = $1 * $3;    }
        | exp '/' exp        { $$ = $1 / $3;    }
        | '-' exp  %prec NEG { $$ = -$2;        }
        | exp '^' exp        { $$ = pow ($1, $3); }
        | '(' exp ')'        { $$ = $2;         }
  ;

Before each set of curly brackets is the syntax, and inside the curly brackets is the semantics. Usually grammars give only syntax; semantics specifies meaning. In formal DenotationalSemantics, double square brackets are used for "semantic functions", so the "+" above might be

  [[A + B]] = plus([[A]], [[B]])

where plus() is known in the meta (specifying) language. Semantic functions can also be sub- or superscripted with other meta-variables such as environments, continuations, etc. For example, for a simple programming language, statement1;statement2;... may become

 C[[S1;S2]]rho = C[[S1]]rho.C[[S2]]rho

where C is the "Command" valuation function, rho is the environment, and "." means composition of the resulting state transformers (S1's effect is applied first, then S2's). DenotationalSemantics allows you to reduce languages and their instances (programs or utterances) entirely to mathematics and logic, and to manipulate them there.
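
For instance, here is the sequencing equation above rendered as a small, direct-style (no continuations) sketch in HaskellLanguage. To keep it short, I've collapsed the environment and the store into a single map, so this illustrates only the shape of the equation, not a faithful rendering:

  import qualified Data.Map as Map

  type Rho  = Map.Map String Integer    -- environment/store, collapsed for brevity
  data Stmt = Skip | Assign String Integer | Seq Stmt Stmt

  -- C[[...]] : statements denote functions from Rho to Rho.
  c :: Stmt -> Rho -> Rho
  c Skip         = id
  c (Assign x n) = Map.insert x n
  c (Seq s1 s2)  = c s2 . c s1          -- apply S1's transformer first, then S2's

  -- c (Seq (Assign "a" 1) (Assign "b" 2)) Map.empty
  --   ==> fromList [("a",1),("b",2)]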

This doesn't look like a notation for "semantics", but rather an alternative algorithm for implementing interpreters/compilers, or at least a syntactic pre-digester, similar to an intermediate meta-language for a CommonLanguageRuntime engine. And/or perhaps a system to verify that an interpreter/compiler is doing its job, somewhat similar to DesignByContract. I question whether this and similar notations are directly "semantics". -t

{It isn't semantics; it's a notation for expressing semantics, and the above example is rather poor. See http://en.wikipedia.org/wiki/Denotational_semantics for a better description, and in particular, a reasonable set of references.}

{What's wrong with using a formal notation for semantics to automate creating compilers and interpreters, in exactly the same manner that a formal notation for syntax -- EBNF -- allows us to automate creation of parsers?}

My issue is in calling it "semantics" or "only semantics", and has nothing to do with its suitability as a compiler tool.

{If it's used to formally describe the semantics of a programming language, why would it be describing anything but semantics?}

Just because one claims it describes semantics does not by itself make it so. If I get a letter from an alleged Nigerian Prince (with a special money-making deal), that by itself does not mean it's really from a Nigerian Prince. That should go without saying. And if one is misusing the output, then they are misusing the output (Re: "If it's used to formally describe..."). That should go without saying also.

For example, suppose the inventor of Prolog had first introduced Prolog as a "semantics encoding technique" in 1972 or whatnot, claimed that Prolog "denotes semantics", and written an interpreter for Algol in it, claiming that the result "denotes the semantics of Algol". How does one objectively know if the inventor is full of it?

[Between this and your response to me above, you sound pretty desperate. In both cases, you aren't actually pointing out a problem or issue with it describing semantics. Maybe learning about the subjects you pontificate on before speaking would help.]

Projection; you are having difficulty answering basic questions. Your approach seems to be, "I know semantics when I see it because I'm more knowledgeable than you." This kind of thing is a fake get-out-of-rigor card.

{Why question whether DenotationalSemantics represents semantics or not? It's as odd a question as asking how we know that the Python language reference manual makes reference to Python and not assembly language. DenotationalSemantics either makes reference to syntactic constructs of a language in a coherent fashion and provides a description consistent with run-time language behaviour -- in which case it describes semantics -- or it does not. This is rather like being asked how we know that a blueprint represents a floor-plan and not a circuit diagram. The best response is probably, "Huh?!??"}

A description of an algorithm, or a restatement of syntax structures in a meta-programming-language (similar to what the Microsoft CLR may do under the hood), may fit those characteristics, but is not "pure semantics" by most accounts. If a notation represents semantics and only semantics, one should be able to describe its characteristics, similar to how one would look for the characteristics of a circuit diagram. Circuit diagrams typically don't have bathroom sinks, for example. There are patterns, and patterns can be identified by examining specimens. As far as I can tell, DenotationalSemantics fits the pattern of a meta-programming-language. But there are no agreed-upon specimens of "pure semantics" to compare against on the other side, and that's because semantics are ideas in human minds, and so far the human mind remains mostly unencodable.

{I don't know why you think DenotationalSemantics "fits the pattern of a meta-programming-language". I don't know what a "meta-programming-language" is, nor how it's "similar to what Microsoft CLR may do under the hood". DenotationalSemantics maps language statements to mathematical objects so that we can reason about them formally and logically. That's it. It provides a way of unambiguously stating that, for example, whenever we have a language statement of the form "x + y" it shall be interpreted as (i.e., it shall mean) numerical addition of 'x' and 'y'.}

That transformation by itself doesn't make it "only semantics".

{Sorry, I don't know what that has to do with what I wrote.}

And original code can also be "reasoned about formally and logically". True, it may not be the easiest notation for such work, but that's a matter of degree rather than a Boolean condition.

{Imperative code is unusually difficult to reason about.}

But "difficult" is a continuous scale. I don't dispute that the transformation may simplify the ability to analyze a language, but ease is not the issue at hand. Determining if it represents semantics and only semantics is the issue, not whether it's "easy to analyze'.

{A (programming) language is composed of syntax and semantics. If we're not talking about its syntax, we must be talking about its semantics. Of course, it's possible not to be talking about the language at all, but that's trivial to determine: We're either making reference to the language, or we aren't.}

But we are talking about the classification of what DS produces, not the language by itself. Again, describing what languages are composed of does not necessarily limit what DS output is composed of. The human using DS appears to be injecting algorithm artifacts into at least part of what DS produces as output.

{I don't know what you mean by "DS produces". DenotationalSemantics doesn't "produce". It's as odd as asking, "What does English produce?"}

{What does "injecting algorithm artifacts" mean?}

Human interpretation of the syntax is turned into an algorithm or part of an algorithm in one's mind, and that algorithm is then written down as DS notation.

{I don't know why anyone would do that. It wouldn't be DenotationalSemantics, and I don't know how one would represent an algorithm in DenotationalSemantics.}

How do you know that's not the mistake being done with DS?

{How would we make that mistake with DenotationalSemantics, when DenotationalSemantics doesn't let us represent algorithms?}

How do you know it's not presenting algorithms?

{Because it can't.}

Where's the stopper-fier?

{DenotationalSemantics defines mappings from language statements to mathematical objects, in which the mathematical objects represent the semantics of the language. Mathematical objects aren't algorithms.}

Those mathematical objects may be representing implementation (or perhaps implementation "hints"). See SemanticsDiscussion for discussions on what DS's output is.

{No, those mathematical objects are representing semantics. We use mathematical objects as a substitute for informal human-language descriptions of semantics.}

To me they just look like a re-statement of syntax patterns with implementation-oriented "hints". That does not make them semantics. They may be influenced by semantics, but that's not the same as being semantics. Anyhow, without a rigorous and widely-accepted way to classify a given notation as being "semantics" versus implementation versus "observations" etc., we will probably never settle this. I consider "semantics" to be human ideas in human minds. Notation that reflects such ideas is usually considered "implementation". DS generates partial implementation information in that it provides implementation hints, but is incomplete. And these "hints" may indeed be structured in a way that makes analysis of program behavior easier. But implementation information can be used for such also, similar to how a security expert may analyze machine code to figure out what a virus is doing. Being analyzable for clues about behavior characteristics does not (by itself) make it "semantics".

{How the DenotationalSemantics "look" to you, and the fact that you "consider 'semantics' to be human ideas in human minds", is irrelevant. You're free to be wrong all you like. That doesn't affect ComputerScience.}

It's not "science" if you cannot give decent evidence you are measuring semantics and not something else. Why should someone just take your word for it? Science is not ArgumentFromAuthority.


A DenotationalSemantics of (a small part of) XSLT: http://homepages.inf.ed.ac.uk/wadler/papers/xsl-semantics/xsl-semantics.pdf


A FormalSemantics? of PrologLanguage is given using PiCalculus in http://scholar.google.com/scholar?cluster=6035347962773538537.


DenotationalSemantics: A Methodology for Language Development by DavidSchmidt?.

It has been out of print for several years, but is now available online at: http://www.cis.ksu.edu/~schmidt/text/densem.html

ISBN 0697068498


http://4.bp.blogspot.com/-HpwWEkmsczQ/TsyAJ-yh4aI/AAAAAAAAIvI/dDXY_JSUfw0/s400/DSC_0134.JPG

Slots in this kind of shelf are what I mean when I say "cubby hole". Maybe there's a better term. (For some reason, the image is not directly displaying in the page; one has to click on it. Wiki bug?) -t


So, what does "operational equivalence coincides with denotational equality" imply for Perl?


See also PossibleWorlds


CategoryBooks CategoryJargon


JanuaryFourteen

