SyntacticSugar shouldn't be presented as an issue. It should be presented in terms of bridging the abstraction gap between humans and machines. SyntacticSugar is more often good than bad, as it improves productivity.
Bytes are SyntacticSugar for Bits.
Assembly Language is SyntacticSugar for Machine Code. C is SyntacticSugar for Assembly Language. C++ is SyntacticSugar for C.
Java is SyntacticSugar for C/C++. C# is SyntacticSugar for C/C++, with a hint of Java and Pascal. Objective-C is SyntacticSugar for C/C++.
Google Web Toolkit is SyntacticSugar, where you write in Java and compile into JavaScript. JSON can be considered SyntacticSugar for XML. HTML can be considered SyntacticSugar for Rich Text Files or LaTeX.
SyntacticSugar, per se, is not bad.
Really, it all gets compiled down to 1's and 0's eventually anyway :)
If "all programming languages are syntactic sugar", what about programming languages that are never compiled, only interpreted? Certainly it would be possible to design a language in which there wasn't a consistent one-to-many mapping. For instance, imagine an operator whose meaning varied depending on the time of the program's execution: in the morning it means >, in the evening it means <.
Further, what about Pseudocode then? Some people use very rigorous pseudocode (that could almost be a real language suitable for compilation), yet that code is never executed by a binary computer. However, it can still serve to communicate your intentions to another human.
What about languages based solely on mathematical notation? Or logical notation? They exist. Is all of math and logic just syntactic sugar? Is ALL language just syntactic sugar? I mean, suppose we define a subset of English, limit it so that we don't use things like contractions or improper grammar (in fact, limit the grammatical constructs), and then call that a programming language. It's still clearly English.
Is English then syntactic sugar? Something about that statement makes me think we're wrong here.
[All language is syntactic sugar, both written and spoken. Words are mere symbols that represent complex ideas. When any idea takes too many words to express, we create a new word that stands for that complex expression, which then becomes the definition of that word. Now the word alone will suffice; it becomes syntactic sugar for the idea it now expresses. The process of creating syntactic sugar... abstraction, key to speaking, thinking, and programming.]
Surely after all that's been said on this and related pages, it's time to come up with a really specific definition for "syntactic sugar", rather than just take extreme positions like yours while taking the definition for granted.
[There's nothing extreme about my position.]
You are implicitly taking the position that "syntactic sugar" is a synonym for "symbolic expression". I don't think that definition is at all the most widely used one assumed.
[No I'm not, nor did I say that. I didn't even mention symbolic expressions. I said words are symbols, which they are.]
Also, you said "mere", which is a danger signal. Words like "mere" and "just" are dismissive, implicitly claiming their referents are insignificant, without direct examination of whether they are or not.
[Don't over analyze it.]
Also, I find it interesting that "all languages [are] syntactic sugar..." What, pray tell, is English candy-coating? It's a fairly well-accepted fact that people who speak different languages think differently when they "think" in that language. Bilingual people everywhere confirm this. So language is not just "syntactic sugar" for thought. What exactly is this "root form" of communication that Japanese, English, Cantonese, and Portuguese are all covering up? Without language, anything beyond the most basic of communication is not possible.
[Different languages offer different abstractions, so what? How is that at all relevant? People who can't speak the same language can still communicate; they just revert to weaker syntactic sugar like motioning and pointing, nowhere near as efficient as words, but effective nonetheless.]
You're going to have a hard time convincing me that abstraction is syntactic sugar, because by every definition of syntactic sugar I've seen, it can't represent new concepts, just present current ones in a more compact or convenient fashion.
[Abstraction is the process of simplifying something by removing irrelevant details. Syntactic sugar is essentially the same. Function declarations are sugar for the more complex process of pushing variables onto the stack. void fun(arg) is an abstraction. Take any syntactic sugar you want: it's an abstraction for something, for some process that you don't want to have to do or think about, so you invent sugar.]
If you're saying this is the case, how do we teach our children concepts like abstraction? To claim concepts like this are fundamental to the human brain and therefore a priori is problematic, because not everyone is equally good at it, and some people can barely think in the abstract.
Everyone can think abstractly quite easily; some are just better at it than others. Children learn like everything else learns: through experience. When does a child understand time? They figure out the concept via experience. Watch them scream at the checkout line when you take the candy bar away to pay for it, because they don't understand that you'll give it back; but eventually they figure it out, they start thinking into the future, and they grasp time. I'd venture to say abstraction is a fundamental concept to all brains, human or not.
SyntacticSugar ...
What are we arguing about?
Do the people on opposing sides of this argument mean the same thing by "syntactic sugar"?
All languages are eventually translated to machine code, and therefore all language constructs could be termed (extremely useful) "sugar" around the operations of the machine.
A popular notion, but untrue. True syntactic sugar is free of semantics. The parts of a language furthest from "syntactic sugar" are those where the language implementation performs valuable semantic checking (e.g. for type errors) and/or semantic transformations, and the grammar supports those semantics.
The whole point of the term "syntactic sugar" is to denigrate comparatively useless syntactic structures, not to denigrate all of them; that devalues the term to the point of uselessness.
The increment operator x++ in C is syntactic sugar; it is merely shorthand for x = x + 1
Not exactly:
int x = 5; int y = x++;
results in x == 6 and y == 5.
While,
int x = 5; int y = (x = x + 1);
results in x == 6 and y == 6.
In other words, x++ has very specific semantics that are different from x = x + 1. You might be able to argue that ++x is syntactic sugar, because it does mean the same thing as x = x + 1, although it does have a higher level of precedence than either the = or + operator. -- ChrisHines
If I remember correctly, the things I have read about optimizing C code say that the post-increment, pre-increment, and binary-addition operators are all represented differently when they become machine-code. I have never specifically checked for this behavior though.
Your LanguageLawyering misses the point. If C did not have the ++ operator (nor the += operator) then you would write x = x + 1, and so in that sense it is merely a shorthand syntactic sugar, even though, yes, once the shorthand got introduced, some subtleties came along with it.
You are taking the stance that, for something to be sugar/shorthand, it has to pass the LiskovSubstitution test, but I don't see why that is required.
Actually I was demonstrating that the example given does not meet the criteria "... free of semantics."
Ah. You're certainly right about that. Probably that means that my phrase "free of semantics" was excessive.
it all gets compiled down to 1's and 0's eventually
I apologize if I am interpreting a lighthearted comment as a serious statement, but the point of programming languages is to aid in getting the correct 1s and 0s written.
In my observation there are generally two kinds of developers: one set who tends to think linguistically, and the other who tends to think in terms of data structures, or at least a bunch of bins (variables) and ropes (pointers). The latter will tend to look at code and think of an AbstractSyntaxTree as a data structure to be operated on by the interpreter/compiler, while the former will see the language as a language directly. The latter tend to dismiss syntax as a means to an end, while the former tend to see the language as an important communication facilitator between developers and machines (or other team members).
A song from MaryPoppins? comes to mind. A spoonful of sugar helps the medicine go down.
Programming is syntactic sugar, get out your soldering irons and do it all in hardware as it was meant to be :-p
See SyntacticSugar, BetterSyntacticSugar, CodeAvoidance, SyntaxFollowsSemantics
Compare SyntacticSemtex