Question regarding your signatures in TypeSystemCategoriesInImperativeLanguages
+(integer, string) returns string
If a parameter is designated "integer" in your model signature for a dynamic language, does that mean its type tag/indicator must be integer, or that the value/representation is "parsable as" (interpretable as) integer? Both variations may exist in various dynamic languages. A more thorough model would address and clarify that issue.
The notation used in the signatures is based on the assumption that a value is V=(R,T) where R is the value's representation (a bit string, typically) and T is the value's type (which is always "string" in the case of Category D2 languages). T is what I presume you mean by "type tag/indicator". Where values can be encoded within strings, they are treated as type "string". For example, the signature of a binary operator DateDiff that returns an integer encoded in a string and parses its string operands to identify encoded calendar dates within them would be written as "DateDiff(string, string) returns string".
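The V=(R,T) model described above can be sketched in a few lines of Python. This is an illustrative approximation only (the names Value, rep, and date_diff are invented here, and the date format is an assumption), showing a Category D2 operator that parses encoded dates out of string-typed operands:

```python
from collections import namedtuple
from datetime import date

# A value is a pair V = (R, T): a representation and a type.
Value = namedtuple("Value", ["rep", "type"])

def date_diff(a, b):
    """DateDiff(string, string) returns string -- parses calendar dates
    encoded in its string operands and returns the difference in days,
    itself encoded as a string (both operands and result have type "string")."""
    def parse(v):
        assert v.type == "string"
        y, m, d = (int(part) for part in v.rep.split("-"))
        return date(y, m, d)
    diff = (parse(b) - parse(a)).days
    return Value(str(diff), "string")

result = date_diff(Value("2024-01-01", "string"), Value("2024-01-31", "string"))
print(result)  # Value(rep='30', type='string')
```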
- I'll probably have to explicitly illustrate it when my pseudo-code is further along. My explanation appears to have failed at communication here.
- Really? I thought I understood exactly what you meant.
- Hell, that would be a first :-)
And another possibility is "voting" parameters, where plus (for example) will consider the hard (tag) or soft (parsable) type of both parameters before making a decision. I've tried to enumerate all the possible combinations for two parameters and two types, but haven't been able to clean out the equivalence-related duplicates for presentation yet. (ThirtyFourThirtyFour has a sampler plate.) -t
That's conceivable, but I know of no popular imperative programming language that uses the "soft (parsable) type" for operator dispatch. There would be considerable performance overhead for doing so, because it would require parsing operands on every operator invocation. Implementing values as V=(R,T) means parsing occurs once (typically during LexicalAnalysis, when values are created from the literals that specify them) and operator dispatch can be done efficiently based on T rather than require the inefficient overhead of parsing R.
Scriptish languages often don't worry much about efficiency. An implementation can use the tag as a shortcut to avoid parsing a good portion of the time, but the result is the same AS IF parsing always occurred (only slower), so we can simplify the model by saying it "always parses" and avoid conditionals in the model. I'm not going to assume the language was designed with efficiency as its primary signature-processing choice, for that would limit the possibilities that have to be considered. Partly for that reason, I don't like your signature approach. Plus, nobody's done a thorough survey of languages, so "soft polymorphism" cannot be ruled out. Further, such a language could say, "if speed is your goal, then make sure the tags (explicit types) are of the intended type to avoid a parsing step." Some of my earliest exposure to scriptish languages was for minicomputer OS scripting, and those were not designed for math and accounting, but rather for gluing the likes of FORTRAN or BASIC programs together and for managing or automating system tasks. (Those languages were often still far better than the shit one finds with Windows.)
Perhaps upper-case can be used to indicate "hard" signatures (tag-only) and lower-case for "soft" signatures ("soft" meaning "interpretable as").
+(NUMBER, NUMBER) returns NUMBER
// matches: 2 + 3
// does NOT match "2" + "3"
.
+(number, number) returns NUMBER
// matches: 2 + 3 and "2" + "3"
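One way to read the proposed notation: a "hard" parameter matches only on the operand's type tag, while a "soft" parameter also matches when the representation is parsable as that type. A hypothetical Python sketch of such a matcher (all names here are invented for illustration; this is not any particular language's mechanism):

```python
def is_parsable_number(rep):
    """Soft check: does the representation parse as a number?"""
    try:
        float(rep)
        return True
    except ValueError:
        return False

def matches(param_kind, operand_tag, operand_rep):
    """param_kind "NUMBER" = hard (the tag itself must be number);
       param_kind "number" = soft (tag is number OR rep parses as one)."""
    if param_kind == "NUMBER":                      # hard: tag only
        return operand_tag == "number"
    if param_kind == "number":                      # soft: tag or parsable rep
        return operand_tag == "number" or is_parsable_number(operand_rep)
    raise ValueError("unknown parameter kind")

# +(NUMBER, NUMBER): matches 2 + 3 but not "2" + "3"
print(matches("NUMBER", "number", "2"))   # True
print(matches("NUMBER", "string", "2"))   # False
# +(number, number): matches both 2 + 3 and "2" + "3"
print(matches("number", "string", "2"))   # True
print(matches("number", "string", "xyz")) # False
```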
But I'm still not sure this is thorough and/or clear enough to cover the full gamut. The coding details are dumped onto the kit user without any help [sentence added later]. For one, the priority rules could get long or messy because soft types can overlap. I'm still trying to see if it can be fully regimented per the ThirtyFourThirtyFour example. -t
Again, I've not seen a dynamically-typed imperative programming language that uses "soft" signatures for operator dispatch. I'm sure it could be done, though. The operator definitions would need some mechanism to identify what "soft" type their parameters expect.
Again, without a wider survey, it's difficult to say with certainty. Have you explicitly tested many? If you don't have a wide and thorough survey to rely on, I'd suggest it at least be covered in your "model" page as a disclaimer of some sort. I'm not comfortable at this point with the "hard polymorphism" assumption.
I am quite familiar with the TypeSystems of popular imperative programming languages and have explicitly and extensively studied them. Among C, C++, C#, Java, Javascript, Python, Perl, Ruby, VB.NET, VB, VBA, Objective-C and PHP, none perform operator dispatch on "soft" signatures, i.e., parsing string values at run-time to identify other values encoded within them, and then invoking operators based on the types of those values.
- Perl does not have tags and so couldn't do hard polymorphism if it wanted to (at least for scalars). Thus, your claim is suspicious. Although, if it has no overloading on the built-in operators, then such is not testable. CF might; I'll have to check one of these days (although it's not a tagged language).
- Every Perl scalar value has a type reference, which can be Integer, Double, String, Reference, or something called a "Blessed or magical scalar". Typical Perl mess, that last one. Perl does not do "soft" polymorphism as described here. Whatever ColdFusion might or might not do -- I have no idea; I've not examined it -- it's certainly not in the "popular imperative programming language" category, due to obvious lack of popularity.
- Can you present a test that demonstrates that Perl scalars have "type references"? And I'm not sure we should limit our models to the top 50 or so languages. Some of the scripting languages I've used before were bundled with obscure or domain-specific tools. -t
- Why would I need a test? Perl internals are documented. I limit the descriptions on TypeSystemCategoriesInImperativeLanguages to "popular" languages because I haven't done a comprehensive survey of all languages.
- Sigh.
- Further, even if existing operators usually follow such a pattern, user-defined functions don't have to. When discussing "types" we should probably also consider activities besides the built-in libraries, or at least include a disclaimer saying that such is not being considered in a given model.
- It's self-evident that operators can do whatever they like with their argument values. No popular imperative programming language parses values as part of its built-in operator dispatch mechanism.
- See above regarding "popular". There is no rule of the universe that prevents such a language/technique such that it shouldn't be summarily dismissed. A disclaimer is warranted in your model in my opinion. And that is one advantage of my model: it can model "soft polymorphism".
- There is no rule of the universe that prevents any of an infinite number of techniques. Should I provide a "disclaimer" for every one? The descriptions on TypeSystemCategoriesInImperativeLanguages certainly can support "soft polymorphism". As described below, TutorialDee -- which is a Category S language -- supports it.
- Something like "as typically found in popular languages" is warranted in my opinion. And I will agree that if you limit a model to a sub-set of languages, it can perhaps be simpler than a model that could potentially target a wider variety of languages. It's a matter of trade-offs.
- You mean the part that says, "Note that the following categories are not intended to be a definitive list", at the top of TypeSystemCategoriesInImperativeLanguages, isn't good enough?
- (Having "+" parse both sides to see if they are number-able, resulting in addition instead of concatenation, even for a "tagged" language is not unrealistic or "wrong". It's not as efficient as tag-centric polymorphism machine-wise, but not "wrong" from a language behavior perspective. Thus, a model/kit that can handle such is a good thing. It's a niche that you don't cover.)
- It's a niche that doesn't have to be covered. There's nothing in TypeSystemCategoriesInImperativeLanguages that precludes it -- and, as noted below, TutorialDee does permit "soft polymorphism" -- but as there are no popular imperative programming languages that do it, there's no point in explicitly mentioning it. The descriptions certainly admit what is done, which is that the "+" operator in certain languages parses its string-typed operands to see if they encode numeric values, and it adjusts its behaviour accordingly. However, that is not the same as polymorphic operator dispatch; there is only one "+" operator.
- It doesn't preclude it, but as-is, your signature model does NOT cover (distinguish?) soft poly. D2 languages have to use soft poly.
- I know of no D2 language that uses "soft poly" as part of its operator dispatch mechanism. However, the effect you're calling "soft poly" is covered in the part that says: "Some languages do not distinguish operand types outside of operators and treat all values as strings, so the only signature (for '+') is effectively: +(string, string) returns string. In such languages, when '+' is invoked it internally attempts to convert its operands to numeric values. If successful, the operator performs addition and returns a string containing only digits. If the conversion to numeric values is unsuccessful, the operator performs string concatenation on the operands and returns the result." If you want to call that "soft polymorphism" (please don't use "poly", that just sounds like a juvenile attempt to sound like part of some vernacular-using "in"-crowd), that's fine -- but be clear it isn't part of the operator dispatch mechanism; it's merely the behaviour of the single "+" operator definition.
- Your notation is limited or misleading in that the parsing step can determine whether "+" adds numerically or concatenates.
- There are all manner of things, some reasonable and some ludicrous, that a "parsing step" could do. It is hardly reasonable (or even feasible) to delineate all of them, especially as popular imperative programming languages do exactly what I described. I can only imagine what some of the unpopular ones do.
- The return type is of secondary concern, and calling it "string" may be misleading.
- The return type is used to determine expression types in those languages that do so, and it doesn't matter what it's called. The examples are intended to be illustrative, not definitive.
- And "operator dispatch" may not have a clear definition.
- It's clear enough. "Operator dispatch" is the mechanism by which operator invocations are matched to the right operator definitions.
- Further, we can't assume the interpreter is implemented in an OOP language. It may be If/CASE statements under the hood such that we shouldn't get too caught up in OOP terminology.
- Who said anything about OOP? The term "soft polymorphism" is your term, not mine, and polymorphism is not limited to OO.
- That may depend on how OOP and "invoke" is defined. But, never mind. Let's not open those cans of worms here.
- The definition(s) of OOP is(are) certainly contentious, but I've never seen evidence that "polymorphism" is particularly contentious (though I suppose it could be), but I would have thought "invoke" was about as unambiguous and clear as a ComputerScience term could get.
- I meant in the context of our discussion, especially in terms of signature matching.
- Really? How is "invoke" (or signature matching) contentious here? I'm using the term "invoke" in the usual manner.
- There are different ways to implement the "invoking" under the hood.
- What are the different ways?
- OOP call signature polymorphism, case statements, IF statements, etc.
- The first is ad-hoc polymorphism, if I interpret correctly what you mean by "OOP call signature polymorphism". The latter two are not different ways of invoking an operator. They are flow-control constructs.
- Either way, it's an interpreter implementation detail (in typical languages).
TutorialDee supports specialization-by-constraint, in which each type in an inheritance hierarchy is associated with a boolean constraint expression. When a value belonging to a particular type hierarchy is instantiated (or "selected", in TutorialDee parlance), the constraint expressions are evaluated to determine the most specific type to which the value belongs. A programmer can define constraint expressions that are based on parsing values. Since TutorialDee supports MultipleDispatch based on the most specific type of values, a programmer can use it to implement "soft" polymorphism as you've described it. I know of no popular imperative programming language that permits this. I know of no other language -- popular or otherwise -- that does this internally.
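The specialization-by-constraint idea can be approximated in a short sketch. This is not TutorialDee syntax; it is an illustrative Python rendering (all names invented) of the selection step: each subtype of a root type carries a boolean constraint, possibly parse-based, and selection finds the most specific type whose constraint holds, which MultipleDispatch could then use.

```python
def is_numeric(rep):
    try:
        float(rep)
        return True
    except ValueError:
        return False

# Subtypes of the root type "string", ordered most-specific-first.
# Each is defined by a boolean constraint over the representation;
# note the constraints here are parse-based.
SUBTYPES = [
    ("integer_string", lambda rep: rep.lstrip("-").isdigit()),
    ("numeric_string", is_numeric),
    ("string",         lambda rep: True),   # root type: always matches
]

def most_specific_type(rep):
    """Selection: evaluate constraints to find the most specific type
    to which the value belongs."""
    for name, constraint in SUBTYPES:
        if constraint(rep):
            return name

print(most_specific_type("42"))     # integer_string
print(most_specific_type("3.14"))   # numeric_string
print(most_specific_type("hello"))  # string
```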
The closest approximation to "soft polymorphism" that is sometimes found "in the field" is the behaviour of certain operators in "string typed" (i.e., category D2 languages described in TypeSystemCategoriesInImperativeLanguages) languages, in which a few operators vary their behaviour depending on what values are encoded in their (always) string-typed operands. For example, a '+' operator -- whose operands are always a pair of strings -- may check to see if the operands encode numeric values, and perform addition if (and only if) they do. Otherwise, it performs string concatenation. Due to the performance overhead this represents -- the '+' operator has to parse its operands on every invocation -- its use is (appropriately) very limited. Generally, it's an order of magnitude more computationally efficient to associate type references with value representations once -- at the point where they're parsed in the source code -- rather than have to do it repeatedly by doing it at the point of each operator invocation.
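The overloaded '+' behaviour just described might be sketched as follows: a single operator definition for a Category D2 language, where both operands are always strings and are parsed on every invocation. (The exact numeric formatting rules are an assumption here; real string-typed languages vary in how they render results.)

```python
def plus(a, b):
    """A single '+' operator in a string-typed (Category D2) language.
    Both operands are strings. It parses them on every invocation and
    performs addition only if both encode numbers; otherwise it performs
    string concatenation. Note this is one operator definition varying
    its behaviour, not polymorphic operator dispatch."""
    try:
        total = float(a) + float(b)
    except ValueError:
        return a + b                      # concatenation
    # Encode the result back into a string (dropping a trailing ".0").
    return str(int(total)) if total == int(total) else str(total)

print(plus("2", "3"))      # 5
print(plus("2", "abc"))    # 2abc
print(plus("1.5", "2.5"))  # 4
```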
Regarding efficiency, technically that's not true. An "internal" type indicator/tag can be kept to enable processing shortcuts where appropriate. For example, a language parser could determine whether a given literal is quoted or not during a load or pre-parsing stage (such as p-code generation). If it's numeric (non-quoted), an internal tag can track that fact and associate it with a variable upon assignment from the literal. When an overloaded operation comes along, the tag can be checked and the parsing skipped, because the interpreter already knows the "value" represented by the variable came from a literal known to be numeric. But that's an implementation detail, and one may have no behavioral way (IoProfile) to tell whether such a tag is actually used. I use a programmer-detectable tag as the category separation criterion, not implementation tags, because language implementations can change, and new vendors may build interpreters that mirror the existing behavior, but not the implementation, of existing specimens. It's like caching in the sense that it can be invisible to the programmer, such that they don't "model" it in their head when reviewing code. An (internal) explicit type indicator/tag can improve efficiency without making its presence known in terms of I/O. As I prefer to describe it, a tag-based language acts like it has tags from a programmer's perspective, and a non-tag-based language acts like it has NO tag. If tags are "inside" merely to speed up implementation but don't otherwise change observable behavior (IoProfile), then it's okay to model them as not existing, if prediction is the goal. (Modelling speed is another matter.) -t
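The "invisible internal tag as shortcut" idea above can be sketched: a hidden flag, set once when a variable is assigned from a literal, lets '+' skip the per-invocation "is it numeric?" parse without changing observable behaviour. All names here (classify, Var, _tag) are hypothetical illustrations, not any real interpreter's internals.

```python
def classify(literal_text):
    """Pre-parse step: record once whether a literal is numeric."""
    try:
        float(literal_text)
        return "numeric"
    except ValueError:
        return "other"

class Var:
    """A variable holding a string representation plus a hidden internal tag.
    The tag is an efficiency shortcut only; it is not observable through I/O."""
    def __init__(self, literal_text):
        self.rep = literal_text
        self._tag = classify(literal_text)   # set once, at assignment

def plus(a, b):
    # Fast path: the hidden tags already say both operands are numeric,
    # so the per-invocation parsability check is skipped. The result is
    # the same AS IF parsing always occurred -- just faster.
    if a._tag == "numeric" and b._tag == "numeric":
        return str(float(a.rep) + float(b.rep))
    return a.rep + b.rep                     # concatenation

print(plus(Var("2"), Var("3")))    # 5.0
print(plus(Var("2"), Var("xy")))   # 2xy
```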
It was established early on that IoProfile can't detect whether a "tag" is detectable or not. Remember the trivial C example? Your notion of an "internal type indicator/tag" is simply the conventional notion of a language with types, e.g., Category D1 on TypeSystemCategoriesInImperativeLanguages.
- Sorry, I don't know what scenario you were referring to. Please use a PageAnchor. And you've yet to find a clean, objective way to measure "conventional notion". I don't want to repeat the "semantic canon" fights from the ValueExistenceProof series here.
- See DefinitionOfTypeTag, section starting with "Top, would you consider C to be a language that uses tags?"
- Sorry, I don't see the relevancy. And why talk about C? The context is dynamic languages. Also remember that IoProfile takes ALL possible I/O into consideration, not just CSR's. CSR's are irrelevant to IoProfile. CSR's only mattered to a discarded candidate definition and are not part of the model in question that starts with "t".
- The relevancy is that you claim to "use a programmer detectable tag as the category separation criteria", but the C example demonstrates that it's not possible to reliably find "a programmer detectable tag". If it's not reliably detectable in C, how can we trust it to be reliably detectable in other languages, especially as many dynamically-typed language interpreters and compilers are written in C? As for your "IoProfile [which] takes ALL possible I/O into consideration", that's obviously practically infeasible, because "ALL possible I/O" is infinite.
- Your own example detected it with "foo(x) == foo(y)". Fooyaa!
- How so?
- Let's back up here. What exactly are you claiming IoProfile won't detect? Remember, IoProfile does not rely on a single example and does not rely on CSR's. That particular example is very limited.
- Actually, if we're backing up here, I'm not clear what your IoProfile will detect. As described, IoProfile requires an infinite amount of I/O to complete. As described, IoProfile makes no mention of detecting tags or anything else -- I thought it was a technique for comparing languages. Therefore, if it "does not rely on a single example", precisely how many examples -- which I presume will be something in the range of 2..∞ -- are required in order to detect a "type tag", and what is the process for doing so?
- Almost all empirical testing of systems of nature and human construction are approximations since we don't have infinite resources at our disposal. We've been over this already; I don't want to go into that again. In this case, "detecting a tag" would mean the language's behavior "matches" that of a model that uses tags. Similar testing probably could NOT detect caching because caching is intended to improve performance without changing expected language behavior (IoProfile). An "internal" tag could do the same thing: improve efficiency by reducing repeated parsing of values/representations WITHOUT changing expected behavior. Thus, I'm making a distinction between "internal" tags and "observable" tags.
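The kind of behavioural probe described above can be sketched: feed the same operands to a tag-based model and a parse-based model, and look for inputs where their outputs diverge. Where such a diverging input exists, the "tag" is programmer-detectable by this criterion. (Both models here are toy illustrations, not any real language's semantics.)

```python
# Two toy models of '+' for operands held as (rep, tag) pairs.

def plus_tagged(a, b):
    """Tag-based model: add only when both tags are 'number'."""
    (ra, ta), (rb, tb) = a, b
    if ta == tb == "number":
        return str(int(ra) + int(rb))
    return ra + rb                           # concatenation

def plus_parsing(a, b):
    """Parse-based (tag-free) model: add whenever both reps parse as integers."""
    (ra, _), (rb, _) = a, b
    if ra.lstrip("-").isdigit() and rb.lstrip("-").isdigit():
        return str(int(ra) + int(rb))
    return ra + rb                           # concatenation

# A probe where the models diverge: numeric content under a "string" tag.
x = ("2", "string")   # e.g. from  x = "2"
y = ("3", "number")   # e.g. from  y = 3
print(plus_tagged(x, y))   # 23  (concatenation: the tags differ)
print(plus_parsing(x, y))  # 5   (addition: both reps parse as numbers)
```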
- From your description of IoProfile, it appears that the only way to use it to detect "a tag" is by comparing one language implementation against an implementation of exactly the same language which is known to use "type tags". Is that not the case?
- Pretty much, yes. I know of no better alternative. I bet you are itching to claim "read the manual", but again most manuals on dynamic types are pretty crappy in my assessment.
- Given the choice of reading a crappy manual vs implementing a whole new interpreter for language <x> just to determine whether language <x> uses "type tags" or not, I think I'll stick with the former.
- Once you play with such a model a bit, you don't have to create a new one for each language, you "run" it in your head. That's the general intent. And a crappy manual won't necessarily answer questions even if you re-read it a thousand times. Empirical testing may be the only way to know for sure.
- Your "play with such a model a bit" doesn't sound like a particularly rigorous approach, and why is it superior to simply playing with the language you're investigating a bit?
- It's common to learn by playing. Since when is learning a "rigorous" process? And why are you assuming experimenting with the language and experimenting with a model are mutually exclusive? When I experiment with a language, I like to (eventually) develop a model for how it "works" in my head based on the experiments. The pseudo-code kit is just a semi-formalization of that. Otherwise, one is working with mostly feeling and vague notions, which are typically harder to remember and apply. Note that I have documented with sample code various experiments to try, but they are not meant to be complete, for each language is different. One cannot predict the design of future languages with anything "rigorous". Thus DontComplainWithoutAlternatives regarding "rigorous". -t
- You didn't answer my question.
- What question? The "superior" question? I didn't claim it superior. BOTH should be used such that ranking one above the other is the wrong approach anyhow, as already explained.
- Playing with a model of a language is not superior to playing with a language? Then why do it?
- Why even question why one step is more important than another? Why does the ranking matter? They both should be done. Experimenting with the language is generally the first step, and then building a model (in head or code or paper) to mirror the experiments is the next step. If a "user" doesn't want to bother forming/tuning a model, that's fine by me. It's their choice. I'm merely offering them a suggested model that they can use or ignore on their own volition.
- (Moved "science" discussion to HomelessContent.)
- Further, it is possible that caching, for example, may change the IoProfile, but nobody has discovered or noticed the situation (input data and/or source combo) that triggers the difference. Generally these are called "bugs in the interpreter". We normally don't expect caching to change the behavior of our interpreters, outside of performance issues (speed/RAM).
As for using an "internal type indicator/tag", once you do that you no longer have "soft polymorphism", you have conventional polymorphism.
Please clarify. I don't know what you mean. The programmer could NOT detect an internal-only tag any more than they could detect caching.
If you're dispatching on type, that's ad-hoc polymorphism. See http://en.wikipedia.org/wiki/Ad_hoc_polymorphism
If they can detect tags, the language may employ conventional polymorphism. If they can't detect tags, the language may employ conventional polymorphism. There is no language which parses strings prior to invoking a polymorphic function in order to determine which implementation of a polymorphic function should be invoked. Internally, a function may parse strings and alter its behaviour accordingly, but every language can do that. It's not a distinguishing characteristic.
- Sorry, I'm not following. See below.
Basically it's a rule that says:
1. If you already parsed a literal (value/representation) during code parsing
2. You copy that value/representation into a variable during a regular assignment
3. The variable has not been changed since (via a non-tracked event)
4. If you tracked the result of the first parsing (tag)
5. Then you don't have to parse it again.
But the programmer cannot see this take place nor detect it through non-speed-related experiments (unless there is a bug). Also note that if the programmer cannot see how the dispatching is implemented under the hood, then classifying it as OO polymorphism versus case statements versus something else may be moot. That's essentially an implementation detail.
Who said anything about OO?
What exactly do you mean by "conventional polymorphism"?
See http://en.wikipedia.org/wiki/Ad_hoc_polymorphism which is "a kind of polymorphism in which polymorphic functions can be applied to arguments of different types, because a polymorphic function can denote a number of distinct and potentially heterogeneous implementations depending on the type of argument(s) to which it is applied."
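For contrast, the ad-hoc polymorphism described in that definition selects among distinct operator implementations based on the operands' types, never their parsed content. A minimal sketch (the dispatch-table mechanism here is one possible implementation, not the only one):

```python
# Distinct implementations, selected by operand *types* -- ad-hoc polymorphism.
def add_numbers(a, b):
    return a + b

def concat_strings(a, b):
    return a + b

DISPATCH = {
    (int, int): add_numbers,
    (str, str): concat_strings,
}

def plus(a, b):
    """Dispatch on the argument types; no representation is ever parsed."""
    impl = DISPATCH.get((type(a), type(b)))
    if impl is None:
        raise TypeError("no matching '+' implementation")
    return impl(a, b)

print(plus(2, 3))       # 5
print(plus("2", "3"))   # 23  -- concatenation; the digits are never parsed
```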
Again, that's an implementation detail (and OO-specific).
It's a characteristic of certain programming languages, typically referring to a facility that can be exploited by the programmer. A '+' operator that supports concatenation and addition may be described as a semantically overloaded operator, but the language does not necessarily support ad hoc polymorphism. It is common to many OO languages, but not unique to them. In particular, it is not dependent on "objects" in the usual OO sense, only on types.
There are multiple ways to skin the [insert preferred animal]. Note that I consider "soft" polymorphism to involve (at least) parse-based (or equivalent) examination of the "representation" to determine how to process the arguments. Perhaps "polymorphism" is a confusing word choice, but I don't want to dwell on vocab. The "selection" (dispatching) can potentially be based on the explicit type indicator(s) and/or the value/representation, per specific language and/or operator. For an illustration, contrast lines 4837 and 94838 in TopsTagModelTwo. Whether OO polymorphism or case statements or HOF's are used inside the interpreter is a swappable implementation detail. The outsider (programmer) can only observe the patterns: "tag and value combo X results in pattern A, combo Y results in pattern B," etc. The explicit dispatching mechanism to do this combo "lookup" is an interpreter/model implementation detail.
There is no popular dynamically-typed programming language that performs operator dispatch based on parsing values. Some, however, have semantically overloaded operators.
TypeHandlingGrid discusses why I dismiss the "popularity" issue.
As has been pointed out, your dismissal is weak. If you're going to include unpopular languages, why not incorporate (say) predicate dispatch, instead of something obscure like "soft polymorphism"?
Somewhere else, I forgot where, I pointed out that in the field there is a pretty good chance one may end up having to use an embedded or bundled language, often a "scripting" language, that comes with a specialized product. For example, an automated telephone answering system (voice menus etc.) may come with its own embedded scripting language to code phone-response logic. I've encountered at least a half-dozen such embedded/bundled languages over the years. Most of them were AlgolFamily-styled languages and/or had domain-specialized key-words, and usually didn't try to introduce "fancy" paradigms etc., because they are not marketing to experienced programmers only, but rather to a wide variety of IT shops with perhaps limited programming experience. They are selling primarily a domain product, not a programming tool, and don't want to hurt their market by using "high brow" techniques unnecessarily. (I didn't analyze the specific type characteristics of such languages in detail.) -t
Do you know of any language -- popular or obscure, embedded or general-purpose -- other than TutorialDee, that supports soft polymorphism? I.e., that parses operands to determine what type they encode, prior to operator dispatch, as part of the general operator dispatch mechanism?
- I haven't directly tested enough to say that I do, but we shouldn't assume the narrow case if we haven't done a thorough survey. By what logic should the narrow case be the base assumption? Unless it's very "costly", it's best not to assume or hard-wire-in the narrow or most-common case. -t
- That's exactly it: The non-"narrow case" here is very costly. Parsing every operand prior to operator dispatch would be a prohibitively expensive waste of CPU. Parsing operands only when needed to disambiguate certain operators -- such as "+" in many languages -- is common, and it's described in TypeSystemCategoriesInImperativeLanguages, as is noted below. Of course, "+" is just an example. Other operators could be similarly overloaded; the description on TypeSystemCategoriesInImperativeLanguages is not meant to imply that the mechanism is limited to just the "+" operator. A language could define other such operators, but the mechanism is the same.
- The goal of the model is not machine-related efficiency. As far as a production language doing it, if it's intended primarily as glue or an interface to some product, then the cost of parsing performance may not matter there either. Further, an invisible tag (non-detectable to programmer) can in theory be used to reduce parsing; but there is no need to model such. This was discussed already somewhere in a comparison to caching. A model interpreter wouldn't need to bother with caching-like efficiency shortcuts.
- The issue here is not the goal of the model, but what it's modelling, isn't it? Isn't that why you mentioned whether it's "costly" or not? However, in a production language -- even in a language "intended primarily as glue or an interface to some product"-- it makes no sense to parse operands needlessly. It only makes sense to parse operands in specifically those operators where some ambiguity, dependent on operand content, exists. Like "+". If you're speaking purely in terms of a model, your "soft polymorphism" is merely a subset of PredicateDispatching. Why not model PredicateDispatching and thereby cover every possible "embedded or bundled language"? Certainly PredicateDispatching is more recognised -- and therefore more likely to be implemented -- than "soft polymorphism".
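PredicateDispatching, which the reply above says subsumes the "soft" case, can be sketched as dispatch guarded by arbitrary boolean predicates over the arguments. This is an illustrative rendering, not any particular language's mechanism; "soft polymorphism" falls out as the special case where the predicates parse string operands.

```python
# Predicate dispatch: each implementation is guarded by an arbitrary
# boolean predicate over the arguments, tried in order. "Soft
# polymorphism" is the special case where predicates parse string reps.

def both_numeric(a, b):
    """A parse-based predicate over string operands."""
    return all(s.lstrip("-").isdigit() for s in (a, b))

METHODS = [
    (both_numeric,       lambda a, b: str(int(a) + int(b))),  # numeric case
    (lambda a, b: True,  lambda a, b: a + b),                 # fallback: concat
]

def plus(a, b):
    for predicate, impl in METHODS:
        if predicate(a, b):
            return impl(a, b)

print(plus("2", "3"))    # 5
print(plus("2", "ab"))   # 2ab
```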
- What you call "needlessly" is a design decision involving trade-offs between WetWare, machine speed, etc. I have no faith in your trade-off choices because of your uncommon thought processes. Besides, vendors do sometimes make poor decisions. If you are asked to work with a language as-is, it's part of your job, and you are not in a position to bitch to the vendor about their language design. (Personally, I wouldn't overload "+" for concatenation and math.) Remember we are not designing new languages here, but modeling EXISTING languages. And why introduce something potentially foreign to model users like PredicateDispatching when a plain-Jane IF statement can do the job just fine? It appears to be GoldPlating to me.
- What I call "needlessly" is simply the fact that in a string-typed language (e.g., ColdFusion, or what is called a "Category D2" language on TypeSystemCategoriesInImperativeLanguages) -- which is the only sort of language where "soft polymorphism" (of the sort you've described) appears to apply -- most canonical operators work on strings and do not require operands to be parsed in order to select the correct operator for dispatch. Thus, always parsing operands would be needless. Practically, whether operand parsing is needed or not would have to be specified on a per-operator basis, along with some explicit specification of what pattern the parser should expect to find.
- I don't see how this adds anything new to the discussion. It appears to be talking about how languages "should" be designed, which is irrelevant for reasons described above. As for how it may happen in the wild, a vendor may create a scripting language add-on/embedding for their tool that resembles JavaScript, so that they can brag in their brochure that it "closely resembles JavaScript and is thus easy to learn". However, to keep the implementation simple (perhaps at the price of speed), they may choose to skip implementing type indicators (tags), among other things. This would typically require parsing the "values" to determine how to apply operators like "+". Whether such a decision is "wise" or not is probably off topic. (I doubt most programmers would notice the subtle differences in behavior compared to real JS, by the way.)
- Such a language would be a flawed implementation, which would (at best) demonstrate the obvious need for associating different kinds of static type references with values, instead of relying on repeated parsing. It certainly wouldn't be worth modelling.
- It is NOT flawed in the "2 + 2 = 5" sense. You appear to be drama-queening.
- It's flawed in the sense that "soft polymorphism" (as you've described it) demonstrates no benefit that would make it worth implementing let alone modelling, and it demonstrates some performance problems that would need to be addressed. What is likely to be implemented -- and therefore is worth modelling -- is PredicateDispatching, and PredicateDispatching can subsume "soft polymorphism" so there's no need to consider "soft polymorphism".
- You are pretending or assuming that all or most embedded language designers or implementers are rational, careful, and/or have plenty of time to design/implement. That's too big of an assumption in my judgement. HumansSuck. I prepare for the messy world as-is, not as it "should be". -t
- When a language appears that implements your "soft polymorphism", that will be a point worth considering, but only as an example of a poor implementation of PredicateDispatching.
- It might already exist; nobody's taken a decent/public inventory of such languages, regardless of quality or value judgements about them.
- If it already exists, then it's nothing more than a needlessly inefficient form of conventional type-based dispatch, or it's a variant on PredicateDispatching that's rare enough not to be worth considering. It's no more worthy of special distinction than any other poor implementation. Sometimes, language implementers create slow implementations of matrix multiplication. It doesn't require a special name; it's just slow matrix multiplication. Likewise, your "soft polymorphism" is either a pointlessly slow implementation of conventional type-based dispatch, or it's a limited form of PredicateDispatching.
- This doesn't appear to make any new points. And it's not necessarily about speed: it affects the IoProfile.
- It's not making new points; it's a summary. Whether it "affects the IoProfile" or not is both irrelevant and self-evident. In this case, without any specific detail, all that "affects the IoProfile" can possibly mean in familiar terms is that "language constructs affect language behaviour", which is unhelpfully obvious.
- I'm not sure what you mean by "[my] uncommon thought processes" (is that an AdhominemAttack?), or what any of this has to do with poor vendor decisions. If you can find any example of a language -- even one that is "intended primarily as glue or an interface to some product" -- that implements "soft polymorphism" as you've described it (i.e., that always parses operands to determine what type they encode prior to operator dispatch as part of the general operator dispatch mechanism) then there may be some justification for it. I've never heard of such a language, and there doesn't appear to be any justification for there to be one. Even TutorialDee -- which can essentially support it -- does not apply it to every operator invocation; only those associated with types defined to use specialisation-by-constraint. However, there are real languages that implement (or can implement) some form of PredicateDispatching (e.g., JPred, TutorialDee as noted) so it makes sense to be able to model PredicateDispatch.
- The model generally targets AlgolFamily-style languages, which is what one usually finds in the wild, especially with "embedded" languages which usually don't want to "rock the boat" because they are usually distributed by a tool vendor, not a language vendor. AlgolFamily-style is the Lingua Franca of programming for good or bad, and tool vendors (embedders) will likely choose the Lingua Franca to cater to the widest possible audience. Business 101. If you want to make a model/kit for obscure, specialist, or academic languages, be my guest; it's a free wiki and the reader can choose whatever one they want.
- JPred is an extension to Java, and TutorialDee is an AlgolFamily language. There's hardly anything obscure or particularly academic about either.
- Implementing or mentioning PredicateDispatch in such a model would more than likely confuse the model user, and would not result in less code.
- Why is PredicateDispatch more likely to "confuse the model user" than "soft polymorphism"? "Soft Polymorphism" is a term that currently only appears on the C2 Wiki, aside from one sneering deprecation of the term at http://www.coderanch.com/t/369364/java/java/Polymorphism and an on-the-fly coinage at http://www.gamedev.net/topic/589615-c-lua-storing-references-to-lua-functions-in-c/ . By contrast, PredicateDispatch is a recognised term that you can search for and read about elsewhere. Notably, "soft polymorphism" scores 91 hits on Google, and most of them have nothing to do with programming. "Predicate dispatch" scores 4,390 hits, and all of them appear to be programming-related.
- I'm not sure whether you are talking about documentation issues or implementation. In my opinion, neither term needs to be mentioned. The samples show that one can make any expression or code needed to mirror a given operation's behavior, and I see no need to classify such code in the primary documentation. As side info or a footnote, that's fine. I used the term "soft polymorphism" as a working term when describing a key difference between our approaches. But I wouldn't consider including such in the primary documentation because I see no need to compare. I know you probably believe your model is a canon and/or important, but I don't want to re-re-re-debate its alleged importance again.
- My response was to your claim that "implementation of or mentioning PredicateDispatch in such a model would more than likely confuse the model user", not some "documentation issues or implementation". You claimed that an alleged limitation of TypeSystemCategoriesInImperativeLanguages is that it doesn't support "soft polymorphism". If "soft polymorphism" is not a term that needs to be mentioned, then why mention it?
- The purpose of the primary documentation for model X would be to describe model X, NOT contrast model X with model Y. This should be obvious. For example, the online Php documentation does not compare Php to Lisp anywhere. IF one were contrasting languages, then it may make sense to introduce further terms/concepts to simplify writing about said contrasts.
- I'm not sure what you mean by comparing PredicateDispatching to "a plain-jane IF statement". How does a "plain-jane IF statement" implement "soft polymorphism"?
- Plus-Sample-1 at TopsTagModelTwo gives an example. See line near comment "4837".
- Dispatch is dependent on the predicate expression 'isParsableAsType(valLeft, "Number") And isParsableAsType(valRight, "Number")'. It's PredicateDispatching.
- If you wish to call it that, fine. The actual label/classification is of very minor importance to discussion.
- That's what it is, in a rather ad-hoc and case-specific sense. It isn't the "plain-jane IF statement" that is fundamental to making it work, but the predicate expression. There are ways you could implement it without an IF statement, but you'll always need the predicate expression.
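One way to sketch this, under stated assumptions, is as a dispatch table of (predicate, implementation) pairs searched in order, rather than a literal IF chain. The helper is_parsable_as_number here is hypothetical, standing in for the isParsableAsType(v, "Number") predicate quoted above:

```python
# A sketch of PredicateDispatching for "+" in a string-typed language,
# with no explicit IF chain: the predicate expressions select the target.
def is_parsable_as_number(s):
    # Hypothetical stand-in for isParsableAsType(s, "Number").
    try:
        float(s)
        return True
    except ValueError:
        return False

def numeric_add(a, b):
    total = float(a) + float(b)
    # Return an integer-looking string when the sum is whole.
    return str(int(total)) if total.is_integer() else str(total)

def concat(a, b):
    return a + b

# Dispatch table: (predicate, implementation) pairs, tried in order.
PLUS_DISPATCH = [
    (lambda a, b: is_parsable_as_number(a) and is_parsable_as_number(b),
     numeric_add),
    (lambda a, b: True, concat),  # fallback: string concatenation
]

def plus(a, b):
    for predicate, impl in PLUS_DISPATCH:
        if predicate(a, b):
            return impl(a, b)
```

The IF statement disappears into the table lookup, but the predicate expression is still doing all the work, which is the point above.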
There are numerous languages that overload certain operators -- most notably (and typically) "+" for both string concatenation and numeric addition -- but these do not rely on parsing operands prior to operator dispatch as part of the general operator dispatch mechanism. In every example I've seen where operands have to be parsed to determine whether "+" should be string concatenation or numeric addition, it's done by the "+" operator itself.
Now, you could argue that the "+" operator itself is a specialist dispatch mechanism that delegates to string concatenation or numeric addition, but that's covered by TypeSystemCategoriesInImperativeLanguages in the part that states:
"Some languages do not distinguish operand types outside of operators and treat all values as strings, so the only signature (for "+") is effectively: "+(string, string) returns string". In such languages, when "+" is invoked it internally attempts to convert its operands to numeric values. If successful, the operator performs addition and returns a string containing only digits. If the conversion to numeric values is unsuccessful, the operator performs string concatenation on the operands and returns the result."
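A minimal sketch of the quoted behaviour, assuming Python-like semantics (the function and variable names are illustrative, not from any real implementation):

```python
# "+" with the single signature +(string, string) returns string:
# the operator itself attempts numeric conversion and falls back to
# concatenation, so no parsing happens in the dispatch mechanism.
def plus(left, right):
    try:
        # Attempt to convert both operands to numeric values.
        total = float(left) + float(right)
        # Conversion succeeded: perform addition, return digits as a string.
        return str(int(total)) if total.is_integer() else str(total)
    except ValueError:
        # Conversion failed: perform string concatenation instead.
        return left + right
```

Note that the conversion attempt lives inside the "+" operator, which is exactly the distinction drawn above between the operator doing its own parsing and parsing being part of the general dispatch mechanism.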
Something like TypeHandlingGrid may be more flexible and compact for describing combinations of hard and soft polymorphism. Language usage frequency is also discussed.