Typeful Programming

A programming style identified by LucaCardelli in which types matter for much more than catching some errors at compile time. In the hands of experienced programmers, types become a powerful design and program construction tool.

From Cardelli's 1993 paper (http://citeseer.ist.psu.edu/243526.html):

There exists an identifiable programming style based on the widespread use of type information handled through mechanical typechecking techniques. This typeful programming style is in a sense independent of the language it is embedded in; it adapts equally well to functional, imperative, object-oriented, and algebraic programming, and it is not incompatible with relational and concurrent programming.

TypefulProgramming is a day-to-day reality for people programming in some modern functional languages, and has been for more than a decade.

Take, for example, this passage from A Gentle Introduction to Haskell (http://www.haskell.org/tutorial/):

Haskell [HaskellLanguage] is a typeful programming language: Types are pervasive, and the newcomer is best off becoming well-aware of the power and complexity of Haskell's type system from the outset. For those whose only experience is with relatively "untypeful" programming languages such as Perl [PerlLanguage], Tcl [ToolCommandLanguage] or Scheme [SchemeLanguage], this may be a difficult adjustment; for those familiar with Java [JavaLanguage], C [CeeLanguage], Modula [ModulaThree], or even ML [MlLanguage], the adjustment should be easier but still not insignificant, since Haskell's type system is different and somewhat richer than most. In any case, "typeful programming" is part of the Haskell programming experience, and cannot be avoided.


What is TypefulProgramming actually about? Pretty much everything that follows seems to be a debate about static vs. dynamic typing and whether one is Better Than The Other, which doesn't really help in elucidating what TypefulProgramming actually is. As far as I can make out (and I stress that I may be completely out of whack), TypefulProgramming relates to the richness of the declared type system as an algebraic/programmatic structure - something that could just as easily be applied at runtime as at compile time. If an explanation is in here, it's drowned out by the noise over whether OCaml makes for shorter code than Ruby.


Maybe it's time to put to rest the fights over the RedHerring argument that unit tests make static types unnecessary (see DoesUnitTestingMakeStaticTypingLessUseful)? -- CostinCozianu

Seems to me that TypefulProgramming is a completely orthogonal issue to the static-vs-dynamic issue. A lot of the time, I do think about my program in terms of types, even in dynamically-typed languages; I just don't want the system to try to resolve those types before runtime. Is there some reason why TypefulProgramming needs to be StaticTypefulProgramming?? -- AdamSpitz

Why shouldn't it be statically typed? Mind you, HaskellLanguage, ObjectiveCaml and company do support dynamically typed values, but they are the exception and not the rule. So why not let the compiler verify as much as possible, and use the information for other good purposes?

If you say that dynamic types buy you greater flexibility that you desperately need, I'd like to see a realistic example. -- CostinCozianu

I have an example: ObjectiveCee, though it offers dynamic binding and a generic object type, lacks the Block construct that its predecessor Smalltalk had. I endeavoured to implement these blocks as an object, without the aid of a special preprocessor. In short, I wrote an Objective-C interpreter. Were it not for the fact that message sends and the like all happened dynamically, the task would not have been possible. When I ran into some problems, I explained the pointer troubles to a competent CeePlusPlus programmer. He hadn't even conceived of the things made possible by dynamic binding (which is not possible with static types, unless I'm mistaken). But I was able to implement blocks as objects in Objective-C that take a string as the code they are to execute. It would not even have been possible in a statically-typed language.

Interpreters abound in Ocaml, Haskell, ML and other statically typed languages - writing one is even a common way to hack up some code - so that kind of contradicts your claim. Not to mention that these languages do have closures. It is hardly a realistic example that one can't live without a hand-made interpreter for the same language one programs in.

Besides, UnitTests do make static typing unnecessary - if the tests pass, then why should the types matter? -- JoeOsborn

That's one of the claims of this page: that types are for more than type checking. I disagree that unit tests make static typing unnecessary, but that's not the subject of this page. -- CostinCozianu

Here is my attempt at a realistic example of the benefit of dynamic typing. For me the big flexibility of dynamic typing is the ultra-interactive/incremental development style it supports. I can bring up an empty Erlang or Lisp system and a fresh Emacs, and start writing a webserver function-by-function in tiny microsteps. I can keep on hacking it while it runs, without ever restarting it. This can go on for a week, then I can publish it, and keep on hacking it while people use it. The same principle applies when developing large applications, and debugging or patching real running systems. I love working this way. As far as I know, you can't do it in Haskell or ML. -- LukeGorrie

Use JavaLanguage, and you can do the same thing. You just need to replace the class that has been redone and place it on the application/web server. During development, if you need it, use VAJ and get on with it. It does not matter whether the compiler has a type system. The only thing you need is an environment that supports dynamic loading.

StaticTyping lets the programming environment offer useful information to programmers while they're working on their code, even when it's not running. DynamicTyping only lets us have that information while the code is running - but it also makes it really easy to work on the code while it's running.

I do it all the time in SmalltalkLanguage. A method doesn't exist, I hit the StubButton, and now I'm writing the method, and I've got all the information a statically-typed environment could provide me, and more - not only do I know the types of each argument, I know what the arguments actually are.

The system doesn't need to "infer" types, it doesn't have to force me to "declare" types - it just knows the types, because the objects are right there. I just look at the object and see what type it is.

Is that TypefulProgramming?

-- AdamSpitz

First, have you tried TypefulProgramming? Second, how exactly are you hindered by static types? Code examples, please. For example, in Ocaml, if you need one more method, you just add it and you're there: you can use it.
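
For instance, here's a minimal sketch (the object and method names are made up for illustration):

 (* A minimal sketch with made-up names: add a method and use it
    immediately; no interface declaration anywhere has to be updated. *)
 let greeter = object
   method hello name = "hello, " ^ name
   method helloLoudly name = "HELLO, " ^ name   (* the freshly added method *)
 end

 let () = print_endline (greeter#helloLoudly "world")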

[Some discussion on the effects of static typing on refactoring and testing moved to StaticTypingHindersRefactoring -- ct]

I don't think I understand what the phrase TypefulProgramming means. This page has gotten side-tracked into all the usual static-vs-dynamic arguments; does that mean that "typeful programming" just means "static typing"?

It doesn't sound like that's what it means. The blurb at the top of the page talks about using types as a "design tool and program construction tool." It doesn't sound like a technical concept; it sounds like a way of thinking (which the programming environment can aid by providing information about types).

It sounds to me like "typeful programming" means something like, "using type information to help me reason about my program." Is that close?

If that's what it is, then I think I do TypefulProgramming in Smalltalk, whenever I write code inside the debugger. Smalltalk objects have types, and Smalltalk tells me what those types are, and I use that information to help me reason about my program.

What am I missing?

-- AdamSpitz

You're right. Except that I doubt you can safely reason about more advanced types with merely a debugger. Smalltalk objects typically have very simple types. Objects that you see in typeful languages have much more advanced (and powerful) type constructs. The feedback of a typeful compiler allows you to abstract more than you would otherwise.

Plus there's the problem of ProofObligation. To quote EwDijkstra, every programmer has a proof obligation: to be able to prove that his program works as specified. A typeful compiler helps you with a substantial portion of that proof. And while doing the proof you have to demonstrate static type safety anyway. So why not let the compiler do it?

Give me an example of one of these more-advanced-and-powerful types? -- as

For example, let's extend the example above. I didn't use standard library functions, so that the code can be self-sufficient:

 let computeSomething a b d = a#doTheAThing (b#doTheBThing true) (d#doTheDThing "MyString?")

 let compose g f = function x -> g (f x)

 let fold_left operator start list =
   let rec rfold accumulator l = match l with
       [] -> accumulator   (* base case: hand back what we've accumulated *)
     | head :: tail -> rfold (operator accumulator head) tail
   in rfold start list

 let composeParamList list =
   fold_left
     (function f -> function y ->
       match y with (a, b, d) -> compose (computeSomething a b d) f)
     (function x -> x)
     list

Now let me see an example in Ruby or whatever. You can't take the attitude that Static Typing Is Guilty Until Proven Innocent.

A short description of what the code above does would be very useful... -- ct

I don't have the slightest idea what the above code is doing (I think it's a little bit confused - why in the world do you need that rfold function?), but here's what I think is a pretty direct translation into Ruby:

  def computeSomething(a, b, d)
    a.doTheAThing(b.doTheBThing(true), d.doTheDThing("MyString?"))
  end

  def compose(g, f)
    lambda { |x| g[f[x]] }
  end

  def fold_left(list, start, &operator)
    if list.empty?
      start
    else
      fold_left(list[1..-1], operator[start, list.first], &operator)
    end
  end

  def composeParamList(list)
    fold_left(list, lambda { |x| x }) { |f, y|
      a, b, d = y
      compose(computeSomething(a, b, d), f)
    }
  end

Of course, in Ruby I wouldn't write the fold_left function that way - it seems a lot simpler to just do it the (gasp!) imperative way:

  def fold_left(list, n, &operator)
    list.each { |x| n = operator[n, x] }
    n
  end

Have I misunderstood what the OCaml code is doing?

-- AdamSpitz

The Ocaml code assumes that computeSomething a b d returns a function from X to X; it then defines a function over a list of triples (a, b, d) that returns the composition of all the functions generated by computeSomething for all the parameters in the list. The inner function rfold in the fold_left implementation is only one of my coding habits: if I can, I like to cut down the length of expressions, so the recursion is defined in terms of 2 parameters instead of 3, because operator never changes. I ventured a wild guess and I think I corrected some misspelled code in composeParamList; please correct me if I'm wrong. I'm a little puzzled why you have to treat operators differently from functions, but every language has its own curiosities; that's no big deal.

But anyway, it looks to me like you haven't gained much with your DynamicTyping in Ruby, have you? The Ocaml code is as concise and as flexible as the Ruby code. More importantly, somebody who reads match y with (a,b,d) -> ... gets a clear idea that y has to be a triple, and when he sees the typing from the compiler, he gets a better picture of specifically what kind of triples there need to be in that list; somebody who reads computeSomething(*y) hardly gets a clue, unless he has access to the source code of computeSomething(a,b,d).

It looks to me like important design information gets lost in a dynamically typed environment. There's also the aspect that Ocaml runs orders of magnitude faster, and a very substantial part of Ocaml's optimizations are the result of its advanced type system.

Sigh... :) I don't blame you for trying to correct that code in composeParamList; it's related to one of the Ruby idiosyncrasies that I hate most. Ruby gives you a special syntax for passing a closure into a function, so that you don't have to type the word "lambda" all the time - but if you want to pass more than one closure into a function, you're stuck; you've just gotta type out the word "lambda" and pass it in as an ordinary parameter.

I've fixed the *y thing, too. I thought that *y was just as clear as (and more direct than) expanding it out to (a, b, d), but if we had meaningful variable names it'd definitely be better your way. :)

As for the runtime speed, the Self VM is the DynamicTyping world's poster child for cool optimizations; take a look at some of the papers on the SelfLanguage website if you're interested. We can still do type-based optimizations, even without a type system. ;)

-- AdamSpitz


I'll take the blame for the digression. If anybody wants to delete or move the standard "static-typing-vs-dynamic-typing" stuff on this page, I for one won't be in the least bit offended. I think part of my confusion is that I'm having a hard time seeing TypeInference as StaticTyping. Maybe somebody could clear that up for me. Up above, Costin wrote:

Please note that you don't need to declare any interfaces a priori, you have a perfectly valid function definition, and the code looks very much the same like the code you'd write in a dynamically typed language.

To which I want to say "Well, if you're not declaring any interfaces a priori, aren't you using DynamicTyping after all?" Any help in wrapping my mind around this concept would be greatly appreciated. -- francis

Well, not really. TypeInference is not the same as DynamicTyping, and at the same time it is essential for typeful programming. To begin with, if you try to pass whatever you feel like to any of the functions we discussed above, you'll get a veto from your faithful compiler. The same thing does not happen in Ruby. This shows that even if I don't declare any types, the compiler infers the types for me. The fact that the compiler does it for me allows me to do tricks with non-trivial types. You can imagine a type B being defined as the "union" of all class types (not everything in Ocaml is an object) that have the method #doTheBThing bool, speaking of course informally. That's pretty much what you'd calculate and manage in your head when doing Ruby, and that's already something that Java is incapable of.
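
To make that concrete, here is a minimal sketch (the function callB is mine, purely for illustration) of what the compiler infers without any declared interface:

 (* No interface is declared anywhere, yet the compiler infers a
    structural object type and vetoes any argument lacking the method. *)
 let callB b = b#doTheBThing true
 (* Inferred: val callB : < doTheBThing : bool -> 'a; .. > -> 'a
    i.e. "any object having doTheBThing : bool -> 'a" is accepted;
    anything else is a compile-time error. *)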

But things get even more complicated. For example, computeSomething has a type that is basically the solution of a system of equations. Again speaking informally, it is a function f: A x B x D -> R where A, B, D, R are any types that satisfy:

 B has a method #doTheBThing : bool -> B1
 D has a method #doTheDThing : string -> D1
 A has a method #doTheAThing : B1 -> D1 -> R
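
For the curious, this is roughly what the Ocaml toplevel reports for the computeSomething definition above (a sketch; B1, D1 and R show up as type variables):

 val computeSomething :
   < doTheAThing : 'a -> 'b -> 'c; .. > ->
   < doTheBThing : bool -> 'a; .. > ->
   < doTheDThing : string -> 'b; .. > -> 'c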

Again, basically this is pretty much the same reasoning that a Smalltalk or Ruby programmer has to keep in his head. When you go into deeper abstraction, the type equations may become quite complicated for a programmer; even if he manages them intuitively, the details that affect the code are very prone to errors. It is much more comfortable to get those errors reported by the compiler, together with what is expected for the typing to be consistent. At the same time, there can be warning bells that are not necessarily errors: a function that you write is reported to have a type different than you'd expect. It is likely that you've got a programming error.

TypeInference is absolutely essential for TypefulProgramming, because otherwise it would be prohibitive for the programmer to describe the typing relations explicitly. The sheer annoyance of writing so many symbols would put everybody off, just as the sheer annoyance of writing Java interfaces puts people off when they try to create more powerful abstractions in Java.

There are also limits to TypeInference that make it strictly less expressive than DynamicTyping. There is correct dynamically typed code that you just can't write in Haskell or Ocaml. The most common example is:

 let f x =
   if x#test then
     x#doSomething
   else
     x#doSomethingElse

In Ocaml, x is required to have #test and both #doSomething and #doSomethingElse. If x's class has only the #doSomething method but always returns true for #test, you'll get a type error in Ocaml even though the code would run perfectly well in Ruby. It is believed, however, that such code is rare, unnecessary, and most likely a sign of not-so-good design. In any case, one can always write code that does the same thing as the "untypable" code, albeit in a different form.

(You can't avoid such code in cross-browser JavaScript. --VladimirSlepnev?)
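
Here is a minimal sketch of one such reformulation (the variant type and constructor names are mine, purely illustrative): make the two kinds of argument explicit, so the compiler knows statically which methods are available:

 (* Encode the two cases in a variant, so each branch is statically typed. *)
 type ('a, 'b) subject =
     OnlyDoesSomething of 'a   (* has doSomething only *)
   | DoesBoth of 'b            (* has test, doSomething and doSomethingElse *)

 let f = function
     OnlyDoesSomething x -> x#doSomething
   | DoesBoth x -> if x#test then x#doSomething else x#doSomethingElse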

Another thing that is hard or impossible to type statically, even in a typeful language, is reading objects from (not fully trusted) external streams, as in Java's serialization framework; for that to work you need escape hatches. So inevitably there will always be an element of DynamicTyping in any typeful language, but that is the exception and not the rule.
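
Ocaml's own escape hatch illustrates the point; Marshal.from_channel really does have the type below, and the annotation is an unchecked promise (a sketch):

 (* Marshal.from_channel : in_channel -> 'a -- the compiler must take
    the programmer's word for what comes off the wire. *)
 let read_int (ic : in_channel) : int =
   (Marshal.from_channel ic : int)   (* an unchecked assumption, not a proof *)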

I think it's possible to imagine a case where this sort of logic might be quite useful. For example, Ruby lets you override the method method_missing: this method gets called any time at runtime that your code looks for an undefined method. I'm writing code for domain-object retrieval that uses this to help one object - called the ObjectStore - act as the Facade for a bunch of utility classes behind it.

For example, say I want to retrieve a domain object by unique id, those method calls will look like:

 objectStore.getClient 143 # return client 143
 objectStore.getInvoice 15 # return invoice 15

then my method_missing signature looks like:

 def method_missing(methodId, objId)
   # get the metaclass -- Client, Invoice, etc. -- for the objectType
   methodName = methodId.id2name
   methodName =~ /^get(.*)$/
   objectTypeName = $1
   objectType = ClassUtil.getObjectTypeFromString objectTypeName
   # now get the object
   get objectType, objId
 end

(methodId is a Ruby-defined object describing the method call, and I'm free to add as many parameters after that as I like.)

But if I also want to retrieve a collection of domain objects by querying for a search term, those method calls might look like:

 objectStore.getClients "%International%", "name"
 # return a collection of clients whose "name" fields contain the string
 # "International"

then my method_missing signature needs to handle both cases, in which case the signature might look like:

 def method_missing(methodId, objIdOrSearchTerm, fieldName = nil)
   # if we just want a single object:
   objId = objIdOrSearchTerm
   # ... use the old code to search by objId ...
   # but if we want a collection:
   searchTerm = objIdOrSearchTerm
   # ... write the new code to search using searchTerm and fieldName ...
 end

This is messy, in its way, and if I keep cramming stuff into method_missing I'm going to have to look into different ways to deal with it. (I certainly don't want to have parameters called objIdOrSearchTermOrCacheDurationOrSummaryField). But right this minute, it's working fine, and it makes my life quite a bit easier. -- francis


So as far as I can tell the typing spectrum looks like this:

PrototypeBasedProgramming (DuckTyping): at method dispatch time, ensure the object itself has the required method.
DynamicTyping: at method dispatch time, ensure the type of the object has the required method.
StaticTyping: at compile time, ensure the type of the object has the required method.

I think the names we have for typing are really bad. In any case, by this logic Ocaml is a statically typed language - it's just a lot smarter about it than, say, Java. You could probably figure out most of the types in a Smalltalk program by analyzing the code, too. The RefactoringBrowser must do something like this.

I think (and I may be wrong here) some of the pushback that's coming from us DynamicTyping types (!) is that the more you use these languages, the more you start to think of typing the way it happens in PrototypeProgramming. Thus Self grew out of Smalltalk, and in Ruby you can attach methods to an individual object. Types start to seem not very fundamental to ObjectOriented programming. I send an object a message and it gets back to me with an answer. It doesn't really matter what type it is.

But when considering StaticTyping, it's a tradeoff, balancing the extra safety against the annoyance of the compiler getting in your way. It's pretty clear to me that "traditional" StaticTyping isn't worth it, but maybe Ocaml's is. I'm going to have to use it to do something to know for sure. Please RefactorMe, I'm rambling like never before ;)

--ChristianTaubman

It's fun to imagine having real conversations in a statically-typed way. Someone asks you, "Where do you want to go for lunch?" and you think, "OK, that's a Question asking a Person for a Location," and you tell him, "Burger King," and he thinks, "OK, that's a Location. Types check out. Let's go."

Honest question: does anybody here actually think like that?

(It's even worse in languages like C++ or Java, where the person actually has to say, "OK, I'm going to say a Question asking a Person for a Location now: where do you want to go for lunch?" :)

I think Christian hit the nail right on the head: types just aren't very fundamental. It's easy, when you've got a fancy type-checker, to imagine that all the poor people without type-checkers need to go around doing the type-checker's work manually inside their heads, but it isn't true.

--AdamSpitz

Ok, Adam, please let us know how you face up to your proof obligation without doing the type checker's work. And you haven't responded to the fundamental question: what's the drawback in submitting your programs to a typechecker - are you afraid of what it may find?

Do you prove the correctness of the systems you work on? If so, how does TypefulProgramming help with this? -ct

If you read Dijkstra's writings on ProofObligation, you'll see he doesn't actually require you to formally prove everything; he requires you to be able to prove it. Although even partial or full formal proofs may be coming in a decade or more :) The problems are not so much with the science of it as with the qualifications of the workforce and the commitment of the software industry to invest in quality and to face up to its moral and social obligations.

But to answer your question, I do an informal proof of the critical parts. For example, any part that is concurrent or distributed should be considered critical by default. TypefulProgramming is an essential help: I can do local reasoning about functions and programs. In any case, I don't believe any programmer can be freed of his ProofObligation unless he can at the very least prove that his program is correctly typed. I get that for free. The Smalltalk folklore I've read tells me that even the most experienced programmers have chased type errors in complicated systems. --Costin


Adam, you seem to think that the natural-language analogy you present somehow gives evidence for the 'naturalness' of dynamic typing. I don't believe this, for many reasons.

--AnonymousDonor

I didn't mean the Burger King thing to be evidence. :) Just a way of looking at things that I find useful. Or at least fun. :)

"Lettuce" is wrong because it isn't an answer to the question, not because its type isn't the same as the question's return type. (Correct; it is not an answer to the question precisely because the type is incorrect. Had I answered, "Joe's place," instead, such that we knew that Joe ran a restaurant, then it would be a proper answer, since the type matches. --SamuelFalvo?) I don't think we have static type-checkers in our heads; I think we notice that "lettuce" is wrong because "lettuce" is wrong (circular reasoning is a fallacy. --SamuelFalvo?), not because the type of "lettuce" is wrong. (Proof that it is the type that is wrong: suppose I opened up a restaurant named "Lettuce"? Then, suddenly, responding with Lettuce becomes a valid answer. Back in Utica, NY, there used to be a real dive of a bar called "Nowhere." When some drunk got home, and his wife asked where he was, he could honestly answer, "Nowhere." --SamuelFalvo?)

Are you sure that "where" is really an unfilled data cell that can only hold a "place" value, and not any other kind of value? I think that "I've got some work to finish; you go on without me," or "I brought my own lunch today," or "Can we go a little later?" would also be perfectly valid answers to the question, "Where do you want to go for lunch?" (These are not answers to the question asked at all; instead, you're discarding the first sentence and asserting your own question in its place. If the original person approves of this, garbage collection occurs, and he just goes, "OK." Otherwise, he might press you harder for an answer. "Oh no, you're not getting off the hook this easily this time, Adam. WHERE do you want to go to lunch? I have ways of making you talk!!" --SamuelFalvo?)

And I never, ever want to go back to a programming environment that isn't highly interactive. ;)

--AdamSpitz


Is it worth discussing "strong" vs. "weak" typing here as well? One of my annoyances with Java is the ability (driven by necessity) to cast references to other types, making the type system less useful than it could be. I'm hoping that with generics in Java, I'll never have to cast again. Now that I've looked through the wiki a bit, I see that these terms are poorly defined. Basically, I mean casting vs. non-casting languages. --AndrewMccormick


But I still can't see where dynamic typing would prove really useful in a standard business application scenario (which has, say, one million lines of code). Agreed, dynamic typing will be useful when the code base is small and only one or two developers are required to maintain the code. [DynamicLanguagesAndLargeApps] I find static typing to be useful even as a design tool, and more importantly as a tool to enforce rules and assumptions regarding the design. To take an example from Java's JDBC specification, JavaSoft has given guidelines for how an implementor can provide an implementation of a Driver (using the Driver interface), a Connection (using the Connection interface), etc. The developer is guaranteed that at least the interfaces and the data structures are reliable (even if he finds the implementation bug-ridden, but that's another matter). In a dynamically typed environment, the software designer cannot really give specifications and guidelines about how to use his APIs. Agreed, documentation does help, but if the software designer has to ensure that developers use his code as he intended, then he must provide test suites to ensure that (not only that his own code works right).
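
For comparison, here is a minimal sketch of the same idea in Ocaml terms (the module and function names are made up, not taken from JDBC): a module signature is a machine-checked specification that every implementor must satisfy.

 (* Illustrative names only: the signature plays the role of the JDBC
    Driver/Connection interfaces; the compiler rejects any implementation
    that doesn't match it. *)
 module type DRIVER = sig
   type connection
   val connect : string -> connection   (* takes a url *)
   val close : connection -> unit
 end

 module MyDriver : DRIVER = struct
   type connection = { url : string }
   let connect url = { url }
   let close _ = ()
 end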

Base classes essentially provide limited-scale dynamic typing. In C there is the generic pointer type void *, which is what malloc(), one of the basic C library routines, returns. C++ provides templates, which essentially provide duck-typed classes, albeit resolved at compile time. This would suggest that some business value is provided by dynamic typing. I am comfortable with static typing and prefer it, but I don't think it is a big deal that others may prefer dynamic typing. Both have strengths and weaknesses.


More-or-less this very same debate is raging on LambdaTheUltimate:

http://groups.google.com/group/comp.lang.scheme/msg/caa5d2e147ba9bd2?q=g:thl3243730853d&dq=&hl=en&lr=&ie=UTF-8&safe=off&rnum=4

The UseNet post by Paul Wilson (U Texas at Austin Lisp expert, not PaulWilson) perfectly sums up my view. I suggest advocates of both sides read it.


I think Christian hit the nail right on the head: types just aren't very fundamental. It's easy, when you've got a fancy type-checker, to imagine that all the poor people without type-checkers need to go around doing the type-checker's work manually inside their heads, but it isn't true.

But it is true. It's just occurring in parallel with your other coding efforts, so you don't consciously see it. It's like pipelining in modern CPUs: instructions do not take one clock cycle to execute; they can take as many as 22 nowadays. They only appear to take a single cycle because, while executing one, the CPU is also executing another in a different pipeline stage (and so on for all pipeline stages, ideally).

This clearly raises an important point - typing is a design issue. Compilers for statically typed languages try to help enforce a consistent design, because that is the one thing that can be checked automatically (does slot A fit into socket B?), to varying degrees of success. On large (multi-million line) programs, where hundreds to thousands of engineers are involved, static type checking becomes a necessity. On projects where lives are at stake (a pacemaker program, the SpaceShuttle, et al.), static type checking becomes a necessity (and then some; now you're also getting into formal proofs!). For web applications, where the worst that can happen in the event of a failure is a double charge on your credit card and 48 hours of VoIP downtime, probably not.

Languages may have no obligation to verify the consistency of a design; obviously, there are plenty that do not. Python, for example, couldn't care less about your code's design. You obviously do, though, which is why you use TestDrivenDevelopment to make sure it all works. (Well, you do use TDD, right? Right? Anyone?) However, it is unmistakable that most widely deployed languages are making efforts to recognize the importance of automated type checking. PythonThreeThousand is following the route of ANSI CommonLisp here - allowing optional type declarations which the interpreter does nothing with, but which remain available for use by third-party tools. I can foresee tools that check these optional signatures, maybe even going so far as to infer types, to help verify consistent design in a Python program. But the interpreter? It couldn't care less. If my memory serves me correctly, PerlSix is doing the same thing.

Actually, many people use logical reasoning and design to make sure it all works. TestDrivenDevelopment isn't required for a program to be correct, but proving a program correct is much more difficult than throwing sample tests at it to see whether some things work. Tests never cover all cases; they cover only a tiny sample, which is why logical reasoning about the program is far more important than throwing tests at it. In fact, one has to ask: how do you know the tests are right? Do you test your tests with more tests, to make sure those tests are even testing the right things? Can one test the tests to make sure the tests are correct? It would lead to infinite regress, which is why logical reasoning about algorithms is far more important than testing.

The proof is in the pudding, folks. Dynamic languages really do rock. But application for application, static languages are vastly more widespread because of the greater assurance of correctness they offer (to say nothing of the optimizations they allow a compiler to make), as well as the benefit to code maintenance. The USA could have saved a few cents on at least one Mars probe, had the compiler they used included measurement units in its notion of types. Had they used Haskell, they could have used this package: http://code.google.com/p/dimensional/ and my tax dollars would not have been wasted.

--SamuelFalvo?


A (random) comment:

Why are we assuming here that Types == Classes? I personally think of the Type of an object as its interface.

In fact, I imagine a programming language where you can specify an Interface for methods, e.g., in this PerlSixish/.NET example:
    sub with(IDisposable $value, mu &block)
    PRE { &block.arity == 1 }
    does {
        try {
            block($value);
        } ensure {
            $value.Dispose();
        }
    }

There: with accepts an IDisposable, and a block with arity 1 (it takes one argument, as specified by the precondition).

Example code:

     with open("File.txt") -> $file {
         # do something with filehandle $file.
     }
TypesAreInterfacesNotClasses?.


Comment on the code

    let f x = if x#test then x#doSomething else x#doSomethingElse;;
where this works in dynamically-typed systems if x#test is always true when x#doSomethingElse is undefined... Equivalent code (which can be statically proven not to have a *type* error) would be something like:
    let f x =
       if x#test then (
           assume (x : {doSomething : () -> ?}) in
               x#doSomething()
           else (* Error handling or throw exception. *)
       ) else (
           assume (x : {doSomethingElse : () -> ?}) in
                x#doSomethingElse()
           else (* ... *)
       )
Where assume (typedeclaration) is an explicit run-time type check, and ? means "infer this for me, woudja?"


CategoryLanguageTyping

