[2] Luis Damas contributed a close formal analysis and proof of the method in his PhD thesis.
Algorithm W is an efficient type inference method in practice and has been successfully applied to large code bases, although it has a high theoretical complexity.
Being scope-sensitive, it is not limited to deriving types from only a small portion of source code, but rather processes complete programs or modules.
[citation needed] In 1969, J. Roger Hindley extended this work and proved that their algorithm always inferred the most general type.
The successors of the languages mentioned, like C++ (1985), focused on different types of polymorphism, namely subtyping in connection with object-oriented programming and overloading.
While subtyping is incompatible with HM, a variant of systematic overloading is available in the HM-based type system of Haskell.
Throughout the remainder of this article, the same description of the deduction system is used, even for the two algorithms, to make the various forms in which the HM method is presented directly comparable.
In addition to being bound by quantification in polytypes, type variables can also be bound by occurring in the context, but with the inverse effect on the right-hand side of the specialisation relation ⊑.
A variable binding in the context prevents the use of the generalisation rule for any free variable in that assignment; this regulation forces the type of a λ-bound parameter to remain monomorphic, while a variable introduced by a let-binding can receive a polymorphic type.
On the contrary, the generalisation rule is part of the definition of HM's type system, and the implicit all-quantification is merely a consequence of it.
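A small Haskell illustration of this difference (the names are illustrative, not from the article): a let-bound identity may be used at two different types, while a λ-bound one may not.

```haskell
-- Accepted: f is let-bound, so its type is generalised to ∀a. a -> a
-- before the body is checked, and each use picks its own instance.
letPoly :: (Bool, Char)
letPoly = let f x = x in (f True, f 'c')

-- Rejected by HM (kept as a comment so the file compiles): f is a λ-bound
-- parameter and therefore monomorphic; it cannot be used at both
-- Bool -> Bool and Char -> Char.
-- lamMono = (\f -> (f True, f 'c')) (\x -> x)
```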
Now that the deduction system of HM is at hand, one could present an algorithm and validate it with respect to the rules.
A contemporary treatment of HM uses a purely syntax-directed rule system due to Clement[7] as an intermediate step.
An only slightly weaker version of completeness is provable,[8] though, namely that Γ ⊢_D e : σ implies Γ ⊢_S e : τ for some τ with Γ̄(τ) ⊑ σ, implying that one can derive the principal type for an expression in the syntax-directed system S, which allows the remaining proofs to be carried out with respect to S.
The presentation of Algorithm J is a misuse of the notation of logical rules, since it includes side effects, but it allows a direct comparison with the rule system ⊢_S while expressing an efficient implementation at the same time.
This is in strong contrast to many other attempts to derive type inference algorithms, which often turned out to be NP-hard, if not undecidable with respect to termination.
Efficiency is slightly reduced because the binding of type variables in the context has to be maintained to allow computation of free(Γ), on which the generalisation Γ̄(τ) depends.
The presentation of algorithm W in the sidebar still makes use of side effects in the operations set in italic, but these are now limited to generating fresh symbols.
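To make this concrete, the following is a compact sketch of a W-style inference function in Haskell (assuming the containers and mtl packages); the expression language, type representation, and names such as Expr, Ty, Scheme and infer are illustrative choices, not the article's own formulation, and errors are reported naively. Generating fresh type variables is kept as the only stateful operation, matching the remark above.

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set
import Control.Monad.State (State, get, put, evalState)

data Expr = EVar String | EApp Expr Expr | ELam String Expr | ELet String Expr Expr

data Ty     = TVar String | TCon String | TArr Ty Ty  deriving (Eq, Show)
data Scheme = Forall [String] Ty                      -- a polytype ∀α₁…αₙ.τ

type Subst = Map.Map String Ty        -- finite map from type variables to monotypes
type Ctx   = Map.Map String Scheme    -- typing context Γ

applyTy :: Subst -> Ty -> Ty
applyTy s t@(TVar a) = Map.findWithDefault t a s
applyTy s (TArr a b) = TArr (applyTy s a) (applyTy s b)
applyTy _ t          = t

-- Composition such that applying (s2 `compose` s1) equals applying s1, then s2.
compose :: Subst -> Subst -> Subst
compose s2 s1 = Map.map (applyTy s2) s1 `Map.union` s2

freeTy :: Ty -> Set.Set String
freeTy (TVar a)   = Set.singleton a
freeTy (TArr a b) = freeTy a `Set.union` freeTy b
freeTy _          = Set.empty

freeCtx :: Ctx -> Set.Set String
freeCtx = Set.unions . map (\(Forall vs t) -> freeTy t `Set.difference` Set.fromList vs) . Map.elems

applyCtx :: Subst -> Ctx -> Ctx
applyCtx s = Map.map (\(Forall vs t) -> Forall vs (applyTy (foldr Map.delete s vs) t))

type Infer a = State Int a            -- counter used only for fresh type variables

fresh :: Infer Ty
fresh = do { n <- get; put (n + 1); return (TVar ("t" ++ show n)) }

-- Robinson unification with an occurs check.
unify :: Ty -> Ty -> Subst
unify (TArr a b) (TArr a' b') = let s1 = unify a a'
                                    s2 = unify (applyTy s1 b) (applyTy s1 b')
                                in compose s2 s1
unify (TVar a) t = bindVar a t
unify t (TVar a) = bindVar a t
unify (TCon c) (TCon c') | c == c' = Map.empty
unify t1 t2 = error ("cannot unify " ++ show t1 ++ " and " ++ show t2)

bindVar :: String -> Ty -> Subst
bindVar a t
  | t == TVar a             = Map.empty
  | a `Set.member` freeTy t = error "occurs check failed"
  | otherwise               = Map.singleton a t

-- Γ̄(τ): quantify exactly the variables free in τ but not free in Γ.
generalize :: Ctx -> Ty -> Scheme
generalize ctx t = Forall (Set.toList (freeTy t `Set.difference` freeCtx ctx)) t

-- Replace the quantified variables of a polytype by fresh monotype variables.
instantiate :: Scheme -> Infer Ty
instantiate (Forall vs t) = do
  vs' <- mapM (const fresh) vs
  return (applyTy (Map.fromList (zip vs vs')) t)

-- Algorithm W: returns the computed substitution and monotype.
infer :: Ctx -> Expr -> Infer (Subst, Ty)
infer ctx (EVar x)      = case Map.lookup x ctx of
  Just sc -> do { t <- instantiate sc; return (Map.empty, t) }
  Nothing -> error ("unbound variable " ++ x)
infer ctx (ELam x e)    = do
  tv     <- fresh
  (s, t) <- infer (Map.insert x (Forall [] tv) ctx) e    -- parameter stays monomorphic
  return (s, TArr (applyTy s tv) t)
infer ctx (EApp e1 e2)  = do
  (s1, t1) <- infer ctx e1
  (s2, t2) <- infer (applyCtx s1 ctx) e2
  tv       <- fresh
  let s3 = unify (applyTy s2 t1) (TArr t2 tv)
  return (s3 `compose` s2 `compose` s1, applyTy s3 tv)
infer ctx (ELet x e1 e2) = do
  (s1, t1) <- infer ctx e1
  let ctx1 = applyCtx s1 ctx
  (s2, t2) <- infer (Map.insert x (generalize ctx1 t1) ctx1) e2   -- let-polymorphism
  return (s2 `compose` s1, t2)
```

For example, evalState (infer Map.empty (ELam "x" (EVar "x"))) 0 yields the empty substitution together with the monotype t0 -> t0, whose generalisation in the empty context is the principal type ∀t0. t0 → t0.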
Before formulating the proof obligations, a deviation between the rule systems D and S and the algorithms presented needs to be emphasized.
While a more thorough implementation of the type inference would have enabled the algorithm to deal with abstract monotypes, these were not needed for the intended application, where none of the items in a pre-existing context has free variables.
The remaining downside is that the proof of the algorithm with respect to the rule system is less general and can only be made for contexts with free(Γ) = ∅ as a side condition.
The side condition in the completeness obligation addresses how the deduction may give many types, while the algorithm always produces one.
A central property of the lambda calculus is that recursive definitions are not directly available, but can instead be expressed with a fixed point combinator.
Unfortunately, the fixed-point combinator cannot be formulated in a typed version of the lambda calculus without having a disastrous effect on the system, as outlined below.
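A brief sketch of the obstacle in the standard unification reading (illustrative, ahead of the article's own outline): the self-application at the heart of any fixed-point combinator already has no HM type.

```latex
% Typing \lambda x.\, x\,x forces the \lambda-bound x to act both as a function
% and as its own argument (assuming amsmath for \text):
\[
  x : \alpha
  \quad\text{and}\quad
  x : \alpha \to \beta
  \qquad\Longrightarrow\qquad
  \alpha \doteq \alpha \to \beta ,
\]
\[
  \text{which fails the occurs check, so }
  Y = \lambda f.\,(\lambda x.\, f\,(x\,x))\,(\lambda x.\, f\,(x\,x))
  \text{ cannot be typed.}
\]
```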
Alternatively, an extension of the expression syntax and an extra typing rule are possible, basically merging the use of the fixed-point combinator into a dedicated binding form, so that rec v = e₁ in e₂ is treated like let v = fix(λv.e₁) in e₂.
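As a concrete sketch in Haskell (whose built-in recursive bindings make such a fix directly definable, unlike the pure lambda calculus), recursion can be recovered from a single typed fixed-point primitive:

```haskell
-- fix has exactly the polytype assumed for the combinator, ∀a. (a -> a) -> a;
-- the same function is available in the standard library as Data.Function.fix.
fix :: (a -> a) -> a
fix f = f (fix f)

-- A recursive definition written via fix instead of direct recursion,
-- mirroring:  rec fact = … in …   standing for   let fact = fix (λfact. …) in …
factorial :: Integer -> Integer
factorial = fix (\fact n -> if n <= 1 then 1 else n * fact (n - 1))
```

Inside the argument to fix, the name fact is λ-bound and therefore stays monomorphic, which is why recursion introduced this way is monomorphic within its own definition.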
Overloading means that the same name can be used for operations on different types (e.g. + on numbers), allowing the programmer to write arithmetic expressions in the same form even for different numerical types like int or real.
Because a mixture of these different types within the same expression also demands implicit conversion, overloading, especially for these operations, is often built into the programming language itself.
While ad hoc overloading has been avoided in functional programming because of its cost in both type checking and inference,[citation needed] a means to systematise overloading has been introduced that resembles object-oriented programming in both form and naming, but works one level upwards.
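A small Haskell sketch of this systematised overloading, using an illustrative class MyNum rather than the standard Num machinery: the overloading is recorded as a class constraint on the inferred HM-style type instead of being resolved to a single concrete type.

```haskell
-- A class describes a set of types that support an operation; instances
-- enrol individual types into that set.
class MyNum a where
  add :: a -> a -> a

instance MyNum Int where
  add = (+)

instance MyNum Double where
  add = (+)

-- Usable at Int, Double, or any other instance, with the constraint
-- MyNum a carried along in the polymorphic type.
double :: MyNum a => a -> a
double x = add x x
```

As the terminology suggests, a class here characterises a set of types rather than a set of objects, which is the sense in which the mechanism works one level upwards from object-oriented classes.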
Research in second order lambda calculus, one step upwards, showed that type inference is undecidable in this generality.
[10] More recently, Dolan and Mycroft[11] formalized the relationship between typing scheme simplification and NFA simplification and showed that an algebraic take on the formalization of subtyping allowed generating compact principal typing schemes for an ML-like language (called MLsub).
On the other hand, type inference has proven more difficult in the context of object-oriented programming languages, because object methods tend to require first-class polymorphism in the style of System F (where type inference is undecidable) and because of features like F-bounded polymorphism.