
Question

I have seen it stated in various places that in order to allow an "increase in polymorphism," dependency analysis should be performed and type inference should then be run on each declaration group in topological order. It seems to be treated as a well-known fact... except that nobody provides any reference or example.

In fact I don't understand what is really meant by this statement. Does it mean that without doing dependency analysis I wouldn't be able to type some programs at all (i.e. some valid programs would fail type-checking), or only that I would infer types that are more specific than the most general ones?

I'm particularly interested in the consequences for a Haskell-like language/type system. In particular, during kind inference, is dependency analysis actually necessary to avoid rejecting valid programs? Can you provide an example that would not be well-kinded if kind inference were performed without dependency analysis?

Since kinds are monomorphic, I struggle to understand why dependency analysis should matter in this particular case.

Explanation / Answer

Any use of a binding anywhere may place constraints on its type. So, in the absence of explicit type signatures (i.e. when doing type inference), inference has to be carried out in a way that doesn't allow a particular use of a binding at a specific type to "infect" the most general type that binding can be given. This requires a dependency analysis. Consider

id = \x -> x
foo = id 1

Now if we typed foo first, we would unify the argument type of id with Int and conclude that id has type Int -> Int. So instead we must type id first, generalize its inferred type to the polymorphic type forall a. a -> a, and only then instantiate that type at Int when we check the application id 1.
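
To make the contrast concrete, here is a minimal sketch (the names myId and pair are mine, not from the question) of a program that only type-checks if the first binding is generalized before the second is examined:

-- With dependency analysis, myId forms its own binding group: its type is
-- inferred as t -> t and generalized to forall a. a -> a before pair is
-- looked at, so pair can instantiate it once at Int and once at Char.
myId x = x

pair = (myId 1, myId 'a')

If both bindings were inferred together as one group, myId would still have the monomorphic type t -> t while pair is being checked, and t cannot be unified with both Int and Char, so this perfectly valid program would be rejected. So the answer to the first alternative in the question is yes: without dependency analysis some valid programs fail type-checking; they don't merely receive less general types.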

Hence, dependencies and topological sort!
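
For completeness, here is a small hypothetical sketch (none of these names come from the post) of what that dependency analysis can look like in practice: build the call graph of the top-level bindings, compute its strongly connected components with Data.Graph, and hand the resulting binding groups to the type checker in dependency order.

import Data.Graph (flattenSCC, stronglyConnComp)

-- Hypothetical representation: a binding is a name together with the names
-- of the other top-level bindings it mentions.
data Binding = Binding { bindName :: String, bindUses :: [String] }

-- Group mutually recursive bindings together and return the groups with
-- dependencies first: stronglyConnComp yields the strongly connected
-- components in reverse topological order, which is exactly the order in
-- which a type checker wants to infer and generalize them.
bindingGroups :: [Binding] -> [[Binding]]
bindingGroups bs =
  map flattenSCC (stronglyConnComp [ (b, bindName b, bindUses b) | b <- bs ])

-- For the example above: foo mentions id, so the group [id] comes out
-- before the group [foo].
demo :: [[String]]
demo = map (map bindName)
           (bindingGroups [ Binding "id" [], Binding "foo" ["id"] ])
-- demo == [["id"],["foo"]]

Each inner list is a binding group; an inference pass assigns monomorphic types to everything in a group at once, generalizes when the whole group is done, and only then moves on to the groups that depend on it.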