
In the last post about Amulet I wrote about rewriting the type checking code. And, to everybody’s surprise (including myself), I actually did it.

Like all good programming languages, Amulet has a strong, static type
system. What most other languages do not have, however, is (mostly)
*full type inference*: programs are still type-checked despite (mostly)
having no type annotations.

Unfortunately, no practical type system has truly “full type inference”: features like data-type declarations, integral to actually writing software, mandate some type annotations (in this case, constructor arguments). However, that doesn’t mean we can’t try.
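The same constraint shows up in Haskell (the language the checker itself is written in). In a declaration like the one below, the field types of `Cons` are annotations the checker could never recover from uses alone; this is a sketch for illustration, not Amulet syntax:

```haskell
-- Constructor arguments carry mandatory type annotations: the fields
-- of Cons must be spelled out as 'a' and 'List a'.
data List a = Nil | Cons a (List a)

-- Once declared, uses of Nil and Cons need no annotations at all.
len :: List a -> Int
len Nil = 0
len (Cons _ t) = 1 + len t
```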

The new type checker, based on a constraint-generating but
*bidirectional* approach, can type a lot more programs than the older,
Algorithm W-derived, quite buggy checker. As an example, consider the
following definition. For this to check under the old type system, one
would need to annotate both arguments to `map` *and* its return type -
clearly undesirable!

```
let map f =
  let go cont xs =
    match xs with
    | Nil -> cont Nil
    | Cons (h, t) -> go (compose cont (fun x -> Cons (f h, x))) t
  in go id ;;
```

Even more egregious is that the η-reduction of `map` would lead to an
ill-typed program.

```
let map f xs =
  let go cont xs = (* elided *)
  in go id xs ;;
(* map : forall 'a 'b. ('a -> 'b) -> list 'a -> list 'b *)

let map' f =
  let go cont xs = (* elided *)
  in go id ;;
(* map' : forall 'a 'b 'c. ('a -> 'b) -> list 'a -> list 'c *)
```

Having declared this unacceptable, I set out to rewrite the type checker, after months of procrastination. As is the case, of course, with such things, it only took some two hours, and I really shouldn’t have procrastinated it for so long.

Perhaps more importantly, the new type checker also supports rank-N
polymorphism directly, with all appropriate checks in place: expressions
checked against a polymorphic type are, in reality, checked against a
*deeply skolemised* version of that poly-type - this lets us enforce two
key properties:

- the expression being checked *is* actually parametric over the type
  arguments, i.e., it can’t unify the skolem constants with any type
  constructors, and
- no rank-N arguments escape.

As an example, consider the following function:

`let rankn (f : forall 'a. 'a -> 'a) = f ()`

Well-typed uses of this function are limited to applying it to the
identity function, as parametricity tells us; and, indeed, trying to
apply it to e.g. `fun x -> x + 1` is a type error.
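For comparison, the same function can be written in GHC Haskell with `RankNTypes` (a sketch, not Amulet code): GHC likewise accepts the identity function and rejects the incrementing one, since the skolemised `a` cannot unify with a numeric type.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The parameter is checked against a skolemised 'forall a. a -> a',
-- so f may not assume anything about a.
rankn :: (forall a. a -> a) -> ()
rankn f = f ()

wellTyped :: ()
wellTyped = rankn (\x -> x)

-- Rejected at compile time: 'a' is rigid and cannot be Num.
-- illTyped = rankn (\x -> x + 1)
```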

### The Solver

As before, type checking is done by a traversal of the syntax tree
which, by use of a `Writer` monad, produces a list of
constraints to be solved. Note that a *list* really is needed: a set, or
similar data structure with unspecified order, will not do. The order in
which the solver processes constraints is important!
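The shape of that traversal can be sketched over a toy expression language (all names here are hypothetical; the real constraints and syntax trees are much richer): each node `tell`s the constraints it gives rise to, so the resulting list comes out in traversal order.

```haskell
import Control.Monad.Writer

-- Toy types and constraints, standing in for the checker's real ones.
data Ty = TyVar String | TyArr Ty Ty | TyInt
  deriving (Show, Eq)

data Constraint = Unify Ty Ty
  deriving (Show, Eq)

data Expr = Lit Int | Var String | App Expr Expr

-- For an application, infer both sides, then demand that the function
-- side unifies with (argument type -> result type).
inferC :: Expr -> Writer [Constraint] Ty
inferC (Lit _) = pure TyInt
inferC (Var v) = pure (TyVar v)
inferC (App f x) = do
  tf <- inferC f
  tx <- inferC x
  let res = TyVar "r"  -- a "fresh" variable, hand-named for the sketch
  tell [Unify tf (TyArr tx res)]
  pure res
```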

The support for rank-N types has led to the solver needing to know
about a new kind of constraint: *subsumption* constraints, in addition
to *unification* constraints. Subsumption is perhaps too fancy a term,
used to obscure what’s really going on: subtyping. However, whilst
languages like Java and Scala introduce subtyping by means of
inheritance, our subtyping boils down to eliminating ∀s.

∀s are eliminated from the right-hand-side of subsumption constraints by
*deep skolemisation*: replacing the quantified variables in the type
with fresh type constants. The “depth” of skolemisation refers to the
fact that ∀s to the right of arrows are eliminated along with the ones
at top-level.
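A toy version of deep skolemisation, over a deliberately tiny type grammar (all names hypothetical, and fresh-name plumbing reduced to an `Int` supply in `State`): quantifiers at top level and to the right of arrows are replaced by rigid `TySkol` constants.

```haskell
import Control.Monad.State

data Ty = TyVar String | TySkol Int | TyArr Ty Ty | TyForall [String] Ty
  deriving (Show, Eq)

-- Replace every forall-bound variable reachable down the spine of
-- arrows with a fresh skolem constant.
skolemise :: Ty -> State Int Ty
skolemise (TyForall vs t) = do
  start <- get
  put (start + length vs)
  skolemise (substitute (zip vs (map TySkol [start ..])) t)
skolemise (TyArr d c) = TyArr d <$> skolemise c  -- deep: go right of arrows
skolemise t = pure t

-- Capture-avoiding substitution of type variables.
substitute :: [(String, Ty)] -> Ty -> Ty
substitute sub (TyVar v) = maybe (TyVar v) id (lookup v sub)
substitute sub (TyArr a b) = TyArr (substitute sub a) (substitute sub b)
substitute sub (TyForall vs t) =
  TyForall vs (substitute [p | p@(v, _) <- sub, v `notElem` vs] t)
substitute _ t = t
```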

```
subsumes k t1 t2@TyForall{} = do
  t2' <- skolemise t2
  subsumes k t1 t2'
subsumes k t1@TyForall{} t2 = do
  (_, _, t1') <- instantiate t1
  subsumes k t1' t2
subsumes k a b = k a b
```

The function for computing subtyping is parametric over what to do in
the case of two monomorphic types: when this function is actually used
by the solving algorithm, it’s applied to `unify`.

The unifier has the job of traversing two types in tandem to find the
*most general unifier*: a substitution that, when applied to one type,
will make it syntactically equal to the other. In most of the type
checker, when two types need to be “equal”, they’re equal up to
unification.

Most of the cases are an entirely boring traversal, so here are the interesting ones.

- Skolem type constants only unify with other skolem type constants:

```
unify TySkol{} TySkol{} = pure ()
unify t@TySkol{} b = throwError $ SkolBinding t b
unify b t@TySkol{} = throwError $ SkolBinding t b
```

- Type variables extend the substitution:

```
unify (TyVar a) b = bind a b
unify a (TyVar b) = bind b a
```

- Polymorphic types unify up to α-renaming:

```
unify t@(TyForall vs ty) t'@(TyForall vs' ty')
  | length vs /= length vs' = throwError (NotEqual t t')
  | otherwise = do
    fvs <- replicateM (length vs) freshTV
    let subst = Map.fromList . flip zip fvs
    unify (apply (subst vs) ty) (apply (subst vs') ty')
```

When binding a variable to a concrete type, an *occurs check* is
performed to make sure the substitution isn’t going to end up containing
an infinite type. Consider binding `'a := list 'a`: if `'a` is
substituted for `list 'a` everywhere, the result would be
`list (list 'a)` - but wait, `'a` appears there, so it’d be substituted
again, ad infinitum.

Extra care is also needed when binding a variable to itself, as is the
case with `'a ~ 'a`. These constraints are trivially discharged, but
adding them to the substitution would mean an infinite loop!

```
occurs :: Var Typed -> Type Typed -> Bool
occurs _ (TyVar _) = False
occurs x e = x `Set.member` ftv e
```

If the variable has already been bound, the new type is unified with the one present in the substitution being accumulated. Otherwise, it is added to the substitution.

```
bind :: Var Typed -> Type Typed -> SolveM ()
bind var ty
  | occurs var ty = throwError (Occurs var ty)
  | TyVar var == ty = pure ()
  | otherwise = do
      env <- get
      -- Attempt to extend the environment, otherwise
      -- unify with existing type
      case Map.lookup var env of
        Nothing -> put (Map.singleton var (normType ty) `compose` env)
        Just ty'
          | ty' == ty -> pure ()
          | otherwise -> unify (normType ty) (normType ty')
```

Running the solver, then, amounts to folding through the constraints in order, applying the substitution created at each step to the remaining constraints while also accumulating it to end up at the most general unifier.

```
solve :: Int -> Subst Typed
      -> [Constraint Typed]
      -> Either TypeError (Subst Typed)
solve _ s [] = pure s
solve i s (ConUnify e a t:xs) =
  case runSolve i s (unify (normType a) (normType t)) of
    Left err -> Left (ArisingFrom err e)
    Right (i', s') -> solve i' (s' `compose` s) (apply s' xs)
solve i s (ConSubsume e a b:xs) =
  case runSolve i s (subsumes unify (normType a) (normType b)) of
    Left err -> Left (ArisingFrom err e)
    Right (i', s') -> solve i' (s' `compose` s) (apply s' xs)
```

### Inferring and Checking Patterns

Amulet, being a member of the ML family, does most data processing
through *pattern matching*, and so, the patterns also need to be type
checked.

The pattern grammar is simple: it’s made up of six constructors, while expressions are described by over twenty constructors.

Here, the bidirectional approach to inference starts to shine. It is possible to have different behaviours for when the type of the pattern (or, at least, some skeleton describing that type) is known and for when it is not, and such a type must be produced from the pattern alone.

In a unification-based system like ours, the inference judgement can be recovered from the checking judgement by checking against a fresh type variable.

```
inferPattern p = do
  x <- freshTV
  (p', binds) <- checkPattern p x
  pure (p', x, binds)
```

Inferring patterns produces three things: an annotated pattern, since syntax trees after type checking carry their types; the type of values that pattern matches; and a list of variables the pattern binds. Checking omits returning the type, and yields only the annotated syntax tree and the list of bindings.

As a special case, inferring patterns with type signatures overrides the checking behaviour. The stated type is kind-checked (to verify its integrity and to produce an annotated tree), then verified to be a subtype of the inferred type for that pattern.

```
inferPattern pat@(PType p t ann) = do
  (p', pt, vs) <- inferPattern p
  (t', _) <- resolveKind t
  _ <- subsumes pat t' pt -- t' ≤ pt
  case p' of
    Capture v _ -> pure (PType p' t' (ann, t'), t', [(v, t')])
    _ -> pure (PType p' t' (ann, t'), t', vs)
```

Checking patterns is where the fun actually happens. Checking
`Wildcard`s and `Capture`s is pretty much identical, except the latter
actually expands the capture list.

```
checkPattern (Wildcard ann) ty = pure (Wildcard (ann, ty), [])
checkPattern (Capture v ann) ty =
  pure (Capture (TvName v) (ann, ty), [(TvName v, ty)])
```

Checking a `Destructure` looks up the type of the constructor in the
environment, possibly instancing it, and does one of two things,
depending on whether or not the destructuring has an inner pattern.

```
checkPattern ex@(Destructure con ps ann) ty =
  case ps of
```

- If there was no inner pattern, then the looked-up type is unified with the “goal” type - the one being checked against.

```
    Nothing -> do
      pty <- lookupTy con
      _ <- unify ex pty ty
      pure (Destructure (TvName con) Nothing (ann, pty), [])
```

- If there *was* an inner pattern, we proceed by decomposing the type
  looked up from the environment. The inner pattern is checked against
  the *domain* of the constructor’s type, while the “goal” gets unified
  with the *co-domain*.

```
    Just p -> do
      (c, d) <- decompose ex _TyArr =<< lookupTy con
      (ps', b) <- checkPattern p c
      _ <- unify ex ty d
```

Checking tuple patterns is a bit of a mess. This is because of a
mismatch between how they’re written and how they’re typed: a 3-tuple
pattern (and expression!) is written like `(a, b, c)`, but it’s *typed*
like `a * (b * c)`. There is a local helper that incrementally converts
between the representations by repeatedly decomposing the goal type.

```
checkPattern pt@(PTuple elems ann) ty =
  let go [x] t = (:[]) <$> checkPattern x t
      go (x:xs) t = do
        (left, right) <- decompose pt _TyTuple t
        (:) <$> checkPattern x left <*> go xs right
      go [] _ = error "malformed tuple in checkPattern"
```

Even more fun, the `PTuple` constructor is woefully overloaded: One
with an empty list of children represents matching against `unit`. One
with a single child is equivalent to the contained pattern; Only one
with two or more contained patterns makes a proper tuple.

```
  in case elems of
    [] -> do
      _ <- unify pt ty tyUnit
      pure (PTuple [] (ann, tyUnit), [])
    [x] -> checkPattern x ty
    xs -> do
      (ps, concat -> binds) <- unzip <$> go xs ty
      pure (PTuple ps (ann, ty), binds)
```

### Inferring and Checking Expressions

Expressions are incredibly awful and the bane of my existence. There are 18 distinct cases of expression to consider, a number which only seems to be going up with modules and the like in the pipeline; this translates to 24 distinct cases in the type checker to account for all of the possibilities.

As with patterns, expression checking is bidirectional; and, again, there are a lot more checking cases than there are inference cases. So, let’s start with the latter.

#### Inferring Expressions

Inferring variable references makes use of instantiation to generate fresh type variables for each top-level universal quantifier in the type. These fresh variables will then be either bound to something by the solver or universally quantified over in case they escape.

Since Amulet is desugared into a core language resembling predicative System F, variable uses also lead to the generation of corresponding type applications - one for each eliminated quantified variable.

```
infer expr@(VarRef k a) = do
  (inst, old, new) <- lookupTy' k
  if Map.null inst
    then pure (VarRef (TvName k) (a, new), new)
    else mkTyApps expr inst old new
```

Functions, strangely enough, have both checking *and* inference
judgements: which is used impacts what constraints will be generated,
and that may end up making type inference more efficient (by allocating
less, or correspondingly spending less time in the solver).

The pattern inference judgement is used to compute the type and bindings of the function’s formal parameter, and the body is inferred in the context extended with those bindings; Then, a function type is assembled.

```
infer (Fun p e an) = do
  (p', dom, ms) <- inferPattern p
  (e', cod) <- extendMany ms $ infer e
  pure (Fun p' e' (an, TyArr dom cod), TyArr dom cod)
```

Literals are pretty self-explanatory: Figuring their types boils down to pattern matching.

```
infer (Literal l an) = pure (Literal l (an, ty), ty) where
  ty = case l of
    LiInt{} -> tyInt
    LiStr{} -> tyString
    LiBool{} -> tyBool
    LiUnit{} -> tyUnit
```

The inference judgement for *expressions* with type signatures is very similar
to the one for patterns with type signatures: The type is kind-checked,
then compared against the inferred type for that expression. Since
expression syntax trees also need to be annotated, they are
`correct`ed here.

```
infer expr@(Ascription e ty an) = do
  (ty', _) <- resolveKind ty
  (e', et) <- infer e
  _ <- subsumes expr ty' et
  pure (Ascription (correct ty' e') ty' (an, ty'), ty')
```

There is also a judgement for turning checking into inference, again by making a fresh type variable.

```
infer ex = do
  x <- freshTV
  ex' <- check ex x
  pure (ex', x)
```

#### Checking Expressions

Our rule for eliminating ∀s was adapted from the paper *Complete
and Easy Bidirectional Typechecking for Higher-Rank Polymorphism*.
Unlike in that paper, however, we do not have explicit *existential
variables* in contexts, and so must check expressions against
deeply-skolemised types to eliminate the universal quantifiers.

```
check e ty@TyForall{} = do
  e' <- check e =<< skolemise ty
  pure (correct ty e')
```

If the expression is checked against a deeply skolemised version of the
type, however, it will be tagged with that, while it needs to be tagged
with the universally-quantified type. So, it is `correct`ed.

Amulet has rudimentary support for *typed holes*, as in dependently
typed languages and, more recently, GHC. Since printing the type of
holes during type checking would be entirely uninformative due to
half-solved types, reporting them is deferred to after checking.

Of course, holes must still have checking behaviour: They take whatever type they’re checked against.

`check (Hole v a) t = pure (Hole (TvName v) (a, t))`

Checking functions is as easy as inferring them: The goal type is split between domain and codomain; the pattern is checked against the domain, while the body is checked against the codomain, with the pattern’s bindings in scope.

```
check ex@(Fun p b a) ty = do
  (dom, cod) <- decompose ex _TyArr ty
  (p', ms) <- checkPattern p dom
  Fun p' <$> extendMany ms (check b cod) <*> pure (a, ty)
```

Empty `begin end` blocks are an error.

`check ex@(Begin [] _) _ = throwError (EmptyBegin ex)`

`begin ... end` blocks with at least one expression are checked by
inferring the types of every expression but the last, and then checking
the last expression in the block against the goal type.

```
check (Begin xs a) t = do
  let start = init xs
      end = last xs
  start' <- traverse (fmap fst . infer) start
  end' <- check end t
  pure (Begin (start' ++ [end']) (a, t))
```

`let`s are pain. Since all our `let`s are recursive by nature, they
must be checked, including all the bound variables, in a context where
the types of every variable bound there are already available; To
figure this out, however, we first need to infer the type of every
variable bound there.

If that strikes you as “painfully recursive”, you’re right. This is
where the unification-based nature of our type system saved our butts:
Each bound variable in the `let` gets a fresh type variable, the
context is extended and the body checked against the goal.

The function responsible for inferring and solving the types of
variables is `inferLetTy`. It keeps an accumulating association list to
check the types of further bindings as they are figured out, one by
one, then uses the continuation to generalise (or not) the type.

```
check (Let ns b an) t = do
  ks <- for ns $ \(a, _, _) -> do
    tv <- freshTV
    pure (TvName a, tv)
  extendMany ks $ do
    (ns', ts) <- inferLetTy id ks (reverse ns)
    extendMany ts $ do
      b' <- check b t
      pure (Let ns' b' (an, t))
```

We have decided to take the advice of Vytiniotis, Peyton Jones, and
Schrijvers, and refrain from generalising lets, except at top-level.
This is why `inferLetTy` gets given `id` when checking terms.

The judgement for checking `if` expressions is what made me stick to
bidirectional type checking instead of fixing our variant of Algorithm
W. The condition is checked against the boolean type, while both
branches are checked against the goal.

```
check (If c t e an) ty = If <$> check c tyBool
                            <*> check t ty
                            <*> check e ty
                            <*> pure (an, ty)
```

Since it is not possible, in general, to recover the type of a function at an application site, we infer it; The argument given is checked against that function’s domain and the codomain is unified with the goal type.

```
check ex@(App f x a) ty = do
  (f', (d, c)) <- secondA (decompose ex _TyArr) =<< infer f
  App f' <$> check x d <*> fmap (a,) (unify ex ty c)
```

To check `match`, the type of what’s being matched against is first
inferred because, unlike application where *some* recovery is possible,
we can not recover the type of matchees from the type of branches *at
all*.

```
check (Match t ps a) ty = do
  (t', tt) <- infer t
```

Once we have the type of the matchee in hands, patterns can be checked against that. The branches are then each checked against the goal type.

```
  ps' <- for ps $ \(p, e) -> do
    (p', ms) <- checkPattern p tt
    (,) <$> pure p' <*> extendMany ms (check e ty)
```

Checking binary operators is like checking function application twice. Very boring.

```
check ex@(BinOp l o r a) ty = do
  (o', to) <- infer o
  (el, to') <- decompose ex _TyArr to
  (er, d) <- decompose ex _TyArr to'
  BinOp <$> check l el <*> pure o'
        <*> check r er <*> fmap (a,) (unify ex d ty)
```

Checking records and record extension is a hack, so I’m not going to talk about them until I’ve cleaned them up reasonably in the codebase. Record access, however, is very clean: we make up a type for the row-polymorphic bit, and check against a record type built from the goal and the key.

```
check (Access rc key a) ty = do
  rho <- freshTV
  Access <$> check rc (TyRows rho [(key, ty)])
         <*> pure key <*> pure (a, ty)
```

Checking tuple expressions involves a local helper much like checking tuple patterns. The goal type is recursively decomposed and made to line up with the expression being checked.

```
check ex@(Tuple es an) ty = Tuple <$> go es ty <*> pure (an, ty) where
  go [] _ = error "not a tuple"
  go [x] t = (:[]) <$> check x t
  go (x:xs) t = do
    (left, right) <- decompose ex _TyTuple t
    (:) <$> check x left <*> go xs right
```

And, to finish, we have a judgement for turning inference into checking.

```
check e ty = do
  (e', t) <- infer e
  _ <- subsumes e ty t
  pure e'
```

### Conclusion

I like the new type checker: it has many things you’d expect from a
typed lambda calculus, such as η-contraction preserving typability, and
substitution of `let`-bound variables being generally admissible.

Our type system is fairly complex, what with rank-N types and higher
kinded polymorphism, so inferring programs under it is a bit of a
challenge. However, I am fairly sure the only places that demand type
annotations are higher-ranked *parameters*: uses of higher-rank
functions are checked without the need for annotations.

Check out Amulet the next time you’re looking for a typed functional programming language that still can’t compile to actual executables.