
Hey y’all, it’s been three months since my last blog post! You know what that means… or should mean, at least. Yes, I’d quite like to have another long blog post done, but… life is kinda trash right now, no motivation for writing, whatever. So over the coming week(s) or so, as a coping mechanism for the chaos that is the end of the semester, I’m gonna write a couple of really short posts (like this one) that might not even be coherent at all—this sentence sure isn’t.

Today’s note is about what is perhaps the most confusing rule of Martin-Löf’s dependent type theory, the *J* eliminator. For starters, the name means basically nothing: as far as I can tell, it comes from the fact that **I**dentity is another word for equality, and J is the letter that comes after I.

First, let’s recall how the identity type is defined, or rather, the two ways in which it can be defined. The first has two *parameters*, `A` and `x`, and a single *index* (of type `A`), while the second has a single parameter `A` and two *indices* of type `A`. Using Agda syntax:

```
data _=_ {A : Type} (x : A) : A -> Type where
  refl : x = x
```

```
data _=_ {A : Type} : A -> A -> Type where
  refl : {x : A} -> x = x
```

These definitions give rise to subtly different (but equivalent — see section §1.12.2 of *Homotopy Type Theory* if you’re curious about the details) elimination rules. We’ll consider the first one, since that one is *based*^{1}.

One decomposition which is (sometimes) helpful when an induction principle is confusing is to break it down into a simply typed *recursion* principle and a propositional *uniqueness* principle. Let’s visit the recursion principle first.

It’s actually something you’re already familiar with, even if you don’t have a background in type theory: indiscernibility of identicals. We’re going to assume a rather big type theory, with arrows and universes, so we can consider a family of propositions indexed by `A` to be a type family `P : A -> Type`. I ambiguously use `Type` to refer to some universe and leave it to the reader to find a consistent assignment of levels. Best of luck.

Where does `A` come from? It’s an argument to the recursor since it’s a *parameter* to the inductive family. Similarly, `x` is also a parameter, but we make it implicit for convenience (in a theory without implicit arguments this, of course, doesn’t happen). Let’s write down what we have so far.

`=-rec : {A : Type} {x : A} -> (P : A -> Type) -> ...`

I’m using “Agda” as a language marker but I’m adding extra arrows for clarity. After the proposition we’re proving comes one hypothesis for each constructor. Above I wrote it in infix form, `refl : x = x`, but you can alternatively consider this as `refl : (_=_ x) x` — i.e., the family `(_=_ x)` applied to its index `x`.

For each constructor, the hypothesis returns a term in `P` applied to each of the indices of the constructor—so in this case, `P x`—and is a function of any arguments to our constructor. `refl` doesn’t have any arguments, so the hypothesis is *just* `P x`.

`=-rec : {A : Type} {x : A} -> (P : A -> Type) -> P x -> ...`

And now, the conclusion! Literally. We introduce new variables with the same types as our indices—let’s call this one `y : A`—and one argument which has the type “our inductive type applied to those new indices”. Our inductive type is `(_=_ x)`, so that applied to our new indices is `(_=_ x) y`: `x = y`. And the conclusion? `P` applied to those indices!

```
=-rec : {A : Type} {x : A} -> (P : A -> Type) -> P x
      -> {y : A} -> x = y -> P y
```

We can shuffle the parameters around a bit to make it more familiar, and, indeed, give it a better name, too:

`subst : {A : Type} {x y : A} (P : A -> Type) -> x = y -> P x -> P y`

The recursion principle for `(_=_ x)` says that, if `x = y`, then any property that’s true of `x`—that is, an inhabitant of `P x`—is also true of `y`!
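As a quick example of how much this already buys us (a sketch in the same notation; `sym` is a name I’m introducing here, not something defined above), symmetry of equality falls out of `subst` alone, by picking a clever `P`:

```
-- Sketch: symmetry derived from subst alone.
-- Pick P z = (z = x). Then P x is x = x, which refl inhabits,
-- and subst carries it along p : x = y to P y, i.e. y = x.
sym : {A : Type} {x y : A} -> x = y -> y = x
sym {A} {x} {y} p = subst (\z -> z = x) p refl
```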

Now let’s consider the uniqueness principle. I think this is the hardest one to wrap your head around, since it’s *really* counterintuitive. The first guess anyone would make is that the uniqueness principle says that the only term of `x = x` is `refl`, since, well, just look at the type definition! However…

What we’ve defined is not a type. It’s a *family* of types, indexed by a `y : A`. So we can’t state a uniqueness principle for some specific `x = y`; we need to consider the “whole family”. The, uh, *total space* of the family, if you’ll forgive my HoTT accent. That’s a sigma type, a dependent sum, over all the indices and only *then* our inductive family.

The uniqueness principle for `(_=_ x)` says something about `Σ A \y -> x = y`, or `(y : A) * x = y`, or $\sum_{y : A} x = y$, depending on how much of my terrible syntax decisions you can tolerate. It says this type is *contractible*, i.e., it only has one inhabitant up to equality, and the centre of contraction is `(x, refl)`.

The name for this principle is *contractibility of singletons*, since it speaks about singleton types: for a fixed `A` and `x`, the “subset of `A` equal to `x`”. If `x = y` were a proposition, this would indeed be a subset, but we can’t in general expect `x = y` to be a proposition.
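In the same notation, the claim can be written down as a type (a sketch; `isContr` and `singletons-contractible` are my own spellings, and the proof of the second component is deliberately left out—it’s exactly the `contract` assumed later in the post):

```
-- A type is contractible when it has a centre of contraction
-- that every inhabitant is equal to.
isContr : Type -> Type
isContr T = Σ T \c -> (z : T) -> z = c

-- Contractibility of singletons, as a type. The centre is
-- (x, refl); the contraction itself is left unproven here.
singletons-contractible : {A : Type} (x : A) -> isContr (Σ A \y -> x = y)
```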

I claim: J = `subst` + contractibility of singletons. Let’s see how. Here’s the full type of the J axiom, just for reference:

```
J : {A : Type} {x : A}
  -> (P : (y : A) -> x = y -> Type)
  -> P x refl
  -> {y : A} (p : x = y)
  -> P y p
```

Let’s, uh, look at the type of `P` there. It’s a function of two arguments… mmm… What happens if we uncurry it?

```
J : {A : Type} {x : A}
  -> (P : (Σ A λ y -> x = y) -> Type)
  -> P (x, refl)
  -> {z : Σ A λ y -> x = y}
  -> P z
```

Now we’re getting somewhere interesting. J says something about the type `(y : A) * x = y` (or `Σ A λ y -> x = y` in the *cursed* “Agda” notation)—the total space of the family `(_=_ x)`. In particular, it says that, if we want to prove `P` about any inhabitant `z` of that space, it’s sufficient to prove `P (x, refl)`. This looks suspiciously like the principle of contractibility of singletons I was talking about before! In fact, let’s see how we can derive J from contractibility of singletons and substitution.

To recap, we assume:

```
contract : {A : Type} {x : A} (z : Σ A λ y -> x = y) -> z = (x, refl)
subst    : {A : Type} {P : A -> Type} {x y : A} -> x = y -> P x -> P y
```

Suppose our proof of `P (x, refl)` is called `pr`, for simplicity, and the other inhabitant is called, well, `z`. By `contract z` we have `z = (x, refl)`, so the inverse of that is a path `(x, refl) = z`. By `subst {P} {(x, refl)} {z} (sym (contract z))` we have a function `P (x, refl) -> P z`, which we can apply to `pr` to get a `P z`, like we wanted.
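Spelled out in the cursed “Agda” notation, that derivation is a one-liner (a sketch, assuming the explicit-`P` `subst` from earlier, the `contract` above, and a `sym` for inverting paths; note that `subst` is instantiated at the sigma type here):

```
-- J, in its uncurried form, from subst + contractibility
-- of singletons.
J : {A : Type} {x : A}
    (P : (Σ A \y -> x = y) -> Type)
  -> P (x, refl)
  -> (z : Σ A \y -> x = y)
  -> P z
J P pr z = subst P (sym (contract z)) pr
-- contract z       : z = (x, refl)
-- sym (contract z) : (x, refl) = z
-- subst P (...)    : P (x, refl) -> P z
```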

This decomposition might sound a bit useless, since, well, we can get both substitution and contractibility of singletons from J, but it’s actually super handy! It’s how I *prove* J in cubical type theory. There, substitution is an operation derived from a primitive called *composition* (read my last blog post!), and contractibility of singletons can be proven using a *connection* (also in the last post!). So `J` looks like:

```
J : {A : Type} {x : A}
(P : (y : A) -> Path x y -> Type)
(d : P x (\i -> x))
{y : A} (p : Path x y)
-> P y p
J P d p = transp (\i -> P (p i) (\j -> p (iand i j))) d
```

I think that’s it for what I can write for today. I didn’t really have a conclusion in mind, I just see a lot of talk about Martin-Löf’s equality and wanted to throw my own two cents out into the internet. I guess writing about J is like the monad tutorial of dependent type theory? Though I’d like to think of myself as a bit more advanced than “writing a monad tutorial”, since, you know, I wrote my own cubical type theory, but whatever…

I’m still thinking of writing up a complete introduction to type theory, like, the whole thing: what it is, how to read inference rules, the empty and unit types, products, functions, dependent products, dependent sums, coproducts, naturals, inductive types, equality, and possibly the axioms HoTT makes you postulate on top of that. Of course, it’s a *lot* of work, and the sheer scale of what I want to write is… kinda paralysing. Let’s see whether or not it’ll happen.

1. Which implies the other is cringe.↩︎