A quickie: Boundaries in Cubical Agda

Posted on August 29, 2022
Word count: 866

Wonderfully vague title, yes? As some of you might know, I’m now maintaining the implementation of Cubical Agda. Undoubtedly, if you know, it’s because I’ve been talking your ear off about it: I’m sorry. The “little kid with new toy” energy will continue until morale improves.

Anyway, part of what I’ve been doing is refactoring the implementation to make it easier for me, and for others, to understand it. I already pulled out and documented the implementation of every single Kan operation from the massive module that all the primitives were stuck in (PR 6034), and I also fixed a tiny soundness issue (PR 6032) and a bug in the conversion checker (PR 6013).

But I’m not here today to brag (my changes are but a single drop in the ocean that is Agda), or to complain: I think Andrea Vezzosi did an absolutely stellar job implementing Cubical Agda, as can be seen by how much research is built on top of it. Today, I’d like to commemorate that by sharing my favourite bit of trivia: how boundary constraints for higher pattern matching work.

Patterns & copatterns

If you’re familiar with functional programming, with Haskell if not Agda itself, you might recognise the following function as an instance of definition by pattern-matching. Admittedly, it’s quite a trivial instance, but it’s one nonetheless.

refl : ∀ {l} {A : Type l} (x : A) → x ≡ x
refl x i = x

But, dear reader, that’s where you’re wrong: that’s right, I’ve tricked you. This isn’t a definition by pattern matching at all! It’s a definition by copattern matching. While x really is a variable, the i is something much sneakier: an IApply copattern. By the way, IApply is the internal name we use for this, but another perfectly valid name would be a path copattern.

Since refl has a Path-headed type after introducing x, we internally treat the pattern i as being not a variable binder, but rather specifying how refl x behaves when applied to interval variables — and, since being applied to interval variables is an observation you can make of paths, we call this a copattern, by analogy with how record copatterns specify what the destructors do to your definition.
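For comparison, here’s what the record analogy looks like. This is a minimal sketch (Pair and swap are hypothetical names, not anything from the Agda codebase): a copattern clause doesn’t build a value, it specifies what each destructor observes about one.

```agda
record Pair (A B : Set) : Set where
  field
    fst : A
    snd : B
open Pair

-- Definition by copattern matching: we say what the projections
-- .fst and .snd do to `swap p`, rather than constructing a record.
swap : ∀ {A B : Set} → Pair A B → Pair B A
swap p .fst = p .snd
swap p .snd = p .fst
```

The IApply copattern plays the same role, except the observation is “apply to an interval variable” instead of “project a field.”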

So, internally, we have something like

refl (Var x) (IApply .x .x i) = ...

where the second co/pattern in the left-hand-side specifies not only the name of the bound interval variable, but also what endpoints it has. But, of course, interval copatterns aren’t that simple: we really must check that, when applied to i0 (resp. i1), the right-hand-side equals x (resp. x; a bad example, because both endpoints are the same here). So, how is this checked?
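To see the endpoint information doing real work, consider path inversion, as in the cubical library, where the two endpoints genuinely differ (a sketch, assuming a cubical Prelude is in scope):

```agda
sym : ∀ {l} {A : Type l} {x y : A} → x ≡ y → y ≡ x
sym p i = p (~ i)

-- Internally, the clause looks something like
--   sym (Var p) (IApply .y .x i) = p (~ i)
-- and the right-hand-side must equal y at i = i0
-- (indeed, p (~ i0) = p i1 = y) and x at i = i1.
```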

If you’ve heard me speak of IApplyConfluence before, this is that. The name might be a bit weird: at first it looks like a Java interface name, something you can apply confluence to, but really it stands for “interval application confluence.” Recall that, in cubical type theory, we have the following judgemental computation for the endpoints of a path, even when the path itself is neutral: If p : Path A x y, then p i0 = x (resp. p i1 = y).
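Since this computation is judgemental, it holds even when p is a variable, so the following sketch (again assuming a cubical Prelude in scope) type-checks with refl alone:

```agda
module _ {l} {A : Type l} {x y : A} (p : Path A x y) where
  -- p i0 is definitionally x, even though p itself is neutral,
  -- so refl is already a proof.
  _ : p i0 ≡ x
  _ = refl

  _ : p i1 ≡ y
  _ = refl
```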

In Agda, path lambdas are really just lambdas, but path application is different from function application: That’s what IApply means, it’s an elimination just like Apply (ordinary function application) or Proj (record projections). So, in Agda, we reduce the boundary constraints in paths defined by copattern matching to something called “IApply confluence”: the definition reduces in a way that is confluent with IApply reductions, in the sense of a nondeterministic rewrite system (but note that Agda still has deterministic reduction: it’s just a name.)
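Concretely, IApply confluence is the check that rules out a definition like the following sketch, where the right-hand-side disagrees with the stated endpoints:

```agda
-- Rejected: at i = i1 the right-hand-side is x, but reducing the
-- left-hand-side by IApply demands y. Agda reports a boundary error.
bad : ∀ {l} {A : Type l} (x y : A) → x ≡ y
bad x y i = x
```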

As a different example, let’s look at the following definition, now by matching on a higher-inductive type:

data Interval : Type where
  left right : Interval
  seg : left ≡ right

xtoy : ∀ {l} {A : Type l} (x y : A) → Interval → A
xtoy x y left = x
xtoy x y right = y
xtoy x y (seg i) = {! !}

How does Agda know what the interaction point’s endpoints should be? IApply confluence to the rescue! You guessed it: since the i variable appears at a Path-headed type, it’s really an IApply copattern. The precise endpoints don’t matter here, but what does is that seg i0 and seg i1 will reduce to them. So, what we must check is that first reducing seg i0 to an endpoint and then applying xtoy x y gives the same result as first reducing xtoy x y (seg i), then substituting i0 for i.
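For the record, here’s one way to fill that hole so the check goes through: hand the function a path to use for seg (a hypothetical variant of xtoy, sketched here):

```agda
xtoy′ : ∀ {l} {A : Type l} (x y : A) → x ≡ y → Interval → A
xtoy′ x y p left      = x
xtoy′ x y p right     = y
-- At i = i0 this is p i0 = x, agreeing with the `left` clause;
-- at i = i1 it is p i1 = y, agreeing with the `right` clause.
xtoy′ x y p (seg i)   = p i
```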

And how can we do that generically? This is, in my opinion, the most beautiful idea in the Agda codebase: We can just ask that, under the substitutions i → i0 and i → i1, the left-hand-side is equal to the right-hand-side. That’s right: Agda will compare

i ∨ ~ i ⊢ xtoy x y (seg i) = {! !}

which is why, in the currently released version of Agda, the most reliable place to get boundaries is from the unsolved unification constraints. If there are more IApply copatterns in the clause, the interval formula on the left of the ⊢ (we call that a “cofibration”) will grow to include those variables, too. In general, it’ll be the disjunction of the boundaries of every variable bound by an IApply copattern in the clause.

Agda’s reduction rules guarantee that xtoy x y (seg i0) will reduce first to xtoy x y left (by IApply), then to x (by computation); symmetrically, seg i1 eventually reduces to y. But the right-hand-side will, in this case, do no such thing: that’s how we end up with the constraints (handwaving)

x = ?0 (i = i0)
y = ?0 (i = i1)

That’s all for today, folks! I want to write a blog post about quotients soon, their relationship to the univalent completion of a precategory, and their generalization to deloopings of internal ∞-groupoid objects, so stay tuned for that if you care.