diff --git a/src/1Lab/Resizing.lagda.md b/src/1Lab/Resizing.lagda.md
index 51a6a970a..57ddc66b3 100644
--- a/src/1Lab/Resizing.lagda.md
+++ b/src/1Lab/Resizing.lagda.md
@@ -23,7 +23,7 @@ open import Meta.Bind
module 1Lab.Resizing where
```
-# Propositional Resizing
+# Propositional Resizing {defines="propositional-resizing"}
Ordinarily, the collection of all $\kappa$-small predicates on
$\kappa$-small types lives in the next universe up, $\kappa^+$. This is
diff --git a/src/Cat/CartesianClosed/Lambda.lagda.md b/src/Cat/CartesianClosed/Lambda.lagda.md
index 55b7ef33c..a4305a953 100644
--- a/src/Cat/CartesianClosed/Lambda.lagda.md
+++ b/src/Cat/CartesianClosed/Lambda.lagda.md
@@ -15,7 +15,7 @@ import Cat.Reasoning
module Cat.CartesianClosed.Lambda
```
-# Simply-typed lambda calculus {defines="simply-typed-lambda-calculus"}
+# Simply-typed lambda calculus {defines="simply-typed-lambda-calculus STLC"}
-# Bifibrations
+# Bifibrations {defines="bifibration"}
A [[displayed category]] $\cE \liesover \cB$ is a **bifibration** if it
is both a [[fibration|cartesian fibration]] and an opfibration. This
diff --git a/src/Cat/Displayed/Cocartesian.lagda.md b/src/Cat/Displayed/Cocartesian.lagda.md
index 52282cbd0..da0a899aa 100644
--- a/src/Cat/Displayed/Cocartesian.lagda.md
+++ b/src/Cat/Displayed/Cocartesian.lagda.md
@@ -425,6 +425,8 @@ cocartesian-lift→co-cartesian-lift cocart .Cartesian-lift.cartesian =
```
+:::{.definition #cocartesian-fibration}
+
We can use this notion to define cocartesian fibrations (sometimes
referred to as **opfibrations**).
@@ -438,6 +440,8 @@ record Cocartesian-fibration : Type (o ⊔ ℓ ⊔ o' ⊔ ℓ') where
Cocartesian-lift (has-lift f x')
```
+:::
+
+
+```agda
+module Cat.Displayed.Doctrine {o ℓ} (B : Precategory o ℓ) where
+```
+
+
+
+# Regular hyperdoctrines {defines="regular-hyperdoctrine"}
+
+A **regular hyperdoctrine** is a generalisation of the defining features
+of the [[poset of subobjects]] of a [[regular category]]. More
+abstractly, it axiomatises exactly what is necessary to interpret
+first-order (regular) logic _over_ a [[finitely complete category]].
+There is quite a lot of data involved, so we'll present it in stages.
+
+::: warning
+While the _raison d'être_ of regular hyperdoctrines is the
+interpretation of regular logic, that is far too much stuff for Agda to
+handle if it were placed in this module. **The interpretation of logic
+lives at [[logic in a hyperdoctrine]]**.
+:::
+
+```agda
+record Regular-hyperdoctrine o' ℓ' : Type (o ⊔ ℓ ⊔ lsuc (o' ⊔ ℓ')) where
+```
+
+To start with, fix a category $\cB$, which we think of as the _category
+of contexts_ of our logic --- though keep in mind the definition works
+over a completely arbitrary category. The heart of a regular
+hyperdoctrine is a [[displayed category]] $\bP \liesover \cB$, which, a
+priori, assigns to every object $\Gamma : \cB$ a category $\bP(\Gamma)$
+of **predicates** over it, where the set $\hom(\phi, \psi)$ for $\phi,
+\psi : \bP(\Gamma)$ interprets the **entailment** relation between
+predicates; we therefore write $\phi \vdash_\Gamma \psi$.
+
+```agda
+ field
+ ℙ : Displayed B o' ℓ'
+
+ module ℙ = Displayed ℙ
+```
+
+However, having an entire _category_ of predicates is hard to make
+well-behaved: that would lend itself more to an interpretation of
+dependent type theory, rather than the first-order logic we are
+concerned with. Therefore, we impose the following three restrictions on
+$\bP$:
+
+```agda
+ field
+ has-is-set : ∀ Γ → is-set ℙ.Ob[ Γ ]
+ has-is-thin : ∀ {x y} {f : Hom x y} x' y' → is-prop (ℙ.Hom[ f ] x' y')
+ has-univalence : ∀ x → is-category (Fibre ℙ x)
+```
+
+First, the space of predicates over $\Gamma$ must be a [[set]]. Second,
+the entailment relation $\phi \vdash_\Gamma \psi$ must be a
+[[proposition]], rather than an arbitrary set --- which we will use as
+justification to omit the names of its inhabitants. Finally, each
+[[fibre category]] $\bP(\Gamma)$ must be [[univalent|univalent
+category]]. In light of the previous restriction, this means that each
+fibre satisfies _antisymmetry_, or, specialising to logic, that
+inter-derivable propositions are indistinguishable.
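+
+Spelled out, univalence of a thin, set-level fibre is exactly
+antisymmetry of entailment; we record the resulting principle
+informally, since it is a consequence of the fields above rather than a
+new one:
+
+$$
+(\phi \vdash_\Gamma \psi) \to (\psi \vdash_\Gamma \phi) \to (\phi = \psi)\text{.}
+$$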
+
+Next, each fibre $\bP(\Gamma)$ must be [[finitely complete]]. The binary
+products interpret conjunction, and the terminal object interprets the
+true proposition; since we are working with posets, where any parallel
+pair of entailments is equal and equalisers are therefore automatic,
+these two shapes of limit suffice for full finite completeness.
+
+```agda
+  field
+    fibrewise-meet : ∀ {x} x' y' → Product (Fibre ℙ x) x' y'
+    fibrewise-top  : ∀ x → Terminal (Fibre ℙ x)
+```
+
+Everything we have so far is fine, but it only allows us to talk about
+predicates over a _specific_ context, and we do not yet have an
+interpretation of _substitution_ that would allow us to move between
+fibres. This condition is fortunately very easy to state: it suffices to
+ask that $\bP$ be a [[Cartesian fibration]].
+
+```agda
+  field
+    cartesian : Cartesian-fibration ℙ
+```
+
+We're almost done with the structure. To handle existential
+quantification, the remaining connective of regular logic, we use the
+key insight of Lawvere: the existential elimination and introduction
+rules
+
+
+
+$$
+\frac{\phi \vdash_{x} \psi}{\exists_x \phi \vdash \psi}
+$$
+
+$$
+\frac{\exists_x \phi \vdash \psi}{\phi \vdash_{x} \psi}
+$$
+
+
+
+say precisely that existential quantification along $x$ is [[left
+adjoint]] to weakening by $x$! Since weakening will be interpreted by
+[[cartesian lifts]], we will interpret the existential quantification by
+a left adjoint to that: in other words, $\bP$ must also be a
+[[cocartesian fibration]] over $\cB$.
+
+```agda
+  field
+    cocartesian : Cocartesian-fibration ℙ
+```
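+
+Unfolding the universal property in this posetal setting (a
+restatement, not additional structure): for a map $f : x \to y$ in
+$\cB$, the cocartesian lifts, together with the cartesian lifts we
+already have, provide an operation $\exists_f : \bP(x) \to \bP(y)$
+characterised by
+
+$$
+\exists_f(\phi) \vdash_{y} \psi \iff \phi \vdash_{x} \psi[f]\text{,}
+$$
+
+that is, $\exists_f$ is left adjoint to substitution along $f$.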
+
+Note that we have assumed the existence of left adjoints to arbitrary
+substitutions, which correspond to forms of existential quantification
+more general than quantification over the latest variable. For example,
+if the base category $\cB$ has finite products, then existentially
+quantifying a predicate $\phi : \bP(A)$ along the diagonal $\delta : A
+\to A \times A$ produces the predicate "$(i, j) \mapsto (i = j) \land
+\phi(i)$".
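+
+To see why, here is an informal check using the adjunction described
+above: an entailment out of $\exists_\delta(\phi)$ corresponds to an
+entailment out of $\phi$ after substituting along $\delta$,
+
+$$
+\exists_\delta(\phi) \vdash_{A \times A} \psi \iff \phi \vdash_{A} \psi[\delta]\text{,}
+$$
+
+and the predicate "$(i, j) \mapsto (i = j) \land \phi(i)$" has exactly
+this property: to prove $\psi(i, j)$ from it, we may assume $i = j$ and
+$\phi(i)$, which is precisely to prove $\psi(i, i)$ from $\phi(i)$.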
+
+
+That concludes the _data_ of a regular hyperdoctrine. We will
+soon write down the axioms it must satisfy; but before that, we need a
+digression to introduce better notation for working with the
+deeply-nested data we have introduced.
+
+
+```agda
+ module cartesian = Cartesian-fibration cartesian
+ module cocartesian = Cocartesian-fibration cocartesian
+ module fibrewise-meet {x} (x' y' : ℙ.Ob[ x ]) = Product (fibrewise-meet x' y')
+
+ module fibrewise-top x = Terminal (fibrewise-top x)
+
+ _[_] : ∀ {x y} → ℙ.Ob[ x ] → Hom y x → ℙ.Ob[ y ]
+ _[_] x f = cartesian.has-lift.x' f x
+
+ exists : ∀ {x y} (f : Hom x y) → ℙ.Ob[ x ] → ℙ.Ob[ y ]
+ exists = cocartesian.has-lift.y'
+
+ _&_ : ∀ {x} (p q : ℙ.Ob[ x ]) → ℙ.Ob[ x ]
+ _&_ = fibrewise-meet.apex
+
+ aye : ∀ {x} → ℙ.Ob[ x ]
+ aye = fibrewise-top.top _
+
+ infix 30 _[_]
+ infix 25 _&_
+```
+
+
+
+The first two axioms concern the commutativity of substitution and the
+conjunctive connectives:
+
+```agda
+ field
+ subst-&
+ : ∀ {x y} (f : Hom y x) (x' y' : ℙ.Ob[ x ])
+ → (x' & y') [ f ] ≡ x' [ f ] & y' [ f ]
+
+ subst-aye
+ : ∀ {x y} (f : Hom y x)
+ → aye [ f ] ≡ aye
+```
+
+Next, we have a _structural rule_, called **Frobenius reciprocity**,
+governing the interaction of existential quantification and conjunction.
+If substitutions were invisible, it would say that $(\exists \phi) \land
+\psi$ is $\exists (\phi \land \psi)$. Unfortunately, proof assistants
+force us to instead say that if we have $\phi : \bP(\Gamma)$, $\psi :
+\bP(\Delta)$, and $\rho : \Gamma \to \Delta$, then $\exists_\rho(\phi)
+\land \psi$ is $\exists_\rho(\phi \land \psi[\rho])$.
+
+```agda
+ field
+ frobenius
+ : ∀ {x y} (f : Hom x y) {α : ℙ.Ob[ x ]} {β : ℙ.Ob[ y ]}
+ → exists f α & β ≡ exists f (α & β [ f ])
+```
+
+Finally, we have a general rule for the commutativity of existential
+quantification and substitution. While in general the order matters, the
+**Beck-Chevalley condition** says that we can conclude
+
+$$
+\exists_h (a[k]) = (\exists_g a)[f]
+$$
+
+provided that the square
+
+~~~{.quiver .short-05}
+\[\begin{tikzcd}[ampersand replacement=\&]
+ d \&\& a \\
+ \\
+ b \&\& c
+ \arrow["k"', from=1-1, to=3-1]
+ \arrow["h", from=1-1, to=1-3]
+ \arrow["f", from=1-3, to=3-3]
+ \arrow["g"', from=3-1, to=3-3]
+ \arrow["\lrcorner"{anchor=center, pos=0.125}, draw=none, from=1-1, to=3-3]
+\end{tikzcd}\]
+~~~
+
+is a pullback.
+
+```agda
+  field
+    beck-chevalley
+      : ∀ {a b c d} {f : Hom a c} {g : Hom b c} {h : Hom d a} {k : Hom d b}
+      → is-pullback B h f k g
+      → ∀ {α} → exists h (α [ k ]) ≡ (exists g α) [ f ]
+```
+
+That concludes the definition of a regular hyperdoctrine. Said snappily, a
+**regular hyperdoctrine** $\bP \liesover \cB$ is a [[bifibration]] into
+[[(meet-)semilattices|semilattice]] satisfying Frobenius reciprocity and
+the Beck-Chevalley condition.
+
+
diff --git a/src/Cat/Displayed/Doctrine/Frame.lagda.md b/src/Cat/Displayed/Doctrine/Frame.lagda.md
new file mode 100644
index 000000000..531a8cd44
--- /dev/null
+++ b/src/Cat/Displayed/Doctrine/Frame.lagda.md
@@ -0,0 +1,273 @@
+
+
+```agda
+module Cat.Displayed.Doctrine.Frame where
+```
+
+# Indexed frames
+
+An example of a [[regular hyperdoctrine]] that is not the [[poset of
+subobjects]] of a [[regular category]] is given by the canonical
+indexing of a [[frame]] $F$: the [[bifibration]] corresponding, through
+the Grothendieck construction, to $\hom(-,F)$. We will construct this in
+very explicit stages.
+
+```agda
+Indexed-frame : ∀ {o ℓ} → Frame o → Regular-hyperdoctrine (Sets ℓ) (o ⊔ ℓ) (o ⊔ ℓ)
+Indexed-frame {o = o} {ℓ} F = idx where
+```
+
+
+
+## The indexing
+
+As the underlying [[displayed category]] of our hyperdoctrine, we'll
+fibre the [[slice category]] $\Sets/F$ over $\Sets$: The objects over a
+set $S$ are the functions $S \to F$. However, rather than a discrete
+fibration, we have a fibration into posets: the ordering at each fibre
+is pointwise, and we can extend this to objects in different fibres by
+the equivalence $(x \le_f y) \equiv (x \le_{\id} f^*y)$ of maps into a
+[[Cartesian lift]] of the codomain.
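+
+Concretely, writing $g : X \to F$ and $h : Y \to F$ for objects over $X$
+and $Y$, and $f : X \to Y$ for a map between the bases, the pointwise
+description and the cartesian-lift description of the ordering agree (a
+sanity check on the definition below, not an extra axiom):
+
+$$
+(g \le_f h) \;\equiv\; \big(\forall (x : X),\ g(x) \le h(f(x))\big) \;\equiv\; (g \le_{\id} h \circ f)\text{,}
+$$
+
+since the cartesian lift $f^*h$ will turn out to be exactly the
+composite $h \circ f$.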
+
+```agda
+ disp : Displayed (Sets ℓ) (o ⊔ ℓ) (o ⊔ ℓ)
+ disp .Ob[_] S = ⌞ S ⌟ → ⌞ F ⌟
+ disp .Hom[_] f g h = ∀ x → g x F.≤ h (f x)
+ disp .Hom[_]-set f g h = is-prop→is-set isp
+
+ disp .id' x = F.≤-refl
+ disp ._∘'_ p q x = F.≤-trans (q x) (p _)
+```
+
+
+
+Since our ordering at each fibre is the pointwise ordering, it follows
+that our limits are also pointwise limits: the meet $f \cap g$ is the
+function $x \mapsto f(x) \cap g(x)$, and the terminal object is the
+function which is constantly the top element.
+
+```agda
+ prod : ∀ {S : Set ℓ} (f g : ⌞ S ⌟ → ⌞ F ⌟) → Product (Fibre disp S) f g
+ prod f g .apex x = f x F.∩ g x
+ prod f g .π₁ i = F.∩≤l
+ prod f g .π₂ i = F.∩≤r
+ prod f g .has-is-product .⟨_,_⟩ α β i = F.∩-univ _ (α i) (β i)
+```
+
+
+
+```agda
+ term : ∀ S → Terminal (Fibre disp S)
+ term S .top _ = F.top
+ term S .has⊤ f = is-prop∙→is-contr isp (λ i → sym F.∩-idr)
+```
+
+## As a fibration
+
+As we have _defined_ the ordering by exploiting the existence of
+Cartesian lifts, it should not be surprising that the lift $f^*g$ of a
+function $g : Y \to F$ along a function $f : X \to Y$ is simply the
+composite $gf : X \to F$.
+
+```agda
+ cart : Cartesian-fibration disp
+ cart .has-lift f g .x' x = g (f x)
+ cart .has-lift f g .lifting i = F.≤-refl
+ cart .has-lift f g .cartesian = record
+ { universal = λ m p x → p x
+ ; commutes = λ m h' → isp _ _
+ ; unique = λ m p → isp _ _
+ }
+```
+
+However, the *co*cartesian lifts are a lot more interesting, and their
+construction involves a pretty heavy dose of [[propositional resizing]].
+In ordinary predicative type theory, each frame is associated with a
+particular level for which it admits arbitrary joins. But our
+construction of the hyperdoctrine associated with a frame works for
+indexing a frame over an _arbitrary_ category of sets! How does that
+work?
+
+The key observation is that the join $\bigcup_I f$ of a family $f : I
+\to F$ can be replaced by the join $\bigcup_{\im f} \pi_1$ of its
+_image_; and since $\im f$ can be made to exist at the same level as $F$
+regardless of how large $I$ is, this replacement allows us to compute
+joins of arbitrary families. More relevantly, it'll let us compute the
+cocartesian lift of a function $g : X \to F$ along a function $f : X \to Y$.
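+
+In symbols, the replacement we rely on is the equality of joins
+
+$$
+\bigcup_{i : I} f(i) \;=\; \bigcup_{(e, p) \,:\, \im f} e\text{,}
+$$
+
+which holds because both sides are least upper bounds of the same subset
+of $F$: every $f(i)$ lies in the image, and every element of the image
+is some $f(i)$.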
+
+```agda
+ cocart : Cocartesian-fibration disp
+ cocart .has-lift {x = X} {y = Y} f g = lifted module cocart where
+```
+
+At each $y : Y$, we would like to take the join
+
+$$
+\bigcup_{x : f^*(y)} g(x)
+$$
+
+indexed by the fibre of $f$ over $y$. But since we do not control the
+size of that type, we can replace this by the join over the subset
+
+$$
+\{ e : \exists_{x : X} f(x) = y \land g(x) = e \}
+$$
+
+which is the image of $g$ as a function $f^*(y) \to F$.
+
+```agda
+ img : ⌞ Y ⌟ → Type _
+ img y = Σ[ e ∈ ⌞ F ⌟ ] □ (Σ[ x ∈ ⌞ X ⌟ ] ((f x ≡ y) × (g x ≡ e)))
+
+ exist : ⌞ Y ⌟ → ⌞ F ⌟
+ exist y = F.⋃ {I = img y} fst
+```
+
+That this satisfies the universal property of a cocartesian lift is then
+a calculation:
+
+```agda
+ lifted : Cocartesian-lift disp f g
+ lifted .y' y = exist y
+ lifted .lifting i = F.⋃-colimiting (g i , inc (i , refl , refl)) fst
+ lifted .cocartesian .universal {u' = u'} m h' i = F.⋃-universal fst λ (e , b) →
+ □-rec! (λ { (x , p , q) →
+ e F.=⟨ sym q ⟩
+ g x F.≤⟨ h' x ⟩
+ u' (m (f x)) F.=⟨ ap u' (ap m p) ⟩
+ u' (m i) F.≤∎ }) b
+ lifted .cocartesian .commutes m h = isp _ _
+ lifted .cocartesian .unique m x = isp _ _
+```
+
+## Putting it together
+
+We're ready to put everything together. By construction, we have a
+"category displayed in posets",
+
+```agda
+ idx : Regular-hyperdoctrine (Sets ℓ) _ _
+ idx .ℙ = disp
+ idx .has-is-set x = Π-is-hlevel 2 λ _ → F .fst .is-tr
+ idx .has-is-thin f g = isp
+ idx .has-univalence S = set-identity-system
+ (λ _ _ _ _ → Cat.≅-path (Fibre disp _) (isp _ _))
+ λ im → funextP λ i → F.≤-antisym (Cat.to im i) (Cat.from im i)
+```
+
+which is both a fibration and an opfibration over $\Sets$,
+
+```agda
+ idx .cartesian = cart
+ idx .cocartesian = cocart
+```
+
+with finite fibrewise limits preserved by reindexing,
+
+```agda
+ idx .fibrewise-meet x' y' = prod x' y'
+ idx .fibrewise-top S = term S
+ idx .subst-& f x' y' = refl
+ idx .subst-aye f = refl
+```
+
+which satisfies Frobenius reciprocity and the Beck-Chevalley condition.
+Frobenius reciprocity follows by some pair mangling and the infinite
+distributive law in $F$:
+
+```agda
+ idx .frobenius {X} {Y} f {α} {β} = funext λ i → F.≤-antisym
+ ( exist α i F.∩ β i F.=⟨ F.⋃-distribʳ ⟩
+ F.⋃ {I = img α i} (λ (x , _) → x F.∩ β i) F.≤⟨
+ F.⋃-universal _ (λ img → F.⋃-colimiting
+ ( img .fst F.∩ β i
+ , □-map (λ { (x , p , q) → x , p , ap₂ F._∩_ q (ap β p) }) (img .snd))
+ fst)
+ ⟩
+ exist (λ x → α x F.∩ β (f x)) i F.≤∎ )
+ ( exist (λ x → α x F.∩ β (f x)) i F.≤⟨
+ F.⋃-universal _ (λ (e , p) → □-rec! (λ { (x , p , q) →
+ e F.=⟨ sym q ⟩
+ α x F.∩ β (f x) F.=⟨ ap (α x F.∩_) (ap β p) ⟩
+ α x F.∩ β i F.≤⟨ F.⋃-colimiting (α x , inc (x , p , refl)) _ ⟩
+ F.⋃ {I = img α i} (λ (x , _) → x F.∩ β i) F.≤∎ }) p)
+ ⟩
+ F.⋃ {I = img α i} (λ (x , _) → x F.∩ β i) F.=⟨ sym F.⋃-distribʳ ⟩
+ exist α i F.∩ β i F.≤∎)
+ where open cocart {X} {Y} f
+```
+
+And the Beck-Chevalley condition follows from the observation that a
+pullback square induces an equivalence between the fibre of $h$ over $i$
+and the fibre of $g$ over $f(i)$, under which the family $x \mapsto
+\alpha(k\,x)$ corresponds to $\alpha$, so that the joins defining the
+two sides of the condition agree:
+
+```agda
+ idx .beck-chevalley {A} {B} {C} {D} {f} {g} {h} {k} ispb {α} =
+ funext λ i → F.⋃-apⁱ (Iso→Equiv (eqv i))
+ where
+ module c X Y = cocart {X} {Y}
+ module pb = is-pullback ispb
+ open is-iso
+
+ eqv : ∀ i → Iso (c.img D A h (λ x → α (k x)) i) (c.img B C g α (f i))
+ eqv i .fst (e , p) = e , □-map (λ { (d , p , q) → k d , sym (pb.square $ₚ _) ∙ ap f p , q }) p
+ eqv i .snd .inv (e , p) = e , □-map (λ { (b , p , q) →
+ let
+ it : ⌞ D ⌟
+ it = pb.universal {P' = el! (Lift _ ⊤)}
+ {p₁' = λ _ → i} {p₂' = λ _ → b} (funext (λ _ → sym p)) (lift tt)
+ in it , (pb.p₁∘universal $ₚ lift tt) , ap α (pb.p₂∘universal $ₚ lift tt) ∙ q }) p
+ eqv i .snd .rinv (x , _) = refl ,ₚ squash _ _
+ eqv i .snd .linv (x , _) = refl ,ₚ squash _ _
+```
diff --git a/src/Cat/Displayed/Doctrine/Logic.lagda.md b/src/Cat/Displayed/Doctrine/Logic.lagda.md
new file mode 100644
index 000000000..95b2ebd3b
--- /dev/null
+++ b/src/Cat/Displayed/Doctrine/Logic.lagda.md
@@ -0,0 +1,702 @@
+
+
+```agda
+module Cat.Displayed.Doctrine.Logic
+```
+
+
+
+# The internal logic of a regular hyperdoctrine {defines="logic-in-a-hyperdoctrine"}
+
+Fix a [[regular hyperdoctrine]] $\bP$ over a [[finitely complete
+category]] $\cB$. We've claimed that the data of $\bP$ is enough to
+interpret regular logic relative to $\cB$, but that proof was deferred.
+This module actually makes good on that claim, but because syntax is always
+fiddly, it's _quite_ long.
+
+## Terms
+
+The first thing we'll do is strictify $\cB$ a bit. Instead of working
+directly with arbitrary morphisms, it's slightly better --- for the
+computational behaviour of substitution --- to have a syntactic
+presentation of the terms of our logic. We start with the **types**,
+built inductively using `` _`×_ ``{.Agda}, and with an injection from
+the objects of $\cB$. We also single out, as the **contexts**, the class
+of objects built by repeatedly pairing types onto the terminal object.
+
+```agda
+data Ty : Typeω where
+ ↑ : Ob → Ty
+ _`×_ : Ty → Ty → Ty
+
+data Cx : Typeω where
+ [] : Cx
+ _ʻ_ : Cx → Ty → Cx
+```
+
+
+
+These two classes can be interpreted into objects in the base category.
+Throughout this entire module, we'll write $\sem{x}$ for the semantic
+interpretation of a syntactic object. In the formalisation, the brackets
+always carry a superscript indicating what is being interpreted.
+
+```agda
+⟦_⟧ᵗ : Ty → Ob
+⟦ ↑ x ⟧ᵗ = x
+⟦ t `× s ⟧ᵗ = ⟦ t ⟧ᵗ ⊗₀ ⟦ s ⟧ᵗ
+
+⟦_⟧ᶜ : Cx → Ob
+⟦ [] ⟧ᶜ = Terminal.top terminal
+⟦ Γ ʻ x ⟧ᶜ = ⟦ Γ ⟧ᶜ ⊗₀ ⟦ x ⟧ᵗ
+```
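+
+For instance, directly unfolding the definitions above, and writing $1$
+for the terminal object of $\cB$, a context $\Gamma = \tau_1, \dots,
+\tau_n$ is interpreted as a left-nested product with $1$ at the base:
+
+$$
+\sem{\Gamma} = (\cdots(1 \times \sem{\tau_1}) \times \cdots) \times \sem{\tau_n}\text{.}
+$$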
+
+Next we have the syntax of _variables_: A variable $v : \Gamma \ni \tau$
+encodes a projection $\sem{v}$, which, because of the structure of our
+contexts, is always of the form $\pi_2 \pi_1 \pi_1 \dots$. Put
+syntactically, we can always access the latest variable, and we can also
+weaken variables by one.
+
+```agda
+data _∋_ : Cx → Ty → Typeω where
+ stop : (Γ ʻ τ) ∋ τ
+ pop : Γ ∋ τ → (Γ ʻ σ) ∋ τ
+```
+
+Finally, we can define the class of terms: A term is a variable, or lies
+in the fragment dealing with Cartesian products, or comes from applying
+a function $\sem{\tau} \to \sem{\sigma}$ from the base category to an
+argument $\Gamma \vdash \tau$; this is required if we want our logic to
+be applicable to more than variables and products!
+
+```agda
+data Tm : Cx → Ty → Typeω where
+ var : Γ ∋ τ → Tm Γ τ
+
+ _,_ : Tm Γ τ → Tm Γ σ → Tm Γ (τ `× σ)
+ `π₁ : Tm Γ (τ `× σ) → Tm Γ τ
+ `π₂ : Tm Γ (τ `× σ) → Tm Γ σ
+
+ fun : Hom ⟦ τ ⟧ᵗ ⟦ σ ⟧ᵗ → Tm Γ τ → Tm Γ σ
+
+-- Superscript n for "name", e for "expression"
+⟦_⟧ⁿ : Γ ∋ τ → Hom ⟦ Γ ⟧ᶜ ⟦ τ ⟧ᵗ
+⟦ stop ⟧ⁿ = π₂
+⟦ pop x ⟧ⁿ = ⟦ x ⟧ⁿ ∘ π₁
+
+⟦_⟧ᵉ : Tm Γ τ → Hom ⟦ Γ ⟧ᶜ ⟦ τ ⟧ᵗ
+⟦ var x ⟧ᵉ = ⟦ x ⟧ⁿ
+⟦ x , y ⟧ᵉ = ⟨ ⟦ x ⟧ᵉ , ⟦ y ⟧ᵉ ⟩
+⟦ `π₁ x ⟧ᵉ = π₁ ∘ ⟦ x ⟧ᵉ
+⟦ `π₂ x ⟧ᵉ = π₂ ∘ ⟦ x ⟧ᵉ
+⟦ fun f t ⟧ᵉ = f ∘ ⟦ t ⟧ᵉ
+```
+
+## Renamings and substitutions
+
+Even after we have a good grasp on the morphisms in $\cB$ that we want
+to call _terms_, there are still two classes of maps, now between pairs
+of contexts, that we must single out. A **renaming** $\rho : \Gamma \to
+\Delta$ maps variables in $\Delta$ to variables in $\Gamma$: there's an
+identity renaming, and you can either choose to keep or drop variables
+from $\Delta$.
+
+If we allow ourselves to map variables in $\Delta$ to _terms_ in
+$\Gamma$, we get the class of **substitutions**. Now, instead of keeping
+a variable as-is, we can instead use an arbitrary term:
+
+```agda
+data Ren : Cx → Cx → Typeω where
+ stop : Ren Γ Γ
+ drop : Ren Γ Δ → Ren (Γ ʻ τ) Δ
+ keep : Ren Γ Δ → Ren (Γ ʻ τ) (Δ ʻ τ)
+
+data Sub : Cx → Cx → Typeω where
+ stop : Sub Γ Γ
+ _,_ : Tm Γ τ → Sub Γ Δ → Sub Γ (Δ ʻ τ)
+ drop : Sub Γ Δ → Sub (Γ ʻ τ) Δ
+```
+
+```agda
+-- Superscript r for "renaming", s for "substitution"
+⟦_⟧ʳ : Ren Γ Δ → Hom ⟦ Γ ⟧ᶜ ⟦ Δ ⟧ᶜ
+⟦ stop ⟧ʳ = id
+⟦ drop r ⟧ʳ = ⟦ r ⟧ʳ ∘ π₁
+⟦ keep r ⟧ʳ = ⟦ r ⟧ʳ ⊗₁ id
+
+⟦_⟧ˢ : Sub Γ Δ → Hom ⟦ Γ ⟧ᶜ ⟦ Δ ⟧ᶜ
+⟦ stop ⟧ˢ = id
+⟦ x , s ⟧ˢ = ⟨ ⟦ s ⟧ˢ , ⟦ x ⟧ᵉ ⟩
+⟦ drop ρ ⟧ˢ = ⟦ ρ ⟧ˢ ∘ π₁
+```
+
+One substitution which deserves special attention is the "terminating"
+substitution, which maps from an arbitrary $\Gamma$ to the empty
+context.
+
+```agda
+!ˢ : ∀ {Γ} → Sub Γ []
+!ˢ {[]} = Sub.stop
+!ˢ {Γ ʻ x} = Sub.drop !ˢ
+```
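+
+Semantically, this lands in the interpretation of the empty context,
+which is the terminal object of $\cB$; by the universal property of the
+terminal object, even though the definition above proceeds by recursion
+on $\Gamma$, its interpretation is necessarily the unique map
+
+$$
+\sem{!^{\mathrm{s}}} \;=\; \mathord{!}_{\sem{\Gamma}} : \sem{\Gamma} \to 1\text{.}
+$$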
+
+Now comes the fiddly part of dealing with syntax: the endless recursive
+functions for the action of renamings and substitutions on _everything_,
+and all the correctness lemmas that guarantee we're doing the right
+thing.
+
+```agda
+_∘ʳ_ : Ren Γ Δ → Ren Δ Θ → Ren Γ Θ
+ren-var : Ren Γ Δ → Δ ∋ τ → Γ ∋ τ
+ren-tm : Ren Γ Δ → Tm Δ τ → Tm Γ τ
+sub-var : Sub Γ Δ → Δ ∋ τ → Tm Γ τ
+sub-tm : Sub Γ Δ → Tm Δ τ → Tm Γ τ
+
+ren-var-correct
+ : (ρ : Ren Γ Δ) (v : Δ ∋ τ) → ⟦ ren-var ρ v ⟧ⁿ ≡ ⟦ v ⟧ⁿ ∘ ⟦ ρ ⟧ʳ
+sub-var-correct
+ : (ρ : Sub Γ Δ) (t : Δ ∋ τ) → ⟦ sub-var ρ t ⟧ᵉ ≡ ⟦ t ⟧ⁿ ∘ ⟦ ρ ⟧ˢ
+
+ren-tm-correct
+ : (ρ : Ren Γ Δ) (v : Tm Δ τ) → ⟦ ren-tm ρ v ⟧ᵉ ≡ ⟦ v ⟧ᵉ ∘ ⟦ ρ ⟧ʳ
+sub-tm-correct
+ : (ρ : Sub Γ Δ) (t : Tm Δ τ) → ⟦ sub-tm ρ t ⟧ᵉ ≡ ⟦ t ⟧ᵉ ∘ ⟦ ρ ⟧ˢ
+```
+
+
+For the sake of the reader's sanity, we've chosen to display
+only the type signatures; all the implementations are in this
+`<details>`{.html} block.
+
+```agda
+stop ∘ʳ ρ = ρ
+drop σ ∘ʳ ρ = drop (σ ∘ʳ ρ)
+
+keep σ ∘ʳ stop = keep σ
+keep σ ∘ʳ drop ρ = drop (σ ∘ʳ ρ)
+keep σ ∘ʳ keep ρ = keep (σ ∘ʳ ρ)
+
+ren-var stop v = v
+ren-var (drop σ) v = pop (ren-var σ v)
+ren-var (keep σ) stop = stop
+ren-var (keep σ) (pop v) = pop (ren-var σ v)
+
+ren-tm ρ (var x) = var (ren-var ρ x)
+ren-tm ρ (t , s) = ren-tm ρ t , ren-tm ρ s
+ren-tm ρ (`π₁ t) = `π₁ (ren-tm ρ t)
+ren-tm ρ (`π₂ t) = `π₂ (ren-tm ρ t)
+ren-tm ρ (fun x t) = fun x (ren-tm ρ t)
+
+sub-var stop v = var v
+sub-var (x , ρ) stop = x
+sub-var (x , ρ) (pop v) = sub-var ρ v
+sub-var (drop ρ) v = ren-tm (drop stop) (sub-var ρ v)
+
+sub-tm ρ (var x) = sub-var ρ x
+sub-tm ρ (t , s) = sub-tm ρ t , sub-tm ρ s
+sub-tm ρ (`π₁ tm) = `π₁ (sub-tm ρ tm)
+sub-tm ρ (`π₂ tm) = `π₂ (sub-tm ρ tm)
+sub-tm ρ (fun x tm) = fun x (sub-tm ρ tm)
+
+ren-var-correct stop v = sym (idr _)
+ren-var-correct (drop ρ) v = ap (_∘ π₁) (ren-var-correct ρ v) ∙ sym (assoc _ _ _)
+ren-var-correct (keep ρ) stop = sym (π₂∘⟨⟩ ∙ idl _)
+ren-var-correct (keep ρ) (pop v) =
+ pushl (ren-var-correct ρ v) ∙ sym (pullr π₁∘⟨⟩)
+
+ren-tm-correct ρ (var x) = ren-var-correct ρ x
+ren-tm-correct ρ (v , v₁) = sym (⟨⟩∘ _ ∙ sym (ap₂ ⟨_,_⟩ (ren-tm-correct ρ v) (ren-tm-correct ρ v₁)))
+ren-tm-correct ρ (`π₁ v) = pushr (ren-tm-correct ρ v)
+ren-tm-correct ρ (`π₂ v) = pushr (ren-tm-correct ρ v)
+ren-tm-correct ρ (fun x v) = pushr (ren-tm-correct ρ v)
+
+sub-var-correct stop t = sym (idr _)
+sub-var-correct (x , ρ) stop = sym π₂∘⟨⟩
+sub-var-correct (x , ρ) (pop t) = sym (pullr π₁∘⟨⟩ ∙ sym (sub-var-correct ρ t))
+sub-var-correct (drop ρ) v =
+ ren-tm-correct (drop stop) (sub-var ρ v)
+ ∙ extendl (idr _ ∙ sub-var-correct ρ v)
+
+sub-tm-correct ρ (var x) = sub-var-correct ρ x
+sub-tm-correct ρ (t , s) =
+ sym (⟨⟩∘ _ ∙ ap₂ ⟨_,_⟩ (sym (sub-tm-correct ρ t)) (sym (sub-tm-correct ρ s)))
+sub-tm-correct ρ (`π₁ t) = ap (π₁ ∘_) (sub-tm-correct ρ t) ∙ assoc _ _ _
+sub-tm-correct ρ (`π₂ t) = ap (π₂ ∘_) (sub-tm-correct ρ t) ∙ assoc _ _ _
+sub-tm-correct ρ (fun x t) = ap (x ∘_) (sub-tm-correct ρ t) ∙ assoc _ _ _
+```
+
+
+
+## Formulae
+
+Finally, we have a syntactic description of the objects in $\bP(\Gamma)$
+that concern regular logic: they are built from the following grammar.
+Each fibre has a top element and conjunction. We single out the
+existential quantification along the latest variable
+
+$$
+\exists_\pi : \bP(\Gamma,x) \to \bP(\Gamma)\text{.}
+$$
+
+Moreover, we can define equality $t = s$ of terms $t, s : \Gamma \vdash
+\tau$ in terms of existential quantification, though not along a
+variable, so it gets its own constructor; and finally, if you have a
+predicate on a _type_ $P : \bP(\tau)$ and a term $t : \Gamma \vdash
+\tau$, then you can form $P(t) : \bP(\Gamma)$.
+
+```agda
+data Formula : Cx → Typeω where
+ `⊤ : Formula Γ
+ _`∧_ : Formula Γ → Formula Γ → Formula Γ
+
+ `∃ : Formula (Γ ʻ τ) → Formula Γ
+
+ _=ᵖ_ : Tm Γ τ → Tm Γ τ → Formula Γ
+
+ pred : ℙ.Ob[ ⟦ τ ⟧ᵗ ] → Tm Γ τ → Formula Γ
+```
+
+The formulae over $\Gamma$ become objects over $\sem{\Gamma}$ in the way
+that was implied in the definition of hyperdoctrine: in particular,
+applying a semantic predicate to a term is interpreted by substitution,
+and the equality predicate is interpreted by the formula
+
+$$
+(\exists_\delta \top)[\langle t, s \rangle]\text{.}
+$$
+
+```agda
+⟦_⟧ᵖ : Formula Γ → ℙ.Ob[ ⟦ Γ ⟧ᶜ ]
+⟦ x `∧ y ⟧ᵖ = ⟦ x ⟧ᵖ & ⟦ y ⟧ᵖ
+⟦ `⊤ ⟧ᵖ = aye
+⟦ `∃ x ⟧ᵖ = exists π₁ ⟦ x ⟧ᵖ
+⟦ pred p tm ⟧ᵖ = p [ ⟦ tm ⟧ᵉ ]
+⟦ t =ᵖ s ⟧ᵖ = exists ⟨ id , id ⟩ aye [ ⟨ ⟦ t ⟧ᵉ , ⟦ s ⟧ᵉ ⟩ ]
+```
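+
+To connect this with the diagonal example from the definition of a
+regular hyperdoctrine: informally, the formula
+
+$$
+\exists_{\langle \id, \id \rangle}(\top) : \bP(\sem{\tau} \times \sem{\tau})
+$$
+
+is the predicate "$(i, j) \mapsto (i = j)$", which holds of a pair
+exactly when its components agree; substituting it along $\langle
+\sem{t}, \sem{s} \rangle$ therefore yields the predicate "$\sem{t} =
+\sem{s}$" over $\sem{\Gamma}$.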
+
+Once again we have the strictified presentation of substitution, though
+this time it's short enough to present inline:
+
+```agda
+sub-prop : Sub Γ Δ → Formula Δ → Formula Γ
+sub-prop ρ (φ `∧ ψ) = sub-prop ρ φ `∧ sub-prop ρ ψ
+sub-prop ρ (t =ᵖ s) = sub-tm ρ t =ᵖ sub-tm ρ s
+sub-prop ρ `⊤ = `⊤
+sub-prop ρ (`∃ φ) = `∃ (sub-prop (var stop , Sub.drop ρ) φ)
+sub-prop ρ (pred x t) = pred x (sub-tm ρ t)
+```
+
+Unfortunately, we also have to prove that this is sent by $\sem{-}$ to
+the semantic substitution:
+
+```agda
+sub-prop-correct
+ : (ρ : Sub Γ Δ) (φ : Formula Δ)
+ → ⟦ sub-prop ρ φ ⟧ᵖ ≡ ⟦ φ ⟧ᵖ [ ⟦ ρ ⟧ˢ ]
+
+sub-prop-correct ρ (φ `∧ ψ) = sym $
+ (⟦ φ ⟧ᵖ & ⟦ ψ ⟧ᵖ) [ ⟦ ρ ⟧ˢ ] ≡⟨ subst-& _ _ _ ⟩
+ ⟦ φ ⟧ᵖ [ ⟦ ρ ⟧ˢ ] & ⟦ ψ ⟧ᵖ [ ⟦ ρ ⟧ˢ ] ≡˘⟨ ap₂ _&_ (sub-prop-correct ρ φ) (sub-prop-correct ρ ψ) ⟩
+ ⟦ sub-prop ρ (φ `∧ ψ) ⟧ᵖ ∎
+
+sub-prop-correct ρ `⊤ = sym (subst-aye _)
+
+sub-prop-correct {Γ = Γ} ρ (`∃ {Γ = Δ} {τ = τ} φ) =
+ exists π₁ ⟦ sub-prop (var stop , drop ρ) φ ⟧ᵖ ≡⟨ ap (exists _) (sub-prop-correct _ φ) ⟩
+ exists π₁ (⟦ φ ⟧ᵖ [ ⟨ ⟦ ρ ⟧ˢ ∘ π₁ , π₂ ⟩ ]) ≡⟨ beck-chevalley rem₁ ⟩
+ (exists π₁ ⟦ φ ⟧ᵖ) [ ⟦ ρ ⟧ˢ ] ∎
+ where
+```
+
+The most interesting case is the one above, for existential
+quantification, where we use Beck-Chevalley applied to the pullback
+square
+
+~~~{.quiver}
+\[\begin{tikzcd}[ampersand replacement=\&]
+ {\Gamma, x :\tau} \&\& {\Delta, x : \tau} \\
+ \\
+ \Gamma \&\& \Delta
+ \arrow["{\pi_1}"', from=1-1, to=3-1]
+ \arrow["{\rho, x}", from=1-1, to=1-3]
+ \arrow["{\pi_1}", from=1-3, to=3-3]
+ \arrow["\rho"', from=3-1, to=3-3]
+ \arrow["\lrcorner"{anchor=center, pos=0.125}, draw=none, from=1-1, to=3-3]
+\end{tikzcd}\]
+~~~
+
+to conclude that
+
+$$
+\exists_{\pi_1} (\phi [ \rho\pi_1 , \pi_2 ]) = (\exists_{\pi_1} \phi) [ \rho ]
+$$
+
+
+
+```agda
+sub-prop-correct ρ (pred P t) =
+ ap₂ _[_] refl (sub-tm-correct ρ t) ∙ sym (subst-∘ _ _)
+
+sub-prop-correct ρ (s =ᵖ t) =
+ exists ⟨ id , id ⟩ aye [ ⟨ ⟦ sub-tm ρ s ⟧ᵉ , ⟦ sub-tm ρ t ⟧ᵉ ⟩ ]
+ ≡⟨ ap₂ _[_] refl (ap₂ ⟨_,_⟩ (sub-tm-correct ρ s) (sub-tm-correct ρ t)) ⟩
+ exists ⟨ id , id ⟩ aye [ ⟨ ⟦ s ⟧ᵉ ∘ ⟦ ρ ⟧ˢ , ⟦ t ⟧ᵉ ∘ ⟦ ρ ⟧ˢ ⟩ ]
+ ≡⟨ ap₂ _[_] refl (sym (⟨⟩∘ _)) ⟩
+ exists ⟨ id , id ⟩ aye [ ⟨ ⟦ s ⟧ᵉ , ⟦ t ⟧ᵉ ⟩ ∘ ⟦ ρ ⟧ˢ ]
+ ≡⟨ sym (subst-∘ _ _) ⟩
+ (exists ⟨ id , id ⟩ aye [ ⟨ ⟦ s ⟧ᵉ , ⟦ t ⟧ᵉ ⟩ ]) [ ⟦ ρ ⟧ˢ ]
+ ∎
+```
+
+
+
+## A sequent calculus
+
+We now turn to the problem of proving that our interpretation above
+satisfies the rules of regular logic. We will start by defining the
+**entailment relation** $\phi \vDash \psi$ between _syntactic_ formulae
+which is equivalent to $\sem{\phi} \le \sem{\psi}$: $\phi$ entails
+$\psi$ iff $\bP$ thinks that the interpretation of $\phi$ implies that
+of $\psi$.
+
+```agda
+infix 10 _⊨_
+
+opaque
+ _⊨_ : ∀ {Γ} → (φ ψ : Formula Γ) → Type ℓ'
+ _⊨_ φ ψ = ⟦ φ ⟧ᵖ ≤ ⟦ ψ ⟧ᵖ
+
+ from-entails : φ ⊨ ψ → ⟦ φ ⟧ᵖ ≤ ⟦ ψ ⟧ᵖ
+ from-entails p = p
+
+ instance
+ H-Level-⊨ : ∀ {n} → H-Level (φ ⊨ ψ) (suc n)
+ H-Level-⊨ = prop-instance (has-is-thin _ _)
+```
+
+
+
+Having defined entailment, we can now build a deductive calculus on top
+of $\phi \vDash \psi$ by constructing combinators corresponding to each
+proof rule. Most of the proofs are formalised straightforwardly, so we
+will restrict ourselves to pointing out _which_ rule is being
+formalised.
+
+
+
+The **cut rule** corresponds to transitivity of semantic entailment:
+
+$$
+\frac{\phi \vDash \psi \quad \psi \vDash \theta}{\phi \vDash \theta}\text{cut}
+$$
+
+```agda
+ cut : φ ⊨ ψ
+ → ψ ⊨ θ
+ → φ ⊨ θ
+ cut p q = ≤-trans p q
+```
+
+The **substitution rule** follows from our previous correctness lemmas:
+
+$$
+\frac{\phi \vDash \psi}{\phi[\rho] \vDash \psi[\rho]}\text{subst}
+$$
+
+```agda
+ sub-entail
+ : (ρ : Sub Γ Δ) {φ ψ : Formula Δ}
+ → φ ⊨ ψ
+ → sub-prop ρ φ ⊨ sub-prop ρ ψ
+ sub-entail ρ {φ} {ψ} α =
+ ⟦ sub-prop ρ φ ⟧ᵖ P.=⟨ sub-prop-correct ρ φ ⟩
+ ⟦ φ ⟧ᵖ [ ⟦ ρ ⟧ˢ ] P.≤⟨ subst-≤ ⟦ ρ ⟧ˢ α ⟩
+ ⟦ ψ ⟧ᵖ [ ⟦ ρ ⟧ˢ ] P.=˘⟨ sub-prop-correct ρ ψ ⟩
+ ⟦ sub-prop ρ ψ ⟧ᵖ P.≤∎
+```
+
+The three **rules for conjunction** follow at once from the definition
+of fibrewise meets:
+
+
+
+$$
+\frac{\Phi \vDash \phi \quad \Phi \vDash \psi}{\Phi \vDash \phi \land \psi}\land\text{-intro}
+$$
+
+$$
+\frac{\Phi \vDash \phi \land \psi}{\Phi \vDash \phi}\land\text{-eliml}
+$$
+
+$$
+\frac{\Phi \vDash \phi \land \psi}{\Phi \vDash \psi}\land\text{-elimr}
+$$
+
+
+
+```agda
+ `∧-intro
+ : Φ ⊨ φ
+ → Φ ⊨ ψ
+ → Φ ⊨ φ `∧ ψ
+ `∧-intro α β = &-univ α β
+
+ `∧-elimₗ
+ : Φ ⊨ φ `∧ ψ
+ → Φ ⊨ φ
+ `∧-elimₗ α = ≤-trans α (fibrewise-meet.π₁ _ _)
+
+ `∧-elimᵣ
+ : Φ ⊨ φ `∧ ψ
+ → Φ ⊨ ψ
+ `∧-elimᵣ α = ≤-trans α (fibrewise-meet.π₂ _ _)
+```
+
+The **rules for existentials** are slightly fiddly, but they follow from
+the universal properties of co/cartesian lifts and Frobenius
+reciprocity:
+
+
+
+$$
+\frac{\Phi \vDash \exists \phi \quad \Phi \land \phi \vDash_{x} \psi}{\Phi \vDash \psi}
+  \exists\text{-elim}
+$$
+
+$$
+\frac{\Phi \vDash \phi[t/x]}{\Phi \vDash \exists \phi}\exists\text{-intro}
+$$
+
+
+
+```agda
+ `∃-elim
+ : Φ ⊨ `∃ φ
+ → wk Φ `∧ φ ⊨ wk ψ
+ → Φ ⊨ ψ
+
+ `∃-intro
+ : ∀ {Γ} {Φ ψ} {t : Tm Γ τ}
+ → Φ ⊨ sub-prop (t , stop) ψ
+ → Φ ⊨ `∃ ψ
+```
+
+
+The proofs here have a bit more "unfortunately, proof
+assistants" than the others, so we'll keep them in ``{.html}
+too.
+
+```agda
+ `∃-elim {Φ = Φ} {φ = φ} {ψ = ψ} Φ⊢exists φ⊢ψ =
+ ⟦ Φ ⟧ᵖ P.≤⟨ &-univ Φ⊢exists P.≤-refl ⟩
+ ⟦ `∃ φ ⟧ᵖ & ⟦ Φ ⟧ᵖ P.=⟨ frobenius π₁ ⟩
+ exists π₁ (⟦ φ ⟧ᵖ & ⟦ Φ ⟧ᵖ [ π₁ ]) P.=⟨ ap (exists π₁) &-comm ⟩
+ exists π₁ (⟦ Φ ⟧ᵖ [ π₁ ] & ⟦ φ ⟧ᵖ) P.≤⟨ φ⊢ψ' ⟩
+ ⟦ ψ ⟧ᵖ P.≤∎
+ where
+ φ⊢ψ' : exists π₁ (⟦ Φ ⟧ᵖ [ π₁ ] & ⟦ φ ⟧ᵖ) ≤ ⟦ ψ ⟧ᵖ
+ φ⊢ψ' = ≤-exists π₁
+ (transport (ap₂ _≤_
+ (ap₂ _&_ (sub-prop-correct (drop stop) Φ ∙ ap₂ _[_] refl (idl _)) refl)
+ (sub-prop-correct (drop stop) ψ ∙ ap₂ _[_] refl (idl _)))
+ φ⊢ψ)
+
+ `∃-intro {Γ = Γ} {Φ = Φ} {ψ = ψ} {t = t} α = hom[ cancell π₁∘⟨⟩ ] $
+ cocartesian.has-lift.lifting π₁ ⟦ ψ ⟧ᵖ
+ ℙ.∘' cartesian.has-lift.lifting ⟨ id , ⟦ t ⟧ᵉ ⟩ ⟦ ψ ⟧ᵖ
+ ℙ.∘' p
+ where
+ p : ⟦ Φ ⟧ᵖ ≤ (⟦ ψ ⟧ᵖ [ ⟨ id , ⟦ t ⟧ᵉ ⟩ ])
+ p =
+ ⟦ Φ ⟧ᵖ P.≤⟨ α ⟩
+ ⟦ sub-prop (t , stop) ψ ⟧ᵖ P.=⟨ sub-prop-correct (t , stop) ψ ⟩
+ (⟦ ψ ⟧ᵖ [ ⟨ id , ⟦ t ⟧ᵉ ⟩ ]) P.≤∎
+```
+
+
+
+
+
+### Local rewriting
+
+Since we're working with a very strict *pre*syntax of formulas, there
+are many distinct syntactic formulae with identical semantics. This
+strictness affords us good computational properties at the cost of
+_mathematical_ properties: our syntax isn't initial in any sense.
+Moreover, since the details of the entailment relation are kept `opaque`
+to encourage use of the combinators above, it's not immediate to the
+outside world that it respects semantic equality.
+
+We'll remedy this in the following way: we can define an inductive
+relation $t \approx s$ which includes semantic equality but also
+includes combinators for "zooming in" on part of a formula, ignoring the
+parts that don't change, so that we can deploy semantic equality at
+precisely the place where it matters. Then, we provide combinators
+witnessing that the entailment $\phi \vDash \psi$ respects $\approx$.
+
+```agda
+data _≈ᵗ_ : Tm Γ τ → Tm Γ τ → Typeω where
+ sem : ⟦ t ⟧ᵉ ≡ ⟦ s ⟧ᵉ → t ≈ᵗ s
+ _,_ : t ≈ᵗ t₁ → s ≈ᵗ s₁ → (t , s) ≈ᵗ (t₁ , s₁)
+ `π₁ : t ≈ᵗ t₁ → `π₁ t ≈ᵗ `π₁ t₁
+ `π₂ : t ≈ᵗ t₁ → `π₂ t ≈ᵗ `π₂ t₁
+ fun : ∀ {h : Hom ⟦ τ ⟧ᵗ ⟦ σ ⟧ᵗ} {t t₁ : Tm Γ τ}
+ → t ≈ᵗ t₁
+ → fun {τ = τ} {σ} h t ≈ᵗ fun h t₁
+
+data _≈ᵖ_ : Formula Γ → Formula Γ → Typeω where
+ sem : ⟦ φ ⟧ᵖ ≡ ⟦ ψ ⟧ᵖ → φ ≈ᵖ ψ
+ _`∧_ : φ ≈ᵖ φ' → ψ ≈ᵖ ψ' → (φ `∧ ψ) ≈ᵖ (φ' `∧ ψ')
+ `∃ : φ ≈ᵖ ψ → `∃ φ ≈ᵖ `∃ ψ
+ pred : ∀ {p} → t ≈ᵗ t₁ → pred p t ≈ᵖ pred p t₁
+ _=ᵖ_ : ∀ {Γ τ} {t t₁ s s₁ : Tm Γ τ}
+ → t ≈ᵗ t₁ → s ≈ᵗ s₁ → (t =ᵖ s) ≈ᵖ (t₁ =ᵖ s₁)
+```
+
+The data types above are simply presentations of semantic equality: $t
+\approx s$ iff. $\sem{t} = \sem{s}$. However, $t \approx s$ once again
+contains redundancy which is mathematically irrelevant but handy when
+working in a proof assistant.
+
+```agda
+to-semᵗ : t ≈ᵗ t₁ → ⟦ t ⟧ᵉ ≡ ⟦ t₁ ⟧ᵉ
+to-semᵗ (sem p) = p
+to-semᵗ (p , q) = ap₂ ⟨_,_⟩ (to-semᵗ p) (to-semᵗ q)
+to-semᵗ (`π₁ p) = ap₂ _∘_ refl (to-semᵗ p)
+to-semᵗ (`π₂ p) = ap₂ _∘_ refl (to-semᵗ p)
+to-semᵗ (fun p) = ap₂ _∘_ refl (to-semᵗ p)
+
+to-semᵖ : φ ≈ᵖ ψ → ⟦ φ ⟧ᵖ ≡ ⟦ ψ ⟧ᵖ
+to-semᵖ (sem p) = p
+to-semᵖ (p `∧ q) = ap₂ _&_ (to-semᵖ p) (to-semᵖ q)
+to-semᵖ (`∃ p) = ap (exists π₁) (to-semᵖ p)
+to-semᵖ (pred p) = ap₂ _[_] refl (to-semᵗ p)
+to-semᵖ (_=ᵖ_ {Γ = Γ} {τ = τ} p q) = ap₂ equ (to-semᵗ p) (to-semᵗ q) where
+ equ : ∀ (h h' : Hom ⟦ Γ ⟧ᶜ ⟦ τ ⟧ᵗ) → ℙ.Ob[ ⟦ Γ ⟧ᶜ ]
+ equ h h' = exists ⟨ id , id ⟩ aye [ ⟨ h , h' ⟩ ]
+```
+
+We can then provide combinators for turning $\phi \approx \psi$ into
+$\phi \vDash \psi$, or $\psi \vDash \phi$, as appropriate.
+
+```agda
+opaque
+ unfolding _⊨_
+
+  ≈→entails : φ ≈ᵖ ψ → φ ⊨ ψ
+ ≈→entails p = transport (ap₂ _≤_ refl (to-semᵖ p)) P.≤-refl
+
+  ≈→entails⁻ : φ ≈ᵖ ψ → ψ ⊨ φ
+ ≈→entails⁻ p = transport (ap₂ _≤_ refl (sym (to-semᵖ p))) P.≤-refl
+```
+
+
diff --git a/src/Cat/Regular.lagda.md b/src/Cat/Regular.lagda.md
index 7e35b7582..6bf5ef5f8 100644
--- a/src/Cat/Regular.lagda.md
+++ b/src/Cat/Regular.lagda.md
@@ -17,7 +17,7 @@ import Cat.Reasoning as Cr
module Cat.Regular where
```
-# Regular categories
+# Regular categories {defines="regular-category"}
A **regular category** is a category with [[pullback]]-stable [[image
factorisations]]. To define regular categories, we use the theory of
diff --git a/src/Order/Frame.lagda.md b/src/Order/Frame.lagda.md
index e924bbe31..2e3788bca 100644
--- a/src/Order/Frame.lagda.md
+++ b/src/Order/Frame.lagda.md
@@ -16,7 +16,7 @@ import Cat.Reasoning
module Order.Frame where
```
-# Frames
+# Frames {defines="frame"}