Today I want to talk about something I came up with while working on questions connected to the material presented in our ICFP submission, but which we did not get the opportunity to discuss in the paper. The claim is that glueing terms to the structure of a model gives the reader a good intuition of the way the calculus works, by making explicit what used to be hidden under the hood.

I learned about the glueing technique whilst reading about models of the simply-typed SK combinatorial calculus [1]. Glueing is described there as a tool to circumvent the mismatch between the very restrictive structure of SK terms and the rather liberal (naïve) model: one is able to evaluate SK terms in the model, but it is impossible to extract normal forms from the semantical objects without applying some kind of lambda-to-SK encoding procedure! Glueing syntactical terms to their semantical counterparts is precisely what the procedure needs to be able to reify partial applications of S and K.

Glueing resurfaced later on in my own experimentations when defining a model for weak-head normalization by evaluation [2]. Indeed, the need to be able to throw away unnecessary computations (e.g. the argument part of a stuck application should not be reduced) is easily met by enriching the model with syntactical artefacts: source terms corresponding to the semantical objects. A pleasant side-effect of the added structure is the simplification of the formulation and proofs of correctness properties in Type Theory.

# The setting

But in this blog post I would like to argue that there is more to it than merely solving these kinds of problems: glueing can be used to obtain a well-structured model highlighting some nice properties of the calculus. The setting for this example is the definition of a normalization function for the simply-typed lambda calculus, a function which could perfectly well be defined without any glueing. Types are either the base, uninterpreted, type or arrow types:

σ, τ : ty ∷= ♭ | σ `→ τ
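
In Agda, this grammar amounts to a small datatype; a minimal sketch (the constructor names are the ones appearing in the snippets below):

    data ty : Set where
      ♭    : ty
      _`→_ : (σ τ : ty) → ty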

Our well-typed terms use de Bruijn indices and are rather boring, so we will have a look at the definition of normal forms instead. By definition, only stuck expressions (a variable applied to a spine of arguments in normal form) at the base type can be regarded as normal forms; the equational theory will accordingly include eta rules.

    data _⊢ne_ (Γ : Con ty) : ty → Set where
      `v   : (pr : σ ∈ Γ) → Γ ⊢ne σ
      _`$_ : (f : Γ ⊢ne σ `→ τ) (x : Γ ⊢nf σ) → Γ ⊢ne τ

    data _⊢nf_ (Γ : Con ty) : ty → Set where
      `↑_ : (t : Γ ⊢ne ♭) → Γ ⊢nf ♭
      `λ_ : (b : Γ ∙ σ ⊢nf τ) → Γ ⊢nf σ `→ τ
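
For reference, the well-typed terms themselves can be assumed to be the usual intrinsically-typed syntax; a sketch, with constructor names mirroring the ones chosen for normal forms:

    data _⊢_ (Γ : Con ty) : ty → Set where
      `v   : (pr : σ ∈ Γ) → Γ ⊢ σ
      _`$_ : (f : Γ ⊢ σ `→ τ) (x : Γ ⊢ σ) → Γ ⊢ τ
      `λ_  : (b : Γ ∙ σ ⊢ τ) → Γ ⊢ σ `→ τ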

All these notions obviously come with the expected weakening operations.
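
For the record, here are plausible signatures for these weakening operations (wk-ne is the only one appearing explicitly in the snippets below):

    wk-ne : ∀ {Γ Δ σ} (inc : Γ ⊆ Δ) (t : Γ ⊢ne σ) → Δ ⊢ne σ
    wk-nf : ∀ {Γ Δ σ} (inc : Γ ⊆ Δ) (t : Γ ⊢nf σ) → Δ ⊢nf σ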

# Model definition - where the glueing happens

The model is defined in terms of reduction-free elements: the interpretation of a term is either a neutral form, which will just grow when being eliminated, or a normal form together with an element of the acting model explaining what its behaviour is.

    _⊩_ : ∀ (Γ : Con ty) (σ : ty) → Set
    Γ ⊩ σ = Γ ⊢ne σ ⊎ Γ ⊢nf σ × Γ ⊩⋆ σ

The acting model is the part of the model doing all the heavy lifting when computational reductions are required. It has a Kripke flavour in the sense that it refers to the full model in any possible future context extension. Quite unsurprisingly, a term of base type has no computational content (it can only be a neutral) and elements of function type are interpreted as functions on semantical objects.

    _⊩⋆_ : ∀ (Γ : Con ty) (σ : ty) → Set
    Γ ⊩⋆ ♭       = ⊤
    Γ ⊩⋆ σ `→ τ = ∀ {Δ} (inc : Γ ⊆ Δ) → Δ ⊩ σ → Δ ⊩ τ

These definitions are easily extended to contexts by recursion and give rise to semantical environments Δ ⊩ε Γ. A natural notion of weakening can be made formal.
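
A sketch of what this extension and the corresponding weakening could look like, assuming ε denotes the empty context (the names _⊩ε_ and wk-⊩ε are the ones used in the snippets below):

    _⊩ε_ : ∀ (Δ Γ : Con ty) → Set
    Δ ⊩ε ε       = ⊤
    Δ ⊩ε (Γ ∙ σ) = Δ ⊩ε Γ × Δ ⊩ σ

    wk-⊩ε : ∀ Γ {Δ Θ} (inc : Δ ⊆ Θ) (ρ : Δ ⊩ε Γ) → Θ ⊩ε Γ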

# The (not so) trivial quoting function

Because of the way the model is defined, there is little mystery in the fact that proving that each term has an image in the model amounts to proving the existence of a procedure turning terms into their cut-free equivalents.

Even if the definitions of quote and unquote are straightforward, they are rather unusual: unlike their more traditional analogues, they confine the uses of eta-expansion to reification time. This is to be contrasted with the motto of eta-expanding both at the syntactical and the semantical level usually associated with the (un)quote functions. Additionally, this syntactic work is performed by a standalone function eta[_]_ [3] which is model agnostic.

    eta[_]_ : ∀ {Γ} σ (T : Γ ⊢ne σ) → Γ ⊢nf σ
    eta[ ♭      ] t = `↑ t
    eta[ σ `→ τ ] t = `λ eta[ τ ] (wk-ne inc t `$ var)
      where inc = step (same _)
            var = eta[ σ ] `v here!
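
Given eta[_]_, plausible definitions of quote (↑[_]_, turning semantical objects into normal forms) and unquote (↓[_]_, embedding neutrals into the model) become one-liners; a sketch consistent with their uses below:

    -- quote: a glued element already carries its normal form; a bare
    -- neutral gets eta-expanded, the only place where expansion happens
    ↑[_]_ : ∀ {Γ} σ (T : Γ ⊩ σ) → Γ ⊢nf σ
    ↑[ σ ] inj₁ t       = eta[ σ ] t
    ↑[ σ ] inj₂ (t , _) = t

    -- unquote: a neutral term is drowned in the model as-is
    ↓[_]_ : ∀ {Γ} σ (t : Γ ⊢ne σ) → Γ ⊩ σ
    ↓[ σ ] t = inj₁ t

With these definitions, the observation recorded in footnote [3] is immediate: ↑[ σ ] ↓[ σ ] t computes to eta[ σ ] t.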

This way of organizing the extraction of normal forms from the model brings us closer than your average normalization by evaluation formalization to a staged reduction process: one which deals with computations (beta, delta, iota) first and then reorganizes the cut-free forms using extra rules (eta, nu [4]).

# Evaluation function

By combining weakening for semantical environments with unquote, we can define a diagonal environment Γ ⊩ε Γ for every Γ. In order to be able to extract normal forms from terms, it is thus enough to define an evaluation function.
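
A sketch of this diagonal environment, assuming once more that ε denotes the empty context: the most recently bound variable is unquoted, whilst the rest of the environment is weakened.

    ⊩ε-refl : ∀ Γ → Γ ⊩ε Γ
    ⊩ε-refl ε       = tt
    ⊩ε-refl (Γ ∙ σ) = wk-⊩ε Γ (step (⊆-refl Γ)) (⊩ε-refl Γ) , ↓[ σ ] `v here!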

    eval : (t : Γ ⊢ σ) (ρ : Δ ⊩ε Γ) → Δ ⊩ σ

eval is defined by induction on the structure of the term t. In the variable case, a simple lookup in the semantical environment produces the well-typed value needed. The application case combines the induction hypotheses using _$$_, a function defined by case analysis on its first argument: if the function is stuck, the application also is, whereas if it is live, the acting model will compute further using the argument.

    _$$_ : (F : Γ ⊩ σ `→ τ) (X : Γ ⊩ σ) → Γ ⊩ τ
    inj₁ f       $$ X = inj₁ (f `$ ↑[ _ ] X)
    inj₂ (f , F) $$ X = F (⊆-refl _) X
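
Spelled out, the first two clauses of eval could then read as follows (a sketch; lookup-⊩ε is a hypothetical name for the obvious lookup function on semantical environments):

    eval (`v pr)  ρ = lookup-⊩ε pr ρ
    eval (f `$ x) ρ = eval f ρ $$ eval x ρ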

The lambda abstraction case is defined in two steps: first, the object B in the acting model is generated by applying the induction hypothesis for the body of the lambda in an extended (and weakened) environment. Then the normal form is generated by quoting the object obtained when applying B to the variable bound by the head lambda.

    eval (`λ t) ρ = inj₂ (`λ ↑B , B)
      where B : _ ⊩⋆ _ `→ _
            B  = λ inc X → eval t (wk-⊩ε _ inc ρ , X)
            ↑B = ↑[ _ ] B (step (⊆-refl _)) (↓[ _ ] `v here!)

Now we can just tie the knot and combine the diagonal environment with the evaluation function and quote in order to obtain a normalization procedure.

    norm[_]_ : ∀ {Γ} σ (t : Γ ⊢ σ) → Γ ⊢nf σ
    norm[ σ ] t = ↑[ σ ] eval t (⊩ε-refl _)
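
As a quick sanity check: with the sketched quote, unquote, and diagonal environment given above, norm[ σ ] `v here! computes to eta[ σ ] (`v here!), i.e. normalizing a variable eta-expands it, exactly as one would hope.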

# Conclusions

We recalled that glueing can help us give more structure to our models, thus circumventing limitations of the naïve ones. But, more importantly, we have also shown that this technique can produce new insights into already well-understood constructions, e.g. by making it possible to isolate different stages of the normalization process: computation and standardization in our case.

# Footnotes

[1] Intuitionistic model constructions and normalization proofs, Thierry Coquand and Peter Dybjer.

[2] Raw Agda development on patch-tag.

[3] It should be noted that the traditional eta-expansion trick (↑[ σ ] ↓[ σ ] t), consisting of drowning the neutral term in the model and reifying it back immediately after, is precisely equal to applying eta[ σ ] to t.

[4] A set of rules reorganizing stuck recursive functions which we introduce in our ICFP submission (this is starting to sound like an advertisement campaign... ¬_¬).


Last update: 2024-12