Parametrised Modules: (ab)using lambda-lifting

Lately, I have been working quite a bit with an abstract notion of Semantics defined as a set of combinators all packed in a record. Doing so, I have come to enjoy the flexibility provided by parameterised modules to define operations generically over such a Semantics. In this blog post, I want to show how this pattern can be used to tidy up some definitions.

Introduction: Scope and lambda lifting

A module declaration in Agda can be parameterised by a telescope of arguments. These variables are then made available to the user across the whole body of the module, in a manner reminiscent of Coq's Sections. The most bare-bones example I could come up with is an Identity module, parameterised by a Set A, which defines the identity function for A. This definition is accepted without an out-of-scope error being raised because, inside the module, the context is extended with A : Set.

    module Identity (A : Set) where

      identity : A → A
      identity = λ x → x

Once we leave the module, the type of identity as written no longer makes sense because A's scope extended only as far as the module did. The process of taking such a definition and altering it so that it still makes sense outside of Identity is called lambda-lifting. It consists in prepending the telescope of arguments the module was parameterised with [1] to the type and body of each symbol it defines. We can check that Identity.identity's type now includes an extra argument: the type A. Outside the Identity module, it is effectively the polymorphic identity [2] (the highlighted part is the one generated by the lambda-lifting):

    check : Identity.identity ∈ ∀ A → A → A
    check = indeed
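Spelled out by hand, the lifted definition amounts to the following (identity′ is a hypothetical name I am introducing purely for comparison):

```agda
-- Hand lambda-lifting Identity.identity: the module parameter
-- (A : Set) is prepended to both the type and the body.
identity′ : (A : Set) → A → A
identity′ A = λ x → x
```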

A more complex example

In my experience these parameterised modules are particularly helpful when you need to make the content of a record, passed as an argument, available to the type and body of a defined symbol. To keep things simple, I decided to use one of the most basic constructs in the standard library's Algebra: a RawMonoid. If you have such a beast, you can aggregate values using the monoid's binary operation, and people do that in real software.

This module, called Reduce, is parameterised by a RawMonoid [3] and implements aggregate, a function which collapses a list of values down to a single one. Here it is crucial for us to be able to bring the fields of the RawMonoid mon into scope for the whole definition: Carrier, _∙_ and ε are all packed up in that record.

    module Reduce {c ℓ : Level} (mon : RawMonoid c ℓ) where

      open RawMonoid mon

      aggregate : List Carrier → Carrier
      aggregate = foldr _∙_ ε

It would of course be possible to write an equivalent function without using such a module: it all amounts to lambda-lifting the definition by hand. However, no matter which approach we choose (bringing the same names into scope locally or projecting the values out of the record), the definitions become far less readable:

    aggregate′ : {c ℓ : Level} (mon : RawMonoid c ℓ) →
                 let open RawMonoid mon in List Carrier → Carrier
    aggregate′ mon = let open RawMonoid mon in foldr _∙_ ε

    aggregate′′ : {c ℓ : Level} (mon : RawMonoid c ℓ) →
                  List (RawMonoid.Carrier mon) → RawMonoid.Carrier mon
    aggregate′′ mon = foldr (RawMonoid._∙_ mon) (RawMonoid.ε mon)

We can now use aggregate with an extra argument (the RawMonoid in question). Assuming we have ℕ+ and ℕ* (respectively the additive and multiplicative monoids over the natural numbers) and a function [_⋯_] generating a range of natural numbers, we can write some tests about computations involving aggregate:
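These three helpers are assumptions not spelled out above; one possible way to define them is the following sketch (assuming the standard library's applyUpTo and the RawMonoid record fields Carrier, _≈_, _∙_ and ε):

```agda
-- Hypothetical definitions for the helpers assumed in the tests.
ℕ+ : RawMonoid 0ℓ 0ℓ
ℕ+ = record { Carrier = ℕ ; _≈_ = _≡_ ; _∙_ = _+_ ; ε = 0 }

ℕ* : RawMonoid 0ℓ 0ℓ
ℕ* = record { Carrier = ℕ ; _≈_ = _≡_ ; _∙_ = _*_ ; ε = 1 }

-- [ m ⋯ n ] computes the list m, m+1, …, n.
[_⋯_] : ℕ → ℕ → List ℕ
[ m ⋯ n ] = applyUpTo (m +_) (suc n ∸ m)
```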

    test-aggregate : Reduce.aggregate ℕ+ [ 1 ⋯ 5 ] ≡ 15
                   ∧ Reduce.aggregate ℕ* [ 1 ⋯ 5 ] ≡ 120
    test-aggregate = refl , refl

One last thing I like to do in this sort of situation, especially when defining what could be considered a semantics, is to anticipate the lambda-lifting and define an infix operator which uses a double turnstile (⊨) to separate the specific theory used from the (parametrically defined) computation it acts upon. Here this would mean adding the following line inside the module Reduce:

    _⊨reduce_ = aggregate

_⊨reduce_ is supposed to take two arguments but, inside Reduce, aggregate only takes one. This apparent mismatch is resolved when we leave the module and the RawMonoid argument is lambda-lifted. We can now write A ⊨reduce xs to mean that we use the monoid A to give a meaning to the phrase "reduce xs". The same tests as before hold, but I find the type more readable:

    test-reduce : ℕ+ ⊨reduce [ 1 ⋯ 5 ] ≡ 15
                ∧ ℕ* ⊨reduce [ 1 ⋯ 5 ] ≡ 120
    test-reduce = refl , refl


[1] This is only a first approximation. It is naturally possible to take a slightly more subtle approach and to only prepend the variables which appear free in the type and body of the defined symbols, together with all the ones they themselves depend on (and so on, recursively). In other words, to only use the smallest dependency-closed sub-context containing the variables free in the defined symbol's type and body.

[2] Here we write a ∈ A to mean that a has type A. This notion can be defined as a simple indexed family which looks a lot like propositional equality: because of the way its only constructor indeed is defined, if a ∈ A is inhabited then A can only ever be a's type:

    data _∈_ {ℓ : Level} {A : Set ℓ} (a : A) : (B : Set ℓ) → Set where
      indeed : a ∈ A

[3] In the standard library, a Monoid is a RawMonoid together with some laws. Here we do not care about the proofs that _∙_ is associative and that ε is a neutral element for it. It goes without saying that in a general-purpose library it would be more interesting to work with a Monoid and prove aggregate's properties (e.g. that it is a monoid homomorphism).
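As a sketch of what that could look like: inside a module parameterised by a full Monoid (whose setoid equality is _≈_), the homomorphism property could be stated as below. The name aggregate-++ is hypothetical, and only the statement is given:

```agda
-- Hypothetical statement: aggregate is a monoid morphism from the
-- free monoid (List Carrier, _++_, []) to (Carrier, _∙_, ε).
-- A proof would go by induction on xs, using the associativity
-- and identity laws packed in the Monoid.
aggregate-++ : ∀ xs ys → aggregate (xs ++ ys) ≈ (aggregate xs ∙ aggregate ys)
```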

Last update: 2018 07 13