Type theory is expressive enough that one can implement certified proof search algorithms which do not merely tell us whether something is provable, but also provide a derivation which is statically known to be correct. However, this comes at a price: the original systems are usually formulated in a hand-wavy way that is quite incompatible with a formal treatment, thus forcing us to find alternative presentations.

In this blog post, I'll try to highlight the main points of the draft Conor and I wrote in December / January. We will see how to reformulate a fragment of Intuitionistic Linear Logic in order to make the implementation of a certified proof search algorithm tractable.

Before starting, I would like to point out that Liam O'Connor wrote a very nice blog post showcasing the usual ℕ-indexed definition of untyped lambda terms as well as introducing an elegant syntax for "linear" [1] proof terms [2]. It shares the same underlying idea: massaging the original definitions can lead to a calculus better suited to a mechanised approach.
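
For reference, the usual ℕ-indexed definition looks something like the following minimal Agda sketch (a standard well-scoped de Bruijn encoding of my own, not code lifted from Liam's post):

```agda
open import Data.Nat using (ℕ; suc)
open import Data.Fin using (Fin)

-- Terms with at most n free variables: the index makes it
-- impossible to mention an out-of-scope de Bruijn variable.
data Term (n : ℕ) : Set where
  var : Fin n → Term n             -- one of the n variables in scope
  app : Term n → Term n → Term n   -- application
  lam : Term (suc n) → Term n      -- λ brings one more variable into scope
```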

# The ILL fragment we are interested in.

We are dealing with the fragment equipped with base types, tensor (⊗) and with (&). The linearity restriction makes the right introduction rule for tensor awkward to use: the prover has to guess how to partition the context in two in a way that will let her discharge both subgoals. In the setting of certified automated proof search, the context is represented as a snoc list of assumptions rather than a multiset, which leads us to write the rule the following way:

```
 Γ ≡ Δ ⋈ E    Δ ⊢ σ    E ⊢ τ
────────────────────────────── ⊗ʳ
         Γ ⊢ σ `⊗ τ
```

Here, Γ ≡ Δ ⋈ E means that Γ is obtained by interleaving Δ and E which, if we read the rule bottom-up, precisely corresponds to finding the right partition when applying ⊗ʳ. But we really do not want to have to generate all partitions and test them one after the other.
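
To make this concrete, here is a minimal Agda sketch of such an interleaving relation (Ty, Con and the constructor names are my own; the paper's definitions differ in the details):

```agda
postulate Atom : Set     -- some set of atomic propositions

data Ty : Set where      -- the fragment: base types, tensor, with
  `κ   : Atom → Ty
  _`⊗_ : Ty → Ty → Ty
  _`&_ : Ty → Ty → Ty

data Con : Set where     -- contexts as snoc lists of assumptions
  ε   : Con
  _∙_ : Con → Ty → Con

-- Γ ≡ Δ ⋈ E : Γ is an interleaving of Δ and E. Read bottom-up,
-- a proof of this relation is precisely a partition of Γ in two.
data _≡_⋈_ : Con → Con → Con → Set where
  done : ε ≡ ε ⋈ ε
  onˡ  : ∀ {Γ Δ E σ} → Γ ≡ Δ ⋈ E → (Γ ∙ σ) ≡ (Δ ∙ σ) ⋈ E
  onʳ  : ∀ {Γ Δ E σ} → Γ ≡ Δ ⋈ E → (Γ ∙ σ) ≡ Δ ⋈ (E ∙ σ)
```

This leads us to our first maxim: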

# If it's too complicated, solve a more general problem!

It turns out that we can craft a calculus more general than linear logic which does not force the logician to guess partitions all the time. I must confess that I have lied by omission up to this point: this whole enterprise was not started by the mere will to formalise proof search for linear logic. It was spawned by Conor coming into the office and presenting ideas about type theories with world annotations. One of his ideas was that typing a term might impact the context (e.g. by changing the world an assumption lives in), which automatically implies that one needs both an "input" and an "output" context in the typing relation. We end up with something like this:

Γ ⊢ t : T ⊣ Δ

I rapidly had the feeling that a similar formulation could be very useful in certifying proof search for linear logic and started experimenting with the idea. Instead of guessing how to partition the context, we could think of the calculus as describing the idea that we start with a pool of resources available, prove a formula and end up with the leftovers not used in that proof. Proof search becomes resource consumption.

The tensor introduction rule would simply use the whole context to try to prove the first subgoal, get back whatever is left over, and use that as the input when attempting to solve the second one. After some investigation, these are the introduction rules I came up with for our new sequent calculus generalising linear logic:

```
   Γ ∋ k ∈ Δ
 ────────────── κ
 Γ ⊢ `κ k ⊣ Δ

 Γ ⊢ σ ⊣ Δ    Δ ⊢ τ ⊣ E
 ──────────────────────── ⊗
      Γ ⊢ σ `⊗ τ ⊣ E

 Γ ⊢ σ ⊣ Δ₁    Γ ⊢ τ ⊣ Δ₂    E ≡ Δ₁ ⊙ Δ₂
 ───────────────────────────────────────── &
             Γ ⊢ σ `& τ ⊣ E
```

The case of atomic propositions is defined in terms of an auxiliary relation _∋_∈_ describing how variable lookup consumes a resource from the context. In the tensor case, we have arbitrarily decided that proof search will start with the first subgoal and thread the contexts as expected. The with case is slightly more subtle: we use the same input to explore both subgoals in parallel but have to make sure that the outputs are synchronised, i.e. that the assumptions used by the two subproofs are the same.
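
To get a feel for how compact this all is, here is a minimal Agda sketch of the whole calculus, reusing the Ty and Con declared above (the constructor names are mine, and I let synchronisation over bare contexts simply demand that the two leftovers coincide; the paper's definitions are more refined):

```agda
-- Γ ∋ k ∈ Δ : looking the atom k up in Γ consumes it, leaving Δ.
data _∋_∈_ : Con → Atom → Con → Set where
  here  : ∀ {Γ k} → (Γ ∙ `κ k) ∋ k ∈ Γ
  there : ∀ {Γ Δ k σ} → Γ ∋ k ∈ Δ → (Γ ∙ σ) ∋ k ∈ (Δ ∙ σ)

-- E ≡ Δ₁ ⊙ Δ₂ : the two leftovers are synchronised; over bare
-- contexts that can only mean that they coincide.
data _≡_⊙_ : Con → Con → Con → Set where
  sync : ∀ {Δ} → Δ ≡ Δ ⊙ Δ

-- The generalised calculus: input ⊢ goal ⊣ leftovers.
data _⊢_⊣_ : Con → Ty → Con → Set where
  κʳ : ∀ {Γ Δ k} → Γ ∋ k ∈ Δ → Γ ⊢ `κ k ⊣ Δ
  ⊗ʳ : ∀ {Γ Δ E σ τ} → Γ ⊢ σ ⊣ Δ → Δ ⊢ τ ⊣ E → Γ ⊢ (σ `⊗ τ) ⊣ E
  &ʳ : ∀ {Γ Δ₁ Δ₂ E σ τ} → Γ ⊢ σ ⊣ Δ₁ → Γ ⊢ τ ⊣ Δ₂ →
       E ≡ Δ₁ ⊙ Δ₂ → Γ ⊢ (σ `& τ) ⊣ E
```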

The intuition describing contexts as multisets of resources, and specifically the output one as leftovers, begets the following soundness theorem showing the connection with good ol' ILL: if we remove whatever was left over from the input context then we can recover a derivation in ILL.

soundness : Γ ⊢ σ ⊣ Δ → Γ ─ Δ ⊢ σ

Defining this difference operator is however quite hard: we have nothing (apart from the derivation Γ ⊢ σ ⊣ Δ itself) telling us that Δ is indeed a sub-multiset of Γ! This new obstacle teaches us our second principle:

# Never ditch the structure you might want to use later on!

It is hard for us to define Γ ─ Δ because we don't know how Γ relates to Δ. But this can be dealt with by introducing the notion of consumption annotation: instead of defining the sequent calculus over contexts of assumptions, we can define it over Usages of a given context. Consumption via the axiom rule turns available resources into consumed ones but, most importantly, it preserves the structure of it all!

A Usage of a context γ is defined in a pointwise manner saying whether each assumption is an available resource or an already used one. A variable lookup will merely turn a previously available assumption into a used one. And checking that two outputs are synchronized now amounts to making sure that the Usages agree in a pointwise manner! The relation corresponding to Usage differences is also defined in a pointwise manner and we can prove that Γ ⊢ σ ⊣ Δ being derivable implies that there is a difference E such that E ⊢ σ.
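
Here is a hedged sketch of what this refactoring might look like (again with names of my own choosing; the paper's actual Usage is the finer-grained one discussed in the next section):

```agda
-- A Usage of γ marks each assumption as still free or already used;
-- crucially, nothing is ever deleted from the context.
data Usage : Con → Set where
  []   : Usage ε
  _∙f_ : ∀ {γ} → Usage γ → (σ : Ty) → Usage (γ ∙ σ)   -- σ still available
  _∙u_ : ∀ {γ} → Usage γ → (σ : Ty) → Usage (γ ∙ σ)   -- σ already consumed

-- Synchronisation becomes pointwise agreement of two Usages of the
-- same γ (this refines the _≡_⊙_ over bare contexts given earlier;
-- in a single file you would keep only one of the two).
data _≡_⊙ᵘ_ : ∀ {γ} → Usage γ → Usage γ → Usage γ → Set where
  nil : [] ≡ [] ⊙ᵘ []
  sf  : ∀ {γ σ} {E Δ₁ Δ₂ : Usage γ} → E ≡ Δ₁ ⊙ᵘ Δ₂ →
        (E ∙f σ) ≡ (Δ₁ ∙f σ) ⊙ᵘ (Δ₂ ∙f σ)
  su  : ∀ {γ σ} {E Δ₁ Δ₂ : Usage γ} → E ≡ Δ₁ ⊙ᵘ Δ₂ →
        (E ∙u σ) ≡ (Δ₁ ∙u σ) ⊙ᵘ (Δ₂ ∙u σ)

-- Usage difference, also pointwise: an assumption lands in the plain
-- context E exactly when the proof flips it from free to used.
data _─_↝_ : ∀ {γ} → Usage γ → Usage γ → Con → Set where
  base  : [] ─ [] ↝ ε
  spent : ∀ {γ σ} {Γ Δ : Usage γ} {E} → Γ ─ Δ ↝ E →
          (Γ ∙f σ) ─ (Δ ∙u σ) ↝ (E ∙ σ)
  keepf : ∀ {γ σ} {Γ Δ : Usage γ} {E} → Γ ─ Δ ↝ E →
          (Γ ∙f σ) ─ (Δ ∙f σ) ↝ E
  keepu : ∀ {γ σ} {Γ Δ : Usage γ} {E} → Γ ─ Δ ↝ E →
          (Γ ∙u σ) ─ (Δ ∙u σ) ↝ E

-- The calculus is then re-indexed over Usages of a fixed γ, with
-- variable lookup flipping a single ∙f to ∙u. The soundness result
-- takes the shape (comment only: the ILL judgement is not defined
-- in this sketch):
--   soundness : Γ ⊢ σ ⊣ Δ → Γ ─ Δ ↝ E → E ⊢ σ
```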

The last unresolved problem is the way we are going to deal with left rules. The left rule for with is a direct loss of information: we move from having A & B as an assumption to having either one of its subformulas. If the goal we are trying to prove is B & A then applying that left rule too early will lead us to a dead end. The one for tensor is typically needed before using a right introduction rule for tensor itself: the two subformulas of the assumption may be used in different subproofs, e.g. when proving A ⊗ B with A ⊗ B in the context. But it could also be applied earlier on without any dramatic consequences. Hence our last slogan:

# Never do more than you need to!

In an approach reminiscent of focusing [3], we realize that we do not need to have left rules. Indeed, it is possible to formulate the notion of usage in a way that completely frees us from having to deal with them: rather than having a binary notion of usage for assumptions, we can have a more subtle one describing how an assumption may have been partially used so far. For example, we may write [ A ]⊗ B to mean that A is entirely available whilst B is partially used. A careful analysis of a derivation in the more general calculus will allow us to insert left rules wherever needed.
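
As a rough illustration, a datatype of such partial usages could plausibly look as follows (entirely my own approximation: the paper's datatype is richer and pairs covers with a notion of being entirely free):

```agda
-- Cover σ : a record of which parts of the assumption σ have been
-- consumed so far.
data Cover : Ty → Set where
  κ     : ∀ k → Cover (`κ k)                            -- the atom itself is spent
  _⊗_   : ∀ {σ τ} → Cover σ → Cover τ → Cover (σ `⊗ τ)  -- both halves touched
  _⊗[_] : ∀ {σ} → Cover σ → (τ : Ty) → Cover (σ `⊗ τ)   -- τ entirely free
  [_]⊗_ : ∀ (σ : Ty) {τ} → Cover τ → Cover (σ `⊗ τ)     -- σ entirely free
  -- a with-assumption commits to one side as soon as it is touched:
  _&[_] : ∀ {σ} → Cover σ → (τ : Ty) → Cover (σ `& τ)
  [_]&_ : ∀ (σ : Ty) {τ} → Cover τ → Cover (σ `& τ)
```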

# Conclusion

I hope I have managed to sum up clearly the three main take-home messages of our paper. They are not really specific to linear logic itself but are rather good practices when working in a formal system. If you'd like more details about the ILL setup itself, don't hesitate to have a read. We show there that our generalised calculus enjoys a notion of weakening (you can always throw more resources into the input context: they'll just be returned to you untouched in the output one!) and that it is sound and complete with respect to derivability in ILL. We conclude that the fragment is decidable and derive solvers for equations on a commutative monoid, as well as a more specialised one using Nisse's Bag Equivalence via a Proof-Relevant Membership Relation.
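
To illustrate, over the bare-context sketch from earlier, the single-extra-resource version of that weakening statement goes through by a straightforward induction (a hypothetical weaken of my own; the paper's statement lives at the level of Usages):

```agda
-- Sketch: one extra assumption τ threads through any derivation
-- untouched and reappears in the leftovers.
weaken : ∀ {Γ Δ σ τ} → Γ ⊢ σ ⊣ Δ → (Γ ∙ τ) ⊢ σ ⊣ (Δ ∙ τ)
weaken (κʳ v)        = κʳ (there v)                      -- skip past τ
weaken (⊗ʳ p q)      = ⊗ʳ (weaken p) (weaken q)          -- thread it through
weaken (&ʳ p q sync) = &ʳ (weaken p) (weaken q) sync     -- leftovers still agree
```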

# Footnotes

[1] Liam's system is inspired by linear logic extended with explicit contraction and weakening.

[2] I was not aware of his work at the time I wrote the procedure presented here and did not pay much attention to the way I formulated ILL because the interesting part is elsewhere. A system closer to his would provide a nicer interface though.

[3] See: Logic Programming with Focusing Proofs in Linear Logic, Jean-Marc Andreoli (1992).


Last update: December 2024