Three Tricks to make Termination Obvious

Two weeks ago I spent some time implementing Wadler's "Prettier Printer". The implementation is fairly straightforward except for three functions that Agda's termination checker does not see as terminating. Here are the three tricks that made them go through.

Wadler's paper introduces a Doc datatype abstractly representing documents in terms of raw strings, indentation, newlines, and concatenation, but also alternatives between subdocuments. Semantically, alternatives are used to offer two different layouts of the same underlying content. Various functions manipulate these documents; they may introduce alternatives when it's too early to make a decision, or get rid of them, e.g. once the maximum width of the final document is known.
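Concretely, the algebra looks roughly like this in Haskell; this is my transcription of the definitions from Wadler's paper (the deriving clause is mine), together with the flatten function that collapses a document onto a single line:

```haskell
-- The document algebra from Wadler's paper.
data DOC = NIL             -- empty document
         | DOC :<> DOC     -- concatenation
         | NEST Int DOC    -- increased indentation
         | TEXT String     -- raw string
         | LINE            -- newline
         | DOC :<|> DOC    -- two alternative layouts of the same content
  deriving (Eq, Show)

-- Collapse a document onto one line: newlines become spaces and,
-- of each pair of alternatives, the first (flattened) one is kept.
flatten :: DOC -> DOC
flatten NIL        = NIL
flatten (x :<> y)  = flatten x :<> flatten y
flatten (NEST i x) = NEST i (flatten x)
flatten (TEXT s)   = TEXT s
flatten LINE       = TEXT " "
flatten (x :<|> y) = flatten x
```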

Using Vec rather than List

The first problem shows up with the fill function, which turns a list of documents into a document by trying to fill each line as much as possible. If the list has at most one element, the strategy to adopt is evident. If it is longer, the algorithm calls flatten to collapse documents onto a single line and offers two alternatives: one where they have been made to fit and another where a newline has been inserted.

```haskell
fill :: [Doc] -> Doc
...
fill (x:y:zs) = (flatten x <+> fill (flatten y : zs))
           :<|> (x </> fill (y : zs))
```

The issue here is that flatten y : zs is not a subterm of x:y:zs, which makes the recursive call to fill look illegal. The right notion of smaller here is not "strict subterm" but rather "smaller length". This can be fixed by simply changing the type of the function to ∀ {n} → Vec Doc n → Doc, thus making termination obviously structural: if the length of x:y:zs is suc (suc n) then the length of flatten y : zs is suc n, a strict subterm!

The code is kept the same but the type has an extra invariant built in (which is threaded via implicit arguments) and that makes the termination checker happy. We can recover a top-level function with fill's type by pre-composing our replacement for it with Data.Vec.fromList.
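The same shape of argument can be sketched in Haskell with a GADT-encoded, length-indexed vector. Nat, Vec, and runningSums below are illustrative names of mine, and GHC does not actually check termination; the comments mark what a checker like Agda's gets to see once the length is in the type:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A length-indexed vector: the length is part of the type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  (:::) :: a -> Vec n a -> Vec ('S n) a
infixr 5 :::

-- A toy function with the same recursion shape as fill: the argument
-- of the recursive call, (x + y) ::: zs, is not a subterm of
-- x ::: y ::: zs, but its length 'S n is strictly smaller than
-- 'S ('S n), which is exactly the evidence a termination checker needs.
runningSums :: Vec n Int -> [Int]
runningSums VNil             = []
runningSums (x ::: VNil)     = [x]
runningSums (x ::: y ::: zs) = x : runningSums ((x + y) ::: zs)
```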

CPS-transform algorithms using stacks to store future work

In Wadler's paper, a function best is in charge of picking the most tightly-packed display form of a document a certain width can handle. It calls an auxiliary function be which takes as arguments the width, the number of characters already inserted on the current line and a LIFO of documents (together with their respective indentation levels) that still have to be displayed.

During a typical run of the function, sub-structures get pushed onto the stack and recursive calls are later performed on them. By then, however, we have lost the connection to the super-structure justifying that each call is indeed on a smaller argument and thus valid. Here is part of the code demonstrating the issue:

```haskell
be :: Int -> Int -> [(Int, Doc)] -> Doc
...
be w k ((i, NIL)     : z) = be w k z
be w k ((i, x :<> y) : z) = be w k ((i, x) : (i, y) : z)
...
```

The trick here is to perform the recursive calls when the evidence justifying them is available but to only use their results later on. In other words: replace future arguments with the continuations associated to them. This forces us to slightly change the type of the function: rather than working with a stack of (Int, Doc) and inspecting its head, we work with a focused Int and Doc and a continuation Int -> Doc which, given the number of characters already inserted on the current line, yields the document corresponding to the rest of the computation. The two lines we saw earlier on have now become [1]:

```agda
be : ℕ → ℕ → ℕ → Doc → (ℕ → Doc) → Doc
...
be w k i NIL       ds = ds k
be w k i (d :<> e) ds = be w k i d $ λ k → be w k i e ds
...
```

For a more self-contained example, I have implemented a binary tree traversal using the same trick to assemble a list of all its leaves. The resulting algorithm corresponds to using a difference list to collect all the leaves.
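In Haskell, that traversal can be sketched as follows (Tree, leaves and go are my names, not those of the actual implementation):

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Both recursive calls happen right here, where l and r are visibly
-- subterms of Node l r; the work that used to be pushed on a stack is
-- instead delayed inside the continuation passed to the left call.
leaves :: Tree a -> [a]
leaves t = go t []
  where
    -- go t rest computes t's leaves followed by rest; partially
    -- applied, go t is precisely a difference list of leaves.
    go :: Tree a -> [a] -> [a]
    go (Leaf a)   rest = a : rest
    go (Node l r) rest = go l (go r rest)
```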

Use sized types to make higher-order functions usable

Last but not least, one of the examples of Wadler's library in action is the definition of a pretty printer for a deep embedding of XML. The XML trees are represented by:

```haskell
data XML = Elt String [Att] [XML]
         | Txt String

data Att = Att String String
```

The function showXMLs is used to neatly display the tree. In one of its clauses, it performs recursive calls on the subtrees via a higher-order function showFill, which Agda has issues seeing as terminating:

```haskell
showFill :: (a -> [Doc]) -> [a] -> Doc
showFill f [] = nil
showFill f xs = bracket "" (fill (concat (map f xs))) ""

showXMLs :: XML -> [Doc]
...
showXMLs (Elt n a c) =
  [ text "<" <> showTag n a <> text ">" <>
    showFill showXMLs c <>
    text "</" <> text n <> text ">" ]
...
```

A common trick in this type of situation is to mutually define showXMLs and an inlined version of the partially applied function showFill showXMLs. However this code duplication is rather annoying. It turns out that we can, just like in our first example, save the day by slightly changing the types of the structures and functions at hand. This time we'll add implicit Size information [2] to the definition of XML and showXMLs.

```agda
data XML : {i : Size} → Set where
  Elt : ∀ {i} → String → List Attribute → List (XML {i}) → XML {↑ i}
  Txt : ∀ {i} → String → XML {i}

showXMLs : ∀ {i} → XML {i} → List Doc
...
```

When we don't explicitly specify the Size, it defaults to ∞, which means that by default the indexed type behaves like its non-indexed counterpart. However, we get the full benefit of sized types whenever we do mention the index.

By making explicit reference to the tree's size in the type of showXMLs, we get the termination checker to accept its definition as valid. Indeed, when matching on the constructor Elt, the implicit size argument is ↑ i for some i (cf. Elt's return type) and in the recursive occurrence passed to showFill, it has become i, which is strictly smaller.


What I assumed to be a simple matter of closely following Wadler's paper turned out to be a nice exercise in convincing the termination checker whilst staying true to the original algorithm. In each one of these three examples we have managed to avoid both the sledgehammer that is well-founded recursion and code duplication by using either type-level tricks or a CPS transformation.


[1] Here we are actually using ℕ rather than Int because these values are always positive anyway, so we might as well say so in the type!

[2] For more on sized and dependent types, cf. Andreas Abel's MiniAgda: Integrating Sized and Dependent Types.

Last update: 2016 11 24