Author: Eiko

Time: 2026-03-31 08:23:37 - 2026-03-31 08:24:21 (UTC)

(This document was generated by GPT to help me find interesting things to learn.)

Advanced Routes for Learning Functional Data Structures

This document is about learning routes, not just a catalogue of named structures.

The point is not merely to know that some data structure exists. The point is to learn a few strong design principles so that, when a new workload appears, you can design a representation that fits it.

A good way to think about a data structure is:

  • workload: what operations matter, and with what frequency?
  • representation: what shape of data can encode the state?
  • invariant: what must always remain true?
  • local repair: when an operation breaks the invariant, how can it be repaired cheaply?
  • cached summary: what information should be stored at nodes to speed up future operations?
  • evaluation strategy: can laziness or scheduling move expensive work away from the critical path?

A functional data structure is often a carefully chosen compromise between these axes.


0. Meta-route: how to read this subject

Before the specific routes, here is the general habit that turns reading into design skill.

For each data structure you meet, ask:

  1. What exact problem is it solving?
  2. What invariant makes the desired operations cheap?
  3. What information is cached?
  4. Which operations are local, and which ones trigger repair?
  5. Is the complexity obtained by shape, by laziness, by annotation, or by some bootstrapping trick?
  6. What would break if I changed the workload?
  7. Could I keep the same principle but change the representation?

That is the habit that turns “I know some structures” into “I can invent one.”


1. Route: Persistence, amortization, and scheduled rebuilding

What magic this gives you

This route teaches you how to build structures that are purely functional and persistent, while still being efficient.

The magic is that expensive work does not need to happen all at once. You can:

  • spread rebuilding work over future operations,
  • use laziness to suspend maintenance,
  • preserve old versions of the structure,
  • reason about efficiency without mutation.

This is the route where one learns that in functional programming, time can be managed structurally.

A queue is the canonical example. A naive purely functional queue using one list for the front and one for the rear needs occasional reversal of the rear list. That reversal looks expensive, but if handled correctly, the average cost per operation is still small.

Principles

1.1 Persistence

A persistent structure preserves old versions after updates.

s0 = empty
s1 = insert 3 s0
s2 = insert 5 s1

All of s0, s1, and s2 are still usable.

The cost model is different from imperative mutation. Instead of overwriting memory, one usually shares unchanged substructure.
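To make the s0/s1/s2 example runnable, here is a concrete version using ordered insertion into a plain list as a stand-in for whatever insert the structure provides (the list is only an illustration; with trees, unchanged subtrees would be shared):

```haskell
-- Ordered insertion into a list, standing in for a structure's insert.
insertOrd :: Int -> [Int] -> [Int]
insertOrd x [] = [x]
insertOrd x (y:ys)
  | x <= y    = x : y : ys
  | otherwise = y : insertOrd x ys

-- All three versions remain usable after the updates.
s0, s1, s2 :: [Int]
s0 = []
s1 = insertOrd 3 s0
s2 = insertOrd 5 s1

main :: IO ()
main = print (s0, s1, s2)  -- ([],[3],[3,5])
```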

1.2 Amortization in the functional setting

In an imperative structure, one can pay a rare large cost and average it across operations. In the persistent setting, this is more delicate: if the “same expensive debt” can be forced many times through sharing, a naive amortized argument can fail.

So one must be careful:

  • either prove that the expensive work is not duplicated,
  • or use a disciplined lazy schedule,
  • or deamortize so that no individual operation becomes too expensive.

1.3 Scheduled rebuilding

Instead of rebuilding everything at once, perform a little maintenance each time.

This is like rotating responsibilities among future operations.

Example: two-list queue

A standard queue representation is:

data Queue a = Q [a] [a]
-- front, reversed rear

If the front is empty, reverse the rear.

normalize :: Queue a -> Queue a
normalize (Q [] r) = Q (reverse r) []
normalize q        = q

snoc :: a -> Queue a -> Queue a
snoc x (Q f r) = normalize (Q f (x:r))

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Q [] [])     = Nothing
uncons (Q (x:xs) r)  = Just (x, normalize (Q xs r))
uncons (Q [] r)      = uncons (normalize (Q [] r))

The core idea is not the exact code. The core idea is:

  • represent the queue by two simpler pieces,
  • allow temporary imbalance,
  • repair only when a threshold condition is met.

That pattern appears everywhere.
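As a sanity check, the queue code above assembles into a self-contained program (restated here so the block runs on its own); draining the queue observes FIFO order:

```haskell
data Queue a = Q [a] [a]  -- front, reversed rear

normalize :: Queue a -> Queue a
normalize (Q [] r) = Q (reverse r) []
normalize q        = q

snoc :: a -> Queue a -> Queue a
snoc x (Q f r) = normalize (Q f (x:r))

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Q [] [])    = Nothing
uncons (Q (x:xs) r) = Just (x, normalize (Q xs r))
uncons (Q [] r)     = uncons (normalize (Q [] r))

empty :: Queue a
empty = Q [] []

-- Drain a queue to a list, observing FIFO order.
toList :: Queue a -> [a]
toList q = case uncons q of
  Nothing      -> []
  Just (x, q') -> x : toList q'

main :: IO ()
main = print (toList (foldl (flip snoc) empty [1 .. 5 :: Int]))
-- [1,2,3,4,5]
```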

What to extract for design skill

When reading this route, do not just learn queues and deques. Learn these moves:

  • represent a structure as a cheap front plus a deferred rear,
  • identify when a global repair is necessary,
  • decide whether the repair should be immediate, amortized, or incrementally scheduled,
  • check whether persistence invalidates a naive amortized proof.

If you internalize this, you will start to see many structures as “a representation plus a maintenance schedule.”

Reading guide for this route

Core readings

  1. Chris Okasaki, Purely Functional Data Structures

    • Chapter 3: Amortization and Persistence via Lazy Evaluation
    • Chapter 4: Scheduling
    • Chapter 5: Lazy Rebuilding
  2. Chris Okasaki, “Simple and Efficient Purely Functional Queues and Deques”

  3. Chris Okasaki, “The Role of Lazy Evaluation in Amortized Data Structures”

What to look for

  • Why naive amortized reasoning becomes delicate under persistence.
  • Where the deferred work actually lives.
  • Whether the critical invariant is structural, scheduled, or both.
  • How laziness acts as concrete bookkeeping for future work.

Suggested reading order

  1. Read Okasaki Chapter 3.
  2. Read the queue/deque paper.
  3. Read Chapters 4 and 5.
  4. Read the paper on lazy evaluation and amortized data structures.
  5. Revisit your own queue implementation afterward.

Explore guide

  • Implement a naive two-list queue.
  • Write an amortized argument for it.
  • Identify exactly why persistence complicates that argument.
  • Implement a scheduled queue.
  • Compare where the maintenance work moved.

2. Route: Numeral systems as data structures

This is one of the most beautiful and reusable directions.

It is especially worth extra study because it trains the “invent your own representation” muscle.

What magic this gives you

This route teaches you that the structure of a sequence can come from the structure of a number representation.

This gives several kinds of magic:

  • logarithmic lookup/update without ordinary arrays,
  • constant-time insertion at one end,
  • elegant representations with very small local repair rules,
  • a way to derive structures from algebraic patterns rather than ad hoc balancing.

The key insight is:

a sequence can be represented by a decomposition of its size.

Instead of saying “I have a list” or “I have a tree”, you say:

  • the size is written in binary-like digits,
  • each digit corresponds to a tree of a certain size,
  • adding or removing an element behaves like increment/decrement.

This is deeply reusable.

Principles

2.1 Digits correspond to chunks

Suppose we store the sequence as a list of complete binary trees. Each tree has size a power of two. A digit says whether a tree of that size is present.

For example, if the length is 13, then

  13 = 8 + 4 + 1

so the sequence can be represented by trees of sizes 8, 4, and 1.

This is like binary expansion, but now each digit carries actual data.

2.2 Carry and borrow become structural repair

When you insert an element at the front, you create a size-1 tree. If there is already a size-1 tree, combine them into a size-2 tree. If that collides, combine again. This is just binary carrying.

So insertion becomes like incrementing a number.

2.3 Lookup follows the size decomposition

To find the ith element, inspect the leading digits to determine which tree contains it, then descend into that tree.

Thus indexing becomes logarithmic because the representation decomposes size hierarchically.

2.4 Different numeral systems change the performance profile

Ordinary binary gives one style of structure. Skew binary changes the rules so that some one-end operations become worst-case constant time rather than merely amortized.

This is the real lesson:

performance comes partly from the algebra of the digits.
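To see the digit-algebra point concretely, here is a tiny sketch of skew-binary increment that tracks only chunk weights (the tree sizes of a skew-binary random-access list), not actual trees. The names Skew, inc, and value are invented for this sketch:

```haskell
-- A skew-binary number as the list of its nonzero digit weights
-- (weights of the form 2^k - 1, smallest first). At most the two
-- smallest weights may be equal: that is the single allowed "2" digit.
type Skew = [Int]

-- Increment is O(1): either merge the two smallest equal chunks
-- (the carry), or prepend a new size-1 chunk.
inc :: Skew -> Skew
inc (w1 : w2 : ws) | w1 == w2 = (1 + w1 + w2) : ws
inc ws                        = 1 : ws

value :: Skew -> Int
value = sum

main :: IO ()
main = print (iterate inc [] !! 10)  -- [3,7], since 10 = 3 + 7
```

Each inc touches at most the first two digits, which is why skew-binary cons is worst-case constant time.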

Example: binary random access list

A classic simplified representation looks like this:

data Tree a
  = Leaf a
  | Node Int (Tree a) (Tree a)
-- Int caches size

data Digit a
  = Zero
  | One (Tree a)

type RAList a = [Digit a]

A One t at position k means there is a tree of size 2^k.

We need a way to combine two equal-sized trees.

size :: Tree a -> Int
size (Leaf _)       = 1
size (Node n _ _)   = n

link :: Tree a -> Tree a -> Tree a
link t1 t2 = Node (size t1 + size t2) t1 t2

Insertion at the front works like binary increment.

consTree :: Tree a -> RAList a -> RAList a
consTree t [] = [One t]
consTree t (Zero : ds) = One t : ds
consTree t (One t' : ds) = Zero : consTree (link t t') ds

cons :: a -> RAList a -> RAList a
cons x = consTree (Leaf x)

The structure is almost shouting its meaning:

  • Leaf x is the new low-order digit,
  • collisions are resolved by carrying,
  • the recursive carry builds larger complete trees.

Lookup

A simplified lookup sketch:

lookupTree :: Int -> Tree a -> a
lookupTree 0 (Leaf x) = x
lookupTree i (Node _ l r)
  | i < size l  = lookupTree i l
  | otherwise   = lookupTree (i - size l) r
lookupTree _ _ = error "bad index"

lookupRA :: Int -> RAList a -> a
lookupRA _ [] = error "bad index"
lookupRA i (Zero:ds) = lookupRA i ds
lookupRA i (One t:ds)
  | i < size t  = lookupTree i t
  | otherwise   = lookupRA (i - size t) ds

The exact details vary across presentations, but the principle is stable.
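Put together, the pieces above run as a whole; indexing a freshly built list exercises both the carry logic and the descent:

```haskell
data Tree a = Leaf a | Node Int (Tree a) (Tree a)  -- Int caches size
data Digit a = Zero | One (Tree a)
type RAList a = [Digit a]

size :: Tree a -> Int
size (Leaf _)     = 1
size (Node n _ _) = n

link :: Tree a -> Tree a -> Tree a
link t1 t2 = Node (size t1 + size t2) t1 t2

consTree :: Tree a -> RAList a -> RAList a
consTree t []            = [One t]
consTree t (Zero : ds)   = One t : ds
consTree t (One t' : ds) = Zero : consTree (link t t') ds

cons :: a -> RAList a -> RAList a
cons x = consTree (Leaf x)

lookupTree :: Int -> Tree a -> a
lookupTree 0 (Leaf x) = x
lookupTree i (Node _ l r)
  | i < size l = lookupTree i l
  | otherwise  = lookupTree (i - size l) r
lookupTree _ _ = error "bad index"

lookupRA :: Int -> RAList a -> a
lookupRA _ []        = error "bad index"
lookupRA i (Zero:ds) = lookupRA i ds
lookupRA i (One t:ds)
  | i < size t = lookupTree i t
  | otherwise  = lookupRA (i - size t) ds

-- Build [0..12] by consing 12 first, so index i holds element i.
main :: IO ()
main = print [lookupRA i xs | i <- [0 .. 12]]
  where xs = foldr cons [] [0 .. 12 :: Int]
```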

Why this route is powerful for invention

Because it teaches you to derive a data structure from a representation theorem:

  • choose chunk sizes,
  • choose allowed digits,
  • choose merge rules,
  • choose how lookup descends.

This suggests many design questions:

  • must chunks be powers of two?
  • could I use skew binary or another redundant numeral system?
  • what if trees cache more than just size?
  • what if updates happen near the front only?
  • what if I need concatenation as well as lookup?

Suddenly you are designing, not just memorizing.

A slightly more advanced viewpoint

A random-access list can be seen as a forest of complete trees with digit constraints.

That is already a pattern:

  • the forest gives hierarchical indexing,
  • the digit discipline controls how many trees of each size exist,
  • the carry law controls update cost.

This separates concerns nicely.

It is often useful to say:

the performance is determined by the digit algebra, while the element operations are determined by the tree annotations.

That sentence is worth remembering.

Example exercise ideas

  1. Add updateRA :: Int -> (a -> a) -> RAList a -> RAList a.
  2. Replace Int size caches with a type-indexed size if you want stronger invariants.
  3. Try to generalize the tree annotation from size alone to some summary monoid.
  4. Compare binary vs skew-binary insertion rules.

What to notice while studying

Pay special attention to these questions:

  • Why does a numeral system appear at all?
  • Which operation corresponds to increment, decrement, carry, or borrow?
  • Which invariant is global, and which repairs are local?
  • Which performance guarantees are amortized, and which are worst-case?

That is the core magic of this route.

Reading guide for this route

Core readings

  1. Chris Okasaki, Purely Functional Data Structures

    • Chapter 6: Random-Access Lists
  2. Chris Okasaki, “Purely Functional Random-Access Lists”

  3. Ralf Hinze, “Bootstrapping One-Sided Flexible Arrays”

Secondary readings

  1. Material on skew-binary random-access lists and skew-binary numeration.
  2. Ralf Hinze et al., “Calculating Datastructures” for a more derivational viewpoint.
  3. Older applicative random-access structures such as Eugene Myers’ work.

What to look for

  • Which part of the structure plays the role of a digit.
  • Which operation is really carry or borrow.
  • Which guarantees come from the tree shape and which come from the numeral system.
  • How changing the numeral system changes the operational profile.

Suggested reading order

  1. Read Okasaki Chapter 6.
  2. Implement binary random-access lists.
  3. Read the original random-access-list paper.
  4. Then study skew-binary variants.
  5. Then read Hinze’s one-sided flexible arrays.
  6. Finally read derivational material such as “Calculating Datastructures”.

Explore guide

  • Implement a tiny binary random-access list.
  • Add lookup and update.
  • Write the carry logic in the most explicit style possible.
  • Implement a skew-binary variant and compare the repair rules.
  • Try adding annotations richer than size and note what stays unchanged.

Chapter-level note

This route is especially worth slow study because it trains the habit of deriving a structure from a representation of size rather than from an already-familiar container shape.


3. Route: Measured trees and finger trees

This is the other route especially worth extra study.

It is a very important abstraction because it shows how an algebraic summary can drive search, split, and indexing.

What magic this gives you

This route gives you the ability to turn a generic tree into many specialized data structures by changing the annotation algebra.

The magic is:

  • one representation can support many APIs,
  • cached summaries guide traversal,
  • splitting/searching can be expressed via prefix measures,
  • the same structure can behave like a sequence, priority queue, search structure, interval-like structure, or indexed collection depending on the chosen measure.

This is one of the clearest examples of a data structure being built from an algebra of summaries.

The central principle

A finger tree stores at each node a summary value, often called a measure. The measure lives in a monoid.

So you choose:

class Monoid v => Measured a v where
  measure :: a -> v

Then internal nodes cache the monoidal combination of their subtree summaries.

If the monoid operation is written (<>), then the summary of a concatenation is the combination of the two summaries (with measure lifted from elements to whole sequences):

  measure (xs ++ ys) = measure xs <> measure ys

This means you can search by reading summaries rather than inspecting every element.

Why this is powerful

Suppose your measure is size:

newtype Size = Size Int
  deriving (Eq, Ord, Show)

instance Semigroup Size where
  Size a <> Size b = Size (a + b)

instance Monoid Size where
  mempty = Size 0

Then you can split by position.

Suppose instead your measure is minimum priority:

newtype MinP = MinP (Maybe Int)

Then the same structure can guide access by priority information.

Suppose your measure stores both size and max endpoint. Then you can support interval-like or segment-like queries.

The representation stays mostly the same. The algebra changes.

This is why finger trees are so conceptually rich.

Simplified shape

A simplified finger tree shape looks like:

data Digit a = One a | Two a a | Three a a a | Four a a a a

data Node v a
  = Node2 v a a
  | Node3 v a a a

data FingerTree v a
  = Empty
  | Single a
  | Deep v (Digit a) (FingerTree v (Node v a)) (Digit a)

Ignore the exact constructors at first. What matters is the pattern:

  • a small left digit,
  • a recursive middle tree of nodes,
  • a small right digit,
  • a cached summary for the whole tree.

The digits are the “fingers”: they give quick access near the ends. The recursive middle preserves structure for deeper operations.

The deep lesson

A finger tree is not just “a sequence tree with nice bounds.” It is a method for combining three ideas:

  1. good end access via front/back digits,
  2. balanced-ish recursive structure in the middle,
  3. summary-driven search using monoidal measures.

That combination is what makes it so reusable.

Minimal code sketch: measured sequence by size

Let us sketch the flavor rather than a full correct implementation.

newtype Size = Size { getSize :: Int }
  deriving (Eq, Ord, Show)

instance Semigroup Size where
  Size a <> Size b = Size (a + b)

instance Monoid Size where
  mempty = Size 0

class Monoid v => Measured a v where
  measure :: a -> v

instance Measured a Size where
  measure _ = Size 1

Now the summary of a subtree tells us how many elements are under it. An index-based split can be driven by prefix sizes.

Conceptually, if you want to split at position k, you descend while maintaining the measure of everything to the left. At each step, you ask whether the left summary already crosses k.

Pseudo-Haskell:

splitAtFT :: Int -> FingerTree Size a -> (FingerTree Size a, FingerTree Size a)
splitAtFT k t = split (\(Size n) -> n > k) t

The real implementation is more subtle, but the conceptual content is this:

  • define a monotone predicate on prefix summaries,
  • descend by consulting cached measures,
  • stop where the predicate first flips.

That is an extremely general pattern.
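The smallest structure exhibiting the pattern is not a finger tree but a plain size-annotated binary tree. This sketch (all names local to it) writes indexing as predicate-guided descent on prefix sizes; the predicate must be monotone in the prefix measure, and fromList assumes a nonempty list:

```haskell
-- A toy measured tree: leaves hold elements, nodes cache subtree size.
data T a = L a | N Int (T a) (T a)

sizeT :: T a -> Int
sizeT (L _)     = 1
sizeT (N n _ _) = n

node :: T a -> T a -> T a
node l r = N (sizeT l + sizeT r) l r

fromList :: [a] -> T a   -- nonempty lists only
fromList [x] = L x
fromList xs  = node (fromList l) (fromList r)
  where (l, r) = splitAt (length xs `div` 2) xs

-- Descend to where a monotone predicate on prefix sizes first flips,
-- carrying the measure of everything to the left.
search :: (Int -> Bool) -> T a -> a
search p = go 0
  where
    go _   (L x) = x
    go acc (N _ l r)
      | p (acc + sizeT l) = go acc l
      | otherwise         = go (acc + sizeT l) r

-- Indexing is the special case "prefix size exceeds i".
indexT :: T a -> Int -> a
indexT t i = search (> i) t

main :: IO ()
main = print (map (indexT (fromList "abcde")) [0 .. 4])  -- "abcde"
```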

Example measures

3.1 Size for indexing

Use Size to obtain sequence-like indexing and splitting.

3.2 Priority summary

Suppose elements have priorities. Store at each node the minimum priority in the subtree. Then you can quickly know the global minimum or guide certain searches.

newtype MinP = MinP { getMinP :: Maybe Int }

instance Semigroup MinP where
  MinP Nothing <> y = y
  x <> MinP Nothing = x
  MinP (Just a) <> MinP (Just b) = MinP (Just (min a b))

instance Monoid MinP where
  mempty = MinP Nothing

3.3 Multiple summaries at once

A common trick is to use a product monoid.

data Summary = Summary
  { sSize :: Size
  , sMin  :: MinP
  }

instance Semigroup Summary where
  Summary a1 b1 <> Summary a2 b2 = Summary (a1 <> a2) (b1 <> b2)

instance Monoid Summary where
  mempty = Summary mempty mempty

Now one structure supports multiple kinds of reasoning.

This is another general design lesson:

if one cached field helps one query and another cached field helps another query, they can often be combined by a product monoid.
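A one-pass fold over elements makes the product-monoid point concrete; measureElem below is a hypothetical per-element measure feeding both components at once:

```haskell
newtype Size = Size Int deriving (Eq, Show)
instance Semigroup Size where Size a <> Size b = Size (a + b)
instance Monoid Size where mempty = Size 0

newtype MinP = MinP (Maybe Int) deriving (Eq, Show)
instance Semigroup MinP where
  MinP Nothing  <> y             = y
  x             <> MinP Nothing  = x
  MinP (Just a) <> MinP (Just b) = MinP (Just (min a b))
instance Monoid MinP where mempty = MinP Nothing

-- The product monoid: components combine independently.
data Summary = Summary Size MinP deriving (Eq, Show)
instance Semigroup Summary where
  Summary a1 b1 <> Summary a2 b2 = Summary (a1 <> a2) (b1 <> b2)
instance Monoid Summary where mempty = Summary mempty mempty

-- Hypothetical per-element measure: count one, track the priority.
measureElem :: Int -> Summary
measureElem p = Summary (Size 1) (MinP (Just p))

main :: IO ()
main = print (foldMap measureElem [5, 2, 9])
-- Summary (Size 3) (MinP (Just 2))
```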

Why this route is so useful for inventing your own structures

Because it teaches a generic recipe:

  1. Choose a recursive representation with good local structure.
  2. Choose a monoid of summaries.
  3. Cache summaries at internal nodes.
  4. Express queries as predicate-guided descent on prefix summaries.

That recipe appears far beyond finger trees themselves.

You can use the same idea in:

  • ropes,
  • interval trees,
  • editor buffers,
  • segment trees,
  • search trees with cached statistics,
  • custom trees with application-specific summaries.

A design pattern worth memorizing

When faced with a new problem, ask:

  • what summary of a subtree would let me answer or guide the query?
  • is that summary associative?
  • can I maintain it locally when rebuilding nodes?

If the answer is yes, there is a good chance that a measured tree will work.

Example: editor buffer intuition

Suppose you are designing a text buffer. You might want:

  • fast concatenation,
  • split at cursor,
  • line/column navigation,
  • search by character offset.

Then a measure might store:

  • total character count,
  • total line count,
  • maybe width information for the last line.

Now the cursor movement operations become guided by prefix summaries.

This is an example of turning “editor operations” into “algebra over summaries.”
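A sketch of such a measure for plain String chunks (the names are invented for this sketch). Character and line counts combine independently, so the pair is a monoid and can be cached per chunk:

```haskell
-- Characters and newline count for one chunk of a rope-like buffer.
data TM = TM { tmChars :: Int, tmLines :: Int } deriving (Eq, Show)

instance Semigroup TM where
  TM c1 l1 <> TM c2 l2 = TM (c1 + c2) (l1 + l2)

instance Monoid TM where
  mempty = TM 0 0

measureChunk :: String -> TM
measureChunk s = TM (length s) (length (filter (== '\n') s))

main :: IO ()
main = print (measureChunk "one\ntwo\n" <> measureChunk "three")
-- TM {tmChars = 13, tmLines = 2}
```

Note that "width of the last line" does not combine by simple addition; supporting it needs a slightly fancier combine, which is exactly the kind of question this route trains you to ask.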

What to notice while studying

Focus on these ideas:

  • the representation is generic, the measure is specific,
  • search is driven by prefix summaries,
  • annotations are not decoration; they are the core engine,
  • monoids are used because subtree summaries must combine associatively.

This is perhaps the most algebraically elegant route in the subject.

Reading guide for this route

Core readings

  1. Ralf Hinze and Ross Paterson, “Finger Trees: A Simple General-Purpose Data Structure”

  2. Practical Data.Sequence-style implementations

    • read these after the paper, not before.

Secondary readings

  1. Ralf Hinze, “Finger Trees: The Swiss Army Knife of Data Structures”
  2. Formalizations of finger trees in proof assistants.
  3. More general material on monoidal annotations in recursive structures.

What to look for

  • What the measure is really doing.
  • Why associativity is the reason summaries can be cached compositionally.
  • How prefix summaries guide search and splitting.
  • Which parts of the API are generic in the measure and which are not.
  • Why the digits matter for end efficiency.

Suggested reading order

  1. Read the finger-tree paper once for overall shape.
  2. Reread it focusing only on the measure mechanism.
  3. Implement a stripped-down measured tree using only size.
  4. Add a product measure.
  5. Then inspect practical code such as Data.Sequence.

Explore guide

  • Implement a tiny measured tree with size annotations.
  • Define split via a predicate on prefix summaries.
  • Try measures such as size, minimum priority, (size, lineCount), or (size, accumulatedCost).
  • Property-test the structure against a list model.
  • Ask which operations are truly generic in the measure.

Chapter-level note

This route is especially important because it teaches how to turn an application query into an algebra of cached summaries.


4. Route: Bootstrapping and worst-case guarantees

What magic this gives you

This route teaches you how to obtain stronger guarantees by layering structures inside structures.

The magic is:

  • improving amortized bounds to worst-case bounds,
  • turning one structure into a component inside a more powerful one,
  • separating “logical structure” from “maintenance engine”.

A functional priority queue is a standard place to see this.

Principles

4.1 Bootstrapping

Use a data structure recursively inside itself, or use one data structure to manage the deferred work of another.

4.2 Deamortization

Take an amortized structure and schedule its expensive steps so that no single operation becomes too costly.

4.3 Global invariants with local repairs

Strong worst-case bounds typically require tighter invariants than amortized structures do.

Tiny illustrative sketch

Suppose you have a heap-like structure where merging is easy but rebuilding after repeated inserts is not. One idea is:

  • represent the queue as a small front structure plus a structure containing suspended merges,
  • perform one merge step per operation,
  • ensure that every operation pays a little toward future work.

That sketch is vague as implementation, but very concrete as a design principle.
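One classical way to make the sketch concrete is Okasaki's real-time queue, where the "pay a little per operation" idea is literally a third list, the schedule, that forces one cell of a lazy rotation per operation. A minimal version, relying on Haskell's lazy lists:

```haskell
-- Real-time queue (after Okasaki): front, rear, and a schedule that
-- is a suffix of the front. Each operation forces one schedule cell.
data RTQ a = RTQ [a] [a] [a]

-- Incremental reversal: rotate f r a == f ++ reverse r ++ a,
-- produced one cell at a time. Called only when length r == length f + 1.
rotate :: [a] -> [a] -> [a] -> [a]
rotate []     [y]    a = y : a
rotate (x:xs) (y:ys) a = x : rotate xs ys (y : a)
rotate _      _      _ = error "rotate: invariant broken"

-- Step the schedule; start a new rotation when it runs out.
exec :: [a] -> [a] -> [a] -> RTQ a
exec f r (_:s) = RTQ f r s
exec f r []    = let f' = rotate f r [] in RTQ f' [] f'

emptyQ :: RTQ a
emptyQ = RTQ [] [] []

snocQ :: a -> RTQ a -> RTQ a
snocQ x (RTQ f r s) = exec f (x:r) s

unconsQ :: RTQ a -> Maybe (a, RTQ a)
unconsQ (RTQ []    _ _) = Nothing
unconsQ (RTQ (x:f) r s) = Just (x, exec f r s)

drainQ :: RTQ a -> [a]
drainQ q = maybe [] (\(x, q') -> x : drainQ q') (unconsQ q)

main :: IO ()
main = print (drainQ (foldl (flip snocQ) emptyQ [1 .. 6 :: Int]))
-- [1,2,3,4,5,6]
```

Here the "debt" of Chapter 8's terminology is visible as data: the schedule's length records exactly how much of the pending rotation remains unpaid.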

What to extract for design skill

  • amortized elegance is not the end of the story,
  • one can often explicitly manage debt,
  • recursive self-use is sometimes the correct abstraction, not a trick.

Reading guide for this route

Core readings

  1. Chris Okasaki, Purely Functional Data Structures

    • Chapter 7: Data-Structural Bootstrapping
    • Chapter 8: Implicit Recursive Slowdown
  2. Gerth Stølting Brodal and Chris Okasaki, “Optimal Purely Functional Priority Queues”

What to look for

  • How one structure manages the deferred work of another.
  • How an amortized design is transformed into a worst-case one.
  • Which new invariant appears when stronger bounds are required.
  • Why the recursive self-reference is doing real work rather than ornament.

Suggested reading order

  1. First understand an amortized structure.
  2. Then read Okasaki Chapter 7.
  3. Then read the Brodal–Okasaki paper.
  4. Then read Chapter 8 on implicit recursive slowdown.

Explore guide

  • Take a structure you understand amortizedly.
  • Write down the debt it accumulates.
  • Ask how that debt could be represented as actual data.
  • Try to pay a bounded amount of that debt per operation.
  • Compare a simpler heap against a stronger worst-case design.

5. Route: Relaxed balance and practical persistent sequences

What magic this gives you

This route teaches you how to build structures that behave well in practical workloads, especially for sequences supporting:

  • indexing,
  • updates,
  • concatenation,
  • splitting,
  • slices.

The magic is that the tree need not be perfectly balanced everywhere. Controlled imbalance can buy better practical performance and richer operations.

Principles

5.1 Relaxed invariants

Instead of enforcing exact balance, allow local variation within safe bounds.

5.2 Richer metadata

To make relaxed balance workable, cache more structural information.

5.3 Rebalance only when needed

The tree is repaired when a local threshold is violated, not on every operation.

Example intuition: vectors with concatenation

A persistent vector based on a wide branching tree often gives fast lookup and update, but naive concatenation is poor. Relaxed variants allow subtree sizes to vary within bounds so that concatenation and split become feasible without destroying performance.
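As a toy illustration of the metadata idea (not real RRB code; the names here are invented), a relaxed node can carry a table of cumulative child sizes, and indexing scans that table instead of using pure radix arithmetic:

```haskell
-- A relaxed node stores cumulative sizes of its children, so the
-- children need not all have the same size.
data RNode a
  = RLeaf [a]
  | RNode [Int] [RNode a]   -- cumulative sizes, children

indexR :: RNode a -> Int -> a
indexR (RLeaf xs) i = xs !! i
indexR (RNode sizes kids) i =
  let k    = length (takeWhile (<= i) sizes)   -- first child whose prefix exceeds i
      prev = if k == 0 then 0 else sizes !! (k - 1)
  in indexR (kids !! k) (i - prev)

-- Children of sizes 2, 3, 1: cumulative table [2,5,6].
example :: RNode Char
example = RNode [2, 5, 6] [RLeaf "ab", RLeaf "cde", RLeaf "f"]

main :: IO ()
main = print (map (indexR example) [0 .. 5])  -- "abcdef"
```

Real RRB-trees bound how far child sizes may deviate so this scan stays short, which is exactly the relaxed invariant the papers formalize.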

What to extract for design skill

  • exact balance is often too rigid,
  • controlled local slack can be a feature,
  • metadata can compensate for relaxed shape.

Reading guide for this route

Core readings

  1. Phil Bagwell and Tiark Rompf, “RRB-Trees: Efficient Immutable Vectors”

  2. Nicolas Stucki et al., “RRB Vector: A Practical General Purpose Immutable Sequence”

What to look for

  • Why rigid vector trees make concatenation and split awkward.
  • What “relaxed balance” actually means operationally.
  • How extra metadata compensates for local irregularity.
  • Which repairs are local and what triggers them.

Suggested reading order

  1. Start from the mental model of an ordinary persistent vector.
  2. Read the original RRB-tree report.
  3. Read a practical follow-up such as RRB Vector.
  4. Then inspect implementation code.

Explore guide

  • Sketch a small wide shallow vector tree.
  • Notice why concatenation is awkward.
  • Add local size tables or relaxed branching information.
  • Compare navigation and repair under rigid vs relaxed invariants.

This is an important route if you care about structures that survive real application workloads.


6. Route: Encapsulated mutation behind a functional interface

What magic this gives you

This route teaches you maturity.

The magic is that you can preserve a clean functional API and functional reasoning style even when the implementation uses mutation internally for efficiency.

This is useful when a purely persistent tree formulation is possible but awkward or slow.

Principles

6.1 Separate semantic purity from implementation technique

The user-facing semantics may be immutable and persistent even if the implementation uses arrays, refs, or path compression internally.

6.2 Version discipline matters

Some structures are fully persistent, some only semipersistent. You must understand exactly what past versions are allowed to do.

6.3 Mutation is acceptable when the abstraction boundary is precise

The real question is not “is there mutation?” but “what laws does the interface satisfy?”

Example intuition: union-find

Union-find is easy imperatively because path compression mutates parents. Persistent variants force you to ask:

  • which parts really need persistence?
  • can we permit only some old versions to be used?
  • can controlled mutation be hidden safely?

This route is useful if you want to build efficient real-world structures without becoming dogmatic.
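As a baseline before the readings, here is the most naive persistent union-find: a parent Map, no path compression, no ranks. Every old version remains valid, which is precisely what compression endangers:

```haskell
import qualified Data.Map as M

-- Parent map; absent keys are their own representatives.
type UF = M.Map Int Int

emptyUF :: UF
emptyUF = M.empty

-- Walk parent pointers; no compression, so find may retraverse chains.
findUF :: UF -> Int -> Int
findUF uf x = case M.lookup x uf of
  Just p | p /= x -> findUF uf p
  _               -> x

-- Point x's representative at y's representative.
unionUF :: Int -> Int -> UF -> UF
unionUF x y uf = M.insert (findUF uf x) (findUF uf y) uf

main :: IO ()
main = do
  let u1 = unionUF 1 2 emptyUF
      u2 = unionUF 2 3 u1
  -- New version sees the merge; old versions are untouched.
  print (findUF u2 1, findUF u1 3, findUF emptyUF 1)  -- (3,3,1)
```

The Conchon–Filliâtre structure recovers near-imperative speed while keeping an interface with exactly this persistence behavior, by hiding rerooting mutation behind the abstraction barrier.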

Reading guide for this route

Core readings

  1. Sylvain Conchon and Jean-Christophe Filliâtre, “A Persistent Union-Find Data Structure”

  2. Background material on persistent arrays / rerooting techniques.

What to look for

  • The difference between full persistence, partial persistence, and semipersistence.
  • Which internal mutations are observationally hidden.
  • How rerooting or path compression interacts with version structure.
  • Why the abstraction barrier carries most of the semantic burden.

Suggested reading order

  1. Review imperative union-find first.
  2. Read Conchon–Filliâtre.
  3. Then read background on persistent arrays and rerooting.
  4. Finally design a small immutable API over a mutable core.

Explore guide

  • Implement a tiny ephemeral union-find.
  • State precisely what the persistent version should guarantee.
  • Decide whether you want full persistence or semipersistence.
  • Explore an ST-based internal representation with a pure outward API.
  • Make explicit which old versions remain valid.

7. Route: Proofs, invariants, and type-guided design

What magic this gives you

This route gives you the ability to reason about your structures as mathematical objects rather than piles of code.

The magic is:

  • clearer invariants,
  • fewer bugs in rebalancing logic,
  • more confidence when generalizing,
  • the ability to discover the right representation by stating its laws precisely.

Principles

7.1 Separate representation invariant from API laws

For example:

  • API law: lookup after update returns the new value at that index,
  • representation invariant: subtree size caches are correct and balance bounds hold.

7.2 Use types or proofs to make illegal states impossible when worth it

Sometimes a GADT or indexed type can encode shape invariants. Sometimes this is too heavy. The art is deciding where the proof burden pays off.

7.3 Verification improves design

Trying to prove correctness often reveals the right abstraction and the right decomposition of operations.

Example sketch

A node that caches size can be validated by a smart constructor.

data Tree a
  = Leaf a
  | Node Int (Tree a) (Tree a)

mkNode :: Tree a -> Tree a -> Tree a
mkNode l r = Node (treeSize l + treeSize r) l r

trueSize :: Tree a -> Int
trueSize (Leaf _)     = 1
trueSize (Node _ l r) = trueSize l + trueSize r

treeSize :: Tree a -> Int
treeSize (Leaf _)     = 1
treeSize (Node n _ _) = n

valid :: Tree a -> Bool
valid (Leaf _)       = True
valid (Node n l r)   = n == treeSize l + treeSize r && valid l && valid r

This is a tiny example, but it teaches a habit: node metadata should have a precise contract.

What to extract for design skill

  • write invariants down early,
  • separate smart constructors from raw constructors when useful,
  • use proofs or property tests to expose the true design.
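These habits can be mechanized cheaply. Restating the sized tree above, a throwaway check of valid over a family of mechanically built trees acts as a poor man's property test:

```haskell
data Tree a = Leaf a | Node Int (Tree a) (Tree a)

treeSize :: Tree a -> Int
treeSize (Leaf _)     = 1
treeSize (Node n _ _) = n

-- Smart constructor: the only way nodes should be built.
mkNode :: Tree a -> Tree a -> Tree a
mkNode l r = Node (treeSize l + treeSize r) l r

-- The representation invariant: every cached size is the true size.
valid :: Tree a -> Bool
valid (Leaf _)     = True
valid (Node n l r) = n == trueSize l + trueSize r && valid l && valid r
  where trueSize (Leaf _)       = 1
        trueSize (Node _ l' r') = trueSize l' + trueSize r'

-- Mechanically build balanced trees of many sizes (nonempty input).
fromList :: [a] -> Tree a
fromList [x] = Leaf x
fromList xs  = mkNode (fromList a) (fromList b)
  where (a, b) = splitAt (length xs `div` 2) xs

main :: IO ()
main = print (all (valid . fromList) [[1 .. n] | n <- [1 .. 20 :: Int]])
-- True
```

A library such as QuickCheck would replace the hand-rolled generator, but the shape of the check is the same: generator plus invariant.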

Reading guide for this route

Core readings

  1. Tobias Nipkow (ed.), Functional Data Structures and Algorithms: A Proof Assistant Approach

    • Chapter 14: Priority Queues
    • Chapter 19: Amortized Analysis
    • Chapter 21: Splay Trees
    • Chapter 22: Skew Heaps
    • Chapter 23: Pairing Heaps
  2. Formal developments of specific structures such as finger trees, heaps, or balanced trees.

What to look for

  • The separation between representation invariant and user-visible specification.
  • How correctness lemmas split into metadata soundness, structural preservation, and abstract behavior.
  • When proof pressure clarifies the right API.
  • When type-level encoding helps and when it only adds ceremony.

Suggested reading order

  1. Start with ordinary valid :: T a -> Bool invariants.
  2. Read one verified heap chapter.
  3. Read the chapter on amortized analysis.
  4. Then move to a subtler structure such as pairing heaps or splay trees.
  5. Only after that consider pushing more invariants into types.

Explore guide

  • For each structure you implement, define valid.
  • Property-test operations against a simple list or multiset model.
  • Write one or two proof-style lemmas informally before mechanizing anything.
  • If desired, verify a small heap or random-access-list variant in a proof assistant.

8. Suggested study order

If your goal is to invent data structures when needed, rather than merely consume existing ones, here is a good order:

  1. Persistence + amortization

    • learn how functional efficiency is even possible,
    • study queues, deques, lazy scheduling.
  2. Numeral systems as representation

    • random-access lists,
    • skew-binary variants,
    • learn to derive structures from size decomposition.
  3. Measured trees / finger trees

    • monoidal annotations,
    • search by prefix summaries,
    • the most algebraically reusable route.
  4. Bootstrapping / deamortization

    • stronger bounds,
    • structure-inside-structure design.
  5. Relaxed balance

    • practical persistent vectors and sequence structures,
    • slack with metadata.
  6. Encapsulated mutation

    • when purity at the interface is enough,
    • understand semantic vs implementation purity.
  7. Proof-oriented design

    • make invariants explicit,
    • verify or at least strongly test them.

9. Which two routes are especially worth your time

For your goals, the two most fertile routes are:

9.1 Numeral systems as data structures

Because it teaches representation design from first principles.

It encourages questions like:

  • what are my chunk sizes?
  • what counts as a digit?
  • what operation corresponds to carry?
  • can another numeral system improve the bound I care about?

This route helps you invent structures.

9.2 Measured trees / finger trees

Because it teaches how algebraic summaries can drive an entire API.

It encourages questions like:

  • what summary of a subtree would answer my query?
  • is that summary associative?
  • can I cache it at nodes?
  • can queries be expressed as guided descent on prefix summaries?

This route helps you turn application needs into algebra.
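The four questions above can be answered in about twenty lines for the simplest possible measure, size. This is a heavily simplified finger-tree-style sketch (no fingers, just cached monoidal summaries and guided descent); all names are illustrative:

```haskell
import Data.Monoid (Sum (..))

-- Every node caches the monoidal summary of its subtree.
data T a = Leaf (Sum Int) a | Branch (Sum Int) (T a) (T a)

measure :: T a -> Sum Int
measure (Leaf m _)     = m
measure (Branch m _ _) = m

leaf :: a -> T a
leaf = Leaf (Sum 1)

-- Smart constructor: the cache is always the product of the children's caches.
branch :: T a -> T a -> T a
branch l r = Branch (measure l <> measure r) l r

-- Guided descent: go left or right by comparing the query against the
-- accumulated prefix summary, never looking below the cached annotations.
index :: Int -> T a -> Maybe a
index i t
  | i < 0 || i >= getSum (measure t) = Nothing
  | otherwise                        = Just (go i t)
  where
    go _ (Leaf _ x) = x
    go j (Branch _ l r)
      | j < getSum (measure l) = go j l
      | otherwise              = go (j - getSum (measure l)) r
```

Because only `<>` and a monotone predicate on prefix summaries are used, swapping `Sum Int` for another monoid turns `index` into a different query for free.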

These two routes complement each other beautifully:

  • numeral systems teach you to derive structure from representation of size,
  • measured trees teach you to derive operations from algebra of summaries.

One is about how the shape is encoded. The other is about how information flows through the shape.

Together they are an extremely strong basis for creating custom functional data structures.


10. Practical mini-project ideas

If you want these ideas to become native, build small structures yourself.

Project A: binary random-access list

Implement:

empty    :: RAList a
cons     :: a -> RAList a -> RAList a
headRA   :: RAList a -> Maybe a
tailRA   :: RAList a -> Maybe (RAList a)
lookupRA :: Int -> RAList a -> Maybe a
updateRA :: Int -> a -> RAList a -> Maybe (RAList a)

Then ask:

  • where exactly does carry happen?
  • which proof would establish logarithmic lookup?
  • how would skew-binary change the insertion discipline?
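If a starting point helps, here is one possible skeleton, sketching only the lookup side: complete trees paired with their sizes, smallest first. The rest of the API, and the cons-side carry discipline, are the exercise; names are illustrative:

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- (size of tree, tree); sizes grow along the spine.
type RAList a = [(Int, Tree a)]

-- Logarithmic lookup: skip whole trees along the spine, then binary-search
-- inside the one tree that contains index i.
lookupRA :: Int -> RAList a -> Maybe a
lookupRA _ [] = Nothing
lookupRA i ((sz, t) : rest)
  | i < sz    = Just (lookupTree sz i t)
  | otherwise = lookupRA (i - sz) rest

lookupTree :: Int -> Int -> Tree a -> a
lookupTree _ _ (Leaf x) = x
lookupTree sz i (Node l r)
  | i < half  = lookupTree half i l
  | otherwise = lookupTree half (i - half) r
  where half = sz `div` 2
```

Note how the "which proof would establish logarithmic lookup" question splits in two: the spine has logarithmically many trees (a numeral-system fact), and each tree is complete (a shape invariant).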

Project B: a measured sequence tree

Build a simplified measured tree supporting:

pushL   :: a -> T a -> T a
pushR   :: a -> T a -> T a
splitAt :: Int -> T a -> (T a, T a)
index   :: Int -> T a -> Maybe a

with a size measure.

Then generalize the measure to:

data Summary = Summary
  { totalSize :: Sum Int
  , totalCost :: Sum Int
  }

or some application-specific summary.
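Whatever the fields are, the generalized measure stays usable by the tree only if it is still a monoid. For the Summary record above, the fieldwise instance is enough (a sketch, with deriving added for testing):

```haskell
import Data.Monoid (Sum (..))

data Summary = Summary
  { totalSize :: Sum Int
  , totalCost :: Sum Int
  } deriving (Eq, Show)

-- Fieldwise combination of monoids is again a monoid, which is exactly
-- what node caching and prefix-guided search require.
instance Semigroup Summary where
  Summary s1 c1 <> Summary s2 c2 = Summary (s1 <> s2) (c1 <> c2)

instance Monoid Summary where
  mempty = Summary mempty mempty
```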

Then ask:

  • which operations depended on the measure being size?
  • which ones stayed generic?
  • what predicate on prefix summaries drives splitting?

Project C: editor-buffer-flavored measure

Define a sequence of text chunks whose measure stores:

  • character count,
  • line count,
  • maybe last-line width.

Implement splitting by character offset and by line number.

This project makes the finger-tree-style route feel concrete.
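The per-chunk measure is the natural first piece to write. Character count and line count combine by plain addition, so they form a monoid directly; last-line width does not (it needs a slightly cleverer combine that resets at a newline), which is a good thing to discover by hand. A sketch of the easy part, with illustrative names:

```haskell
-- Measure for a chunk of text: both fields combine associatively,
-- so they can be cached at tree nodes and used for guided splitting.
data TextMeasure = TextMeasure
  { chars    :: Int
  , newlines :: Int
  } deriving (Eq, Show)

instance Semigroup TextMeasure where
  TextMeasure c1 n1 <> TextMeasure c2 n2 = TextMeasure (c1 + c2) (n1 + n2)

instance Monoid TextMeasure where
  mempty = TextMeasure 0 0

measureChunk :: String -> TextMeasure
measureChunk s = TextMeasure (length s) (length (filter (== '\n') s))
```

Splitting by character offset then means descending on the `chars` prefix, and splitting by line number means descending on the `newlines` prefix, with no change to the tree code itself.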


11. Final mental model

A large portion of advanced functional data structure design comes from a handful of moves:

  1. represent size structurally,
  2. cache associative summaries,
  3. allow controlled imbalance,
  4. schedule maintenance work,
  5. layer one structure inside another,
  6. keep invariants explicit.

When you read papers or implementations, try to classify them by these moves.

That makes the field much smaller and much more reusable.

Instead of seeing a zoo of clever objects, you start to see a small family of principles appearing in different disguises.


12. A very short checklist for inventing your own structure

When a new workload appears, ask:

  • What operations matter most?
  • Can the state be decomposed hierarchically?
  • What invariant would make the expensive operation local?
  • What summary should be cached at subtrees?
  • Is that summary associative?
  • Can occasional expensive work be scheduled or incrementalized?
  • Do I need full persistence, or only a clean immutable interface?
  • How will I state and test the invariant?

If you can answer those clearly, you are already doing real data structure design.


13. Compact reference list

A compact list of the main titles integrated into the route sections above:

  • Chris Okasaki, Purely Functional Data Structures.
  • Chris Okasaki, “Purely Functional Random-Access Lists”.
  • Chris Okasaki, “Simple and Efficient Purely Functional Queues and Deques”.
  • Chris Okasaki, “The Role of Lazy Evaluation in Amortized Data Structures”.
  • Ralf Hinze, “Bootstrapping One-Sided Flexible Arrays”.
  • Ralf Hinze and Ross Paterson, “Finger Trees: A Simple General-Purpose Data Structure”.
  • Ralf Hinze, “Finger Trees: The Swiss Army Knife of Data Structures”.
  • Gerth Stølting Brodal and Chris Okasaki, “Optimal Purely Functional Priority Queues”.
  • Phil Bagwell and Tiark Rompf, “RRB-Trees: Efficient Immutable Vectors”.
  • Nicolas Stucki et al., “RRB Vector: A Practical General Purpose Immutable Sequence”.
  • Sylvain Conchon and Jean-Christophe Filliâtre, “A Persistent Union-Find Data Structure”.
  • Tobias Nipkow (ed.), Functional Data Structures and Algorithms: A Proof Assistant Approach.
  • “Calculating Datastructures”.

Note-taking template

For each paper or chapter, a useful note template is:

Paper / chapter:

Workload it targets:

Representation:

Invariant:

Cached summary / metadata:

Local repair step:

Where the complexity comes from:

If I changed the workload, what would I redesign first?

This keeps the literature organized as a family of reusable design moves rather than a pile of isolated clever tricks.