--- title: "The G-machine In Detail, or How Lazy Evaluation Works" date: January 31, 2020 maths: true --- \long\def\ignore#1{} \ignore{ \begin{code} {-# LANGUAGE RecordWildCards, NamedFieldPuns, CPP #-} #if !defined(Section) #error "You haven't specified a section to load! Re-run with -DSection=1 or -DSection=2" #endif #if defined(Section) && (Section != 1 && Section != 2) #error Section "isn't a valid section to load! Re-run with -DSection=1 or -DSection=2" #endif \end{code} } With Haskell now more popular than ever, a great deal of programmers deal with lazy evaluation in their daily lives. They're aware of the pitfalls of lazy I/O, know not to use foldl, and are masters at introducing bang patterns in the right place. But very few programmers know the magic behind lazy evaluationâ€”graph reduction. This post is an abridged adaptation of Simon Peyton Jones' and David R. Lester's book, _"Implementing Functional Languages: a tutorial."_, itself a refinement of SPJ's previous work, 1987's _"The Implementation of Functional Programming Languages"_. The newer book doesn't cover as much material as the previous: it focuses mostly on the evaluation of functional programs, and indeed that is our focus today as well. For this, it details three abstract machines: The G-machine, the Three Instruction Machine (affectionately called Tim), and a parallel G-machine. In this post we'll take a look first at a stack-based machine for reducing arithmetic expressions. Armed with the knowledge of how typical stack machines work, we'll take a look at the G-machine, and how graph reduction works (and where the name comes from in the first place!) This post is written as [a Literate Haskell source file], with Cpp conditionals to enable/disable each section. 
To compile a specific section, use GHC like this:

```bash
ghc -XCPP -DSection=1 2020-01-09.lhs
```

-----

\ignore{
\begin{code}
{-# LANGUAGE CPP #-}
#if Section == 1
\end{code}
}

\begin{code}
module StackArith where
\end{code}

Section 1: Evaluating Arithmetic with a Stack
=============================================

Stack machines are the base for all of the computation models we're going to
explore today. To get a better feel for how they work, the first model of
computation we're going to describe is stack-based arithmetic, better known
as reverse Polish notation. This machine also forms the basis of the
programming language FORTH.

First, let us define a data type for arithmetic expressions, including the
four basic operators (addition, multiplication, subtraction and division.)

\begin{code}
data AExpr
  = Lit Int
  | Add AExpr AExpr
  | Sub AExpr AExpr
  | Mul AExpr AExpr
  | Div AExpr AExpr
  deriving (Eq, Show, Ord)
\end{code}

This language has an 'obvious' denotation, which can be realised using an
interpreter function, such as `aInterpret`{.haskell} below.

\begin{code}
aInterpret :: AExpr -> Int
aInterpret (Lit n) = n
aInterpret (Add e1 e2) = aInterpret e1 + aInterpret e2
aInterpret (Sub e1 e2) = aInterpret e1 - aInterpret e2
aInterpret (Mul e1 e2) = aInterpret e1 * aInterpret e2
aInterpret (Div e1 e2) = aInterpret e1 `div` aInterpret e2
\end{code}

Alternatively, we can implement the language through its _operational_
behaviour, by compiling it to a series of instructions that, when executed in
an appropriate machine, leave it in a _final state_ from which we can extract
the expression's result.

Our abstract machine for arithmetic will be a _stack_ based machine with only
a handful of instructions. The type of instructions is `AInstr`{.haskell}.

\begin{code}
data AInstr
  = Push Int
  | IAdd | IMul | ISub | IDiv
  deriving (Eq, Show, Ord)
\end{code}

The state of the machine is simply a pair, containing an instruction stream
and a stack of values.
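To make that representation concrete, here is a small standalone sketch of
the state as a Haskell type. The names `AState` and `initial` are introduced
here for illustration only (they are not part of the tutorial's code), and
`AInstr` is repeated so the snippet stands alone:

```haskell
-- The instruction type, repeated so this snippet compiles on its own.
data AInstr
  = Push Int
  | IAdd | IMul | ISub | IDiv
  deriving (Eq, Show, Ord)

-- The machine state: the remaining instruction stream, paired with the
-- value stack (top of the stack at the head of the list).
type AState = ([AInstr], [Int])

-- A program begins with its whole instruction stream and an empty stack.
initial :: [AInstr] -> AState
initial is = (is, [])
```

Keeping the stack as a plain list means "push" and "pop" are just list
construction and pattern matching, which is what the transition rules below
rely on.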
By our compilation scheme, the machine is never in a state where more values
are required on the stack than there are values present; this would not be
the case if we let programmers write instruction streams directly.

We can compile a program into a sequence of instructions recursively.

\begin{code}
aCompile :: AExpr -> [AInstr]
aCompile (Lit i) = [Push i]
aCompile (Add e1 e2) = aCompile e1 ++ aCompile e2 ++ [IAdd]
aCompile (Mul e1 e2) = aCompile e1 ++ aCompile e2 ++ [IMul]
aCompile (Sub e1 e2) = aCompile e1 ++ aCompile e2 ++ [ISub]
aCompile (Div e1 e2) = aCompile e1 ++ aCompile e2 ++ [IDiv]
\end{code}

And we can write a function to represent the state transition rules of the
machine. Note that, since operands are pushed left-to-right, the top of the
stack holds the _right_ operand, so the non-commutative operators must
compute with the operands swapped.

\begin{code}
aEval :: ([AInstr], [Int]) -> ([AInstr], [Int])
aEval (Push i:xs, st) = (xs, i:st)
aEval (IAdd:xs, x:y:st) = (xs, (y + x):st)
aEval (IMul:xs, x:y:st) = (xs, (y * x):st)
aEval (ISub:xs, x:y:st) = (xs, (y - x):st)
aEval (IDiv:xs, x:y:st) = (xs, (y `div` x):st)
\end{code}

A state is said to be _final_ when it has an empty instruction stream and a
single result on the stack. To run a program, we simply repeat
`aEval`{.haskell} until a final state is reached.

\begin{code}
aRun :: [AInstr] -> Int
aRun is = go (is, []) where
  go st | Just i <- final st = i
  go st = go (aEval st)

  final ([], [n]) = Just n
  final _ = Nothing
\end{code}

A very important property linking our compiler, abstract machine and
interpreter together is that of _compiler correctness_. That is:

```haskell
forall x. aRun (aCompile x) == aInterpret x
```

As an example, the arithmetic expression $2 + 3 \times 4$ produces the
following code sequence:

```haskell
[Push 2,Push 3,Push 4,IMul,IAdd]
```

You can interactively follow the execution of this program with the tool
below. Pressing the Step button is equivalent to `aEval`{.haskell}. The stack
is drawn in boxes to the left, and the instruction sequence is presented on
the right, where the `>` marks the currently executing instruction (the
"program counter", if you will).
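As a sanity check on the correctness property, here is a standalone sketch
that repeats a subset of the definitions from this section (so it can be
compiled outside the literate source) and compares `aRun . aCompile` against
`aInterpret` on two sample expressions. The names `example` and `main` are
introduced here for illustration:

```haskell
-- Minimal copies of the section's definitions, restricted to three
-- operators so the sketch stays short.
data AExpr = Lit Int | Add AExpr AExpr | Sub AExpr AExpr | Mul AExpr AExpr
data AInstr = Push Int | IAdd | ISub | IMul deriving (Eq, Show)

aInterpret :: AExpr -> Int
aInterpret (Lit n)     = n
aInterpret (Add e1 e2) = aInterpret e1 + aInterpret e2
aInterpret (Sub e1 e2) = aInterpret e1 - aInterpret e2
aInterpret (Mul e1 e2) = aInterpret e1 * aInterpret e2

aCompile :: AExpr -> [AInstr]
aCompile (Lit i)     = [Push i]
aCompile (Add e1 e2) = aCompile e1 ++ aCompile e2 ++ [IAdd]
aCompile (Sub e1 e2) = aCompile e1 ++ aCompile e2 ++ [ISub]
aCompile (Mul e1 e2) = aCompile e1 ++ aCompile e2 ++ [IMul]

-- A compact runner; the top of the stack holds the right operand, so
-- ISub computes y - x.
aRun :: [AInstr] -> Int
aRun is = go is []
  where
    go []           [n]      = n
    go (Push i : r) st       = go r (i : st)
    go (IAdd   : r) (x:y:st) = go r (y + x : st)
    go (ISub   : r) (x:y:st) = go r (y - x : st)
    go (IMul   : r) (x:y:st) = go r (y * x : st)
    go _            _        = error "stack underflow"

-- The running example from the text: 2 + 3 * 4.
example :: AExpr
example = Add (Lit 2) (Mul (Lit 3) (Lit 4))

main :: IO ()
main = do
  print (aCompile example)                    -- [Push 2,Push 3,Push 4,IMul,IAdd]
  print (aRun (aCompile example))             -- 14
  print (aInterpret example)                  -- 14
  print (aRun (aCompile (Sub (Lit 10) (Lit 4))))  -- 6
```

A subtraction is included deliberately: it is the case that catches an
operand-order mistake in the `ISub` rule, which addition and multiplication
would silently mask.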