Case Study: How to build your own stack frames. (Mark Hopkins)
Fri, 25 Aug 1995 16:15:20 GMT

          From comp.compilers


Newsgroups: comp.compilers
From: (Mark Hopkins)
Keywords: code, interpreter
Organization: Omnifest
Date: Fri, 25 Aug 1995 16:15:20 GMT

      Whenever possible, one should define the abstract architecture
of the machine at a high level first. After doing so, you then
convert this description rigorously into an implementation. In this
particular case, the implementation will be stack-based, so the stack
operators will be seen to emerge systematically from the corresponding
recursive definitions.

      So in reality, you *don't* define your stack machine at the outset. You
*derive* it. This is the approach that is guaranteed to give you the most
reliable (and bug-free) results: because the logical proof of correctness
and the design process are one and the same!

      The case in point cited below is a revision of an article completely
specifying the development of the architecture of the SECD-2 and SEQCD
machines. The premise is the question: "what if Landin had derived his
SECD machine rigorously instead of coming up with a design out of thin
air?" The answer offered, SECD-2, is able to process lambda expressions
in applicative order (one can also define a "normal order" version of SECD-2,
and this will lead to a combinator reduction architecture I call COM). The
SEQCD machine is an extension that is able to handle an extension of the
Lambda Calculus which comprises the rational infinite lambda terms. In
this extended lambda calculus one can directly represent imperative constructs
such as the goto statement or while loop. Amazingly, even though it is not
specifically designed to do so, the SEQCD machine will process imperative
language constructs exactly like you'd expect an ordinary computer to do,
yet it handles lambda expressions perfectly well. The imperative behavior
is completely emergent and was pretty much an unexpected feature.

The front end to the extended lambda calculus (the Regular Infinite
Lambda Calculus) is basically a functional language with imperative
syntax. As yet it has not been implemented, only specified in
outline. Its name is JOLT.

(1) What If
      One frequently finds abstract machines defined in the literature, but
almost invariably the definitions appear to be pulled out of a hat. This
seems like a rather counter-productive way to do things. Not only do you
have the labor of actually pulling the machine out of the hat, but
then you have to establish a link to some invariant properties and then go
through the process of proving the machine correct. And half the time, it's
not correct.

      One notices, for instance, that in Werner Kluge's "The Organization of
Reduction, Data Flow, and Control Flow Systems" the author goes through all
the trouble of formally defining normal and applicative order reduction and
then just basically throws it all away and outlines a set of architectures
that might as well have just been concocted. The irony is that the entire
latter half of the book is unnecessary. Abstract machine architectures for
normal order reduction and applicative order reduction can be rigorously
derived from the corresponding definitions presented in that book.

      One can see the contrast of the two approaches in the specification of
the SECD and CAML machines. Whereas the latter is actually *derived*
systematically from a definition of beta reduction, the former is not
linked to any such definition. The result is pretty obvious: SECD doesn't
come with any such guarantee of correctness.

      The better way of doing things is not to even think about machines at
all in the beginning, but to think about formal definitions instead. A
formal definition for any kind of non-trivial system will invariably be
recursive. The process of specifying the abstract machine, then, comes
down to nothing more than eliminating the recursion from the definition.
In the process, all three problems mentioned above are solved by the
time-honored method of rendering them irrelevant: (1) the machine design
is not concocted but is derived rigorously, (2) the linkage to a specification
is automatic, and (3) the proof of correctness is the derivation of the
machine itself!

      What if SECD had been designed rigorously? I will also ask the related
question: what if the correspondence between imperative and functional
languages (à la Landin's 1965 approach) had been completed? As it turns out,
both questions have the same answer -- the SEQCD Machine.

      The SEQCD Machine is just a minor extension of the SECD-2 machine, which is
about to be described below. Because of the way it handles its environment,
it turns out to be an entirely trivial process to extend SECD-2 to an
applicative order machine for processing Regular Infinite lambda expressions
in such a way that the result will miraculously behave just like a processor
for an imperative language. This behavior is totally emergent, there's
nothing explicitly imperative in the SEQCD machine.

      So what's described here is the precursor: SECD-2.

(2) The SECD-2 Machine
      You may want to consult the reference: Werner Kluge's "The Organization of
Reduction, Data Flow, and Control Flow Systems". A definition for applicative
order evaluation can be found on p.114 (I've modified the notation slightly).

      For lambda terms, we'll use the abstract syntax:
                            Term -> \Var . Term
                                 -> Term Term
                                 -> Var

Here, "\" is being used to denote the lambda operator because it looks like a
lambda.

      Define the partial function:

                      [T]: the result of reducing T in applicative order

and in the following, let x denote a variable, and T, T' and U arbitrary terms.
                  [x] = x
                  [\x.T] = \x.[T]
                  [(\x.T) U] = { \x.T, [U] }
                  [T U] = { [T], [U] }, T not a lambda term

where { f, x } is an auxiliary procedure defined as follows:

                  { \x.T, U } = [x <- U, T],
                  { T, U } = T U, T not a lambda term

where (x <- U, T) is my notation for denoting the result of substituting U
for all free occurrences of x in the term T. The substitution operator can be
defined recursively as follows:

                            (x <- U, y) = U if y = x
                                        = y otherwise
                            (x <- U, \y.T) = \z.(x <- U, y <- z, T)
                            (x <- U, T T') = (x <- U, T) (x <- U, T')

where z is an otherwise unused variable introduced, as usual, to help prevent
accidental captures of any free variable, y, in the term U by the \y operator.
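The substitution operator above translates almost line for line into code. Here is a minimal Python sketch; the class names, the fresh-variable counter, and the use of frozen dataclasses are illustrative choices of mine, not part of the article.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

_counter = [0]
def fresh():
    # generate an otherwise unused variable z
    _counter[0] += 1
    return "z%d" % _counter[0]

def subst(x, u, t):
    # (x <- U, y) = U if y = x, y otherwise
    if isinstance(t, Var):
        return u if t.name == x else t
    # (x <- U, T T') = (x <- U, T) (x <- U, T')
    if isinstance(t, App):
        return App(subst(x, u, t.fun), subst(x, u, t.arg))
    # (x <- U, \y.T) = \z.(x <- U, y <- z, T): rename the bound
    # variable first, so free occurrences of y in U are not captured
    z = fresh()
    return Lam(z, subst(x, u, subst(t.var, Var(z), t.body)))
```

Note how the lambda case renames before substituting, exactly as the rule prescribes; that is the whole capture-avoidance mechanism.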

      These two sets of rules can be joined into one. Define a substitution
environment as a list of parallel substitutions (i.e. as a function mapping
variables to terms). Define the following:

                                  I ---------- the empty environment (the identity function)
                                  E, x <- U -- the environment altered by binding x to U.

As functions, these environments have the properties:

                                I(x) = x for all variables x.
                                (E,x<-U)(y) = U if y is the variable x,
                                            = E(y) for all other variables y.
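One convenient concrete reading of these properties (an illustrative choice of mine): environments as Python dicts, where I is the empty dict, (E, x <- U) is a dict merge, and lookup falls back to the variable itself, giving I(x) = x.

```python
# I is the empty environment; (E, x <- U) is {**E, x: U}
def lookup(env, x):
    # fall back to the variable itself, so that I(x) = x
    return env.get(x, x)

I  = {}
E1 = {**I, "x": "U"}        # I, x <- U
E2 = {**E1, "y": "V"}       # E1, y <- V
```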

Let (E, T) denote the result of applying the substitutions in the environment
E to all the variables in term T. Then the following apply:

                      (E, x) = E(x)
                      (E, \x.T) = \z.(E, x <- z, T)
                      (E, TU) = (E, T) (E, U)

Noting that z is an otherwise unused variable, this definition follows
directly from the definition of the substitution operator above.

      Combining this with the reduction process [T] defined above, use the
notation [E, T] to denote [(E, T)].

In the following we will restrict environments to those that bind variables
only to terms already reduced to normal form.

Then it follows that:

                [E, x] = [E(x)] = E(x)
                [E, \x.T] = [\z.(E,x<-z,T)]
                          = \z.[(E,x<-z), T]

                [E, (\x.T) U] = [\z.(E,x<-z,T) (E,U)]
                              = { \z.(E,x<-z,T), [E,U] }
                [E, TU] = [(E,T) (E,U)]
                        = { [E,T], [E,U] },  T not a lambda term
                { \x.T, U } = [(I, x <- U), T]
                { T, U } = T U,  T not a lambda term

Note that

              [E, (\x.T) U] = { \z.(E,x<-z,T), [E,U] }
                                          = [ (I, z<-[E,U], E, x<-z, T) ]
                                          = [(E, x<-[E,U]), T]

Thus, we arrive at a combined definition:

                [S0]  [T] = [I, T]
                [S1]  [E, x] = E(x)
                [S2]  [E, \x.T] = \z.[(E,x<-z), T]
                [S3a] [E, (\x.T) U] = [(E, x<-[E,U]), T]
                [S3b] [E, TU] = { [E,T], [E,U] },  T not a lambda term
                [S4a] { \x.T, U } = [(I, x <- U), T]
                [S4b] { T, U } = T U,  T not a lambda term
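The combined definition can be read off directly as a recursive evaluator. A Python sketch, assuming dict-based environments and frozen-dataclass terms (all names here are my own illustrative choices):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

_counter = [0]
def fresh():
    _counter[0] += 1
    return "z%d" % _counter[0]

def reduce_(env, t):
    # [S1] [E, x] = E(x), with I(x) = x as the fallback
    if isinstance(t, Var):
        return env.get(t.name, t)
    # [S2] [E, \x.T] = \z.[(E, x<-z), T]
    if isinstance(t, Lam):
        z = fresh()
        return Lam(z, reduce_({**env, t.var: Var(z)}, t.body))
    # applications: the argument is reduced first, giving [E, U]
    u = reduce_(env, t.arg)
    if isinstance(t.fun, Lam):
        # [S3a] [E, (\x.T) U] = [(E, x<-[E,U]), T]
        return reduce_({**env, t.fun.var: u}, t.fun.body)
    # [S3b] [E, TU] = { [E,T], [E,U] }, T not a lambda term
    f = reduce_(env, t.fun)
    if isinstance(f, Lam):
        return reduce_({f.var: u}, f.body)   # [S4a], environment (I, x<-U)
    return App(f, u)                         # [S4b]

def run(t):
    # [S0] [T] = [I, T]
    return reduce_({}, t)
```

For instance, run applied to (\x.\y.x) a b yields a, since the environment bindings carry the substitution through both beta steps.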

      It's not hard to derive an abstract machine from this. This machine
will be based on the following inductive assertions:

                          s, E, C, D -> ... -> e, E', [E,C], D
                          t, E, C, D -> ... -> f, E, [E,C], D

and will therefore have 4 states. The start state, s, will be an optimized
state that evaluates an expression while letting the environment go. The
end state, e, is the corresponding end state. On the other hand, the start
state, t, will evaluate an expression taking care to preserve the environment
all the way to the corresponding end state, f. Our task is to make use of
the t->f in the right places to avoid the need to save copies of the
environment as much as possible. It turns out that there will only be one
place where it remains necessary to copy the whole environment and it closely
corresponds to the call-return operation of imperative languages. In all
other places we only need to save single variables. One should note that

                                        E, x<-U, x<-E(x) = E, x<-E(x) = E

so that the operation updating x is reversible as long as we retain the
state of the variable x.
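The reversibility identity E, x<-U, x<-E(x) = E can be checked concretely with dict environments (the dict encoding is an illustrative choice of mine):

```python
# E, x<-U, x<-E(x) = E: restoring the retained value undoes the update
env = {"x": "a", "y": "b"}
saved = env["x"]                    # retain the state of x, i.e. E(x)
updated = {**env, "x": "U"}         # E, x <- U
restored = {**updated, "x": saved}  # E, x <- U, x <- E(x)
```

This is exactly what the 2-records on the dump will do: they retain E(x) so that the binding can later be rolled back.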

      Thus, to evaluate an expression, C, we merely need to carry out the process:

                C --> s, I, C, [] -> ... -> e, E', [I,C], [] --> [I,C] = [C].

The dump, D, is tentatively identified as a heterogeneous tagged stack whose
exact format will be specified as the least that is required to make the
inductive proof of the assertions above work. We'll derive it by actually
carrying out the proof first. That may seem odd -- proving the consistency
of something whose creation is generated by the proof itself -- but that's
the perfect way to do things: it's a form of bootstrapping. The tagged
stack will be represented using list notation. Inductive steps will be
indicated by an ellipsis (...). Other steps will be labeled by the rules
that they will come to denote.

[S0] T --R0--> s, I, T, [] ... e, E', [I,T], [] --R4--> [T]

[S1] s, E, x, D --> e, E, E(x), D
          t, E, x, D --> f, E, E(x), D

[S2] s, E, \x.T, D --> s, (E,x<-z), T, 1:z:D
          ... e, E', [E,x<-z,T], 1:z:D --R5a--> e, E', \z.[E,x<-z,T], D

     t, E, \x.T, D --> t, (E,x<-z), T, 2:x:E(x):1:z:D
          ... f, (E,x<-z), [E,x<-z,T], 2:x:E(x):1:z:D
          --R6--> f, E, [E,x<-z,T], 1:z:D --R5b--> f, E, \z.[E,x<-z,T], D

[S3] s, E, TU, D --> t, E, U, 3:T:D ... f, E, [E,U], 3:T:D

          t, E, TU, D --> t, E, U, 4:T:D ... f, E, [E,U], 4:T:D

[S3a] f, E, [E,U], 3:\x.T:D --> s, (E,x<-[E,U]), T, D
          ... e, E', [E,x<-[E,U],T], D

      f, E, [E,U], 4:\x.T:D --> t, (E,x<-[E,U]), T, 2:x:E(x):D
          ... f, (E,x<-[E,U]), [E,x<-[E,U],T], 2:x:E(x):D
          --> f, E, [E,x<-[E,U],T], D

[S3b] f, E, [E,U], 3:T:D --> s, E, T, 5:[E,U]:D    (T not a lambda term)
          ... e, E', [E,T], 5:[E,U]:D
          ... e, E'', {[E,T], [E,U]}, D

      f, E, [E,U], 4:T:D --> t, E, T, 5:[E,U]:D
          ... f, E, [E,T], 5:[E,U]:D
          ... f, E, {[E,T], [E,U]}, D

We have inductively asserted that

                e, E, T', 5:U':D -> ... -> e, E', { T', U' }, D
and f, E, T', 5:U':D -> ... -> f, E, { T', U' }, D

In order to fill out this part of the induction, we need to look at the
respective cases S4a and S4b.

[S4a] e, E, \x.T, 5:U:D --> s, (I,x<-U), T, D ... e, E', [I,x<-U,T], D

            f, E, \x.T, 5:U:D --> s, (I,x<-U), T, 6:E:D
                  ... e, E', [I,x<-U,T], 6:E:D --R11--> f, E, [I,x<-U,T], D

[S4b] e, E, T, 5:U:D --> e, E, TU, D T not a lambda term

            f, E, T, 5:U:D --> f, E, TU, D

      Collecting these rules together, we get the following specification

R0 initialize: T --> s, I, T, []

R1a s, E, x, D --> e, E, E(x), D
R1b t, E, x, D --> f, E, E(x), D

R2a s, E, \x.T, D --> s, (E,x<-z), T, 1:z:D
R2b t, E, \x.T, D --> t, (E,x<-z), T, 2:x:E(x):1:z:D

R3a s, E, T U, D --> t, E, U, 3:T:D
R3b t, E, T U, D --> t, E, U, 4:T:D

R4 e, E, T, [] --> T and halt

R5a e, E, T, 1:z:D --> e, E, \z.T, D
R5b f, E, T, 1:z:D --> f, E, \z.T, D

R6 f, E, T, 2:x:U:D --> f, (E,x<-U), T, D

R7a f, E, U, 3:\x.T:D --> s, (E,x<-U), T, D
R7b f, E, U, 3:T:D --> s, E, T, 5:U:D T not a lambda term

R8a f, E, U, 4:\x.T:D --> t, (E,x<-U), T, 2:x:E(x):D
R8b f, E, U, 4:T:D --> t, E, T, 5:U:D T not a lambda term

R9a e, E, \x.T, 5:U:D --> s, (I,x<-U), T, D
R9b e, E, T, 5:U:D --> e, E, T U, D T not a lambda term

R10a f, E, \x.T, 5:U:D --> s, (I,x<-U), T, 6:E:D
R10b f, E, T, 5:U:D --> f, E, T U, D T not a lambda term

R11 e, E', T, 6:E:D --> f, E, T, D

      From this, we can directly read off the structure of the tagged stack. It
consists of a list of records in the following formats:

                                              1 z
                                              2 x U
                                              3 T
                                              4 T
                                              5 U
                                              6 E

where T is an unevaluated term, U a term in normal form, x a variable, z
a fresh variable, and E an environment.
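The rule set R0 through R11, together with these record formats, translates directly into an interpreter loop. Here is a Python sketch of that loop; the tag encoding, the dict-based environments, and all names are illustrative choices of mine, and the conditional rules R12-R14 are omitted.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

_counter = [0]
def fresh():
    _counter[0] += 1
    return "z%d" % _counter[0]

def run(term):
    state, env, ctrl, dump = "s", {}, term, []            # R0
    while True:
        if state in ("s", "t"):
            if isinstance(ctrl, Var):                     # R1a/R1b
                ctrl = env.get(ctrl.name, ctrl)
                state = "e" if state == "s" else "f"
            elif isinstance(ctrl, Lam):                   # R2a/R2b
                z = fresh()
                dump.append((1, z))
                if state == "t":                          # retain x and E(x)
                    dump.append((2, ctrl.var,
                                 env.get(ctrl.var, Var(ctrl.var))))
                env = {**env, ctrl.var: Var(z)}
                ctrl = ctrl.body
            else:                                         # R3a/R3b
                dump.append((3 if state == "s" else 4, ctrl.fun))
                ctrl, state = ctrl.arg, "t"
        elif state == "e":
            if not dump:
                return ctrl                               # R4: halt
            rec = dump.pop()
            if rec[0] == 1:                               # R5a
                ctrl = Lam(rec[1], ctrl)
            elif rec[0] == 5:
                if isinstance(ctrl, Lam):                 # R9a
                    env, ctrl, state = {ctrl.var: rec[1]}, ctrl.body, "s"
                else:                                     # R9b
                    ctrl = App(ctrl, rec[1])
            else:                                         # R11 (tag 6)
                env, state = rec[1], "f"
        else:                                             # state f
            rec = dump.pop()
            if rec[0] == 1:                               # R5b
                ctrl = Lam(rec[1], ctrl)
            elif rec[0] == 2:                             # R6: restore x
                env = {**env, rec[1]: rec[2]}
            elif rec[0] in (3, 4):
                T = rec[1]
                new_state = "s" if rec[0] == 3 else "t"
                if isinstance(T, Lam):                    # R7a/R8a
                    if rec[0] == 4:                       # R8a retains x, E(x)
                        dump.append((2, T.var,
                                     env.get(T.var, Var(T.var))))
                    env = {**env, T.var: ctrl}
                    ctrl, state = T.body, new_state
                else:                                     # R7b/R8b
                    dump.append((5, ctrl))
                    ctrl, state = T, new_state
            else:                                         # tag 5
                if isinstance(ctrl, Lam):                 # R10a
                    dump.append((6, env))
                    env, ctrl, state = {ctrl.var: rec[1]}, ctrl.body, "s"
                else:                                     # R10b
                    ctrl = App(ctrl, rec[1])
```

Note that the only place a whole environment is copied onto the dump is the 6-record pushed by R10a, matching the call-return observation made above.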

      As an interesting exercise to test the robustness of the approach, let's
add in an extra class of terms and clause in the definition of [T]: the
conditional. We'll use the following C-like syntax:

                            Term -> Term? Term: Term

      To evaluate C? T: F the conditional C will first be reduced to normal form.
If it is zero, then F will be reduced and T discarded, else T will be reduced
and F discarded. Formally, this can be represented by the definition:

                                [C? T: F] = if([C], T, F)
                                if(0, T, F) = [F]
                                if(X, T, F) = [T] otherwise

Combining this with the substitution operation we get the following result:

                      [E, (C? T: F)] = [(E,C)? (E,T): (E,F)]
                                                    = if([E,C], (E, T), (E, F))

Defining IF(X, E, T, F) = if(X, (E, T), (E, F)), we get the following result

            [S5] [E, (C? T: F)] = IF([E, C], E, T, F)
            [S6a] IF(0, E, T, F) = [E, F]
            [S6b] IF(X, E, T, F) = [E, T] otherwise
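Rules [S5], [S6a], and [S6b] can be sketched as a recursive evaluator over a tiny term language with integer literals (the encoding and names are mine):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Num:
    val: int

@dataclass(frozen=True)
class Cond:
    c: object
    t: object
    f: object

def reduce_(env, term):
    if isinstance(term, Num):            # numerals are already in normal form
        return term
    # [S5]  [E, (C? T: F)] = IF([E, C], E, T, F): reduce the condition first
    c = reduce_(env, term.c)
    # [S6a] zero selects F; [S6b] anything else selects T
    return reduce_(env, term.f if c.val == 0 else term.t)
```

The branch not taken is simply discarded without being reduced, just as the definition requires.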

      From this, we may add the following items to the inductive derivation

[S5] s, E, (C? T: F), D --> t, E, C, 7:T:F:D
                                                ... f, E, [E,C], 7:T:F:D

          t, E, (C? T: F), D --> t, E, C, 8:T:F:D
                                                ... f, E, [E,C], 8:T:F:D

[S6a] f, E, 0, 7:T:F:D --> s, E, F, D ... e, E', [E,F], D

            f, E, 0, 8:T:F:D --> t, E, F, D ... f, E, [E,F], D

[S6b] f, E, X, 7:T:F:D --> s, E, T, D ... e, E', [E,T], D (X not 0)

            f, E, X, 8:T:F:D --> t, E, T, D ... f, E, [E,T], D (X not 0)

This leads to the following additions:

R12a s, E, (C? T: F), D --> t, E, C, 7:T:F:D
R12b t, E, (C? T: F), D --> t, E, C, 8:T:F:D

R13a f, E, 0, 7:T:F:D --> s, E, F, D
R13b f, E, X, 7:T:F:D --> s, E, T, D (X not 0)

R14a f, E, 0, 8:T:F:D --> t, E, F, D
R14b f, E, X, 8:T:F:D --> t, E, T, D (X not 0)

      This results in the addition of two more types of stack records, of the
forms:
                                                    7 T F
                                                    8 T F

(3) The SEQCD Machine
      Where does the Q in SEQCD come in? Well, this is the part where we
deal with Regular Infinite lambda expressions. A Regular Infinite
expression (of any kind) will be defined as an infinite expression which
contains a finite number of isomorphically distinct subterms. An example
is the following expression

                  X = 0; Y = 1;
                        X < 3? (
                            X = X + 1; Y = Y * 2;
                                  X < 3? (
                                        X = X + 1; Y = Y * 2;
                                              X < 3? (
                                              ): Y
                                  ): Y
                        ): Y

with the following notations:

                          X = U; T <--> (\X.T) U
                          X + Y <--> + X Y
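The first of these notational conventions amounts to a one-line desugaring. A Python sketch, using tuples as a throwaway term syntax (encoding mine):

```python
# X = U; T  <-->  (\X.T) U
def assign(x, u, t):
    return ("app", ("lam", x, t), u)

# X = 0; Y = 1; body  becomes  (\X.((\Y.body) 1)) 0
prog = assign("X", ("num", 0), assign("Y", ("num", 1), ("var", "body")))
```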

      If we allow subexpressions to be labeled and use "goto Q" to denote an
occurrence of the subexpression labeled by Q, we can write the expression
above more succinctly as:

                            X = 0; Y = 1; goto Q;
                      Q: X < 3? ( X = X + 1; Y = Y * 2; goto Q; ): Y

In two short paragraphs we've established a convention that's literally
pulled an imperative language out of the hat! This, then, will be the
notation we'll use to represent Regular Infinite Lambda expressions.
Let's go one step further and add the following notation:

                          while E S Q <--> x: E? S': Q

where S is an expression with a "hole" in it and S' the result of inserting
a "goto x" in that hole. Thus, the above expression can be represented

                      X = 0; Y = 1;
                      while (X < 3) (
                            X = X + 1; Y = Y * 2;
                      ): Y

where the "hole" is made more explicit as:

                                          X = X + 1; Y = Y * 2; ().

One may also add in the convenient C-like jumps:

                                  break and continue.

      A Regular Infinite lambda expression may therefore be represented as
a set of labeled expressions:

                                          q0: E0
                                          q1: E1
                                          ...
                                          qn: En

where E0 is the entire expression. If such an expression is denoted
Q, then let the subexpression corresponding to qn be denoted Q(qn). The
main subexpression, E0, will be represented by Q(0).

      It's trivial to extend the abstract machine described previously to allow
it to process these kinds of expressions. The machine will have the following
configuration:
                                                          S, E, Q, C, D

where S is the current state (S = s, t, e, or f), E the current environment,
Q the regular infinite expression undergoing evaluation, C the current
subexpression, and D the dump.

      The initialization rule is modified to the following:

R0 initialize: Q --> s, I, Q, Q(0), []

We'll add in the following rule for the goto:

R15a s, E, Q, goto x, D --> s, E, Q, Q(x), D
R15b t, E, Q, goto x, D --> t, E, Q, Q(x), D

Every other rule is amended by adding in the Q component. That's all that's
needed.

      Let's see how this abstract machine operates on the expression in the
example above. Written explicitly in lambda notation it takes on the form:

                    Q = (\x. (\y.goto q1) 1) 0
                            q1: (< x 3)? (\x. (\y.goto q1) (* y 2)) (+ x 1): y

      Rigorously applying the rules defining the abstract machine we get:

R0   Q --> s, I, Q, (\x. (\y.goto q1) 1) 0, []
R3a    --> t, I, Q, 0, [3:\x.((\y.goto q1) 1)]
R1b    --> f, I, Q, 0, [3:\x.((\y.goto q1) 1)]
R7a    --> s, (I,x<-0), Q, (\y.goto q1) 1, []

R3a,R1b,R7a --> s, (I,x<-0,y<-1), Q, goto q1, []

R15a --> s, (I,x<-0,y<-1), Q, (< x 3)? ..., []
R12a --> t, (I,x<-0,y<-1), Q, < x 3, [7:...:y]
R1b,R10b,R10b --> f, (I,x<-0,y<-1), Q, 1, [7:...:y]

R13b --> s, (I,x<-0,y<-1), Q, (\x. (\y.goto q1) (* y 2)) (+ x 1), []

As an exercise, you may want to trace through the rules and prove that this
will lead to the configurations

  --> ... --> s, (I,x<-1,y<-2), Q, (\x. (\y.goto q1) (* y 2)) (+ x 1), []
  --> ... --> s, (I,x<-2,y<-4), Q, (\x. (\y.goto q1) (* y 2)) (+ x 1), []
  --> ... --> s, (I,x<-3,y<-8), Q, y, []
R1a --> e, (I,x<-3,y<-8), Q, 8, []
R4 --> 8

      Note that

                            (I, x<-1, y<-2), x<-2 = (I, x<-2, y<-2)

since each update to the environment overwrites the previous one.

      This machine created its own variables for the loop, maintained
them in a way completely consistent with the imperative reading of the
while loop above, and then discarded them all without being explicitly told to
do so.
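That emergent behavior can be checked with a small interpreter sketch over label-indexed terms. The tuple encoding, the "let" and "op" tags, and the primitive functions below are my own illustrative choices, not the SEQCD machine itself; the essential point is only that a goto replaces the control by Q(x), as in rule R15.

```python
def ev(env, t, Q):
    tag = t[0]
    if tag == "num":
        return t[1]
    if tag == "var":
        return env[t[1]]
    if tag == "goto":                  # rule R15: control becomes Q(x)
        return ev(env, Q[t[1]], Q)
    if tag == "let":                   # X = U; T  <-->  (\X.T) U
        _, x, u, body = t
        return ev({**env, x: ev(env, u, Q)}, body, Q)
    if tag == "cond":                  # C? T: F, with zero selecting F
        _, c, tt, ff = t
        return ev(env, ff if ev(env, c, Q) == 0 else tt, Q)
    if tag == "op":                    # strict binary primitive
        _, f, a, b = t
        return f(ev(env, a, Q), ev(env, b, Q))

lt  = lambda a, b: 1 if a < b else 0
add = lambda a, b: a + b
mul = lambda a, b: a * b

# X = 0; Y = 1; goto q1;  q1: X < 3? (X = X+1; Y = Y*2; goto q1): Y
Q = {
    0: ("let", "x", ("num", 0),
        ("let", "y", ("num", 1), ("goto", "q1"))),
    "q1": ("cond", ("op", lt, ("var", "x"), ("num", 3)),
           ("let", "x", ("op", add, ("var", "x"), ("num", 1)),
            ("let", "y", ("op", mul, ("var", "y"), ("num", 2)),
             ("goto", "q1"))),
           ("var", "y")),
}

print(ev({}, Q[0], Q))                 # prints 8, matching the trace above
```

Nothing in the interpreter is imperative: the "loop" is just repeated lookup of Q(q1) under successively rebuilt environments, yet the run behaves exactly like the while loop.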

      Thus an intimate connection between imperative languages and functional
languages, completely refuting Backus's assertions about the intractability
of imperative languages, is at last revealed.

