re: Graph reduction, recursion, and the Y combinator

David Chase <chase@orc.olivetti.com>
Wed, 27 Sep 89 16:08:54 -0700


[In reply to several questions from Dale Worley]


Background: in combinator rewriting/graph reduction, the application
of the fixed-point operator can either be rewritten like so


    Y(F) ==> F(Y(F))


(That's the definition)
or it can be graph-reduced into a cycle


        APP                  APP <--+
       /   \      ==>       /   \   |
      Y     F              F     \__|


as observed by David Turner (and maybe other people).
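
In a lazy language the two choices can be written down directly; here
is a minimal sketch in Haskell (the names are mine, not Turner's):


    -- Textbook unfolding: each recursive step builds a fresh
    -- application of f, mirroring Y(F) ==> F(Y(F)).
    fixUnfold :: (a -> a) -> a
    fixUnfold f = f (fixUnfold f)

    -- Knot-tying version: the result is a single node that refers
    -- to itself, mirroring the cyclic graph above.
    fixCyclic :: (a -> a) -> a
    fixCyclic f = x  where  x = f x


For example, "fixCyclic (1:)" is the one-cell cyclic list "ones" used
later in this message, while "fixUnfold (1:)" allocates a new cons
cell for every element that is demanded.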


> Has anyone actually done this in graph reduction execution?


Yes (I did it once in a toy interpreter for FP84, but the
interpreter was generally so slow that nothing terribly long
was ever run through it). Stoye (and others) don't mention
doing this in their paper on SKIM II (see the 1984 Conference
on Lisp and Functional Programming), but I thought that they did.


>Has anyone analyzed the optimization which removes the Y and
>replaces it with the circular edge?


Not me. I think the consensus is that it replaces the creation
of lots of non-cyclic garbage with the creation of a little
cyclic garbage, but I could be wrong. Both supercombinator
optimizations and continuation-passing transformations might
change this, and I just don't know enough about the interaction
to say for sure what happens. Also, note that non-cyclic garbage
makes reference counting collection more useful, and reference
counting apparently refuses to die (I have seen it proposed for
some multiprocessor systems).


>Is this a special case of a more general optimization?


Yes. John Hughes has an interesting paper on "lazy memo functions"
in the 1985 Conference on Functional Programming Languages
and Computer Architecture (Springer-Verlag LNCS #201). A lazy
memo function is like an "ordinary" memo function, except that it


    1. uses an EQ test for equality on non-atomic objects.
          This is necessary for practical use of memo functions
          in a lazy language, because otherwise memoization
          forces full evaluation.


    2. evaluates functions lazily, and relies on overwriting
          old applications with the results of their evaluation.


The use of EQ to test functions and arguments for equality
forces one "level" of evaluation into a structure, so you don't
end up with the trivial result that F(X) = F(X).
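
To make the EQ-keyed caching concrete, here is a rough sketch in
modern GHC Haskell. This is not Hughes' implementation: it uses
GHC's StableName machinery as a stand-in for the EQ test, it only
caches results by argument identity, and it does none of the
in-place overwriting of applications described in the paper.


    import System.Mem.StableName (StableName, makeStableName)
    import Data.IORef (newIORef, readIORef, modifyIORef')
    import System.IO.Unsafe (unsafePerformIO)

    -- Cache a one-argument function's results keyed on the identity
    -- of the argument node (EQ-style equality), not on its fully
    -- evaluated structure.
    memoByIdentity :: (a -> b) -> (a -> b)
    memoByIdentity f = unsafePerformIO $ do
        tableRef <- newIORef []                -- [(StableName a, b)]
        return $ \x -> unsafePerformIO $ do
            name  <- makeStableName x          -- identity, not structure
            table <- readIORef tableRef
            case lookup name table of          -- StableName has an Eq instance
                Just cached -> return cached   -- same node seen before: reuse
                Nothing     -> do
                    let result = f x           -- stored as an unevaluated thunk
                    modifyIORef' tableRef ((name, result) :)
                    return result
    {-# NOINLINE memoByIdentity #-}


Tying the knot for something like "map double ones" needs more than
this sketch: the recursive call inside map must itself go through
the memoized function, which is what the lazy memo machinery in the
paper arranges.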


Consider this example (taken from Hughes' paper):


    ones = 1 : ones (a cyclic data structure)


    twos = map double ones


    map f [] = []
    map f (a:x) = (f a):(map f x)


    (: is list prepend, [] is empty list, and "f x" is "f applied to x")


Evaluation of "twos" yields:


    map double ones = map double (1:ones)
                    = (double 1) : (map double ones)
                    = 2 : (map double ones)


However, the arguments to "map" in the last line are identical (EQ)
to the arguments at the start of the evaluation, so we use that
result and obtain a cycle.
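
For comparison, plain Haskell has no lazy memoization and would keep
producing fresh cells here; the cycle that the memo table discovers
is the structure you would get by tying the knot by hand:


    ones = 1 : ones        -- one cons cell pointing back at itself
    twos = 2 : twos        -- the cycle the memoized evaluation finds
    -- take 5 twos  ==>  [2,2,2,2,2]
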
More interesting results can be obtained when evaluating the
function zip (another example from Hughes' paper)


    zip [] [] = []
    zip (a:x) (b:y) = [a,b] : (zip x y)


applied to the two lists "ab" and "abc" below:


    ab = a : b : ab
    abc = a : b : c : abc


We get (intermediate steps omitted, see the paper if you care)


    zip ab abc = [a,a]:[b,b]:[a,c]:[b,a]:[a,b]:[b,c]:(zip ab abc)
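
That result can be sanity-checked in ordinary Haskell with zipWith,
which will not close the cycle itself but does show where the
argument lists repeat (the concrete names below are mine):


    ab    = 'a' : 'b' : ab
    abc   = 'a' : 'b' : 'c' : abc
    pairs = zipWith (\x y -> [x, y]) ab abc
    -- take 7 pairs  ==>  ["aa","bb","ac","ba","ab","bc","aa"]
    -- after six pairs (lcm 2 3) both arguments are back at their
    -- starting cells, which is where the lazy memo function would
    -- close the cycle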


Similarly,


    Y F = F (Y F)


The paper is worth reading. It contains more interesting examples,
and a discussion of interactions with garbage collection and full
memo functions.


David




