Combinator graph reduction email@example.com (1989-09-21)
Date: Thu, 21 Sep 89 11:17:32 EDT
From: firstname.lastname@example.org (Dale Worley)
James Salsman notes that execution via combinator graph reduction has
the possibility of automatic parallelization, and raises various
difficulties. I would like to add some notes that point out that the
difficulties may be considerably less than is commonly perceived.
From: email@example.com (James Salsman)
Combinators are related to the type-free lambda calculus, so in
practice the combinator graph reduction engine must suffer the
efficiency blow of some sort of type-tag system.
Tagging overhead can be reduced in two ways: The first is using a
system that does not require tag examination for the elementary graph
reduction operations. This sounds impossible, but Philip Koopman (An
Architecture for Combinator Graph Reduction, PhD thesis, CMU, 1989)
has done it -- the examination of the graph to determine the reduction
to be applied is performed directly by stock hardware. His
implementation has achieved 2-3 times speedup (on stock hardware) over
special-purpose graph-reduction hardware of similar complexity that
uses traditional methods.
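For readers unfamiliar with the mechanics being discussed, the core of combinator graph reduction can be sketched with the classic S and K combinators. This toy reducer is purely illustrative -- it rewrites the expression tree at the root and does not reflect Koopman's tagless, direct-execution scheme:

```python
class App:
    """An application node in the graph: (fun arg)."""
    def __init__(self, fun, arg):
        self.fun, self.arg = fun, arg

def rebuild(head, rest):
    """Reapply leftover arguments to the result of a reduction."""
    for a in rest:
        head = App(head, a)
    return head

def reduce(node):
    """Repeatedly apply the S and K rewrite rules at the root."""
    while isinstance(node, App):
        # Unwind the spine to find the head combinator and its arguments.
        spine, head = [], node
        while isinstance(head, App):
            spine.append(head)
            head = head.fun
        args = [a.arg for a in reversed(spine)]
        if head == 'K' and len(args) >= 2:
            # K x y -> x
            node = rebuild(args[0], args[2:])
        elif head == 'S' and len(args) >= 3:
            # S f g x -> (f x) (g x)
            f, g, x = args[0], args[1], args[2]
            node = rebuild(App(App(f, x), App(g, x)), args[3:])
        else:
            return node      # no redex at the root
    return node
```

For example, S K K x reduces to (K x)(K x) and then to x, exhibiting S K K as the identity. A real reducer would overwrite the redex node in place so that shared subgraphs are reduced only once; that sharing is what makes it *graph* reduction rather than tree rewriting.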
The overhead of tag examination for primitive data operations (such as
add, subtract, etc.) can be reduced by statically examining the
program to determine the possible datatypes that can be produced by
various expressions and replacing operators whose arguments have
predictable types with non-tag-checking versions. Although this is
hard for the lambda calculus, Shivers has done considerable work in
this area, and it should be possible to do type-determination for a
large fraction of the operations in a lambda calculus expression.
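The kind of static specialization described above can be sketched on a toy expression language; the representation and operator names here are illustrative assumptions, not Shivers's actual control-flow analysis:

```python
def infer(expr):
    """Return the statically known type of an expression, or None."""
    op = expr[0]
    if op == 'lit':
        return type(expr[1]).__name__        # e.g. 'int'
    if op == 'add':
        if infer(expr[1]) == 'int' and infer(expr[2]) == 'int':
            return 'int'
    return None

def specialize(expr):
    """Replace the generic, tag-checking 'add' with 'add_int' when
    both argument types are statically determined to be int."""
    if expr[0] == 'add':
        a, b = specialize(expr[1]), specialize(expr[2])
        known = infer(expr[1]) == 'int' and infer(expr[2]) == 'int'
        return ('add_int' if known else 'add', a, b)
    return expr
```

Here specialize(('add', ('lit', 1), ('lit', 2))) yields an 'add_int' node that needs no run-time tag examination, while an expression whose argument types cannot be determined keeps the checked 'add'.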
In addition, combinator graphs must not have local state
(assignment or destructive data structure operations) if they are
to be parallelized.
I've noticed that in the code that I write, there are very few true
assignments -- most of them could easily be replaced by lambda
binding, since there is only one assignment to the variable during its
scope. The exceptions seem to be induction variables in loops (which
become bindings when the loop is turned into a recursive function),
and alterations in the state of "objects with state". The latter
case, of course, is the problem.
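The observation about induction variables can be made concrete: a loop that reassigns an accumulator becomes a recursive function in which each "assignment" is a fresh binding. A minimal sketch (names are illustrative):

```python
def sum_loop(xs):
    total = 0                  # induction variable, reassigned each pass
    for x in xs:
        total = total + x
    return total

def sum_rec(xs, total=0):
    # The mutation is gone: each recursive call freshly binds 'total',
    # exactly as a lambda binding would.
    if not xs:
        return total
    return sum_rec(xs[1:], total + xs[0])
```

Both compute the same result, but the recursive form has no assignment at all, and so presents no obstacle to parallelization.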
(Has anyone noticed that the object-oriented programming people
*always* deal with objects with state, and the functional programming
people *never* deal with objects with state? What is the significance
of this?)