Re: Back End Generators

chase@centerline.com (David Chase)
Fri, 21 Oct 1994 02:37:05 GMT

          From comp.compilers

Related articles
Back End Generators heronj@smtplink.NGC.COM (John Heron) (1994-10-12)
Re: Back End Generators heronj@smtplink.NGC.COM (John Heron) (1994-10-12)
Re: Back End Generators hbaker@netcom.com (1994-10-14)
Re: Back End Generators johnmce@world.std.com (1994-10-16)
Re: Back End Generators adrian@platon.cs.rhbnc.ac.uk (1994-10-16)
Re: Back End Generators chase@centerline.com (1994-10-21)
Re: Back End Generators pardo@cs.washington.edu (1994-10-21)
Re: Back End Generators bill@amber.ssd.csd.harris.com (1994-10-21)
Re: Back End Generators smucker@cs.wisc.edu (1994-10-21)
Re: Back End Generators Peter.Damron@Eng.Sun.COM (1994-10-18)
Re: Back End Generators hbaker@netcom.com (1994-10-28)
Re: Back End Generators anton@mips.complang.tuwien.ac.at (1994-10-24)

Newsgroups: comp.compilers
From: chase@centerline.com (David Chase)
Keywords: code, tools, bibliography
Organization: CenterLine Software
References: 94-10-094 94-10-123
Date: Fri, 21 Oct 1994 02:37:05 GMT

johnmce@world.std.com (John McEnerney) writes:
> This was the Graham-Glanville algorithm; it was Glanville's PhD thesis.
> I seem to have lost the original paper. The idea was to use an LR parser
> to recognize substrings in a prefix representation of the IR tree, and
> associate code generation actions with the parse. There have been several
> follow-up papers on this. The only commercial compiler I have heard of
> that used this was an old Intel cross-compiler for x86 development.
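To make the prefix-representation idea concrete, here is a toy sketch (mine, not from Glanville's thesis; the tuple encoding and operator names are invented) of linearizing an IR tree into the prefix string that an LR-based matcher would parse:

```python
# Toy prefix linearization of an IR tree, in the style the Graham-Glanville
# approach feeds to an LR parser. The node encoding here is invented.

def prefix(node):
    """Flatten a tree -- (op, child, ...) or a leaf string -- into prefix order."""
    if isinstance(node, tuple):
        op, *kids = node
        return [op] + [sym for k in kids for sym in prefix(k)]
    return [node]

# reg1 := reg2 + const  linearizes to  := reg1 + reg2 const
tree = (":=", "reg1", ("+", "reg2", "const"))
print(prefix(tree))  # [':=', 'reg1', '+', 'reg2', 'const']
```

The code-generator grammar then has productions over these prefix strings (something like "reg -> + reg const"), and the reduce actions emit instructions as the parse proceeds.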


There's at least one commercial compiler based on the follow-up papers.
One follow-up offshoot was Pelegri's "BURS" work (bottom-up rewriting
systems, cost-attributed tree pattern matching). Henry, Damron, (and
others?) worked with this at U Washington years ago, and the same
technology was incorporated into one of the current crop of compilers from
Sun. (Richard Tuck implemented it from scratch.) I would not be surprised
to discover it in use elsewhere.


Todd Proebsting improved that line of "code generator generators" still
further: his BURG system provides astoundingly fast
code-generator-generation times.


Last I discussed this with anyone, I think we decided that the "way to go"
was probably one of the top-down schemes, like the one developed by Aho,
Ganapathi, and Tjiang. (Either use TWIG, or get hold of IBURG and fix it
up to use run-time costs instead of static costs.)


The reasons to use bottom-up matching are that it is faster and, in one
sense, more expressive. Top-down matching is "slower" (though that
probably does not matter) and has (I think) some restrictions on the
patterns it can swallow, but the ability to compute costs at run time and
use dynamic programming lets you do much more interesting pattern matching
"by hand". That ability seems, in general, to be more important than raw
speed and a slightly richer set of patterns.
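As a sketch of what "dynamic programming with run-time costs" means here, consider this IBURG-flavored labeler I made up (the grammar, costs, and rule names are not from any real system, and chain rules are omitted): each node is labeled, children first, with the cheapest rule deriving each nonterminal.

```python
# A bottom-up labeler doing dynamic programming over the tree, with costs
# evaluated while the compiler runs. Grammar and rule names are invented.

RULES = [
    # (result nonterminal, operator, child nonterminals, cost, rule name)
    ("con", "Const", (), 0, "const"),
    ("reg", "Const", (), 1, "loadimm"),
    ("reg", "Add", ("reg", "reg"), 1, "add"),
    ("reg", "Add", ("reg", "con"), 1, "addimm"),
]

def label(node):
    """Return {nonterminal: (cost, rule)} for the cheapest cover of node."""
    op, *kids = node
    kid_states = [label(k) for k in kids]
    state = {}
    for nt, rop, kid_nts, cost, name in RULES:
        if rop != op or len(kid_nts) != len(kids):
            continue
        total = cost
        for want, st in zip(kid_nts, kid_states):
            if want not in st:
                break
            total += st[want][0]
        else:  # all children derivable: keep the cheapest rule per nonterminal
            if nt not in state or total < state[nt][0]:
                state[nt] = (total, name)
    return state

# Add(Add(Const, Const), Const): both Adds fold their constant via "addimm".
tree = ("Add", ("Add", ("Const",), ("Const",)), ("Const",))
print(label(tree))  # {'reg': (3, 'addimm')}
```

Because the cost arithmetic happens as the labeler runs, a rule's cost could just as well be a function of the actual node -- which is exactly the "run-time costs" knob mentioned above.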


    Note -- Pelegri's BURS work "compiles" the dynamic
    programming, in that it is all encoded into tables
    (there is a property of code-generation grammars that
    makes this possible -- a neat trick that I do not
    understand as well as I should). However, this
    makes it difficult to "conditionally match a
    pattern", which (it turns out) is sometimes useful
    to do.


    It might be possible to incorporate a notion of
    run-time pattern failure into the BURS techniques
    as well, though at least one person in the field
    thought (off-the-cuff) that this could lead to
    exponential blow-up. However, such estimates must
    be taken with a grain of salt, since the table size
    of bottom-up pattern matching is already worst-case
    exponential (yet it rarely comes close in practice).
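To illustrate what "encoded into tables" means in the note above, here is a hand-made toy (two states, written by hand; a real BURS generator derives the states and tables automatically, and there are many more of them): once the tables exist, labeling does no cost arithmetic at all, only lookups.

```python
# Toy illustration of "compiled" BURS-style matching: all the dynamic
# programming was resolved when the tables were built, so labeling a node
# is a pure table lookup. These hand-written states are illustrative only.

LEAF_STATE = {"Const": 1}       # state 1: a constant (usable as con or reg)
TRANSITION = {                  # (op, child states...) -> state
    ("Add", 1, 1): 2,           # constant operand folds in ("addimm")
    ("Add", 1, 2): 2,
    ("Add", 2, 1): 2,
    ("Add", 2, 2): 2,           # state 2: a value in a register
}

def burs_label(node):
    """Assign a precomputed state to every node; no costs are compared here."""
    op, *kids = node
    if not kids:
        return LEAF_STATE[op]
    return TRANSITION[(op, *map(burs_label, kids))]

print(burs_label(("Add", ("Add", ("Const",), ("Const",)), ("Const",))))  # 2
```

The catch the note mentions is visible here: since every cost comparison was settled at table-build time, there is no hook left at which a match could conditionally fail.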


Neither of these techniques uses unification -- all the free variables of
a pattern must be distinct (i.e., you cannot have a pattern for "x := x
+ 1"; it must be "x := y + 1").
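A small illustration of why that restriction matters (my own; the tuple encoding is invented): a linear matcher just binds each variable as it meets it, with no check that two occurrences of the same variable denote equal subtrees -- that check is precisely what unification would add.

```python
# A linear tree-pattern matcher: uppercase strings are variables. Nothing
# compares two occurrences of the same variable for equality, so a
# nonlinear pattern like ("+", "X", "X") silently rebinds X rather than
# demanding equal subtrees. The tuple encoding is invented.

def match(pattern, tree, binding=None):
    binding = {} if binding is None else binding
    if isinstance(pattern, str) and pattern.isupper():
        binding[pattern] = tree          # bind (or rebind!) the variable
        return binding
    if not isinstance(tree, tuple) or len(pattern) != len(tree) \
            or pattern[0] != tree[0]:
        return None
    for p, t in zip(pattern[1:], tree[1:]):
        if match(p, t, binding) is None:
            return None
    return binding

print(match(("+", "X", ("1",)), ("+", ("y",), ("1",))))  # {'X': ('y',)}
print(match(("+", "X", "X"), ("+", ("a",), ("b",))))     # {'X': ('b',)} -- no equality check
```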


It might be fun (and perhaps someone has already tried it) to experiment
with application of tree pattern matching to code generation from DAGs or
general graphs. DAGs should be easy for bottom-up matching --
topologically sort the graph, then visit nodes from "bottom" to "top".
I've been trying to figure out (in my copious free time) how this ought to
work for general graphs -- it seems like you'd need to pre-annotate all
interior (non-leaf) nodes with a "top" state (or "bottom" state, depending
on how you orient your lattice) and run the matcher until it converged.
Whether this would be useful, I do not know.
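The DAG half of this is easy enough to sketch (everything below is invented scaffolding, using Python's standard-library graphlib for the topological sort): visit children before parents, and label each shared node exactly once.

```python
# Bottom-up labeling of a DAG, as suggested above: topologically sort so
# children precede parents, then label each node once -- a shared subtree
# gets one state, not one per use. The DAG encoding and the stand-in
# "state" (a canonical (op, child-states) tuple in place of a real
# matcher state) are invented for illustration.

from graphlib import TopologicalSorter  # Python 3.9+

DAG = {  # node id -> (op, child ids); "t" is shared by "a" and "b"
    "c1": ("Const", ()),
    "c2": ("Const", ()),
    "t":  ("Add", ("c1", "c2")),
    "a":  ("Add", ("t", "c1")),
    "b":  ("Add", ("t", "t")),
}

def label_dag(dag):
    # TopologicalSorter takes {node: predecessors}; children are the
    # predecessors here, so static_order() yields children first.
    order = TopologicalSorter({n: set(kids) for n, (_, kids) in dag.items()})
    state = {}
    for n in order.static_order():
        op, kids = dag[n]
        state[n] = (op, tuple(state[k] for k in kids))
    return state

states = label_dag(DAG)
print(states["t"])  # ('Add', (('Const', ()), ('Const', ())))
```

For general (cyclic) graphs no such ordering exists, which is where the iterate-until-convergence idea in the paragraph above comes in.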


References (taken from Preston Briggs bibliography):


@article{aho:89,
    author="Alfred V. Aho and Mahadevan Ganapathi and
                  Steven W. K. Tjiang",
    title="Code Generation Using Tree Matching and
                  Dynamic Programming",
    journal="ACM Transactions on Programming Languages and Systems",
    volume=11,
    number=4,
    month=oct,
    year=1989,
    pages="491--516"
}


@inproceedings{llopart:88,
    author="Eduardo {Pelegr\'i}-Llopart and Susan L. Graham",
    title="Optimal Code Generation for Expression Trees:
                  An Application of {BURS} Theory",
    booktitle="Proceedings of the 15th {ACM} Symposium on
                  Principles of Programming Languages ({POPL})",
    address="San Diego, California",
    month=jan,
    year=1988,
    pages="294--308"
}


@article{proebsting:92b,
    author="Todd A. Proebsting",
    title="Simple and Efficient {{\sc burs}} Table Generation",
    journal="ACM SIGPLAN Notices",
    volume=27,
    number=7,
    month=jul,
    year=1992,
    pages="331--340",
    note="Proceedings of the {SIGPLAN} '92 Conference on
                  Programming Language Design and Implementation ({PLDI})"
}


@article{fraser:92,
    author="Christopher W. Fraser and David R. Hanson and
                  Todd A. Proebsting",
    title="Engineering a Simple, Efficient Code-Generator
                  Generator",
    journal="ACM Letters on Programming Languages and Systems",
    volume=1,
    number=3,
    month=sep,
    year=1992,
    pages="213--226"
}


David Chase, speaking for myself
CenterLine Software

