Re: Back End Generators (Anton Ertl)
Mon, 24 Oct 1994 11:12:16 GMT

          From comp.compilers


Newsgroups: comp.compilers
From: (Anton Ertl)
Keywords: code, tools
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
References: 94-10-094 94-10-129
Date: Mon, 24 Oct 1994 11:12:16 GMT

(David Chase) writes:
|> Last I discussed this with anyone, I think we decided that the "way to go"
|> was probably the top-down schemes like the one developed by Aho,
|> Ganapathi, and Tjiang. (Either TWIG, or get hold of IBURG and fix it up
|> to use run-time costs instead of static costs).
|> The reasons to use bottom-up matching are that it is faster, and that it
|> is more expressive in one sense -- top-down is "slower" but probably that
|> does not matter, and has (I think) some restrictions on the patterns it
|> can swallow, but the ability to compute costs at run-time and use dynamic
|> programming lets you do much more interesting pattern matching "by hand".
|> This seems to be more important, in general, than raw speed and slightly
|> richer set of patterns.

Iburg works bottom-up. The difference from Burg is that Iburg uses
dynamic programming, whereas Burg uses a tree parsing
automaton. Another bottom-up code selector generator using dynamic
programming is BEG (which also does register allocation, another part
of code generation; it is described in a paper by Helmut Emmelmann at
SIGPLAN '89); BEG uses the greater flexibility of dynamic
programming for specifying, e.g., extra conditions for some
patterns. As you write, Iburg could be extended to do such things.

As far as I know, the set of acceptable tree grammars is larger for
dynamic programming (some tree grammars can cause divergence in
automaton generation, whereas all are acceptable for dynamic
programming), but this does not matter for code selection.

Top-down methods (as used in TWIG, I believe) have quadratic
worst-case complexity in the size of the subject tree, while bottom-up
methods are linear.

|> It might be fun (and perhaps someone has already tried it) to experiment
|> with application of tree pattern matching to code generation from DAGs or
|> general graphs. DAGs should be easy for bottom-up matching --
|> topologically sort the graph, then visit nodes from "bottom" to "top".

It's not so simple, at least for general tree grammars: The optimum
parse for a common subtree can be different for the two uses of the
subtree. In the worst case you get the same code as if you duplicated
the subtree. The minutes of a panel discussion at CODEGEN'91 hint at
solutions to this problem.

There is a condition on tree grammars that ensures that they also work
optimally for DAGs; tree grammars for RISC code selection satisfy this
condition for the most part. Someday I'll get around to writing
this stuff down.

    booktitle = "Code Generation --- Concepts, Tools, Techniques",
    year = "1991",
    OPTaddress = "Schlo{\ss} Dagstuhl",
    publisher = "Springer",
    editor = "Robert Giegerich and Susan L. Graham",
    series = "Workshops in Computing",

- anton
M. Anton Ertl
