From: Peter.Damron@Eng.Sun.COM (Peter C. Damron)
Subject: Re: Book: symbol table, type system, code generation?
Date: 26 May 1998 02:01:44 -0400
References: 98-05-005 98-05-046 98-05-066
firstname.lastname@example.org (Anton Ertl) writes:
>it shortly discusses tree parsing in the context of Twig; AFAIK Twig's
>top-down method has quadratic complexity, in contrast to the linear
>bottom-up methods employed in BEG, burg, iburg etc. And IMO bottom-up
>tree-parsing using dynamic programming as presented in [emmelmann+89]
>is easier to understand than the explanation of Twig given in the book.
Top-down tree pattern matching, as in Twig, is linear in the size of
the tree. It's just that top-down tree pattern matching is much more
dependent on grammar size and form than bottom-up tree pattern
matchers. I.e. top-down tree pattern matchers are usually slower at
pattern match time. Also, no one has figured out a way to precompute
the costs as is done in burs, bus, burg, iburg.
Both types of tree pattern matchers are relatively simple
conceptually. But the top-down matcher is simpler to build and harder
to operate at run-time, whereas the bottom-up matcher is harder to
build but easier at run-time.
>In Section 9.2 (on basic block dependence graphs for instruction
>scheduling) the algorithm presented for building the graph compares
>every instruction with every other instruction, resulting in quadratic
>complexity and redundant dependence edges. A linear method that
>introduces fewer redundant edges is known (and even if you don't know
>it, it's not that hard to figure out), but it is not even mentioned.
The algorithm presented in the book is not very fast, but the problem
is quadratic in general. If you limit the number of resources, and
consider only register type resources, then you can build the DAG in
linear time. But if you introduce either virtual registers (unbounded
resources) or memory resources (and load/store disambiguation), then
the problem becomes quadratic in the worst case, though approximately
linear in practice. E.g., to do load/store disambiguation, you may
have to compare every store with every load, since the aliasing
relation (or points-to information) is not known exactly.
Here are a couple of fairly recent references on tree pattern matching.
(I can't find the iburg one at the moment.)
%T Simple and Efficient BURS Table Generation
%A Todd A. Proebsting
%D JUL 1992
%T Engineering a Simple, Efficient Code-Generator Generator
%A Christopher W. Fraser
%A David R. Hanson
%A Todd A. Proebsting
%D SEP 1992
Peter C. Damron, (not speaking for) Sun Microsystems, Inc.
Workshop Compilers, UMPK 16-303, 901 San Antonio Road, Palo Alto, CA 94303