Book: symbol table, type system, code generation? email@example.com (Dr. Oliver Schulz) (1998-05-04)
Re: Book: symbol table, type system, code generation? james.grosbach@Microchip.COM (1998-05-07)
Re: Book: symbol table, type system, code generation? firstname.lastname@example.org (1998-05-12)
Re: Book: symbol table, type system, code generation? Peter.Damron@Eng.Sun.COM (1998-05-26)
Re: Book: symbol table, type system, code generation? email@example.com (1998-05-30)
From: firstname.lastname@example.org (Anton Ertl)
Date: 30 May 1998 11:57:00 -0400
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
References: 98-05-005 98-05-046 98-05-066 98-05-121
email@example.com (Anton Ertl) writes:
> >In Section 9.2 (on basic block dependence graphs for instruction
> >scheduling) the algorithm presented for building the graph compares
> >every instruction with every other instruction, resulting in quadratic
> >complexity and redundant dependence edges. A linear method that
> >introduces fewer redundant edges is known (and even if you don't know
> >it, it's not that hard to figure out), but it is not even mentioned.
Peter.Damron@Eng.Sun.COM (Peter C. Damron) writes:
> The algorithm presented in the book is not very fast, but the problem
> is quadratic in general. If you limit the number of resources, and
> consider only register type resources, then you can build the DAG in
> linear time. But if you introduce either virtual registers (unbounded
> resources) or memory resources (and load/store disambiguation), then
> the problem becomes quadratic, though approx. linear time in practice.
1) The algorithm in Muchnick's book is always quadratic. Moreover,
the redundant edges it introduces make the scheduler slower, and (in
extreme cases) may cause worse schedules.
2) Given a very realistic restriction on the machine architecture, the
use of an unlimited number of registers (or other resources) does not
make the problem quadratic. The restriction is that every instruction
can only read and write a limited number of registers/resources. This
limits the amount of work done per instruction to a constant amount
(if you attribute the work of adding flow and antidependences to the
reader of the resource), making the algorithm linear in the number of
instructions.
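To make this concrete, here is a minimal sketch (my construction, not
from the book or this thread) of linear-time DAG building under that
restriction: for every register we remember the last writer and the
readers since that write, so each instruction is compared only against
a bounded number of predecessors.

```python
def build_dag(block):
    """block: list of (dest_regs, src_regs) tuples, in program order.
    Returns a set of edges (i, j) meaning instruction i must precede j.
    Work per instruction is bounded by its number of register accesses,
    so the whole construction is linear in the number of instructions."""
    edges = set()
    last_writer = {}    # reg -> index of the last instruction writing it
    readers_since = {}  # reg -> indices that read it since that write
    for j, (dests, srcs) in enumerate(block):
        for r in srcs:
            if r in last_writer:             # flow dependence
                edges.add((last_writer[r], j))
            readers_since.setdefault(r, []).append(j)
        for r in dests:
            if r in last_writer:             # output dependence
                edges.add((last_writer[r], j))
            for i in readers_since.get(r, []):   # antidependences
                if i != j:
                    edges.add((i, j))
            last_writer[r] = j
            readers_since[r] = []
    return edges
```

For example, for the block r1 = ...; r2 = r1 + ...; r1 = r2 + ... the
sketch adds the flow edges 0->1 and 1->2 and the output edge 0->2, and
never compares instructions that share no resource.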
> E.g. to do load/store disambiguation, you may have to compare every
> store with every load, since the aliasing relation (or points-to) is
> not transitive.
That depends on the kind of disambiguation you perform. E.g., if you
just divide the references into disjoint classes and don't do any
disambiguation within classes (as you may do for a simple type-based
disambiguation), you do not have to compare every store with every
load: just introduce dependences between a load of a class and the
last and the next store of the same class; you also have to introduce
a dependence between a store and the last store of the same class.
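The class-based scheme above can be sketched as follows (my own
construction, assuming the alias classes are disjoint); the load-to-
next-store dependence is recorded from the store's side, so no
load/store pair from different classes is ever compared:

```python
def memory_deps(refs):
    """refs: list of (kind, alias_class), kind is 'load' or 'store',
    in program order. Returns edges (i, j): i must precede j."""
    edges = set()
    last_store = {}   # class -> index of the last store to that class
    loads_since = {}  # class -> loads of that class since that store
    for j, (kind, cls) in enumerate(refs):
        if kind == 'load':
            if cls in last_store:            # flow dependence
                edges.add((last_store[cls], j))
            loads_since.setdefault(cls, []).append(j)
        else:
            if cls in last_store:            # output dependence
                edges.add((last_store[cls], j))
            for i in loads_since.get(cls, []):   # antidependences
                edges.add((i, j))
            last_store[cls] = j
            loads_since[cls] = []
    return edges
```

E.g., for store-int, load-int, load-float, store-int this yields the
edges 0->1, 0->3, and 1->3, and leaves the float load free to move.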
With a little imagination you can extend this to ANSI C type-based
disambiguation.

Even if you use a disambiguation where you have to compare every
store to every other memory reference in the basic block, the
quadratic component typically has a much lower constant factor (my
guess: a factor of 10-100 lower for typical instruction distributions)
than the algorithm presented in the book, and the time for building
the graphs is probably dominated by the linear component in practice.
> Here's a couple of fairly recent references on tree pattern matching.
> (I can't find the iburg one at the moment)
> %T Simple and Efficient BURS Table Generation
The journal version is:
@Article{proebsting95,
  author  = "Todd A. Proebsting",
  title   = "BURS Automata Generation",
  journal = "ACM Transactions on Programming Languages and Systems",
  year    = "1995",
  volume  = "17",
  number  = "3",
  pages   = "461--486",
  month   = may
}
> %T Engineering a Simple, Efficient Code-Generator Generator
This is the iburg paper.
M. Anton Ertl Some things have to be seen to be believed
firstname.lastname@example.org Most things have to be believed to be seen