Re: available expressions
From: firstname.lastname@example.org (Cliff Click)
Date: 27 Dec 1996 23:25:19 -0500
email@example.com (George C. Lindauer) writes:
In conjunction with global common subexpression analysis, Aho, Sethi, and
Ullman indicate that locating places for optimization basically comes
down to a data-flow analysis in which we gather available expressions.
However, they also indicate that such an analysis is going to use a
*lot* of memory and come up with lots of useless information, and
advocate that instead of doing the data flow analysis we just search
the nodes on the flow graph any time we have a candidate for
optimization. But it seems like this approach would take a lot of
time given a reasonable-sized procedure. Does anyone have a feel for
whether it is reasonable to use the extra memory to do the data flow
analysis or whether I should just live with the time limitations of
searching the graph? Alternatively, is there a better method which
trades off the time and memory requirements in a reasonable fashion?
There are lots of global common subexpression algorithms out there.
Partial Redundancy Elimination (and its various follow-on fixes)
is quite popular. There are several others (including my own!) that
run in linear time and space (see my paper in PLDI '95).
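For readers unfamiliar with the analysis the question refers to: available
expressions is a forward dataflow problem with set intersection at joins.
A minimal sketch follows; this is the generic textbook formulation, not code
from any of the algorithms mentioned above, and the CFG, block names, and
gen/kill sets are invented for illustration.

```python
# Available-expressions analysis: a forward dataflow problem with
# intersection at control-flow joins, iterated to a fixed point.
# Hypothetical toy CFG: entry branches to "then" / "else", which meet at "join".

# Expressions each block computes (gen) and expressions it invalidates by
# redefining an operand (kill).
GEN = {"entry": {"a+b"}, "then": {"a+b", "a*c"}, "else": {"b+c"}, "join": set()}
KILL = {"entry": set(), "then": set(), "else": {"a+b", "a*c"}, "join": set()}
PREDS = {"entry": [], "then": ["entry"], "else": ["entry"], "join": ["then", "else"]}
ALL = {"a+b", "a*c", "b+c"}  # universe of expressions in the procedure

def available_expressions(blocks):
    # IN[B] = intersection of OUT[P] over predecessors P of B
    # OUT[B] = GEN[B] | (IN[B] - KILL[B])
    # Initialize OUT to the full set (optimistic) except for the entry block.
    out = {b: set(ALL) for b in blocks}
    out["entry"] = set(GEN["entry"])
    changed = True
    while changed:
        changed = False
        for b in blocks:
            if b == "entry":
                continue
            in_b = set(ALL)
            for p in PREDS[b]:
                in_b &= out[p]
            new_out = GEN[b] | (in_b - KILL[b])
            if new_out != out[b]:
                out[b] = new_out
                changed = True
    return out

out = available_expressions(["entry", "then", "else", "join"])
# At the join, only expressions available along *both* paths survive the
# intersection; here "else" kills a+b and a*c, so nothing is available.
print(out["join"])
```

The memory cost the question worries about comes from keeping one bit per
expression per block (the IN/OUT sets); the searching alternative avoids that
by re-walking the graph from each candidate instead.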
Cliff Click, Ph.D. Compiler Researcher & Designer
RISC Software, Motorola PowerPC Compilers
firstname.lastname@example.org (512) 891-7240