Subject: available expressions
From: firstname.lastname@example.org (George C. Lindauer)
Date: 22 Dec 1996 15:25:39 -0500
Organization: University of Louisville, Louisville KY USA
Keywords: optimize, question, analysis, comment
In conjunction with global common subexpression elimination, Aho, Sethi,
and Ullman indicate that locating places for optimization basically comes
down to a data-flow analysis in which we gather available expressions.
However, they also indicate that such an analysis is going to use a
*lot* of memory and produce lots of useless information, and they
advocate that instead of doing the data-flow analysis we just search
the nodes of the flow graph any time we have a candidate for
optimization. But it seems like this approach would take a lot of
time on a reasonably sized procedure. Does anyone have a feel for
whether it is reasonable to use the extra memory to do the data-flow
analysis, or whether I should just live with the time cost of
searching the graph? Alternatively, is there a better method which
trades off the time and memory requirements in a reasonable fashion?
[What seemed like a lot of memory in 1987 might not seem like so much
today. -John]
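For reference, the iterative data-flow analysis being discussed can be
sketched roughly as follows. This is a minimal illustration, not code from
the post: the control-flow graph, block names, and expression strings are
invented, and Python sets stand in for the bit vectors a real compiler
would use.

```python
def available_expressions(blocks, preds, gen, kill, entry):
    """Compute IN/OUT sets of available expressions for each basic block.

    OUT[b] = gen[b] | (IN[b] - kill[b])
    IN[b]  = intersection of OUT[p] over predecessors p (empty at entry)
    """
    universe = set().union(*gen.values())       # every expression computed anywhere
    # Initialize OUT to the full universe (except at entry) so the
    # intersection at join points starts optimistically.
    out_ = {b: (set() if b == entry else set(universe)) for b in blocks}
    in_ = {b: set() for b in blocks}
    changed = True
    while changed:                              # iterate to a fixed point
        changed = False
        for b in blocks:
            if preds[b]:
                new_in = set(universe)
                for p in preds[b]:
                    new_in &= out_[p]
            else:
                new_in = set()                  # nothing available at entry
            new_out = gen[b] | (new_in - kill[b])
            if new_in != in_[b] or new_out != out_[b]:
                in_[b], out_[b] = new_in, new_out
                changed = True
    return in_, out_


# Invented diamond CFG: B1 -> {B2, B3} -> B4.  B1 computes a+b, B2
# computes c*d, B3 reassigns a (killing a+b), so a+b is not available
# at the join block B4.
blocks = ['B1', 'B2', 'B3', 'B4']
preds = {'B1': [], 'B2': ['B1'], 'B3': ['B1'], 'B4': ['B2', 'B3']}
gen = {'B1': {'a+b'}, 'B2': {'c*d'}, 'B3': set(), 'B4': set()}
kill = {'B1': set(), 'B2': set(), 'B3': {'a+b'}, 'B4': set()}
in_, out_ = available_expressions(blocks, preds, gen, kill, 'B1')
```

The memory cost the book warns about is the per-block bit vectors (one bit
per distinct expression in the procedure); the on-demand alternative walks
the graph backward from each candidate instead of storing them.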