Related articles:
flow analysis and optimization vugranam@macs.ee.mcgill.ca (1993-11-04)
Re: flow analysis and optimization preston@dawn.cs.rice.edu (1993-11-09)
Re: flow analysis and optimization cliffc@rice.edu (1993-11-09)
Re: flow analysis and optimization paco@legia.cs.rice.edu (1993-11-09)
Re: flow analysis and optimization bwilson@shasta.Stanford.EDU (1993-11-14)
Newsgroups: comp.compilers
From: paco@legia.cs.rice.edu (Paul Havlak)
Keywords: optimize, analysis
Organization: Rice University
References: 93-11-041 93-11-056
Date: Tue, 9 Nov 1993 23:24:38 GMT
vugranam@macs.ee.mcgill.ca (Vugranam Chakravarthy Sreedhar) writes:
>> 3 methods are described for doing flow analyses and optimizations.
>> 1. Syntax directed approach
>> 2. Interval method
>> 3. (general) iterative method.
>> I would like to know which method you use in your compiler.
cliffc@rice.edu (Cliff Click) writes:
>Caveat Emptor: I do not hack code with the parallel compiler group. I
>think the parallel compiler group uses a syntax directed approach for the
>intraprocedural stuff. The interprocedural stuff is probably done with
>the iterative method again.
The ParaScope system (implemented by the parallel compiler group)
does dependence analysis on Fortran DO-loops (among other things).
The understanding of dataflow there is somewhat ad hoc right now,
but could be called syntax-directed because it relies on the
explicit loops.
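Roughly, the syntax-directed idea is to compose dataflow facts
bottom-up over the explicit loop structure. Here is a toy sketch in
Python (not ParaScope code; the AST node kinds are invented for
illustration) that builds GEN/KILL sets for reaching definitions:

# Syntax-directed reaching definitions over a tiny structured AST.
# Hypothetical node kinds, for illustration only:
#   ("assign", def_id, var)   -- a single definition of var
#   ("seq", stmt1, stmt2)     -- sequential composition
#   ("loop", body)            -- a DO-loop whose body runs zero or more times

def gen_kill(node, defs_of_var):
    """Return (GEN, KILL) sets of definition ids for a structured node."""
    kind = node[0]
    if kind == "assign":
        _, d, var = node
        return {d}, defs_of_var[var] - {d}
    if kind == "seq":
        g1, k1 = gen_kill(node[1], defs_of_var)
        g2, k2 = gen_kill(node[2], defs_of_var)
        # Definitions from stmt1 survive unless killed by stmt2.
        return g2 | (g1 - k2), k2 | (k1 - g2)
    if kind == "loop":
        gb, _ = gen_kill(node[1], defs_of_var)
        # The body may execute zero times, so the loop kills nothing
        # but can generate anything its body generates.
        return gb, set()
    raise ValueError(kind)

# d1 defines X, then a loop whose body redefines X at d2.
defs_of_var = {"X": {"d1", "d2"}}
prog = ("seq", ("assign", "d1", "X"),
               ("loop", ("assign", "d2", "X")))
print(gen_kill(prog, defs_of_var))   # both d1 and d2 reach the end; nothing is killed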
It will ultimately move to a more interval-like understanding of
control structure, based on Tarjan intervals (nested
strongly-connected regions). But will it use interval analysis or
some more direct technique like SSA? Who knows.
The construction of def-use chains and the symbolic analysis rely on
versions of Static Single-Assignment form, which none of the methods
listed above builds directly. (See the paper by Cytron et al. in
TOPLAS 13:4, October 1991.)
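The heart of that construction is placing phi-functions at iterated
dominance frontiers. Here is a sketch of the dominance-frontier
computation, following the structure of the algorithm in the Cytron et
al. paper; it assumes immediate dominators (idom) are already
available, and the function names and toy CFG are mine, not theirs:

# Dominance frontiers in the style of Cytron et al. (TOPLAS 13:4, 1991).
# Assumes idom[] and the CFG successor map are precomputed.

def dominance_frontiers(nodes, succ, idom, entry):
    # Dominator-tree children, recovered from idom.
    children = {n: [] for n in nodes}
    for n in nodes:
        if n != entry:
            children[idom[n]].append(n)

    df = {n: set() for n in nodes}

    def walk(x):                      # bottom-up over the dominator tree
        for c in children[x]:
            walk(c)
        for y in succ.get(x, ()):     # DF_local: successors not immediately dominated by x
            if idom.get(y) != x:
                df[x].add(y)
        for c in children[x]:         # DF_up: inherited from dominator-tree children
            for y in df[c]:
                if idom.get(y) != x:
                    df[x].add(y)

    walk(entry)
    return df

# Diamond plus a back edge: entry->a, a->{b,c}, b->d, c->d, d->{a,exit}.
nodes = ["entry", "a", "b", "c", "d", "exit"]
succ  = {"entry": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["a", "exit"]}
idom  = {"a": "entry", "b": "a", "c": "a", "d": "a", "exit": "d"}
print(dominance_frontiers(nodes, succ, idom, "entry"))
# b and c have {d} (the merge point); d has {a} (the loop header); a has {a}.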
The complexity of SSA-based algorithms is generally linear, better
than that of bit-vector algorithms. Dependence analysis tends to have
a quadratic component (array kills may be rare, so you compare
all pairs of references to the same array in the same loop).
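To see where the quadratic term comes from: absent kills, every pair
of references to the same array within a loop has to be tested. A toy
sketch (may_depend stands in for a real dependence test such as GCD or
Banerjee; the subscript model here is deliberately trivial):

from itertools import combinations
from collections import defaultdict

def dependence_pairs(refs, may_depend):
    """refs: list of (array_name, is_write, subscript)."""
    by_array = defaultdict(list)
    for r in refs:
        by_array[r[0]].append(r)
    pairs = []
    for group in by_array.values():
        # O(n^2) in the number of references to each array.
        for r1, r2 in combinations(group, 2):
            # Only pairs with at least one write can be dependences.
            if (r1[1] or r2[1]) and may_depend(r1[2], r2[2]):
                pairs.append((r1, r2))
    return pairs

# Toy subscripts of the form i + c, and a toy test that only catches
# identical subscripts (a real test would also catch loop-carried overlaps).
refs = [("A", True,  0),   # A(i)   = ...
        ("A", False, 1),   # ...    = A(i+1)
        ("A", False, 0)]   # ...    = A(i)
print(dependence_pairs(refs, may_depend=lambda s1, s2: s1 == s2))
# -> the (write of A(i), read of A(i)) pair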
>> What is the complexity (space/time) of doing interprocedural analysis?
That depends. If you implement the good algorithms for MOD/REF,
then it's linear in the size of the program. If you cache your
initial local information, then the constant of linearity for
reanalyzing after edits is very low, since you need only rebuild
local information for modified procedures before interprocedural
propagation.
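For globals only, and ignoring parameter binding and aliasing, that
MOD propagation amounts to a union over the call graph. A toy sketch
(procedure names and sets are invented):

# MOD(p) = IMOD(p) U union of MOD(q) over callees q of p.
# IMOD is the cached, locally computed "immediate MOD"; after an edit,
# only the changed procedure's IMOD needs recomputing before rerunning
# this propagation.

def interprocedural_mod(imod, calls):
    mod = {p: set(s) for p, s in imod.items()}
    callers = {p: [] for p in imod}
    for p, callees in calls.items():
        for q in callees:
            callers[q].append(p)

    worklist = list(imod)
    while worklist:
        q = worklist.pop()
        for p in callers[q]:
            if not mod[q] <= mod[p]:      # propagate the callee's MOD up to its callers
                mod[p] |= mod[q]
                worklist.append(p)
    return mod

# main calls solve; solve calls norm and (recursively) itself.
imod  = {"main": {"t"}, "solve": {"x"}, "norm": {"err"}}
calls = {"main": ["solve"], "solve": ["norm", "solve"], "norm": []}
print(interprocedural_mod(imod, calls))
# -> main modifies {t, x, err}, solve {x, err}, norm {err}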
My interprocedural symbolic analysis is exponential in the worst
case, but not too bad, possibly linear, in practice. It's hard to
get large samples of whole Fortran programs for statistics.
Details to come in my thesis.
>> What benefits do you get (in terms of speedup and efficiency of the code
>> generated)?
This is very hard to measure, and varies wildly with the specific
program and specific technique. MOD information can be a freebie,
improving the overall speed of dataflow and dependence analysis.
For an example of how fancy you can get and find a benefit, see
M. W. Hall, K. Kennedy, and K. S. McKinley,
"Interprocedural Transformations for Parallel Code Generation,"
Proceedings of Supercomputing '91, Albuquerque, November 1991.
>> (C/C++ guys, how do you handle pointer analysis, especially in the presence
>> of heap allocation, pointer arithmetic, type-casting, etc.?)
The Convex folk are the only ones I've seen achieve much success
with interprocedural analysis of pointers in a production
compiler.
--
Paul Havlak Dept. of Computer Science
Graduate Student Rice University, Houston TX 77251-1892
PFC/ParaScope projects (713) 527-6002 paco@cs.rice.edu