Re: flow analysis and optimization


          From comp.compilers


Newsgroups: comp.compilers
From: bwilson@shasta.Stanford.EDU (Bob Wilson)
Organization: Compilers Central
Date: Sun, 14 Nov 1993 20:16:10 GMT

vugranam@macs.ee.mcgill.ca (Vugranam Chakravarthy Sreedhar) writes:


> In the Dragon Book, three methods are described for doing flow
> analysis and optimization:
> 1. the syntax-directed approach,
> 2. the interval method,
> 3. the (general) iterative method.


> 1. I would like to know which method you use in your compiler.


The scalar optimizer that Steve Tjiang developed for our SUIF compiler
uses both elimination (intervals) and iteration for solving data flow
problems. See question 4 below. We handle gotos directly, so we don't
need to restructure the code to eliminate gotos.


> 2. Do you transform your program to a reducible graph and then do your
> analyses? What benefits do you get from taking this approach? I would
> like to know practical reasons rather than theoretical ones.


We do not bother with node splitting. The practical reason is that
irreducible flow graphs are not common, and they are easily handled by
iterative analysis. Despite the theoretical advantages of elimination
data flow, iterative analysis is usually not much slower, and it is no
extra work for us. Steve Tjiang's "Sharlit" program is a data flow
analysis generator that automatically provides iterative analysis given a
description of a data flow problem. Here's a reference:


@inproceedings{STjiang_92,
  author    = "Steven W. K. Tjiang and John L. Hennessy",
  title     = "Sharlit---A Tool for Building Optimizers",
  booktitle = pldi92,
  month     = jun,
  year      = 1992,
  pages     = "82-93"
}
(pldi92 refers to the proceedings of the 1992 ACM SIGPLAN Conference on
Programming Language Design and Implementation.)
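To make the iterative approach concrete, here is a minimal worklist solver
for live-variable analysis (a backward problem) on a hand-built control
flow graph. The CFG, gen/kill sets, and block names are made up for
illustration; this is a generic sketch of iterative dataflow, not
Sharlit's actual generated code or interface.

```python
# Minimal iterative (worklist) live-variable analysis.
# LIVE_out[b] = union of LIVE_in over successors of b
# LIVE_in[b]  = gen[b] | (LIVE_out[b] - kill[b])
# Iteration converges even on irreducible graphs, which is why it
# makes a good fallback for elimination methods.

def live_variables(succ, gen, kill):
    nodes = list(succ)
    pred = {n: [] for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            pred[s].append(n)
    live_in = {n: set() for n in nodes}
    live_out = {n: set() for n in nodes}
    work = list(nodes)
    while work:
        n = work.pop()
        live_out[n] = set().union(*(live_in[s] for s in succ[n])) \
            if succ[n] else set()
        new_in = gen[n] | (live_out[n] - kill[n])
        if new_in != live_in[n]:
            live_in[n] = new_in
            work.extend(pred[n])   # a backward change affects predecessors
    return live_in, live_out

# Hypothetical three-block program: B0: x=1;  B1: y=x;  B2: use(y)
succ = {'B0': ['B1'], 'B1': ['B2'], 'B2': []}
gen  = {'B0': set(),  'B1': {'x'}, 'B2': {'y'}}
kill = {'B0': {'x'},  'B1': {'y'}, 'B2': set()}
live_in, live_out = live_variables(succ, gen, kill)
```

Running this gives live_in['B1'] == {'x'} and live_in['B0'] empty, as
expected: x is dead before its definition in B0.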


> 3. Do you perform interprocedural optimizations? What is the complexity
> (space/time) of doing interprocedural analysis? What benefits do you get
> (in terms of speedup and efficiency of the code generated)? (C/C++ folks,
> how do you handle pointer analysis, especially in the presence of heap
> allocation, pointer arithmetic, type-casting, etc.?) Is it really worth
> doing interprocedural analysis (especially for C/C++)? (I am not aware of
> any production compiler doing interprocedural analysis.)


We are currently working on several different kinds of interprocedural
analysis and optimization. Mary Hall is working with several others on
some interprocedural optimizations for FORTRAN programs. They have
recently been working on a paper describing this work and you might be
able to get an advance copy from Mary (mhall@cs.stanford.edu).


I am working on the pointer analysis problem. Any reasonable analysis of
pointers in C needs to handle the problems that you mentioned. My basic
approach is not too different from Landi & Ryder or Choi et al. (I can
send references to these papers if you don't have them.) At this point, I
think the most important issue is finding a practical and efficient way to
implement these sorts of analyses. I'm working on this but it will be a
few more months before I can even begin to evaluate my solution. Perhaps
some of your colleagues at McGill can help you answer the question about
how much it's worth -- Laurie Hendren's McCat compiler is supposed to do
pretty good pointer analysis.
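For readers unfamiliar with the shape of these analyses, here is a tiny
flow-insensitive, Andersen-style points-to sketch. To be clear, this is
NOT the SUIF analysis or the Landi & Ryder / Choi et al. approach (those
are flow-sensitive); the statement encoding and names are invented for
illustration. It only shows the kind of set constraints that C pointer
statements induce and how they are iterated to a fixpoint.

```python
# Flow-insensitive points-to sketch (Andersen-style constraints).
# Statements, encoded as tuples:
#   ('addr',  p, x)  for p = &x     ->  x in pts(p)
#   ('copy',  p, q)  for p = q      ->  pts(q) subset of pts(p)
#   ('load',  p, q)  for p = *q     ->  pts(v) subset of pts(p), v in pts(q)
#   ('store', p, q)  for *p = q     ->  pts(q) subset of pts(v), v in pts(p)

def points_to(stmts):
    pts = {}
    get = lambda v: pts.setdefault(v, set())
    changed = True
    while changed:                  # iterate constraints to a fixpoint
        changed = False
        for op, a, b in stmts:
            if op == 'addr':
                new = {b}
            elif op == 'copy':
                new = get(b)
            elif op == 'load':
                new = set().union(*(get(v) for v in get(b)))
            else:                   # store: *a = b
                for v in list(get(a)):
                    if not get(b) <= get(v):
                        pts[v] = get(v) | get(b)
                        changed = True
                continue
            if not new <= get(a):
                pts[a] = get(a) | new
                changed = True
    return pts

# p = &x;  q = p;  t = &y;  *q = t   -- so x ends up pointing to y.
prog = [('addr', 'p', 'x'), ('copy', 'q', 'p'),
        ('addr', 't', 'y'), ('store', 'q', 't')]
pts = points_to(prog)
```

Real C analyses must additionally model heap allocation sites, fields,
casts, and pointer arithmetic, which is exactly where the practical
difficulty lies.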


> 4. I was reading one article (can't remember the exact reference) where
> the authors mention that they use a combination of interval analysis and
> the iterative method for doing flow analysis. Can anybody enlighten me on
> this? I believe they identify sections of code that are reducible and
> apply interval analysis to those sections.


This is not hard to do. Elimination data flow basically involves applying
reductions to the flow graph (e.g. the T1-T2 transformations described in
the Dragon Book). If the flow graph is reducible, it will eventually be
reduced to a single node. On the other hand, if you reach a point where
none of the transformations can be applied, the flow graph is irreducible.
At that point, you can apply iterative analysis to the partially reduced
flow graph to find the final solution. The details depend on the type of
elimination data flow that you are using, but it should be pretty
straightforward. This is the technique used by the scalar optimizer in
the SUIF compiler.
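The reduction step above can be sketched as follows. This is a generic
illustration of the T1-T2 transformations from the Dragon Book, not the
SUIF optimizer's actual code; the graph representation and node names are
invented. T1 deletes a self-loop; T2 absorbs a node with a unique
predecessor into that predecessor. If neither rule applies while more
than one node remains, the residual graph is irreducible, and that is
where you would switch to iteration.

```python
# T1-T2 flow-graph reduction (reducibility test).
# succ maps each node to its set of successors; entry is the start node.
# Returns the residual graph: a single node iff the input was reducible.

def reduce_graph(succ, entry):
    succ = {n: set(s) for n, s in succ.items()}   # don't mutate the input
    changed = True
    while changed:
        changed = False
        for n in succ:                  # T1: delete self-loops
            if n in succ[n]:
                succ[n].discard(n)
                changed = True
        for n in list(succ):            # T2: merge single-pred nodes
            if n == entry:
                continue
            preds = [p for p in succ if n in succ[p]]
            if len(preds) == 1:
                p = preds[0]
                succ[p].discard(n)
                succ[p] |= succ.pop(n)  # p inherits n's successors
                changed = True
                break                   # graph changed; rescan from scratch
    return succ

# Reducible: a natural loop a <-> b entered only through 'a'.
reducible = {'e': {'a'}, 'a': {'b'}, 'b': {'a'}}

# Irreducible: the classic two-headed loop, entered at both 'a' and 'b'.
irreducible = {'e': {'a', 'b'}, 'a': {'b'}, 'b': {'a'}}
```

On the reducible graph this collapses everything into the entry node; on
the irreducible one, no rule applies and all three nodes survive, which is
the signal to hand the partially reduced graph to the iterative solver.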


--Bob Wilson (bwilson@shasta.stanford.edu)

