New TR: "Frameworks for Intra- and Interprocedural Dataflow Analysis"


From: jdean@stash.pa.dec.com (Jeff Dean)
Newsgroups: comp.compilers
Date: 18 Nov 1996 00:15:59 -0500
Organization: DEC Western Research Lab
Keywords: report, dataflow

Craig Chambers, Dave Grove, and I have written a technical report
describing a new framework we have developed for implementing dataflow
analyses. The report focuses on several new features of our framework
that are not found in previous such frameworks. One unique feature is
that transformations can be performed as part of the analysis: the
framework takes care of managing the details of acting only
tentatively on these transformations during iterative analysis,
undoing the effects of the transformations if iteration later causes
the code to be reanalyzed. A second feature is that
independently-written analyses can be easily combined and run in
parallel. This enables synergistic effects between optimizations that
benefit from being run in parallel, without requiring them to be
written as a single, monolithic pass.
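To make the tentative-transformation idea concrete, here is a minimal sketch
in Python. Everything in it (the Engine class, the instruction encoding, the
transfer and join functions) is a hypothetical illustration of the idea, not
the actual API of our framework: the engine keeps a per-node undo log and
rolls back a node's transformations before reanalyzing it, so a constant
fold performed mid-iteration is safely undone if later facts invalidate it.

```python
# A minimal sketch of a dataflow engine with tentative transformations.
# All names and the instruction encoding are hypothetical; this illustrates
# the idea only, not the Vortex framework's actual API.

class Engine:
    """Worklist iteration; rolls back a node's edits before reanalysis."""

    def __init__(self, nodes, succs, transfer, join, init):
        self.nodes, self.succs = nodes, succs
        self.transfer, self.join, self.init = transfer, join, init
        self.undo = {n: [] for n in nodes}  # per-node undo log

    def run(self):
        inp = {n: self.init for n in self.nodes}
        work = list(self.nodes)
        while work:
            n = work.pop(0)
            # Undo transformations made the last time n was analyzed,
            # since the incoming facts they relied on may have changed.
            while self.undo[n]:
                self.undo[n].pop()()
            out = self.transfer(n, inp[n], self.undo[n])
            for s in self.succs.get(n, []):
                merged = self.join(inp[s], out)
                if merged != inp[s]:
                    inp[s] = merged
                    if s not in work:
                        work.append(s)

# Toy client: constant propagation that folds expressions in place.
# Instructions map a node to (var, expr); exprs are ('const', k),
# ('var', name), or ('add', e1, e2).
instrs = {
    'A': ('x', ('const', 3)),
    'B': ('y', ('add', ('var', 'x'), ('const', 1))),
}
succs = {'A': ['B']}

def transfer(node, env, undo_log):
    env = env or {}
    var, rhs = instrs[node]

    def val(e):
        if e[0] == 'const':
            return e[1]
        if e[0] == 'var':
            return env.get(e[1])
        a, b = val(e[1]), val(e[2])  # ('add', e1, e2)
        return a + b if a is not None and b is not None else None

    v = val(rhs)
    if v is not None and rhs[0] != 'const':
        old = instrs[node]  # tentatively fold, and log how to undo it
        instrs[node] = (var, ('const', v))
        undo_log.append(lambda: instrs.__setitem__(node, old))
    out = dict(env)
    out[var] = v
    return out

def join(a, b):
    if a is None:  # None = "not yet reached"
        return b
    keys = set(a) | set(b)
    return {k: (a.get(k) if a.get(k) == b.get(k) else None) for k in keys}

Engine(['A', 'B'], succs, transfer, join, init=None).run()
# instrs['B'] is now ('y', ('const', 4)); had iteration revisited B with
# different facts, the undo log would first have restored the original add.
```

The key design point the sketch tries to show: the client analysis performs
the rewrite immediately but registers an undo action with the framework, so
the engine, not the client, decides when a rewrite must be retracted.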


The TR can be found at:


    http://www.cs.washington.edu/research/projects/cecil/www/Papers/engines.html


Comments and/or feedback are greatly appreciated.


Here's the abstract:


                  Frameworks for Intra- and Interprocedural Dataflow Analysis


                              Craig Chambers, Jeffrey Dean, and David Grove


Because dataflow analyses are difficult to implement from scratch,
reusable dataflow analysis frameworks have been developed which
provide generic support facilities for managing propagation of
dataflow information and iteration in loops. We have designed a
framework that improves on previous work by making it easy to perform
graph transformations as part of iterative analysis, to run multiple
analyses "in parallel" to achieve the precision of a single monolithic
analysis while preserving modularity and reusability of the component
analyses, and to construct context-sensitive interprocedural analyses
from intraprocedural versions. We have implemented this framework in
the Vortex optimizing compiler and used the framework to help build
both traditional optimizations and non-traditional optimizations of
dynamically-dispatched messages and first-class, lexically-nested
functions.
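The "in parallel" composition described above can be sketched as a product
construction: two independently written analyses are paired into one whose
facts are tuples, with transfer and join applied componentwise. The
compose() function and the toy component analyses below are hypothetical
illustrations, not the framework's actual interface; in the full framework
each component can also observe the other's results, which is where the
extra precision over running the passes sequentially comes from.

```python
# A sketch of combining two independently written analyses into a single
# pass over a product lattice. The compose() API is hypothetical, for
# illustration only; it is not the Vortex framework's actual interface.

def compose(a1, a2):
    """Each analysis is a (transfer, join, init) triple over its own facts."""
    (t1, j1, i1), (t2, j2, i2) = a1, a2

    def transfer(node, fact):
        f1, f2 = fact
        # Componentwise here; a richer framework would let t1 also inspect
        # f2 (and vice versa), giving the precision of a monolithic pass.
        return (t1(node, f1), t2(node, f2))

    def join(x, y):
        return (j1(x[0], y[0]), j2(x[1], y[1]))

    return transfer, join, (i1, i2)

# Two toy component analyses over statements written as 1-tuples naming an
# assigned variable: the set of variables assigned so far, and a count of
# statements seen along the longest path.
assigned = (lambda node, f: f | {node[0]}, lambda a, b: a | b, frozenset())
counted = (lambda node, f: f + 1, lambda a, b: max(a, b), 0)

transfer, join, init = compose(assigned, counted)
fact = init
for stmt in [('x',), ('y',)]:
    fact = transfer(stmt, fact)
# fact == (frozenset({'x', 'y'}), 2)
```

Because the composed triple has the same shape as its components, the
result can itself be composed again, so any number of analyses can be
stacked without changing the iteration engine.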


University of Washington Department of Computer Science and Engineering
Technical Report UW-CSE-96-11-02.


-- Jeff


------------------------------------------------------------------------------
Jeffrey Dean (jdean@pa.dec.com)               Member of Research Staff
Western Research Laboratory                   Digital Equipment Corporation
http://www.research.digital.com/people/jdean