Newsgroups:   comp.compilers
From:         havlak@cs.umd.edu (Paul Havlak)
Keywords:     optimize, parallel, bibliography
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
References:   94-03-042
Date:         Mon, 21 Mar 1994 19:22:12 GMT
Xinan TANG <tang@binkley.cs.mcgill.ca> wrote:
> When we talk about extracting parallelism from array-based loops, the
>techniques used are subscript-based dependence analysis, loop
>transformation, and parallelization. What is the place of the internal
>representation? Isn't it as important as in compiler optimization? Can
>we say that, no matter what the internal representation is, it's OK as
>long as loops are represented in a "normal form"? ...
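
For the record, "subscript-based dependence analysis" refers to tests
such as the classic GCD test; a minimal sketch in Python follows (a
hypothetical illustration, not code from any of the systems described
below):

    from math import gcd

    def gcd_test(a, b, c, d):
        # Conservative test for a dependence between references
        # X(a*i + b) and X(c*i + d) inside the same loop: an integer
        # solution of  a*i1 - c*i2 = d - b  exists iff gcd(a, c)
        # divides (d - b).  Loop bounds are ignored, so True means
        # only "a dependence cannot be ruled out".
        g = gcd(abs(a), abs(c))
        if g == 0:                  # both subscripts are constants
            return b == d
        return (d - b) % g == 0

    print(gcd_test(2, 0, 2, 1))     # False: X(2*i) never meets X(2*i + 1)
    print(gcd_test(3, 0, 2, 1))     # True:  X(3*i) may meet X(2*i + 1)
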
I've worked on two parallelization systems at Rice University:
  PFC [1,2], the Parallel Fortran Converter, originally a vectorizer,
      then extended to support shared-memory parallelization.
  ParaScope [3,4], a programming environment building on the
      infrastructure developed for the Rn programming environment and
      on the dependence analysis and loop transformation methods
      developed for PFC. It continues as the basis for the
      Fortran D compiler and the D System.
Both systems use several data structures for intermediate representation
of the program, mainly the following (a rough sketch of these structures
in code follows the list):
  * the AST (abstract syntax tree) of the Fortran source. Both PFC
        and ParaScope are source-to-source systems, expressing
        their results as transformed parallel Fortran.
  * scalar dataflow information.
        In PFC, a CFG and def-use chains.
        In ParaScope, a CFG and SSA-based def-use chains
        (gated single-assignment form is available as an option).
  * scalar/array/control dependence information.
        In PFC, data dependence edges (control dependences are
        represented as data dependences through IF-conversion [2]).
        In ParaScope, data and control dependence edges (a PDG,
        but we never use it as the sole representation).
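
In rough Python (a hypothetical sketch of the kinds of structures just
listed, not the actual PFC/ParaScope implementation), these amount to
something like:

    from dataclasses import dataclass, field

    @dataclass
    class ASTNode:                  # abstract syntax tree node
        kind: str                   # e.g. "do-loop", "assign", "subscript"
        children: list = field(default_factory=list)

    @dataclass
    class BasicBlock:               # CFG node
        stmts: list                 # statements (AST nodes) in this block
        succs: list = field(default_factory=list)    # successor blocks

    @dataclass
    class SSAName:                  # scalar SSA name: one def, many uses
        var: str
        def_stmt: ASTNode
        uses: list = field(default_factory=list)     # its def-use chain

    @dataclass
    class DependenceEdge:           # scalar/array data or control dependence
        src: ASTNode                # source statement
        dst: ASTNode                # sink statement
        kind: str                   # "flow", "anti", "output", or "control"
        dirvec: tuple = ()          # direction vector: '<', '=', '>' per loop
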
Intermediate representations are the scaffolding of compiler optimization,
determining what transformations can be done and how easily. Even the
question of whether or not to normalize loops is up for grabs. I would
like to see a more unified dataflow/dependence approach, but I've yet to
see a representation that combines the full benefits of SSA form (for
scalars) and PDGs (for arrays).
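
To make the loop-normalization question above concrete: normalizing
DO i = lb, ub, step to a unit lower bound and unit stride simplifies
subscript comparison, at the cost of rewriting the index expressions
the user sees in a source-to-source tool. A minimal Python sketch
(my own naming, not code from either system):

    def normalize(lb, ub, step):
        # Rewrite  DO i = lb, ub, step  as  DO j = 1, trip,
        # with the original index recovered as  i = lb + (j - 1) * step.
        if step == 0:
            raise ValueError("zero step")
        trip = max(0, (ub - lb + step) // step)    # Fortran-style trip count

        def index_of(j):                           # map j back to i
            return lb + (j - 1) * step

        return trip, index_of

    # Example: DO i = 4, 20, 3  becomes  DO j = 1, 6  with i = 4 + 3*(j - 1).
    trip, index_of = normalize(4, 20, 3)
    print(trip, [index_of(j) for j in range(1, trip + 1)])
    # -> 6 [4, 7, 10, 13, 16, 19]
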
--Paul
1. @Article{AlKe:Automatic,
     Author  = {J. R. Allen and K. Kennedy},
     Title   = {Automatic Translation of {Fortran} Programs to Vector Form},
     Journal = TOPLAS,
     Volume  = 9,
     Number  = 4,
     Pages   = {491--542},
     Month   = Oct,
     Year    = 1987}

2. @InProceedings{AKPW:83,
     Author    = {J. R. Allen and K. Kennedy and
                  C. Porterfield and J. Warren},
     Title     = {Conversion of Control Dependence to Data Dependence},
     BookTitle = POPL83,
     Address   = Austin,
     Month     = Jan,
     Year      = 1983}

3. @Article{PEDJ2,
     Author  = {K. Kennedy and K. S. McKinley and C. Tseng},
     Title   = {Interactive Parallel Programming Using the {ParaScope Editor}},
     Journal = TOPDS,
     Volume  = 2,
     Number  = 3,
     Pages   = {329--341},
     Month   = Jul,
     Year    = 1991}
     % Tech Report Number = {TR90-137}

4. @InProceedings{KMT:A+T2,
     Author    = {K. Kennedy and K. S. McKinley and C. Tseng},
     Title     = {Analysis and Transformation in the {ParaScope Editor}},
     BookTitle = ICS91,
     Address   = Cologne,
     Month     = Jun,
     Year      = 1991}
     % Tech Report Number = {TR90-143}
--
Paul Havlak Dept. of Comp. Science, A.V. Williams Bldg.
Proto-Research Associate University of Md, College Park MD 20742-3255
Compiling for High Performance    (301) 405-2697    havlak@cs.umd.edu