Parallelizing compiler thesis available

edward@minster.york.ac.uk
Thu, 14 Apr 1994 05:26:05 GMT


Newsgroups: comp.compilers
From: edward@minster.york.ac.uk
Keywords: parallel, report, available, FTP
Organization: Compilers Central
Date: Thu, 14 Apr 1994 05:26:05 GMT

      I would welcome any criticism of my recently submitted DPhil
thesis, titled


      "Extracting dataflow information for parallelizing FORTRAN nested loop
kernels".


      The abstract of the thesis follows:


---------------------------Start of Abstract--------------------


Current parallelizing FORTRAN compilers, for massively parallel
processor architectures, expend a large amount of effort in identifying
data-independent statements in a program. Such data-independent statements
can then be scheduled to execute in parallel without the need for
synchronization. This thesis hypothesises that it is just as important
to derive exact dataflow information about the data dependencies where
they do exist.


We focus on the specific area of FORTRAN nested loop parallelization.
We describe a direct method for determining the distance vectors
for the inter-loop data dependencies in an $n$-nested loop kernel. These
distance vectors define dependence arcs between iterations, represented
as points in $n$-dimensional Euclidean space. We also demonstrate some
of the benefits gained from deriving such exact dataflow information
about a nested loop computation.
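

As a rough illustration of what a distance vector captures (a toy
brute-force sketch in Python, not the direct method developed in the
thesis; the loop nest, bounds and subscripts are invented for the
example), consider:

# Illustrative sketch only: brute-force enumeration of the dependence
# distance vectors for the loop nest
#
#     DO i = 1, N
#       DO j = 1, N
#         A(i, j) = A(i-2, j-1) + 1
#
# Each iteration is a point (i, j) in 2-dimensional space; a flow
# dependence runs from the iteration that writes A(x, y) to the later
# iteration that reads it, and the distance vector is their difference.

N = 6

def write_index(i, j):          # subscripts of the write A(i, j)
    return (i, j)

def read_index(i, j):           # subscripts of the read A(i-2, j-1)
    return (i - 2, j - 1)

iters = [(i, j) for i in range(1, N + 1) for j in range(1, N + 1)]
writes = {write_index(i, j): (i, j) for (i, j) in iters}

distances = set()
for (i, j) in iters:
    src = writes.get(read_index(i, j))    # iteration that wrote the value read here
    if src is not None and src < (i, j):  # source precedes sink in loop order
        distances.add((i - src[0], j - src[1]))

print(distances)                # {(2, 1)} -- a single constant distance vector

A real compiler cannot enumerate the iteration space like this; the point
of the direct method is to obtain the same vectors analytically.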


Firstly, we show how implicit task graph information about the nested loop
computation can be deduced, allowing the parallelization of a class of
loop kernels previously thought difficult: those involving data-dependent
array variables with coupled linear subscript expressions. We consider
the parallelization of these loop kernels using the DO ACROSS parallel
construct on shared-memory and distributed-memory architectures. We also
compare and contrast our suggested schemes with other current proposals,
and demonstrate improved speedup profiles from the use of our schemes.
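

The following is a minimal sketch of the post/wait synchronization that a
DO ACROSS execution relies on, again in Python and invented for
illustration rather than taken from the thesis: iteration i waits for the
value produced by iteration i-1 (dependence distance 1) before executing
its dependent statement.

# Illustrative DO ACROSS sketch (shared memory, one thread per outer
# iteration).  This shows the generic post/wait pattern only, not the
# particular schemes evaluated in the thesis.
import threading

N = 8
A = [0] * (N + 1)
posted = [threading.Event() for _ in range(N + 1)]
posted[0].set()                 # iteration 0 acts as a dummy predecessor

def iteration(i):
    # ...independent work for iteration i could run here, unsynchronized...
    posted[i - 1].wait()        # wait for the value produced by iteration i-1
    A[i] = A[i - 1] + i         # cross-iteration dependent statement
    posted[i].set()             # allow iteration i+1 to proceed

threads = [threading.Thread(target=iteration, args=(i,)) for i in range(1, N + 1)]
for t in threads: t.start()
for t in threads: t.join()
print(A)                        # [0, 1, 3, 6, 10, 15, 21, 28, 36]

The closer the synchronization point sits to the dependent statement, the
more of each iteration can overlap with its predecessors, which is where
the exact distance information pays off.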


Secondly, we show how an exact data independence test can be derived for
multi-dimensional array variables by formulating a dependence constraint
system using the derived dependence distance vectors. Through careful
implementation, we show that our test exhibits "tractable" execution times:
a mean execution time of 0.7 seconds over a very large set of randomly
generated scenarios. We therefore suggest that it is suitable for use in a
parallelizing compiler.
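

For flavour only, a naive but exact independence check for one invented
pair of references is sketched below in Python; it enumerates the bounded
iteration space rather than solving a constraint system the way the
thesis does, but it decides the same question.

# Illustrative sketch of an exact (but naive) data-independence test for
# a pair of multi-dimensional array references in the loop nest
#
#     DO i = 1, N
#       DO j = 1, N
#         A(2*i + 1, j) = ...
#         ...           = A(2*i, j + 3)
#
# The write and the read touch the same element iff the constraint system
#     2*i1 + 1 = 2*i2,   j1 = j2 + 3,   1 <= i1, i2, j1, j2 <= N
# has an integer solution.  Here we simply enumerate, which is exact for
# bounded loops but far slower than a practical test.
from itertools import product

N = 10

def dependent():
    for i1, j1, i2, j2 in product(range(1, N + 1), repeat=4):
        if 2 * i1 + 1 == 2 * i2 and j1 == j2 + 3:
            return True
    return False

print("independent" if not dependent() else "dependent")   # prints "independent"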
-------------------------End of Abstract------------------------


        The thesis can be obtained via anonymous FTP from site
"minster.york.ac.uk" in "/pub/edward/thesis.ps.Z". Any comments or
criticisms are welcome and can be mailed to me directly at
"edward@minster.york.ac.uk".


      Many thanks in advance.


- edward
---------------------------------------------------------
Edward Walker
Advanced Computer Architecture Group
Dept. of Computer Science
University of York
Heslington, York
YO1 5DD
U.K.


Internet: edward@minster.york.ac.uk
tel: +44-0904-432771

