# Parallelizing compiler thesis

## edward@minster.york.ac.uk, Tue, 21 Jun 1994 04:51:21 GMT

From comp.compilers


Newsgroups: comp.compilers,comp.parallel
From: edward@minster.york.ac.uk
Keywords: parallel, available, report, Fortran
Organization: Compilers Central
Date: Tue, 21 Jun 1994 04:51:21 GMT

Following the initial availability of my draft DPhil thesis here at the
University of York titled

"Extracting Data Flow Information for Parallelizing FORTRAN Nested Loop
Kernels"

I'm pleased to say that the much corrected final version is now available,
as before, via ftp at

site: minster.york.ac.uk (144.32.16.26)
directory: /pub/edward

Hopefully the changes will make it much easier to read. I must admit the
draft version wasn't ideal. Thank you to everyone who made comments on the
initial draft. They were all very helpful (even if I haven't incorporated
them all, for one small reason or another). As before, I include the thesis
abstract. I hope you like it.

******************** Thesis Abstract *********************

Currently available parallelizing FORTRAN compilers expend a large amount
of effort in determining data-independent statements in a program, such
that these statements can be scheduled in parallel without the need for
synchronisation. This thesis hypothesises that it is just as important to
derive exact data flow information about the data dependencies where they
exist. We focus on the specific problem of imperative nested loop
parallelization by describing a direct method for determining the {\em
distance vectors} of the inter-loop data dependencies in an $n$-nested
loop kernel. These distance vectors define dependence arcs between
iterations, which are represented as points in $n$-dimensional Euclidean
space.
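To make the notion of a distance vector concrete, here is a brute-force sketch (an illustration only, not the direct analytical method the thesis describes) that enumerates the distance vectors of a doubly nested loop kernel by matching each read against the iteration that wrote the same array element. The function and kernel names are assumptions for the example.

```python
from itertools import product

def distance_vectors(write_subs, read_subs, n):
    """Brute-force the flow-dependence distance vectors of a doubly
    nested loop in which iteration (i, j) writes A[write_subs(i, j)]
    and reads A[read_subs(i, j)], for 1 <= i, j <= n."""
    # Record which iteration writes each array element (the write
    # subscripts here are injective, so at most one writer per element).
    writer = {write_subs(i, j): (i, j)
              for i, j in product(range(1, n + 1), repeat=2)}
    dists = set()
    for i, j in product(range(1, n + 1), repeat=2):
        src = writer.get(read_subs(i, j))
        if src is None:
            continue  # the value read was defined outside the loop nest
        d = (i - src[0], j - src[1])
        # Lexicographically positive distance => a true inter-iteration
        # dependence (the write precedes the read in execution order).
        if d > (0, 0):
            dists.add(d)
    return dists

# Kernel:  DO i = 1,N / DO j = 1,N:  A(i, j) = A(i-1, j+1) + 1
# Its single inter-iteration dependence has distance vector (1, -1).
print(distance_vectors(lambda i, j: (i, j),
                       lambda i, j: (i - 1, j + 1),
                       n=6))  # prints {(1, -1)}
```

Every dependence arc in the iteration space of this kernel is a translate of the same vector, which is what lets a compiler reason about all $N^2$ iterations from a single distance vector.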

To demonstrate some of the benefits gained from deriving such exact data
flow information about a nested loop computation, we show how the implicit
task graph of the computation can be deduced. Deriving this implicit task
graph enables the parallelization of a class of loop kernels previously
thought difficult: those involving data-dependent array variables with
{\em coupled} linear subscript expressions. We consider the
parallelization of these loop kernels using the {\tt DO ACROSS} parallel
construct on shared memory and distributed memory architectures, and we
compare our suggested schemes with other current proposals. We
demonstrate improved execution time profiles when our schemes are adopted.

Also, we show how an exact data independence test can be derived for
multi-dimensional array variables by formulating a dependence constraint
system using the derived dependence distance vectors. Through careful
implementation, and by making approximating assumptions where necessary,
we show that our test remains exact for a very large subset of randomly
generated scenarios and that it exhibits ``tractable'' execution times.
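As a flavour of what a dependence constraint system decides, here is a small sketch (an illustration of the general idea, not the thesis's test): given a write to A(a*i + b) and a read of A(c*j + d), the classic GCD test gives a cheap necessary condition for dependence, and an exhaustive scan of the (bounded) index range then makes the answer exact. All names and the hybrid strategy are assumptions for the example.

```python
from math import gcd

def may_depend(a, b, c, d, lo, hi):
    """Can the write A(a*i + b) and the read A(c*j + d) touch the
    same array element for loop indices i, j in [lo, hi]?"""
    # GCD test: a*i - c*j = d - b has an integer solution only when
    # gcd(a, c) divides d - b -- a cheap necessary condition.
    if (d - b) % gcd(a, c) != 0:
        return False  # provably independent, no scan needed
    # Otherwise fall back to an exact (exhaustive) check that an
    # integer solution actually lies within the loop bounds.
    return any(a * i + b == c * j + d
               for i in range(lo, hi + 1)
               for j in range(lo, hi + 1))

# A(2*i) written while A(2*j + 1) is read: gcd(2, 2) = 2 does not
# divide 1, so the accesses are independent for any loop bounds.
print(may_depend(2, 0, 2, 1, 1, 100))  # prints False
# A(i) written while A(j + 3) is read: dependent whenever i = j + 3.
print(may_depend(1, 0, 1, 3, 1, 10))   # prints True
```

The exhaustive fallback is exact but quadratic in the index range; the thesis's contribution is a constraint formulation over the derived distance vectors that stays exact while remaining tractable.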

------------------------------------------------------------
Edward Walker
Advanced Computer Architecture Group
Dept. of Computer Science
University of York
Heslington, York
YO1 5DD
England

Internet: edward@minster.york.ac.uk
--
