

Newsgroups: comp.compilers
From: pugh@cs.umd.edu (Bill Pugh)
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
Date: Sat, 12 Dec 1992 20:57:45 GMT
Keywords: tools, optimize, FTP

Version 3.0.0 of our implementation of the Omega test is now available for
anonymous ftp from ftp.cs.umd.edu:pub/omega. The Omega test is implemented
in an extended version of Michael Wolfe's tiny tool, a research/educational
tool for examining array data dependence algorithms and program
transformations for scientific computations.

New features include:

 * Array & scalar expansion and privatization
 * Reduction dependences
 * Induction variable recognition
 * Scalar forward substitution
 * Storage classes
 * A prototype partial implementation of a unified framework for
   reordering transformations. This framework unifies reordering
   transformations such as loop interchange, loop skewing, loop
   distribution, loop fusion and statement reordering, and can be used
   to optimize code fragments for parallelism and/or locality.
 * A FORTRAN to tiny translator (f2t) based on f2c

Features from version 2.1:

 * Exact dependence analysis algorithms.
 * Array kill analysis (determining when intermediate writes kill a
   dependence). Array kill analysis is required in order to perform
   array privatization or expansion.
 * Analysis of when "interesting" assertions can eliminate a dependence
   (a dependence is uninteresting if it is false only when the loop that
   carries the dependence executes fewer than 2 iterations).
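As a concrete illustration of assertion-based dependence elimination, the
following toy sketch (plain Python; the loop and all names here are invented
for illustration and are not the analyzer's actual interface) enumerates
iteration pairs of a loop "for i = 1 to n: a[i] = a[i-k] + 1" to show that
the flow dependence exists only when n >= k + 1, so an assertion such as
"k > n" eliminates it:

```python
# Toy brute-force dependence check for:  for i = 1 to n: a[i] = a[i-k] + 1
# A flow dependence from the write a[iw] to a later read a[ir-k] exists
# exactly when some pair 1 <= iw < ir <= n satisfies iw == ir - k.
def dependence_exists(n, k):
    return any(iw == ir - k
               for iw in range(1, n + 1)
               for ir in range(iw + 1, n + 1))

print(dependence_exists(10, 3))   # True: e.g. iteration 1 writes a[1], iteration 4 reads it
print(dependence_exists(10, 11))  # False: under the assertion k > n, no such pair exists
```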

This code is freely available for research or commercial use. The extended
version of tiny can be used as an educational or research tool. Several
research groups are incorporating the Omega test dependence analyzer into
their compilers/programming environments, including groups at the
University of Illinois at Urbana-Champaign and at Georgia Tech. Anyone
interested in doing this is strongly urged to stay in contact with us (at
omega@cs.umd.edu), as we are continuing to update and improve the interface
that allows external users to hook into our data dependence analyzer.

This release consists of three components:

 * The Omega test: A system for performing symbolic manipulations of
   conjunctions of linear constraints over integer variables. In
   particular, the operations supported include:

     * Checking if integer solutions exist

     * Eliminating an existential quantifier. For example, we can
       transform
           { Exists b s.t. 0 <= a <= 5; b < a <= 5b }
       to
           { 2 <= a <= 5 }

     * Checking to see if one set of constraints implies another.
       For example, we can check to see if:
           {0 <= x <= 3; -3 <= 2y <= 3} => {0 <= x+y, x-y <= 3}

   The interface to the Omega test is described in the file
   doc/omega_interface.tex
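The quantifier-elimination example can be sanity-checked by brute force over
a small range. This is only a toy enumeration in Python; the Omega test
itself works symbolically, via an extension of integer Fourier-Motzkin
variable elimination:

```python
# Check that { Exists b s.t. 0 <= a <= 5; b < a <= 5b }
# reduces to  { 2 <= a <= 5 }  by enumerating integer values.
def has_witness(a):
    # Any integer witness b must satisfy 1 <= b < a <= 5,
    # so scanning b in 0..5 covers all candidates.
    return 0 <= a <= 5 and any(b < a <= 5 * b for b in range(0, 6))

projected = [a for a in range(-10, 11) if has_witness(a)]
print(projected)  # [2, 3, 4, 5]
```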

 * The Omega test dependence analyzer: A system built on top of the
   Omega test to analyze array data dependences. This system builds
   the appropriate problems and makes the appropriate queries of the
   Omega test to analyze array-based data dependences in programs.
   The analyzer computes data difference/direction vectors, recognizes
   reduction dependences, analyzes array kills (which enables array
   expansion and array privatization), and determines when assertions
   can be used to eliminate dependences.

   We have attempted to define an interface that allows other users
   to make use of the Omega test dependence analyzer. This interface
   is described in include/lang-interf.generic and Lang-Interf3.0
   (keep in touch if you plan to use this interface; we are continuing
   to refine it).
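To illustrate what the difference/direction vectors the analyzer computes
mean, here is a toy brute-force enumeration in Python (the loop nest is
invented for illustration; the analyzer derives these vectors symbolically
with the Omega test rather than by enumeration):

```python
# Loop nest:  for i = 1 to 4:
#               for j = 1 to 4:
#                 a[i][j] = a[i-1][j+1] + 1
N = 4
# Map each iteration (i, j) to the array cell it writes / reads.
writes = {(i, j): (i, j)         for i in range(1, N+1) for j in range(1, N+1)}
reads  = {(i, j): (i - 1, j + 1) for i in range(1, N+1) for j in range(1, N+1)}

# A flow dependence runs from the iteration that writes a cell to every
# lexicographically later iteration that reads the same cell; the
# difference (distance) vector is sink iteration minus source iteration.
diffs = sorted({(ir[0] - iw[0], ir[1] - iw[1])
                for iw, cell_w in writes.items()
                for ir, cell_r in reads.items()
                if cell_w == cell_r and iw < ir})

print(diffs)  # [(1, -1)]: distance vector (1, -1), direction vector (<, >)
```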

 * Extended tiny: We have extended Michael Wolfe's tiny tool. The
   major extensions are:

     * Improved user interface (scrolling, dependence browsing
       windows)

     * Capabilities that improve the accuracy of data dependence
       analysis (induction variable recognition, forward
       substitution)

     * Features that demonstrate the additional information we
       collect (scalar/array expansion and privatization,
       interactive dependence zapping)

     * A semi-automatic procedure for converting FORTRAN programs
       into tiny

     * A prototype implementation of a unified framework for
       reordering transformations. The framework unifies loop
       interchange, skewing, distribution, fusion, reversal,
       statement reordering, and some cases of access
       normalization, loop alignment, loop peeling and index set
       splitting.

Also available are copies of several recent technical reports:

William Pugh & David Wonnacott, Eliminating False Data Dependences using
the Omega Test, Tech. Report CS-TR-2993, Dept. of Computer Science,
Univ. of Maryland, College Park (an earlier version of this paper
appeared at the ACM SIGPLAN PLDI'92 conference).

Abstract

Array data dependence analysis methods currently in use generate false
dependences that can prevent useful program transformations. These false
dependences arise because the questions asked are conservative
approximations to the questions we really should be asking.
Unfortunately, the questions we really should be asking go beyond integer
programming and require decision procedures for a subclass of Presburger
formulas. In this paper, we describe how to extend the Omega test so
that it can answer these queries and allow us to eliminate these false
data dependences. We have implemented the techniques described here and
believe they are suitable for use in production compilers.

William Pugh, The Definition of Dependence Distance, Tech. Report
CS-TR-2992, Dept. of Computer Science, Univ. of Maryland, College Park

Abstract

Data dependence distance is widely used to characterize data dependences
in advanced optimizing compilers. The standard definition of dependence
distance assumes that loops are normalized (have constant lower bounds
and a step of 1); there is no commonly accepted definition for
unnormalized loops.

There are a number of subtleties involved in the definition of dependence
distance for unnormalized loops. In particular, a number of compilers and
programming environments seem to be implicitly using a definition that
can result in non-integral dependence distances. Since non-integral
distances were not anticipated, the compilers decide that such
dependences do not exist. This results in the compiler failing to
recognize a dependence that actually exists, which allows the compiler to
make illegal program transformations. This error existed in Parafrase-2,
Parascope, and the KAP compiler (this error has been reported and may be
corrected in these systems by now).

William Pugh and Dave Wonnacott, Static Analysis of Upper Bounds on
Parallelism, Tech. Report CS-TR-2994, Dept. of Computer Science,
Univ. of Maryland, College Park

Abstract

Substantial parallelism exists in all of the Perfect Benchmarks codes,
although existing automatic techniques are able to find it in only a few
cases. Exposing much of this parallelism required a careful, manual
examination of all the dependences that appeared to prevent parallelism,
and the development and manual application of new program transformations
that have not yet been automated.

We describe methods for statically analyzing a program to determine which
code segments must either be run sequentially or rewritten using a new
algorithm, and which might contain parallelism that automatic techniques
are unable to expose. We can also provide some feedback about the kinds
of additional, possibly manual, analysis or transformations that may be
required to expose the parallelism.

In order to determine upper bounds on parallelism, we describe methods
for computing exact data dependence information, incorporating
information about array kills and conditional dependences. In cases
involving non-linear references, our information is inexact, but we
calculate upper and lower bounds on the true dependences. Since we do
not have to compute this more exact information for every array reference
pair in a program, but only for those pairs for which an apparent
dependence threatens our ability to run part of the program in parallel,
the increased cost of performing this analysis should not be prohibitive.

Wayne Kelly and William Pugh, Generating Schedules and Code within a
Unified Reordering Transformation Framework, Tech. Report CS-TR-2995,
Dept. of Computer Science, Univ. of Maryland, College Park

Abstract

We discuss using schedules as a framework for unifying reordering
transformations such as loop interchange, loop skewing, loop distribution
and loop alignment. This framework subsumes unimodular loop
transformations. We describe algorithms to align schedules and to
produce the transformed code.

This framework is capable of handling more complex schedules than we
currently generate, such as schedules that describe loop blocking and
interleaving. We are able to check the validity of these schedules and
produce appropriate transformed code.
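The unimodular view that this framework subsumes can be sketched in a few
lines of Python. This is a toy illustration only (the distance vector and
matrices are invented, and the framework in the report handles far more
than unimodular transformations): a transformation given by a unimodular
matrix T is legal when T maps every dependence distance vector to a
lexicographically positive vector.

```python
# Apply a unimodular iteration-space transformation and check legality.
def transform(T, v):
    # Integer matrix-vector product T * v.
    return tuple(sum(T[r][c] * v[c] for c in range(len(v)))
                 for r in range(len(T)))

def lex_positive(v):
    # The first nonzero component must be positive.
    for x in v:
        if x != 0:
            return x > 0
    return False

d = (1, -1)                       # an example dependence distance vector
T_interchange = [[0, 1], [1, 0]]  # loop interchange
T_skew = [[1, 0], [1, 1]]         # skew the inner loop by the outer index

print(transform(T_interchange, d))  # (-1, 1): not lex-positive, so
                                    # interchange alone is illegal here
print(transform(T_skew, d))         # (1, 0): lex-positive, so skewing is
                                    # legal (and enables a later interchange)
```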
