Omega test and extended tiny version 3.2.2 released

Newsgroups: comp.compilers,comp.parallel,comp.sys.super
From: pugh@cs.umd.edu (Bill Pugh)
Keywords: tools, available, parallel, optimize, report
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
Date: Sat, 13 Nov 1993 13:30:29 GMT

Version 3.2.2 of our implementation of the Omega test is now available for
anonymous ftp from ftp.cs.umd.edu:pub/omega. The Omega test is
implemented in an extended version of Michael Wolfe's tiny tool, a
research/educational tool for examining array data dependence algorithms
and program transformations for scientific computations.


Version 3.2.2 is a maintenance release, consisting mostly of minor bug
fixes to 3.1.x and 3.2.[0-1]. The next major release of our software
will follow a number of substantial internal modifications.


We are implementing a new interface to the Omega test that allows it to
manipulate arbitrary Presburger formulas. Our next release will also
perform exact computation of value-based flow dependences.
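
To make "value-based" concrete, here is an illustration of ours (not
taken from the release). Consider, in Python-style pseudocode:

        for i in range(1, n + 1): a[i] = b[i]          # S1
        for j in range(1, n + 1): a[j] = a[j] + 1      # S2
        for k in range(1, n + 1): c[k] = a[k]          # S3

A memory-based flow dependence from S1 to S3 exists whenever
{ 1 <= i = k <= n }. The value-based question adds "and no intervening
write kills the value":

        { 1 <= i = k <= n } AND NOT ( Exists j s.t. { 1 <= j <= n; j = i } )

Since such a j always exists, the formula is unsatisfiable: S2 kills
every value S1 writes, so there is no value-based flow dependence from
S1 to S3. The negated existential is what takes the question beyond a
single conjunction of linear constraints and into Presburger
arithmetic.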


Features from version 3.0.0 include:


        Array & scalar expansion and privatization


        Reduction dependences


        Induction variable recognition


        Scalar forward substitution (this and induction variable
        recognition are illustrated in the sketch after this list)


        Storage classes


        A prototype partial implementation of a unified framework
        for reordering transformations. This framework unifies
        reordering transformations such as loop interchange, loop
        skewing, loop distribution, loop fusion and statement
        reordering, and can be used to optimize code fragments for
        parallelism and/or locality.


        A FORTRAN to tiny translator (f2t) based on f2c
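
As promised above, a minimal sketch of how induction variable
recognition and forward substitution sharpen dependence analysis.
Plain Python stands in for tiny source, and the rewritten loop is our
own illustration:

        # Before: the subscript is the scalar k, which subscript-based
        # dependence tests cannot see through.
        n = 8
        a = list(range(2 * n + 2))
        k = 0
        for i in range(n):
            k = k + 2
            a[k] = a[k - 2]

        # After recognizing k == 2*(i+1) and forward-substituting it,
        # the access pattern is a[2*i + 2] = a[2*i]: an exact, uniform
        # dependence of distance 1 in i.
        b = list(range(2 * n + 2))
        for i in range(n):
            b[2 * i + 2] = b[2 * i]

        assert a == b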




Features from version 2.1:


        Exact dependence analysis algorithms.


        Array kill analysis (determining when intermediate writes kill
        a dependence). Array kill analysis is required in order to
        perform array privatization or expansion.


        Analysis of when "interesting" assertions can eliminate a
        dependence (a dependence is uninteresting if it is false only
        when the loop that carries it executes fewer than two
        iterations).


This code is freely available for research or commercial use. The
extended version of tiny can be used as an educational or research
tool. Several research groups are incorporating the Omega test
dependence analyzer into their compilers/programming environments,
including groups at the University of Illinois at Urbana-Champaign and
at Georgia Tech. Anyone interested in doing this is strongly urged to
stay in contact with us (at omega@cs.umd.edu), as we are continuing to
update and improve the interface that allows external users to hook
into our data dependence analyzer.




This release consists of three components:


          * The Omega test: A system for performing symbolic
            manipulations of conjunctions of linear constraints over
            integer variables. The operations supported include the
            following (a toy model of all three appears in the sketch
            after this list):

                * Checking if integer solutions exist.

                * Eliminating an existential quantifier. For example,
                  we can transform
                        { Exists b s.t. 0 <= a <= 5; b < a <= 5b }
                  to
                        { 2 <= a <= 5 }

                * Checking whether one set of constraints implies
                  another. For example, we can check whether
                        { 0 <= x <= 3; -3 <= 2y <= 3 } => { 0 <= x+y; x-y <= 3 }

            The interface to the Omega test is described in the file
            doc/omega_interface.tex.


          * The Omega test dependence analyzer: A system built on top
            of the Omega test to analyze array data dependences. This
            system builds the appropriate problems and makes the
            appropriate queries of the Omega test to analyze
            array-based data dependences in programs. The analyzer
            computes dependence difference/direction vectors,
            recognizes reduction dependences, analyzes array kills
            (which enable array expansion and array privatization),
            and determines when assertions can be used to eliminate
            dependences.

            We have attempted to define an interface that allows other
            users to make use of the Omega test dependence analyzer.
            This interface is described in include/lang-interf.generic
            and Lang-Interf3.0 (keep in touch if you plan to use this
            interface; we are continuing to refine it).


          * Extended tiny: We have extended Michael Wolfe's tiny tool.
            The major extensions are:

                * An improved user interface (scrolling, dependence
                  browsing windows)

                * Capabilities that improve the accuracy of data
                  dependence analysis (induction variable recognition,
                  forward substitution)

                * Features that demonstrate the additional information
                  we collect (scalar/array expansion and privatization,
                  interactive dependence zapping)

                * A semi-automatic procedure for converting FORTRAN
                  programs into tiny

                * A prototype implementation of a unified framework
                  for reordering transformations. The framework
                  unifies loop interchange, skewing, distribution,
                  fusion, reversal, statement reordering, and some
                  cases of access normalization, loop alignment, loop
                  peeling and index set splitting.
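
As promised above, a toy model of the three Omega test operations.
This is our own Python sketch, working by enumeration over a small
bounded window; the real Omega test is symbolic and exact and
enumerates nothing:

        from itertools import product

        BOUND = range(-10, 11)  # assumed search window for the toy model

        def solutions(constraint, nvars):
            # All integer points in BOUND^nvars satisfying the constraint.
            return [p for p in product(BOUND, repeat=nvars) if constraint(*p)]

        # 1. Checking if integer solutions exist.
        print(bool(solutions(lambda a: 0 <= a <= 5, 1)))   # True

        # 2. Eliminating an existential quantifier:
        #    { Exists b s.t. 0 <= a <= 5; b < a <= 5b }  ->  { 2 <= a <= 5 }
        projected = {a for a, b in solutions(
            lambda a, b: 0 <= a <= 5 and b < a <= 5 * b, 2)}
        assert projected == set(range(2, 6))

        # 3. Checking whether one set of constraints implies another.
        def implies(p, q, nvars):
            return all(q(*pt) for pt in solutions(p, nvars))

        # { 0 <= x <= 3; -3 <= 2y <= 3 } => { 0 <= x+y; x-y <= 3 } ?
        print(implies(lambda x, y: 0 <= x <= 3 and -3 <= 2 * y <= 3,
                      lambda x, y: 0 <= x + y and x - y <= 3, 2))
        # False: within the window, x = 0, y = -1 is a counterexample.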




Also available are copies of several recent technical reports:




2993.dvi.Z
2993.ps.Z


        William Pugh and David Wonnacott, Eliminating False Data
        Dependences using the Omega Test, Tech Report CS-TR-2993,
        Dept. of Computer Science, Univ. of Maryland, College Park.
        (An earlier version of this paper appeared at the ACM SIGPLAN
        PLDI'92 conference.)


                                Abstract
            Array data dependence analysis methods currently in use generate
            false dependences that can prevent useful program transformations.
            These false dependences arise because the questions asked are
            conservative approximations to the questions we really should be
            asking. Unfortunately, the questions we really should be asking
            go beyond integer programming and require decision procedures for
            a subclass of Presburger formulas. In this paper, we describe how
            to extend the Omega test so that it can answer these queries and
            allow us to eliminate these false data dependences. We have
            implemented the techniques described here and believe they are
            suitable for use in production compilers.




2992.dvi.Z
2992.ps.Z
        William Pugh, Definitions of Dependence Distance, Tech Report
        CS-TR-2992.1, Dept. of Computer Science, Univ. of Maryland,
        College Park. Originally published November 1992, revised
        April 1993.




                                Abstract
          Data dependence distance is widely used to characterize data
          dependences in advanced optimizing compilers. The standard
          definition of dependence distance assumes that loops are
          normalized (have constant lower bounds and a step of 1);
          there is not a commonly accepted definition for unnormalized loops.
          We have identified several potential definitions, all of which
          give the same answer for normalized loops. There are a number
          of subtleties involved in choosing between these definitions,
          and no one definition is suitable for all applications.
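
A quick illustration of the subtlety (our example, not the report's).
In the unnormalized loop

        a = list(range(40))
        for i in range(1, 10, 2):   # i = 1, 3, 5, 7, 9; step 2
            a[i + 2] = a[i]

the value written at iteration i is read two values of i later but
only one trip of the loop later, so the dependence distance is 2
measured in the loop variable and 1 measured in normalized iteration
counts. For a step-1 loop the two measures coincide, which is why the
standard definition never has to choose.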




2994.dvi.Z
2994.ps.Z
        William Pugh and Dave Wonnacott, Static Analysis of Upper and
        Lower Bounds on Dependences and Parallelism, Tech Report
        CS-TR-2994.1, Dept. of Computer Science, Univ. of Maryland,
        College Park. Originally published November 1992, revised
        September 1993.


                                Abstract


          Existing compilers often fail to parallelize sequential code, even
          when a program can be manually transformed into parallel form
          by a sequence of well understood transformations (as is the case
          for many of the Perfect Club Benchmark programs). These failures
          can occur for several reasons: the code transformations implemented
          in the compiler may not be sufficient to produce parallel code, the
          compiler may not find the proper sequence of transformations, or the
          compiler may not be able to prove that one of the necessary
          transformations is legal. When a compiler fails to parallelize a
          program, the programmer may attempt to do so. This task is
          complicated by the fact that the programmer may not know which parts
          of the program can be parallelized, or what kinds of transformations
          are needed. The compiler generally does not give feedback about which
          (unparallelized) sections of the program are provably sequential,
          and which the compiler could not prove to be parallel or sequential,
          and thus might be parallel.


          Standard program transformations and dependence abstractions cannot
          be used to provide this feedback.


          In this paper, we propose a two-step approach to the search for
          parallelism in sequential programs: We first construct a set of
          dependence relations that provide complete information about which
          iterations of which statements can be executed concurrently. We
          produce several different sets of dependence relations, corresponding
          to different assumptions about which dependences must be respected, and
          which might be eliminated through additional analysis, transformations
          and user assertions. We then try to identify the kinds of reordering
          transformations that could be used to exploit scalable parallelism
          and the amount of parallelism that can be extracted by them.


          This approach lets us distinguish inherently sequential code from code
          that contains unexploited parallelism. It also produces information
          about the kinds of transformations that will be needed to parallelize
          the code, without worrying about the order of application of the
          transformations. Furthermore, when our dependence test is inexact,
          we can identify which unresolved dependences inhibit parallelism
          by comparing the effects of assuming dependence or independence.
          We are currently exploring the use of this information in
          programmer-assisted parallelization.






2995.dvi.Z
2995.ps.Z
        Wayne Kelly and William Pugh, A Framework for Unifying
        Reordering Transformations, Tech Report CS-TR-2995.1, Dept. of
        Computer Science, Univ. of Maryland, College Park. Originally
        published November 1992, revised April 1993.


                                Abstract


          We present a framework for unifying iteration reordering transformations
          such as loop interchange, loop distribution, skewing, tiling, index set
          splitting and statement reordering. The framework is based on the idea
          that a transformation can be represented as a schedule that maps the
          original iteration space to a new iteration space. The framework is
          designed to provide a uniform way to represent and reason about
          transformations. As part of the framework, we provide algorithms
          to assist in the building and use of schedules. In particular, we
          provide algorithms to test the legality of schedules, to align schedules
          and to generate optimized code for schedules.
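
As a minimal sketch of the schedule idea (ours; the names and the
legality test are assumptions, with a simple permutation standing in
for the framework's more general schedules):

        # A schedule maps iteration-space vectors to a new execution
        # order; loop interchange is the map (i, j) -> (j, i). A schedule
        # is legal if no dependence is reversed, i.e. every mapped
        # distance vector stays lexicographically non-negative.
        def lex_nonneg(v):
            for x in v:
                if x > 0:
                    return True
                if x < 0:
                    return False
            return True  # all zeros: dependence stays within one iteration

        def legal(schedule, dep_vectors):
            return all(lex_nonneg(schedule(d)) for d in dep_vectors)

        interchange = lambda v: (v[1], v[0])

        print(legal(interchange, [(1, 0)]))   # True:  (1, 0) -> (0, 1)
        print(legal(interchange, [(1, -1)]))  # False: (1, -1) -> (-1, 1)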








3108.ps.Z
        Wayne Kelly and William Pugh, Determining Schedules Based on
        Performance Estimation, Tech Report CS-TR-3108, Dept. of
        Computer Science, Univ. of Maryland, College Park, July 1993.


                    Abstract


        In an earlier report (CS-TR-2995.1, above) we presented a framework for unifying iteration
        reordering transformations such as loop interchange, loop distribution,
        loop skewing and statement reordering. The framework provides a
        uniform way to represent and reason about transformations. However, it
        does not provide a way to decide which transformation(s) should be applied
        to a given program.


        This paper describes a way to make such decisions within the context of
        the framework. The framework is based on the idea that a transformation
        can be represented as a schedule that maps the original iteration space to
        a new iteration space. We show how we can estimate the performance of a
        program by considering only the schedule from which it was produced.


        We also show how to produce an upper bound on performance given only a
        partially specified schedule. Our ability to estimate performance
        directly from schedules and to do so even for partially specified
        schedules allows us to efficiently find schedules which will produce
        good code.
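
A purely illustrative toy (our proxy, not the report's cost model):
one schedule-level signal that needs nothing but the schedule and the
access patterns is whether the innermost loop of the transformed nest
walks memory with stride 1.

        # Score a loop permutation by the memory stride its innermost
        # loop induces for an access; stride-1 is the locality-friendly
        # case, so lower scores suggest better schedules.
        def innermost_stride(order, strides):
            # order: loop variables outermost-to-innermost, e.g. ('i', 'j')
            # strides: memory stride of the access in each variable
            return strides[order[-1]]

        # a[i][j], row-major with 100-element rows: stride 100 in i, 1 in j.
        a_ij = {'i': 100, 'j': 1}

        print(innermost_stride(('i', 'j'), a_ij))  # 1   -> good locality
        print(innermost_stride(('j', 'i'), a_ij))  # 100 -> poor locality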