Newsgroups: comp.compilers
From: jouvelot@brokaw.lcs.mit.edu (Pierre Jouvelot)
Keywords: code, optimize, design
Organization: MIT
References: <1990Aug08.171640.13892@esegue.segue.boston.ma.us> <1990Aug09.180627.18848@esegue.segue.boston.ma.us> <1990Aug9.225859.10175@rice.edu>
Date: Sun, 12 Aug 90 18:28:59 GMT
In article <1990Aug9.225859.10175@rice.edu> preston@titan.rice.edu
(Preston Briggs) writes:
>>>AST's seem too high-level for doing lots of optimization. We want the
>>>details of array accesses, etc. exposed to the optimizer.
>The point is that people are seduced by the apparent simplicity of
>analysis of ASTs. Don't be fooled! Without care, you'll have to write
>routines to analyze DO loops, WHILE loops, REPEAT-UNTILs, LOOPs with
>EXITs, etc... I write routines that work with more traditional flow
>graphs (directed graphs) -- all loops look the same to me. Ditto for CASE
>statements, IF statements, GOTO's, ad nauseum.
With the AST approach, the cleanest way to deal with this overabundance of
control structures is to define a single "generic" loop construct; all the
surface control structures can then be trivially desugared (e.g., by the
parser) into your own loop form. This is the only loop your transformation
algorithms will ever have to deal with (besides, admittedly, the
unparser).
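
For concreteness, here is a minimal sketch in OCaml (these are hypothetical
types, not the PIPS IR) of what such a desugaring might look like: an AST
with one generic Loop constructor, and a front-end pass that maps WHILE,
REPEAT-UNTIL and DO loops onto it.

(* A minimal sketch, not the actual PIPS IR: a hypothetical AST with a
   single generic Loop constructor, into which the surface-level loop
   forms are desugared by the front end. *)
type expr =
  | Int   of int
  | Var   of string
  | Not   of expr
  | Binop of string * expr * expr

type stmt =
  | Assign of string * expr
  | Seq    of stmt list
  | If     of expr * stmt * stmt
  | Loop   of expr * stmt                  (* the one generic loop: while cond do body *)

(* Surface loops as a parser might first build them. *)
type surface =
  | While  of expr * stmt                  (* test before each iteration *)
  | Repeat of stmt * expr                  (* body first, exit once cond holds *)
  | For    of string * expr * expr * stmt  (* DO i = lo, hi *)

(* Desugar every surface form into the single Loop constructor, so that
   later phases (except the unparser) only ever see Loop. *)
let desugar = function
  | While (cond, body)  -> Loop (cond, body)
  | Repeat (body, cond) -> Seq [ body; Loop (Not cond, body) ]
  | For (i, lo, hi, body) ->
      Seq [ Assign (i, lo);
            Loop (Binop ("<=", Var i, hi),
                  Seq [ body; Assign (i, Binop ("+", Var i, Int 1)) ]) ]

Once desugar has run, an analysis or transformation only has to handle the
Loop case; it never needs to know which surface syntax the loop came from.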
>While it may be that AST fans can show ways around my complaints, can they
>also give me some positive reasons to switch? What's the gain?
It happens that a lot of the transformations applied by
optimizing/vectorizing/parallelizing compilers can be expressed concisely
in a recursive manner, i.e., can be defined by induction on your AST. This
gives you a nice, powerful framework for expressing your algorithms and a data
structure that is perfectly matched to it.
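
As a tiny, hypothetical example of that style (not code from PIPS),
constant folding written by structural induction over an expression AST
is just one recursive clause per constructor:

(* Constant folding by structural induction: the recursion mirrors the
   shape of the AST, one case per constructor. *)
type expr =
  | Int of int
  | Var of string
  | Add of expr * expr
  | Mul of expr * expr

let rec fold e =
  match e with
  | Int _ | Var _ -> e
  | Add (a, b) ->
      (match fold a, fold b with
       | Int x, Int y -> Int (x + y)
       | a', b'       -> Add (a', b'))
  | Mul (a, b) ->
      (match fold a, fold b with
       | Int x, Int y -> Int (x * y)
       | a', b'       -> Mul (a', b'))

(* e.g. fold (Add (Int 1, Mul (Int 2, Int 3))) evaluates to Int 7 *)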
The PIPS parallelizing compiler for Fortran, currently under development
at the Ecole des Mines by Remi Triolet, Francois Irigoin and me, pushes
this "generic" AST approach to its most "extremist" point. The whole
Fortran 77 syntax is expressed in only a couple of dozen different
datatypes. More precisely, the whole IR, plus the information generated by
the different phases of the parallelizer, is described in one page of
domain equations. (We use a tool akin to IDL, called NewGen and developed
in-house, that generates, from a high-level description of domains, the
functions that manipulate values of those types; all the PIPS data types
are implemented with NewGen, so records and other union data structures
are never explicitly manipulated by programmers.)
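
To give a rough idea of what this buys you (the sketch below is
hypothetical and shows neither NewGen's real syntax nor its real output),
domain equations along the lines of "instruction = call + loop + test" and
"statement = label x instruction" would be enough for such a generator to
emit both the concrete types and the constructor/accessor functions that
the compiler phases program against:

(* Hypothetical output of a NewGen-like generator for two toy domains.
   Client code only uses the generated functions, never the record or
   union representation itself. *)
type label = string

(* sum domain:     instruction = call + loop + test *)
type instruction =
  | Call of string
  | Loop of statement list
  | Test of statement * statement

(* product domain: statement = label x instruction *)
and statement = { stmt_label : label; stmt_instruction : instruction }

(* Generated constructor and accessors hide the representation. *)
let make_statement l i = { stmt_label = l; stmt_instruction = i }
let statement_label s = s.stmt_label
let statement_instruction s = s.stmt_instruction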
Pierre
--
Pierre Jouvelot
. CAI, Ecole des Mines, Paris (France): jouvelot@ensmp.fr
. LCS, MIT, Cambridge (USA): jouvelot@brokaw.lcs.mit.edu