Re: Intermediate Representation (Preston Briggs)
Fri, 10 Aug 90 15:41:46 GMT


Newsgroups: comp.compilers
From: (Preston Briggs)
Keywords: code, optimize, design
Organization: Rice University, Houston
Date: Fri, 10 Aug 90 15:41:46 GMT

I wrote:
>>AST's seem too high-level for doing lots of optimization. We want the
>>details of array accesses, etc. exposed to the optimizer.

>>I believe there are many optimizations that can be carried out
>>most effectively on a high-level representation (for example,
>>those requiring dependence analysis) and many that should be
>>carried out on a low level representation (e.g., CSE elimination,
>>strength reduction).

grover@brahmand.Eng.Sun.COM (Vinod Grover) writes:
>First of all, there is no reason why an AST based IR cannot have low-level
>features (such as array/memory accesses).

>Second, as Bliss-11 showed, global CSE elimination can be done at a high
>level. Similarly, strength reduction isn't that hard at a high level. Karl
>Ottenstein showed, in an IEEE paper several years ago, that strength
>reduction is easy to do on PDGs and can easily be extended to ASTs. (I do
>not have the reference handy but can post it, if anyone is interested.)

I'm mostly presenting my "religious convictions" here; I agree that there
are many ways to skin a cat. However, Ottenstein's paper serves as a
nice illustration.

He discusses strength reduction of induction variables in the context of
DO/FOR-style loops, with the iteration variable clearly marked by the
header node, along with the increment expression and the limit test.

A more general algorithm (say Cocke and Kennedy) would perform strength
reduction over any loop (while loops, loops built out of if's and goto's),
would discover all the induction variables (not just the loop control
variable), and would make provision for linear function test replacement.
Other algorithms extend partial redundancy elimination to discover strength
reductions in linear regions (I've seen versions by Chow and by Dhamdhere).
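As a concrete sketch of what the classic transformation does, here is the
induction-variable case in C (my own toy example, not taken from any of the
papers above): the multiply in the subscript expression is replaced by an
addition carried in a new induction variable.

```c
#include <assert.h>

/* Before strength reduction: one multiply per loop iteration,
   recomputing i*stride on every trip. */
long sum_strided_mul(const long *base, int n, int stride) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += base[i * stride];      /* i*stride recomputed each trip */
    return s;
}

/* After: the expression i*stride becomes its own induction
   variable `off`, advanced by addition instead of multiplication. */
long sum_strided_add(const long *base, int n, int stride) {
    long s = 0;
    long off = 0;                   /* tracks i*stride */
    for (int i = 0; i < n; i++) {
        s += base[off];
        off += stride;              /* strength-reduced update */
    }
    return s;
}
```

A general algorithm finds every such expression in any loop of the flow
graph, not just the marked control variable of a DO header.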

The point is that people are seduced by the apparent simplicity of
analysis of ASTs. Don't be fooled! Without care, you'll have to write
routines to analyze DO loops, WHILE loops, REPEAT-UNTILs, LOOPs with
EXITs, etc... I write routines that work with more traditional flow
graphs (directed graphs) -- all loops look the same to me. Ditto for CASE
statements, IF statements, GOTO's, ad nauseam.
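To make the "all loops look the same" point concrete, here is a toy
flow-graph encoding of my own devising (not any particular compiler's IR):
once control flow is just a directed graph, a loop is simply a back edge
found by depth-first search, whether the source said DO, WHILE, or IF+GOTO.

```c
#include <assert.h>
#include <string.h>

#define MAXN 8                      /* toy limit on flow-graph nodes */

/* DFS from `node`; an edge into a node still on the DFS stack is a
   back edge, i.e. a loop, regardless of the source-level construct. */
static int has_back_edge(int adj[MAXN][MAXN], int n, int node,
                         int on_stack[MAXN], int visited[MAXN]) {
    visited[node] = 1;
    on_stack[node] = 1;
    for (int succ = 0; succ < n; succ++) {
        if (!adj[node][succ]) continue;
        if (on_stack[succ]) return 1;   /* back edge: found a loop */
        if (!visited[succ] &&
            has_back_edge(adj, n, succ, on_stack, visited))
            return 1;
    }
    on_stack[node] = 0;
    return 0;
}

/* Entry node is 0, as in a routine's flow graph. */
int graph_has_loop(int adj[MAXN][MAXN], int n) {
    int on_stack[MAXN] = {0}, visited[MAXN] = {0};
    return has_back_edge(adj, n, 0, on_stack, visited);
}
```

The same few lines handle a WHILE loop, a GOTO-built loop, or a DO loop;
no routine ever needs to know which one the programmer wrote.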

I also wonder about the memory requirements of a detailed AST; that is,
one with all the fun of each multidimensional array access written out in
tree form. PDG's and PDW's are even scarier in this respect.
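To see where the bulk comes from, consider what a fully spelled-out
subscript computes. This is my own illustration, not from the post: every
2-D access a[i][j] carries the whole row-major address expression, a
multiply and an add (plus scaling by element size) per access, each as its
own subtree in a detailed AST.

```c
#include <assert.h>

enum { NROWS = 3, NCOLS = 4 };      /* example dimensions, mine */

/* The flat offset a detailed AST would spell out node by node for
   every single access: a[i][j] -> base + (i*NCOLS + j)*sizeof(elem). */
int flat_index(int i, int j) {
    return i * NCOLS + j;           /* one mul + one add per access */
}
```

Multiply that little tree by every array reference in a scientific loop
nest and the space cost of the "detailed" representation adds up quickly.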

While it may be that AST fans can show ways around my complaints, can they
also give me some positive reasons to switch? What's the gain?

(speaking as a devil's advocate, of course)

Preston Briggs looking for the great leap forward
