
Newsgroups: comp.compilers
From: preston@dawn.cs.rice.edu (Preston Briggs)
Organization: Rice University, Houston
Date: Sun, 4 Oct 1992 16:26:57 GMT
References: 92-10-006
Keywords: optimize, bibliography

V.C. Sreedhar writes:

>pointers to all the optimization problems that are very important for
>generating good code (for scalar, superscalar, and parallel machines)?

For a scalar machine, I think the following optimizations are important
(a small before/after sketch follows the list):

    partial redundancy elimination (includes elimination of
        common subexpressions and loop-invariant expressions)
    value numbering
    dead code elimination (unused expressions)
    constant propagation
    strength reduction
    instruction scheduling
    register allocation
    memory hierarchy management (scalar replacement, blocking for
        cache, loop interchange, unroll and jam)
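
To make a couple of these concrete, here is a small before/after sketch
in C of loop-invariant code motion (a special case of partial redundancy
elimination) and strength reduction. The function and the exact rewrite
are invented for illustration, not taken from any particular compiler:

    /* Before: n * 4 is recomputed on every iteration even though it
       never changes, and a[i] implies an i * sizeof(int) address
       computation inside the loop. */
    int sum_before(int *a, int n)
    {
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += a[i] * (n * 4);
        return sum;
    }

    /* After: the invariant expression is hoisted out of the loop
       (loop-invariant code motion) and the indexed load is reduced
       to a marching pointer (strength reduction). */
    int sum_after(int *a, int n)
    {
        int sum = 0;
        int t = n * 4;
        for (int *p = a; p < a + n; p++)
            sum += *p * t;
        return sum;
    }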

Not much changes for a superscalar machine, except (perhaps) the need for
a more sophisticated instruction scheduling technique.

You asked for a classification in terms of forward, backward, and
bidirectional. These terms usually apply to "problems" whereas I've
cleverly listed a bunch of "optimizations". Typically, many "problems"
(e.g., live variables or reaching definitions) must be solved for a
single optimization. Part of the interesting work in the area is
discovering new ways to accomplish a given optimization in terms of
different data-flow problems. To me, interesting solutions are faster,
smaller, or more effective than earlier attempts. Simpler solutions are
also nice, but not (in my opinion) at the cost of speed, space, or
effectiveness. Of course, incorrect solutions need not apply.
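
As a concrete illustration of one classic backward problem, here is a
minimal iterative solver for live variables in C. The four-block CFG and
the use/def bit vectors are made-up inputs for the sketch; the equations
are in(b) = use(b) | (out(b) & ~def(b)) and out(b) = the union of in(s)
over the successors s of b:

    #include <stdio.h>

    #define NBLOCKS 4
    #define MAXSUCC 2

    /* Invented example CFG: 0 -> 1, 1 -> {2,3}, 2 -> 1, 3 exits.
       Each bit of a vector stands for one variable. */
    static const int succ[NBLOCKS][MAXSUCC] =
        { {1,-1}, {2,3}, {1,-1}, {-1,-1} };
    static const unsigned use[NBLOCKS] = { 0x1, 0x2, 0x6, 0x1 };
    static const unsigned def[NBLOCKS] = { 0x2, 0x4, 0x1, 0x0 };

    int main(void)
    {
        unsigned in[NBLOCKS] = {0}, out[NBLOCKS] = {0};
        int changed = 1;

        while (changed) {               /* iterate to a fixed point */
            changed = 0;
            /* visiting blocks in reverse speeds up a backward problem */
            for (int b = NBLOCKS - 1; b >= 0; b--) {
                unsigned o = 0;
                for (int k = 0; k < MAXSUCC; k++)
                    if (succ[b][k] >= 0)
                        o |= in[succ[b][k]];   /* union over successors */
                unsigned i = use[b] | (o & ~def[b]);
                if (i != in[b] || o != out[b]) {
                    in[b] = i;
                    out[b] = o;
                    changed = 1;
                }
            }
        }
        for (int b = 0; b < NBLOCKS; b++)
            printf("block %d: in=%#x out=%#x\n", b, in[b], out[b]);
        return 0;
    }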

>Swartz (?) and Cocke book and Dragon book are a
>good reference for many of the analysis. I feel these two books are not
>the best ref. for recent results based on SSA-form and Sparse Eval. Graph
>methods.

Schwartz.

Yes, these two books suffer somewhat in that the authors foolishly
ignored future results :-) The only references today are the papers.

title="Detecting Equality of Variables in Programs",

author="Bowen Alpern and Mark N. Wegman and F. Kenneth Zadeck",

booktitle=popl15,

year=1988,

month=jan,

pages="1--11"

A form of global value numbering

title="Global Value Numbers and Redundant Computations",

author="Barry K. Rosen and Mark N. Wegman and F. Kenneth Zadeck",

booktitle=popl15,

year=1988,

month=jan,

pages="12--27"

An aggressive form of partial redundancy elimination

title="An Efficient Method of Computing Static Single Assignment Form",

author="Ron Cytron and Jeanne Ferrante and Barry K. Rosen and

Mark N. Wegman and F. Kenneth Zadeck",

booktitle=popl16,

year=1989,

month=jan,

pages="25--35"

How to build SSA form efficiently

title="Efficiently Computing Static Single Assignment Form and the

Control Dependence Graph",

author="Ron Cytron and Jeanne Ferrante and Barry K. Rosen and Mark

N. Wegman and F. Kenneth Zadeck",

pages="451--490",

journal=toplas,

year=1991,

month=oct,

volume=13,

number=4

Archival version of the POPL paper,

more discussion

title="Automatic Construction of Sparse Data Flow Evaluation Graphs",

author="Jong-Deok Choi and Ron Cytron and Jeanne Ferrante",

booktitle=popl18,

year=1991,

month=jan,

pages="55--66"

An SSA-like method to solve forward and backward

problems

>Also can someone give me reference for dataflow/optimization performed on
>explicit parallel programs? Can we directly use traditional methods into
>the parallel world.

We can continue using traditional forms of analysis; however, some new
problems must be solved (e.g., laying out data across multiple
processors). Note that I would include such things as dependence analysis
(developed to attack vectorization) as important for scalar machines.
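
As a small illustration of what dependence analysis decides, here are two
invented C loops. The first carries a flow dependence from one iteration
to the next; the second (given the non-aliasing promise of restrict) does
not, so it can be vectorized or parallelized:

    /* Loop-carried flow dependence: iteration i reads a[i-1], which
       iteration i-1 wrote, so the iterations cannot safely run in
       parallel or be vectorized as written. */
    void recurrence(double *a, int n)
    {
        for (int i = 1; i < n; i++)
            a[i] = a[i-1] + 1.0;
    }

    /* No loop-carried dependence: each iteration touches only its
       own a[i] and b[i], and restrict rules out aliasing, so
       dependence analysis can prove the loop parallelizable. */
    void independent(double *restrict a, const double *restrict b, int n)
    {
        for (int i = 0; i < n; i++)
            a[i] = b[i] + 1.0;
    }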

You're asking very broad questions. Your best move is probably to start
reading around in the proceedings from conferences such as PLDI, POPL,
Supercomputing, PPoPP, and all the other parallel conferences. Or, if
you're a grad student, discuss all this with your advisor.

Preston Briggs
