
Related articles:
Questions about partial redundancy elimination baxter@Austin.slcs.slb.com (1993-04-30)
Re: Questions about partial redundancy elimination preston@dawn.cs.rice.edu (1993-05-01)
Re: Questions about partial redundancy elimination bill@amber.csd.harris.com (1993-05-03)
Re: Questions about partial redundancy elimination max@nic.gac.edu (1993-05-04)
Re: Questions about partial redundancy elimination preston@dawn.cs.rice.edu (1993-05-05)
Questions about partial redundancy elimination jk@informatik.uni-kiel.dbp.de (1993-05-17)
Re: Questions about partial redundancy elimination uday@cse.iitb.ernet.in (1993-05-18)

Newsgroups: comp.compilers
From: baxter@Austin.slcs.slb.com (Ira Baxter)
Keywords: optimize, question
Organization: Schlumberger Laboratory for Computer Science
Date: Fri, 30 Apr 1993 20:58:15 GMT

There's an interesting paper in the most recent SIGPLAN on elimination of

partial redundancies by computing where expressions can be moved in a

control flow graph:

@article(Drechsler93a:efficient-lazy-code-motion,
  author  = "Karl-Heinz Drechsler and Manfred P. Stadel",
  title   = "{A Variation of Knoop, R{\"u}thing and Steffen's LAZY CODE MOTION}",
  journal = "SIGPLAN Notices",
  volume  = 28,
  number  = 5,
  pages   = "29--38",
  month   = may,
  year    = 1993,
)

It shows how to compute "partial-redundancy" information efficiently,
removing invariants from loops and deleting redundant computations on a
control-flow graph representation. A two-phase process first determines
the earliest possible placements that allow maximal partial-redundancy
elimination; the second phase then determines the latest possible
placements, to minimize the length of value lifetimes (possibly saving
space for temporary variables, and the time to spill them), making the
generated code more efficient. [For those of us trying to optimize codes
having huge arrays, this is pretty attractive; one temp too many and
we're dead.]

The scheme requires a straightforward iterative solution of 3
unidirectional sets of boolean equations, computing (roughly):

AVAILABLE(e,b)      [is expression e available at entry to block b?]
ANTICIPATABLE(e,b)  [could e be usefully computed in b?]
LATER(e,b1,b2)      [can the computation of e be postponed past the
                     edge from b1 to b2?]

Derived equations tell where to insert and delete expressions:

INSERT(e,b1,b2)  says to insert a computation of e
                 "along the arc" between blocks b1 and b2;
DELETE(e,b1)     says to delete the computation of e
                 from block b1.
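[To make the flavor of these systems concrete, here is a minimal sketch
of just one of the three unidirectional boolean systems, AVAILABLE,
solved by the usual iterative fixed-point method over sets. This is not
the paper's algorithm; the block names, GEN/KILL sets, and the tiny
diamond CFG below are invented for illustration.]

```python
def available(cfg, entry, gen, kill, exprs):
    """Available-expressions analysis over a CFG given as a
    successor map {block: [successor, ...]}.
      avail_in[b]  = intersection over preds p of avail_out[p]
      avail_out[p] = gen[p] | (avail_in[p] - kill[p])
    """
    # Optimistic initialization: everything available except at entry.
    avail_in = {b: set() if b == entry else set(exprs) for b in cfg}
    changed = True
    while changed:
        changed = False
        for b in cfg:
            preds = [p for p in cfg if b in cfg[p]]
            if not preds:                      # entry block
                new_in = set()
            else:
                new_in = set(exprs)
                for p in preds:
                    new_in &= gen[p] | (avail_in[p] - kill[p])
            if new_in != avail_in[b]:
                avail_in[b] = new_in
                changed = True
    return avail_in

# Diamond CFG: B1 -> {B2, B3} -> B4.  "a+b" is computed in B1 and B3
# but killed in B2, so it is only partially available at B4.
cfg   = {"B1": ["B2", "B3"], "B2": ["B4"], "B3": ["B4"], "B4": []}
exprs = {"a+b"}
gen   = {"B1": {"a+b"}, "B2": set(), "B3": {"a+b"}, "B4": set()}
kill  = {"B1": set(), "B2": {"a+b"}, "B3": set(), "B4": set()}

print(available(cfg, "B1", gen, kill, exprs))
```

[ANTICIPATABLE is the same machinery run backwards over successors, and
LATER is a third such system defined on edges; the paper's contribution
is the particular combination, not the iteration itself.]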

The paper contains the remark: "It is assumed that commands computing
expressions deliver their results to pseudo-registers, and that all
computations of the same expression always use the same pseudo-register
as the result register."

This seems to imply that common-subexpression *detection* (if not
elimination), complete with verification of identical control
dependences, has already taken place; otherwise multiple instances of
the same expression would have no such guarantee. That seems rather
circular, because the proposed computation determines expression
availability at a point, which seems like a prerequisite for this. I'm
assuming I've missed something; anybody want to set me straight?

I find it interesting that partially-redundant *expression* computations

are eliminated. Is there a reason that such techniques aren't used to

eliminate redundant *effects*? (i.e., "Common Procedural Elimination"?)

Have techniques such as those of this paper been used to do so? [I ask

because our synthesis system refines abstract concepts into lower level

ones, sometimes of a procedural nature [[e.g., map SUM(X) into

LET[S;S=0;DO I,LENGTH(X),S=S+X(I) END;S] ]]; if an abstract concept is

used twice, then we'll end up with the identical effect twice, which seems

redundant redundant to me, and I'd like to do something about it. Yes,

yes, we plan to do CSE at the abstract level too, but I'd like to know

whether it has been done at a low level.]
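[By way of illustration, the low-level redundancy I mean could be
attacked like this. Everything here is invented for the sketch: the
tuple IR, the purity assumption, and the write-set bookkeeping are
stand-ins for a real analysis. If SUM(X) is refined twice with no
intervening writes to X, the second loop's effect duplicates the first's
and can be replaced by a copy of its result.]

```python
def eliminate_duplicate_effects(stmts, writes):
    """stmts:  list of (result_var, body) pairs, where body is a
               hashable description of an effectful computation,
               e.g. ("sum", "X") for the refined SUM(X) loop.
       writes: maps statement index -> set of variables written there.
    Replaces a repeated body with a copy of the earlier result, as long
    as none of its inputs were written in between."""
    seen = {}        # body -> result_var of first occurrence
    killed = set()   # inputs invalidated by intervening writes
    out = []
    for i, (res, body) in enumerate(stmts):
        inputs = set(body[1:])
        if body in seen and not (inputs & killed):
            out.append((res, ("copy", seen[body])))  # reuse earlier result
        else:
            seen[body] = res
            out.append((res, body))
        killed |= writes.get(i, set())
    return out

# SUM(X) refined twice, with an unrelated statement in between.
stmts = [("S1", ("sum", "X")),
         ("T",  ("add", "S1", "1")),
         ("S2", ("sum", "X"))]
print(eliminate_duplicate_effects(stmts, {}))
```

[The second ("sum", "X") becomes ("copy", "S1"); a write to X between
the two occurrences would suppress the replacement.]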

Last, but not least, how are commutative operators handled with such a
technique? [As an example, I'd like to move motionable subsets of a
multi-operand ADD operation.]
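[One common approach, not taken from the paper, is to canonicalize
operand order before redundancy detection, so that a+b and b+a map to
the same key; a loop-invariant subset of a multi-operand ADD can then be
split out as its own hoistable sub-ADD. The trivial membership test for
loop-variance below is a stand-in for real loop analysis.]

```python
def canonical_add(operands):
    # Commutative + associative: sort operands so textual order is
    # irrelevant when comparing or hashing expressions.
    return ("ADD",) + tuple(sorted(operands))

def split_invariant(operands, loop_varying):
    # Partition a multi-operand ADD into a hoistable invariant part
    # and the part that must stay in the loop.
    inv = sorted(o for o in operands if o not in loop_varying)
    var = sorted(o for o in operands if o in loop_varying)
    return inv, var   # ADD(inv) -> temp outside the loop; ADD(var + [temp]) inside

# a+b and b+a now look identical to a redundancy detector:
assert canonical_add(["b", "a"]) == canonical_add(["a", "b"])

# ADD(i, c1, c2) inside a loop over i: c1+c2 is motionable.
inv, var = split_invariant(["i", "c1", "c2"], loop_varying={"i"})
print(inv, var)   # ['c1', 'c2'] ['i']
```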

--

Ira Baxter email: baxter@slcs.austin.slb.com ph: 1-512-331-3714

Schlumberger Laboratory for Computer Science

8311 North RR 620 [for snail mail, use PO Box 200015 instead]
