Related articles:
[8 earlier articles]
Re: Optimizing Across && And || bill@amber.ssd.csd.harris.com (1995-02-28)
Re: Optimizing Across && And || bill@amber.ssd.csd.harris.com (1995-02-28)
Re: Optimizing Across && And || ryg@summit.novell.com (1995-03-03)
Re: Optimizing Across && And || leichter@zodiac.rutgers.edu (1995-03-07)
Re: Optimizing Across && And || preston@tera.com (1995-03-08)
Re: Optimizing Across && And || pardo@cs.washington.edu (1995-03-13)
Re: Optimizing Across && And || chase@centerline.com (1995-03-14)
Re: Optimizing Across && And || jqb@netcom.com (1995-03-15)
Safety and legality of optimizations (Re: Optimizing Across && And ||) Vinod.Grover@Eng.Sun.COM (1995-03-18)
Re: Optimizing Across && And || cdg@nullstone.com (1995-03-20)
Re: Optimizing Across && And || daniels@cse.ogi.edu (1995-03-21)
Re: Optimizing Across && And || bill@amber.ssd.hcsc.com (1995-04-03)
Newsgroups: comp.compilers
From: chase@centerline.com (David Chase)
Keywords: optimize, design
Organization: CenterLine Software
References: 95-02-179 95-03-028 95-03-050
Date: Tue, 14 Mar 1995 16:22:06 GMT
leichter@zodiac.rutgers.edu writes:
> Powell's approach to the optimizer was what he called "best simple": Do the
> best job you can while keeping the optimizer itself simple and easy to
> understand (and thus get right).
...
> Despite its simplicity and small size, the compiler was very competitive,
> easily beating out other compilers for Modula-2, but also beating or coming
> close to much larger, more sophisticated compilers for Pascal, C, and Bliss.
> This lesson has to be re-learned periodically. I suppose the intellectual
> heir to Powell's Modula-2 today is lcc - small, fast, simple, but still
> manages to do a much better job than expected.
I think that the approach nowadays would be different. We know
more than we did then, and instead I'd aim for something like "best
framework" -- that is, choose one framework (e.g., SSA) that gets
most of the low-hanging fruit pretty easily, and then do those
optimizations. Which particular variant of SSA you use depends on
which paper you can understand (there are incremental advances every
year), and which optimizations you decide to do also depends on
which paper you can understand. "Simple" here is not always best -- the
SSA transformation, though well-documented and delightfully fast
and elegant, is not the most obvious thing on the face of the earth.
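For readers who haven't seen it, the transformation splits each variable
into one name per assignment and inserts phi-functions to merge values at
control-flow joins. A generic textbook-style illustration (not taken from
any particular compiler):

```
original                  SSA form
x = 1;                    x1 = 1;
if (p)                    if (p)
    x = 2;                    x2 = 2;
y = x + 1;                x3 = phi(x1, x2);
                          y1 = x3 + 1;
```

Because every name now has exactly one definition, use-def chains are
trivial to follow, which is what makes the framework cheap to build
optimizations on.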
There are a couple of instances where it is "too good" -- it's
a relatively straightforward task to write a dead code eliminator
that performs "infinite optimization" on infinite loops (that is,
the loop disappears and control falls through).
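That failure mode is easy to reproduce. Below is a minimal sketch, with an
invented toy IR (the instruction names and tuple format are mine, not any
production compiler's), of a mark-and-sweep dead-code eliminator whose only
roots are side-effecting instructions. Because the loop's branch is not a
root, the entire infinite loop vanishes:

```python
# Mark-and-sweep dead-code elimination over an SSA-style instruction
# list. Toy IR for illustration only: each instruction is a tuple
# (name, op, operand_names). Roots are instructions with visible side
# effects; everything else survives only if a live instruction uses it.

SIDE_EFFECTS = {"print", "store", "return"}

def dce(instrs):
    """Return the subset of instrs reachable from side-effecting roots."""
    defs = {name: (op, ops) for name, op, ops in instrs}
    live = set()
    work = [n for n, (op, _) in defs.items() if op in SIDE_EFFECTS]
    while work:
        n = work.pop()
        if n in live:
            continue
        live.add(n)
        for operand in defs[n][1]:   # mark everything this value uses
            if operand in defs:
                work.append(operand)
    return [i for i in instrs if i[0] in live]

# An infinite counting loop followed by a print of a constant.
# The back-edge branch "b1" is not treated as a root, so the whole
# loop -- phi, increment, branch -- is swept away and control "falls
# through" to the print: the "infinite optimization" described above.
prog = [
    ("i0", "const", []),            # i0 = 0
    ("i1", "phi",   ["i0", "i2"]),  # loop header: i1 = phi(i0, i2)
    ("i2", "add",   ["i1"]),        # i2 = i1 + 1
    ("b1", "br",    ["i2"]),        # back edge, conditional on i2
    ("c1", "const", []),            # c1 = some constant
    ("p1", "print", ["c1"]),        # after the loop: print(c1)
]

print([name for name, _, _ in dce(prog)])  # only c1 and p1 survive
```

An eliminator that wants to preserve termination behavior has to treat
branches (or, more precisely, the control dependences of live code) as
live as well; leaving them out is what makes the loop disappear.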
And yes, I've implemented SSA in a compiler, and it really is
fast, and it really does provide a nice framework for expressing
some useful optimizations. The reason you don't see it as often
as you might hope in production compilers is that they typically
reflect the technology known to and trusted by the implementors
at the last major rewrite (at least, that has been the case
wherever I've known about the details).
David Chase (speaking for myself)
--