Re: Optimizing Across && And ||


Newsgroups: comp.compilers
From: bill@amber.ssd.csd.harris.com (Bill Leonard)
Keywords: C, optimize, design
Organization: Harris Computer Systems, Ft. Lauderdale FL
References: 95-02-110 95-02-190
Date: Tue, 28 Feb 1995 19:54:30 GMT

glew@ichips.intel.com (Andy Glew) writes:
> You sound like every other compiler writer I have ever dealt with!!!!


Imagine that!


> One thing I have tried to encourage compiler folk to do is to make the
> back-end recursive or reentrant - have every optimization read in and
> write out the same intermediate form, along with arbitrarily
> extensible (typed) annotations.
>
> Ideally, the optimizations would not be invoked in a "phase order",
> but would instead be invoked with a particular scope (subtree, trace,
> hyperblock). Some optimizations might place constraints on the scopes
> they accept - truly global optimizations might require full program
> scope.
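
To make that concrete, here is a rough sketch (my invention, not Andy's
actual design) of what "one intermediate form with arbitrarily
extensible, typed annotations" might look like.  The node layout and
annotation kinds are made up for illustration:

#include <stdio.h>
#include <stdlib.h>

enum ann_kind { ANN_LIVENESS, ANN_PROFILE, ANN_ALIAS };  /* extensible */

struct annotation {
    enum ann_kind kind;
    void *data;                  /* pass-specific payload              */
    struct annotation *next;
};

struct ir_node {
    const char *op;              /* toy IR: opcode name only           */
    struct annotation *anns;
};

/* Any pass may hang its own data off a node without the others caring. */
static void annotate(struct ir_node *n, enum ann_kind k, void *data)
{
    struct annotation *a = malloc(sizeof *a);
    a->kind = k;
    a->data = data;
    a->next = n->anns;
    n->anns = a;
}

static void *find_ann(struct ir_node *n, enum ann_kind k)
{
    struct annotation *a;
    for (a = n->anns; a; a = a->next)
        if (a->kind == k)
            return a->data;
    return NULL;
}

int main(void)
{
    struct ir_node add = { "add", NULL };
    static unsigned live_set = 0x05;     /* made-up liveness bits      */
    unsigned *bits;

    annotate(&add, ANN_LIVENESS, &live_set);
    bits = find_ann(&add, ANN_LIVENESS);
    printf("%s: live=%02x\n", add.op, bits ? *bits : 0);
    return 0;
}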


All global optimizations I know of require some form of global data-flow
information, which cannot be computed piecemeal. This makes it
impractical to do those global optimizations on subtrees or other
subdivisions of a routine, because changing one subtree changes the
global data-flow solution for all the others.
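
Classical liveness analysis illustrates the point: the standard
algorithm sweeps the *whole* flow graph repeatedly until nothing
changes, so there is no meaningful "liveness of one subtree" to compute
in isolation.  A minimal sketch (block structure and bit-vector
encoding invented for illustration):

#include <stdio.h>

#define NBLOCKS 4

/* Hypothetical local sets, one bit per variable: use = read before
   written in the block, def = written in the block.  succ lists each
   block's successors (-1 ends the list). */
static unsigned use_[NBLOCKS] = { 0x01, 0x02, 0x06, 0x01 };
static unsigned def_[NBLOCKS] = { 0x02, 0x04, 0x01, 0x00 };
static int succ[NBLOCKS][2]   = { {1,2}, {3,-1}, {3,-1}, {-1,-1} };

static unsigned live_in[NBLOCKS], live_out[NBLOCKS];

int main(void)
{
    int changed = 1;
    int b, i;

    while (changed) {                /* iterate to a global fixpoint   */
        changed = 0;
        for (b = NBLOCKS - 1; b >= 0; b--) {
            unsigned out = 0, in;
            for (i = 0; i < 2 && succ[b][i] >= 0; i++)
                out |= live_in[succ[b][i]];  /* meet over successors   */
            in = use_[b] | (out & ~def_[b]);
            if (in != live_in[b] || out != live_out[b]) {
                live_in[b]  = in;
                live_out[b] = out;
                changed = 1;         /* one block moved: sweep again   */
            }
        }
    }
    for (b = 0; b < NBLOCKS; b++)
        printf("block %d: in=%02x out=%02x\n", b, live_in[b], live_out[b]);
    return 0;
}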


There has been some research on incremental optimization, wherein the
global data-flow information can be updated "in place", but it really
requires that the optimizer be designed with this in mind from the very
beginning. Even incremental updating of the local data-flow information
(which is used to compute the global information) is difficult -- some
years ago I undertook to do this for our optimizer. The time savings
were very gratifying, especially for very large subroutines, but it took
several *years* to get all the bugs out (if one can ever say "all the
bugs" :-).
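
The incremental idea itself looks roughly like this: recompute only the
edited block, then ripple the change toward its predecessors only as
far as the solution actually moves.  This sketch (same toy encoding as
above, and vastly simplified from anything in a real optimizer) hides
all of the hard parts that took those years to debug:

#include <stdio.h>

#define NBLOCKS 4

static unsigned use_[NBLOCKS] = { 0x01, 0x02, 0x06, 0x01 };
static unsigned def_[NBLOCKS] = { 0x02, 0x04, 0x01, 0x00 };
static int succ[NBLOCKS][2] = { {1,2}, {3,-1}, {3,-1}, {-1,-1} };
static int pred[NBLOCKS][2] = { {-1,-1}, {0,-1}, {0,-1}, {1,2} };

static unsigned live_in[NBLOCKS], live_out[NBLOCKS];

/* Recompute one block's sets; return nonzero if live_in changed. */
static int update(int b)
{
    unsigned out = 0, in;
    int i, changed;

    for (i = 0; i < 2 && succ[b][i] >= 0; i++)
        out |= live_in[succ[b][i]];
    in = use_[b] | (out & ~def_[b]);
    changed = (in != live_in[b]);
    live_in[b]  = in;
    live_out[b] = out;
    return changed;
}

/* Block b was edited: fix it, then ripple toward predecessors only
   while the solution actually moves. */
static void repair(int b)
{
    int stack[64], top = 0, i, n;   /* 64 is plenty for this toy graph */

    stack[top++] = b;
    while (top > 0) {
        n = stack[--top];
        if (update(n))
            for (i = 0; i < 2 && pred[n][i] >= 0; i++)
                stack[top++] = pred[n][i];
    }
}

int main(void)
{
    int b;

    repair(NBLOCKS - 1);    /* initial solve (works here because every
                               block reaches the exit block)           */
    use_[1] |= 0x08;        /* "edit" block 1: it now reads variable 3 */
    repair(1);              /* repair only the affected region         */
    for (b = 0; b < NBLOCKS; b++)
        printf("block %d: in=%02x out=%02x\n", b, live_in[b], live_out[b]);
    return 0;
}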


Another consideration is the interdependence of optimizations. The
phase-ordering approach allows the compiler designer to arrange the
order so as to get the most bang for the buck from the interaction of
different optimizations. Also, the design of a particular optimization
phase may prevent other optimizations from running afterward. (For
instance, if you use partial redundancy elimination to do
common-subexpression elimination, variable propagation done afterward
may undo the CSE.)
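
Here is a made-up example of that interaction (not from any particular
compiler):

/* After CSE/PRE, the common expression has been factored into t: */
int f(int a, int b)
{
    int t = a + b;        /* temporary inserted by CSE/PRE */
    int x = t * 2;
    int y = t + 3;        /* both uses share one addition  */
    return x - y;
}
/* If an aggressive "variable propagation" (forward substitution) pass
   runs afterward and substitutes t's defining expression into each use,

       int x = (a + b) * 2;
       int y = (a + b) + 3;

   it has quietly recreated the very redundancy the CSE removed -- hence
   the care needed in choosing what may run after what. */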


> Optimizations should not be done in place, but should be done in a
> manner such that they can be easily backed out. Thus, a particular
> optimization module suggests an optimization, but the framework may
> evaluate several different optimizations, and only take one.


This would tend to be very space intensive. Optimizers already tend to be
memory hogs, so anything that adds to the memory usage is going to be
detrimental to the overall performance of the compiler.
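
To see where the space goes, consider the obvious implementation: every
tentative rewrite must log enough old state to restore it.  A toy
sketch (my construction, with a string-per-instruction "IR" standing in
for a real one):

#include <stdio.h>
#include <string.h>

#define MAXLOG 256

struct insn { char text[32]; };        /* toy IR: one string per insn */

struct undo { int index; struct insn old; };
static struct undo log_[MAXLOG];
static int log_top;

/* Apply a tentative rewrite, logging the old instruction first. */
static void rewrite(struct insn *ir, int i, const char *new_text)
{
    log_[log_top].index = i;
    log_[log_top].old = ir[i];
    log_top++;
    strncpy(ir[i].text, new_text, sizeof ir[i].text - 1);
    ir[i].text[sizeof ir[i].text - 1] = '\0';
}

/* Back out everything logged since `mark`, newest first. */
static void rollback(struct insn *ir, int mark)
{
    while (log_top > mark) {
        log_top--;
        ir[log_[log_top].index] = log_[log_top].old;
    }
}

int main(void)
{
    struct insn ir[2];
    int mark;

    strcpy(ir[0].text, "t = a + b");
    strcpy(ir[1].text, "x = a + b");
    mark = log_top;
    rewrite(ir, 1, "x = t");           /* tentative CSE              */
    /* ...framework evaluates the result, decides it's not a win... */
    rollback(ir, mark);
    printf("%s\n%s\n", ir[0].text, ir[1].text);
    return 0;
}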


> Optimizations should be coroutine oriented, over and above the
> continuation technique described above. Coroutining is one approach
> to solving the compile time constraint - a particular optimization may
> be invoked, allowed to grind for a pre-determined time, and then
> control passed to a second optimization module.


But if you have several of these optimizations in progress, where has all
your memory gone? You may save a few milliseconds of CPU time and cost
minutes of page swapping!
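
Here is a toy model (mine, not Andy's proposal) of the round-robin
scheme, which also shows the objection: every suspended pass keeps its
whole working state resident until it finishes:

#include <stdio.h>
#include <stdlib.h>

struct pass {
    const char *name;
    int step, steps_total;    /* where to resume, and how much work */
    char *state;              /* stands in for data-flow sets, etc. */
    long state_bytes;
};

/* Run one pass for at most `budget` steps, then yield back. */
static int run_slice(struct pass *p, int budget)
{
    if (!p->state)                        /* first activation        */
        p->state = malloc(p->state_bytes);
    while (budget-- > 0 && p->step < p->steps_total)
        p->step++;                        /* one "grind" step        */
    return p->step < p->steps_total;      /* nonzero: not finished   */
}

int main(void)
{
    struct pass passes[3] = {
        { "cse",   0, 5, NULL, 1L << 20 },   /* 1 MB working state  */
        { "sched", 0, 8, NULL, 4L << 20 },
        { "pre",   0, 3, NULL, 2L << 20 }
    };
    int i, live;

    do {                 /* round-robin: all three stay resident     */
        live = 0;
        for (i = 0; i < 3; i++)
            live |= run_slice(&passes[i], 2);
    } while (live);
    for (i = 0; i < 3; i++) {
        printf("%s done, held %ld bytes while suspended\n",
               passes[i].name, passes[i].state_bytes);
        free(passes[i].state);
    }
    return 0;
}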


> Setting up such a framework does take a lot of work, and has some cost
> in compile time. The problem is, at any point in time such a general
> framework will probably only deliver results similar to an ad-hoc
> backend structure - where "ad-hoc" means that the structure has been
> carefully crafted to support the current state of the art. It is only
> over time that the flexible structure would show its advantage, by
> being able to incorporate new optimizations more easily.


As you say, a lot of work -- and can you *prove* that it will be worth
it? I'd say it would take a bold engineer to make such a promise, and a
bold management team that would take the risk that he or she was right.
And while you're doing all this work, the competition is eating your
lunch with an "ad hoc" optimizer that got out the door 3 years before
yours. :-)


> If you are comparing a 5% optimization that helps a wide variety of
> real user programs (e.g. one that helps a wide variety of Visual C++
> programs), versus a 50% optimization that helps one particular SPEC
> benchmark, but nothing else:


No, I wasn't. With today's RISC processors, exploiting parallelism is
still the best thing to do, for *all* user programs. Of course some
benefit more than others, but overall you have a much better chance at
improving the average program by doing, say, instruction scheduling than by
doing an optimization for one particular programming construct.
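
A trivial example of the kind of reordering I mean (invented for
illustration):

/* Two independent loads feeding two adds.  A scheduler is free to
   hoist the second load above the first add, overlapping the loads'
   latencies instead of stalling on each in turn. */
int g(int *a, int *b)
{
    int x = *a + 1;   /* naive order: this add waits on the load of *a */
    int y = *b - 1;   /* scheduled order issues the load of *b first,  */
    return x + y;     /* filling the delay behind the load of *a       */
}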


--
Bill Leonard
Harris Computer Systems Corporation
2101 W. Cypress Creek Road
Fort Lauderdale, FL 33309
Bill.Leonard@mail.hcsc.com