Re: Optimization tradeoffs (time vs. space)


From: ames!coherent!dplatt@ncar.UCAR.EDU (Dave Platt)
Newsgroups: comp.compilers
Date: 5 Aug 88 17:29:01 GMT
References: <1853@ima.ISC.COM>
Organization: Coherent Thought Inc., Palo Alto CA

In article <1853@ima.ISC.COM> roy@phri.uucp (Roy Smith) writes:
> Some compilers have options to select what kind of optimization to
> do: space or time. Can somebody give me some idea of how much difference
> it makes which you pick?

Dead-code removal, such as you give in your example, is one sort of
optimization that is Good in both time and space measures; it'll probably
occur in both optimization modes, if it occurs in either.

There are certain other optimizations, though, in which there can be a
real tradeoff between time and space. Some examples:

   for (i=0; i<10; i++)
      a[i] = b[i];

In the "optimize for time" mode, a compiler might rewrite this as

a[0] = b[0];
a[1] = b[1];
/* ... */
a[9] = b[9];
i = 10; /* if the value of "i" is used subsequently */

and thus avoid the loop overhead (and, potentially, the overhead involved
in using register indexing to access entries in the "a" and "b" arrays).

A space-optimizing compiler might call a "move memory to memory" subroutine
to transfer the contents of the arrays, and then (if necessary) set i=10.
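At the source level, the space-optimizing strategy corresponds roughly to the sketch below (the helper name `copy_space`, the `int` element type, and the fixed length are illustrative assumptions, not from the original):

```c
#include <string.h>
#include <assert.h>

#define N 10

/* "Optimize for space": one call to a general memory-move routine
   replaces the ten individual assignments; the final value of i is
   materialized only because a caller might use it afterwards. */
static int copy_space(int *a, const int *b)
{
    memmove(a, b, N * sizeof a[0]);   /* one short call site, no unrolling */
    return N;                         /* the value "i" would have after the loop */
}
```

The time-optimized unrolled version trades those few instructions for ten straight-line moves with no call or loop overhead.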


struct foo a, b;

a = b;

may be done in-line with a loop, with unrolled load/store/load/store
instruction sequences, or with a call to a general-purpose
memory-moving subroutine. The instruction sequence chosen would depend
on sizeof(struct foo), on the optimization mode, on the machine's
instruction set and addressing modes, and on the state of the
compiler-writer's liver and the phase of the moon.
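The two extremes can be spelled out at the source level; the sketch below is illustrative (the field layout of `struct foo` and the helper names are assumptions), showing that a plain structure assignment and an explicit call to a general-purpose move routine produce the same result:

```c
#include <string.h>
#include <assert.h>

struct foo { int x; double y; char tag[8]; };

/* Plain structure assignment: the compiler is free to emit an inline
   loop, an unrolled load/store sequence, or a library call. */
static void copy_assign(struct foo *dst, const struct foo *src)
{
    *dst = *src;
}

/* The "general-purpose memory-moving subroutine" strategy, written out
   explicitly: one call, small code size, some call overhead. */
static void copy_memcpy(struct foo *dst, const struct foo *src)
{
    memcpy(dst, src, sizeof *dst);
}
```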

I'd tend to believe that optimizations fall into two classes:

1) Those that occur prior to the actual generation of machine-language
      instructions. These optimizations will often be implemented as
      transformations on the code-generator's internal representation of the
      program (i.e. examining the "for loop" code-tree from the example above,
      and rewriting it as an unrolled set of moves).

      These transformations can have a very large effect on the actual code
      sequences generated... at times, it can be difficult to figure out how
      the code actually generated corresponds to the source code!

      The biggest time/space tradeoffs are made in these sort of optimizations,
      I believe.

2) Those optimizations made after code is actually generated. These
      tend to be more local... register optimizations, dead code
      elimination, short-circuiting jumps to jumps, etc. They tend to be
      Good in both the time and space senses.
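The "jumps to jumps" case from class 2 can be sketched as a tiny standalone pass; the instruction encoding here (an array where a non-negative entry is an unconditional jump to that index and -1 is any other instruction) is a toy assumption for illustration, not any real compiler's representation:

```c
#include <assert.h>

#define PROG_LEN 6

/* Short-circuit jumps to jumps: whenever instruction i is a jump whose
   target is itself a jump, retarget i directly at the final destination.
   The extra guard stops the walk once it reaches a self-jump, so cycles
   of jumps terminate instead of looping forever. */
static void thread_jumps(int jumps[PROG_LEN])
{
    for (int i = 0; i < PROG_LEN; i++) {
        while (jumps[i] >= 0 &&
               jumps[jumps[i]] >= 0 &&
               jumps[i] != jumps[jumps[i]])
            jumps[i] = jumps[jumps[i]];
    }
}
```

A pass like this shrinks the code slightly and removes a wasted transfer of control, which is why it is "Good" by both measures.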

Some compilers seem to rely heavily on the class-2 (late) optimizations,
using peephole optimizers and the like. More powerful, "optimizing"
compilers include class-1 (early) optimizations and transformations as
well.
Dave Platt VOICE: (415) 493-8805
    USNAIL: Coherent Thought Inc. 3350 West Bayshore #205 Palo Alto CA 94303
    UUCP: ...!{ames,sun,uunet}!coherent!dplatt
