Re: non trivial constant folding (David Whitney)
9 Jan 2001 23:21:57 -0500

          From comp.compilers

Related articles
[6 earlier articles]
Re: non trivial constant folding (Chris F Clark) (2001-01-06)
Re: non trivial constant folding (2001-01-09)
Re: non trivial constant folding (2001-01-09)
Re: non trivial constant folding (Robert Metzger) (2001-01-09)
Re: non trivial constant folding (2001-01-09)
Re: non trivial constant folding (2001-01-09)
Re: non trivial constant folding (2001-01-09)
Re: non trivial constant folding (Mickaël Pointier) (2001-01-09)
Re: non trivial constant folding (Michael Morrell) (2001-01-09)
Re: non trivial constant folding (Dennis Ritchie) (2001-01-11)
Re: non trivial constant folding (Conor O'Neill) (2001-01-11)
Re: non trivial constant folding (2001-01-18)
Re: non trivial constant folding (2001-01-18)
[4 later articles]

From: (David Whitney)
Newsgroups: comp.compilers
Date: 9 Jan 2001 23:21:57 -0500
Organization: Silicon Graphics, Inc.
References: 01-01-015
Keywords: optimize
Posted-Date: 09 Jan 2001 23:21:57 EST

Mickaël Pointier asks:

> My problem is that I do not manage to find a clean solution
> to optimise this:
> eval(1+var+1)

Others have commented on the safety of such a transformation; I would
like to comment on its usefulness.
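To make the transformation concrete, here is a minimal sketch of how
reassociation can fold the constants in 1+var+1 into var+2. The tuple
representation and function names are invented for illustration; it
assumes integer arithmetic, where reassociating additions is always
safe (it is not, in general, for IEEE floating point).

```python
# Expressions are tuples ('+', left, right); leaves are ints or
# variable names. This is a toy representation, not any real IR.

def flatten_adds(node):
    """Flatten a nested '+' tree into a flat list of operands."""
    if isinstance(node, tuple) and node[0] == '+':
        return flatten_adds(node[1]) + flatten_adds(node[2])
    return [node]

def fold_add_chain(node):
    """Sum all constant leaves at compile time; keep variables symbolic."""
    terms = flatten_adds(node)
    const = sum(t for t in terms if isinstance(t, int))
    symbolic = [t for t in terms if not isinstance(t, int)]
    result = None
    for v in symbolic:
        result = v if result is None else ('+', result, v)
    if const != 0 or result is None:
        result = const if result is None else ('+', result, const)
    return result

# 1 + var + 1 parses left-associatively as (1 + var) + 1:
tree = ('+', ('+', 1, 'var'), 1)
print(fold_add_chain(tree))   # ('+', 'var', 2)
```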

The initial description made me think that these optimizations were
part of a peephole optimizer that is only concerned with local
transformations. In this situation it is important to ask how much
effort should be expended looking for optimization possibilities that
infrequently occur. Peephole optimization can be very useful, but
there are limits to what it should be asked to do. In other words,
there comes a point in the evolution of a compiler when it is
necessary to stop trying to add optimizations and look for a more
general approach.
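For readers unfamiliar with the technique, a peephole pass in its
simplest form slides a small window over a linear instruction list and
rewrites matching patterns. The following sketch is hypothetical (the
three-field instruction format and the single merge rule are invented),
but it shows why such passes stay cheap only while the pattern set
stays small:

```python
# Toy peephole pass: merge consecutive add-immediate instructions
# that target the same register, e.g. two "addi r1, 1" into one
# "addi r1, 2". Instructions are ('addi', reg, const) tuples.

def peephole(insns):
    out = []
    for insn in insns:
        if (out and insn[0] == 'addi' and out[-1][0] == 'addi'
                and out[-1][1] == insn[1]):
            prev = out.pop()
            out.append(('addi', insn[1], prev[2] + insn[2]))
        else:
            out.append(insn)
    return out

code = [('addi', 'r1', 1), ('addi', 'r1', 1), ('mul', 'r2', 'r1')]
print(peephole(code))  # [('addi', 'r1', 2), ('mul', 'r2', 'r1')]
```

Every new pattern adds another case to check in the window, which is
exactly the point above about there being limits to what a peephole
optimizer should be asked to do.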

The approach that production compilers take is dependent on the
performance implications of the expression. When the simple "add one"
operation represents an expensive matrix or vector operation, it is
worth considerable effort to find one that can be eliminated. On the
other hand, even a floating point add is a hardwired (Wires? Showing
my age again.) instruction on most modern architectures and is only
expensive if it is executed a large number of times. In any case, the
optimization should only be performed if the language allows it, or if
the user requests that it be done.

I am familiar with several compilers that are able to make this
optimization transformation, although it is a byproduct of another
transformation that is done to reduce execution time for the generated
code. The intent is to group loop invariants together so that they do
not need to be reevaluated on each trip through the loop. Once the
expression tree is segmented into dependent and independent segments,
there are often new possibilities for compile-time evaluation.
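The segmentation step described above can be sketched as follows. The
term-list representation is invented for illustration: the terms of a
sum are split into those that depend on the loop variable and those
that do not, and the constants in the invariant group then become
candidates for compile-time evaluation.

```python
# Hypothetical sketch of loop-invariant segmentation exposing
# constant folding. Terms are ints (constants) or variable names.

def segment_terms(terms, loop_vars):
    """Split a sum's terms into loop-variant and loop-invariant groups."""
    variant = [t for t in terms if t in loop_vars]
    invariant = [t for t in terms if t not in loop_vars]
    return variant, invariant

def fold_invariants(invariant):
    """Evaluate the constant part of the invariant group at compile time."""
    const = sum(t for t in invariant if isinstance(t, int))
    symbolic = [t for t in invariant if not isinstance(t, int)]
    return symbolic + ([const] if const else [])

# 1 + x + i + 1 inside a loop over i: the invariant part 1 + x + 1
# is hoisted, and its constants fold to x + 2.
variant, invariant = segment_terms([1, 'x', 'i', 1], loop_vars={'i'})
print(variant, fold_invariants(invariant))  # ['i'] ['x', 2]
```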

Searching trees for patterns and then rewriting the code can be an
expensive undertaking both in your time, to construct the algorithms,
and in the time it takes to execute. It is quite possible that new
algorithms will be discovered that make this optimization trivial to
perform, but most production compilers already do so much more general
work that this transformation is easily caught as a byproduct, if the
compiler writers decide that it is worthwhile.

Doing these optimizations is easy if the compiler was designed to
accommodate them. From a practical viewpoint, it becomes increasingly
difficult to add new optimizations to an existing compiler.

-David Whitney
