Unsafe optimizations
From: firstname.lastname@example.org (Dale Worley)
Date: Wed, 20 Jun 90 04:22:38 GMT
I find this whole discussion somewhat odd. While it may be possible
for the programmer to "know what he is doing", if the compiler is
doing serious optimizations, it is impossible for him to know what the
optimizer is doing!
There are times when "unsafe optimizations" are reasonable -- when the
programmer explicitly declares that he wants some semantic rule of the
language relaxed in the interests of efficiency. But these
relaxations are of the form "the implementation is no longer required
to detect certain violations of the language rules", usually array
bounds violations, arithmetic overflow, and other out-of-range
conditions. They are defined in terms of the semantics of the
language definition, not "I know what the compiler will do with this
piece of code if I let it," and so are well-defined and safe (in the
sense that you have defined the limits of what might happen).
My personal opinion is that when people want to allow "unsafe
optimizations", they either want a semantic relaxation (as above) or
want to make up for a deficient compiler that doesn't know how to
prove enough about the program. When are people going to start
putting serious interprocedural analysis into optimizing compilers?
Dale Worley Compass, Inc. email@example.com