|How to verify that optimizations preserve semantics linuxkaffee_@_gmx.net (Stephan Ceram) (2010-05-11)|
|Re: How to verify that optimizations preserve semantics firstname.lastname@example.org (=?ISO-8859-1?Q?Bj=F6rn_Franke?=) (2010-05-12)|
|Re: How to verify that optimizations preserve semantics email@example.com (Stefan Monnier) (2010-05-12)|
|Re: How to verify that optimizations preserve semantics firstname.lastname@example.org (Jeremy Wright) (2010-05-13)|
|Re: How to verify that optimizations preserve semantics email@example.com (Tom Crick) (2010-05-13)|
|Re: How to verify that optimizations preserve semantics firstname.lastname@example.org (Walter Banks) (2010-05-14)|
|Re: How to verify that optimizations preserve semantics email@example.com (BGB / cr88192) (2010-05-14)|
|From:||Walter Banks <firstname.lastname@example.org>|
|Date:||Fri, 14 May 2010 11:05:01 -0400|
|Organization:||Aioe.org NNTP Server|
|Posted-Date:||15 May 2010 02:08:28 EDT|
We do use regression tests to make sure future releases preserve the
same behavior, and we regularly add cases to the regression suite to
refine our coverage. During implementation we often run random tests
comparing unoptimized and optimized code in cases where we suspect
possible corner-case problems.
Our compilers have a lot of small optimizations that are designed to
be used in combination in generated code.
Optimization validation is an extremely tough problem for all the
reasons you have stated. With nothing formal or automated, it primarily
comes down to careful testing and experience.
There are simple cases where many compilers get even basic
optimizations wrong. Take replacing division by a power of two with a
right shift: it works for positive numbers but not for negative ones,
even on a processor with sign-extending shifts.
One relevant comment: most compiler developers become very careful
about adding optimizations as soon as they implement a second target
processor. The eureka moment for many is discovering that the value
left in the carry bit after a subtraction differs from one processor
family to another.
Byte Craft Limited
> I was wondering how compiler optimisations can be verified,
> i.e. whether they always perform valid code modifications? How is it
> done in practice?
> I assume that the only safe way would be to formulate the applied code
> modifications as formal transformations that model every possible
> situation that can ever occur. But on the other hand this seems to be
> infeasible for most optimisations since they are too complex for
> analytical models.
> An alternative would be regression tests, but are such tests safe? I
> mean you can never be sure that you did not miss a scenario that may
> occur in practice.