From: Luddy Harrison <luddy@concmp.com>
Newsgroups: comp.compilers
Date: 29 Sep 1998 15:45:18 -0400
Organization: Compilers Central
Keywords: arithmetic, comment
Our esteemed moderator writes:
>> If you think that floating point operations don't have a
>> correct answer, then you definitely shouldn't be using floating point.
>> Unfortunately, too many compiler writers seem to share this misconception.
It would be absurd, of course, to say that floating point operations
don't have correct and incorrect answers. But it is not at all absurd
to speak of a range of possible correct outcomes for such arithmetic.
If it were as obviously incorrect as you assert, then virtually every
program that uses floating point would fail to run, because by your
standard for correctness, there is not a single compiler that I know
of whose default optimization configuration preserves the bitwise
semantics of floating point that you advocate. (Is there any compiler
shipping for the x86 family that, when given only the -O flag, writes
out every double-precision floating-point value to memory after every
operation? Is there any compiler shipping for the PowerPC family that, when given
only the -O flag, suppresses the generation of fmadd and fmsub? None
that I know of.)
This is the critical point: the difference between the result of an
operation performed in extended precision and the same operation
performed in (say) basic double precision is the roundoff error that
occurs when converting the value from the extended to the basic
format. When we insist on having a 'predictable' result from such
operations, we are effectively computing using the roundoff error.
While there may be good arguments in favor of this practice, it is
hardly absurd to suggest that a program should be robust to the point
that it does not depend on the value of discarded (rounded-off) bits.
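To make the size of the effect concrete, here is a small C99 sketch (the
values are contrived for illustration; fma() stands in for a fused
instruction such as the PowerPC's fmadd, and whether the "separate" line
really stays separate depends on the compiler honoring FP_CONTRACT):

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    /* Ask the compiler not to fuse a*b - 1.0 on its own; not every
       compiler honors this pragma, which is part of the point. */
    #pragma STDC FP_CONTRACT OFF

    int main(void)
    {
        double a = 1.0 + DBL_EPSILON;           /* 1 + 2^-52 */
        double b = 1.0 - DBL_EPSILON;           /* 1 - 2^-52 */

        /* Separate multiply and subtract: a*b is rounded to exactly
           1.0 before the subtraction, so the result is 0. */
        double separate = a * b - 1.0;

        /* Fused multiply-add: the product 1 - 2^-104 is kept exact
           until the subtraction, so the rounded-off bits survive. */
        double fused = fma(a, b, -1.0);

        printf("separate: %.17g\n", separate);  /* 0 */
        printf("fused:    %.17g\n", fused);     /* about -4.93e-32 */
        return 0;
    }

(Link with -lm where required.) The two answers differ by exactly the
bits that the store-and-round discards; a program that cares which one
it gets is computing with those bits.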
>> As we've said many times before, approximate is not the
>> same as unpredictable. -John]
We can agree that 'approximate' does not mean the same thing as
'unpredictable', while still disagreeing about whether 'unpredictable'
means the same thing as 'incorrect'. Let me give several examples of
things which are unpredictable, which standard practice nevertheless
labels as correct.
A C program uses the addresses of structures as keys in a hashtable.
The addresses vary from run to run, so the bits processed by the
program (for the same input) also vary from run to run, but the result
is correct in the large. The addresses of the structures are
unpredictable, but they are nevertheless useful.
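Something along these lines, as a rough sketch (the structure and table
are invented for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define NBUCKETS 64

    struct node {
        struct node *next;
        int payload;
    };

    static struct node *table[NBUCKETS];

    /* The bucket index is derived from the structure's address.  The
       address bits are unpredictable from run to run, but within any
       one run the same structure always lands in the same bucket. */
    static unsigned bucket_of(const void *p)
    {
        return (unsigned)(((uintptr_t)p >> 4) % NBUCKETS);
    }

    int main(void)
    {
        struct node *n = malloc(sizeof *n);
        if (n == NULL)
            return 1;
        n->payload = 42;
        unsigned b = bucket_of(n);
        n->next = table[b];
        table[b] = n;
        printf("node %p went to bucket %u\n", (void *)n, b);
        free(n);
        return 0;
    }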
A program calls a random number generator to obtain a perfect hashing
function. The values returned by the generator vary from run to run,
but the overall result is patently correct.
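Again as a rough sketch (keys and table size invented for illustration):
the generator's output differs on every run, yet every run ends with a
hash that is collision-free on the given keys.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <time.h>

    #define NKEYS   5
    #define TABSIZE 16

    static const uint32_t keys[NKEYS] = { 3, 17, 101, 257, 65537 };

    /* Multiplicative hash: the top 4 bits of key * mult pick a slot. */
    static unsigned slot_of(uint32_t key, uint32_t mult)
    {
        return (unsigned)((key * mult) >> 28);
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        uint32_t mult;

        /* Retry random odd multipliers until the hash is collision-free
           on this key set.  The multiplier chosen varies from run to
           run; the property "no collisions" holds in every run. */
        for (;;) {
            mult = ((((uint32_t)rand()) << 16) ^ (uint32_t)rand()) | 1u;
            int used[TABSIZE] = { 0 };
            int ok = 1;
            for (int i = 0; i < NKEYS; i++) {
                unsigned s = slot_of(keys[i], mult);
                if (used[s]) { ok = 0; break; }
                used[s] = 1;
            }
            if (ok)
                break;
        }
        printf("perfect-hash multiplier for this run: %u\n",
               (unsigned)mult);
        return 0;
    }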
Similarly, one could argue that while roundoff will vary from run to
run (or from compilation to compilation), the results of the floating
point computation are nonetheless predictable enough to be useful.
Now I will engage in a bit of intentional provocation.
You design a circuit using resistors with a 10% tolerance. You build
and test it. It works. You (suddenly, unpredictably) replace the
resistors with new ones whose tolerance is 1%. The circuit stops
working. Where does the fault lie?
-Luddy Harrison
[The purchasing department. -John]