Related articles
  Re: inlining + optimization = nuisance bugs  luddy@concmp.com (Luddy Harrison) (1998-09-29)
  Re: floating point, was inlining + optimization = nuisance bugs  chase@world.std.com (David Chase) (1998-10-04)
  Re: floating point, was inlining + optimization = nuisance bugs  toon@moene.indiv.nluug.nl (Toon Moene) (1998-10-04)
  Re: floating point, was inlining + optimization = nuisance bugs  genew@vip.net (1998-10-05)
  Re: floating point  will@ccs.neu.edu (William D Clinger) (1998-10-05)
  Re: floating point  comments@cygnus-software.com (Bruce Dawson) (1998-10-07)
  Re: floating point  will@ccs.neu.edu (William D Clinger) (1998-10-10)
  Re: floating point  dmcq@fano.demon.co.uk (David McQuillan) (1998-10-13)
  [10 later articles]
From: David Chase <chase@world.std.com>
Newsgroups: comp.compilers
Date: 4 Oct 1998 01:09:12 -0400
Organization: NaturalBridge LLC
References: 98-09-164
Keywords: arithmetic
I hope this doesn't come off too much like a flame, but the
pro-flaky-FP crowd is starting to sound like Kahan's "Guilt Squad",
and doesn't seem to appreciate the problems this imposes on testing,
bug diagnosis, maintenance, and portability.
Luddy Harrison wrote:
> You design a circuit using resistors with a 10% tolerance. You
> build and test it. It works. You (suddenly, unpredictably) replace
> the resistors with new ones whose tolerance is 1%. The circuit
> stops working. Where does the fault lie?
"You", of course -- you designed it, you built it, you tested it, you
changed it, you also should have known that all those +/- 10%
resistors actually (if I recall correctly, from the last time I
designed and built such a sensitive circuit myself) have resistances
between +5% and +10% from their marked values (*). How well did you
test it? Did you try all combinations of resistors at the extreme
expected values (R+10%, R-10%) when you tested? (N resistors, that's
only 2-to-the-N tests to run; a concrete sketch follows the footnote).
(*) I don't actually know this to be true for 10% parts. It was true
for 5% parts; obviously, the manufacturer sorts the resistors by
correspondence to target value. Those found to be +/- 2% are marked
and priced accordingly, leaving +/- 5% with a hole in the middle. For
some reason, of the 100 resistors I sampled, all were to the high
side, my assumption here being that erring high reduces the power (for
a given voltage, which is the usual case) dissipated in the resistor, so
that is what is done. Note that sorting resistors before use is one
way to protect yourself when building a sensitive analog circuit.
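To make that 2-to-the-N corner enumeration concrete, here is a minimal
sketch in Java (Java only because it comes up below; the three nominal
values and the acceptance window in circuitWorks() are made-up
stand-ins for illustration, not a real design rule):

    public class CornerTest {
        // Made-up stand-in for "does the circuit still work?": accept
        // if the total series resistance lands in an arbitrary window.
        static boolean circuitWorks(double[] r) {
            double total = 0;
            for (double x : r) total += x;
            return total > 900 && total < 1100;
        }

        public static void main(String[] args) {
            double[] nominal = {220, 330, 470};  // marked values, ohms
            double tol = 0.10;                   // +/- 10% tolerance
            int n = nominal.length;
            for (int mask = 0; mask < (1 << n); mask++) {  // all 2^N corners
                double[] r = new double[n];
                for (int i = 0; i < n; i++) {
                    int sign = ((mask >> i) & 1) == 0 ? -1 : 1;
                    r[i] = nominal[i] * (1 + sign * tol);
                }
                if (!circuitWorks(r))
                    System.out.println("fails at corner " + Integer.toBinaryString(mask));
            }
        }
    }

(With these made-up numbers, only the all-high corner fails: 1122 ohms
falls outside the window, which is exactly the kind of thing the
exhaustive corner sweep exists to catch.)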
That's life with analog components. This is different from floating
point in digital computers; each computer has a well-defined (if
perhaps not well-documented) hardware behavior. That life would be
far worse if we were working in the analog world doesn't seem to
justify allowing compilers to replace predictable (*) results with
unpredictable results.
(*) "predictable", meaning documented, understandable, and understood.
I am well aware that optimizing compilers can be completely repeatable
in their actions, but their exact output is not predictable by mere
mortals.
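For one concrete instance of that shift from predictable to
unpredictable (my example here, not from the original thread): floating
point addition is not associative, so a compiler that quietly
reassociates a sum has changed the answer:

    public class Reassociate {
        public static void main(String[] args) {
            double a = 1e16, b = -1e16, c = 1.0;
            System.out.println((a + b) + c);  // prints 1.0
            System.out.println(a + (b + c));  // prints 0.0
        }
    }

Both results are correctly rounded for their order of evaluation; the
point is only that the order matters, so a compiler that picks the
order has also picked your answer.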
There are two audiences for floating point arithmetic; to one audience,
speed is more important than exactly reproducible answers across
compilations and across platforms, but to the other, what matters more
is the ability to reliably test and reproduce program behavior. Java
has put this into people's faces in a relatively delightful way by
declaring, at least for a time, that floating point arithmetic is
defined down to the last bit. They didn't define everything this
tightly (notably, windowing, thread scheduling, GC effectiveness,
finalization order, and Object.hashCode() can all vary across
platforms) but they defined FP like this. There are crude design
rules and coding standards that can protect you somewhat from the
other sources of Java variance; if FP were not well specified, I think
the crude design rule would be "don't use FP", which is not very useful.
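"At least for a time" is the operative phrase: Java's original
semantics were strict by default, that default was later relaxed, and
the strictfp modifier (introduced in Java 1.2) is the knob that
requests the original bit-exact evaluation. (Much later, Java 17 made
strict evaluation the only mode again.) A minimal sketch:

    // Under strictfp, every intermediate result must be rounded to the
    // IEEE 754 double value set (no extended-exponent x87 temporaries),
    // so axpy returns the same bits on every conforming JVM.
    public strictfp class Kernel {
        public static double axpy(double a, double x, double y) {
            return a * x + y;  // Java never silently contracts this to a fused multiply-add
        }
    }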
The intent of such a tight definition is simply that if you have
tested a program to such a degree that you believe that it runs, then
you have a reasonable confidence that it will run in the field, and
run when recompiled, and run on other platforms. Testing and
maintenance are simpler in these situations, because there is one right
answer for each test, and either you get it, or you don't. If a fix
changes answers, then perhaps the old answers were wrong, but now
there is a consistent and reproducible difference, and you can hope to
track down the cause of the change. In particular, the answers will
not change if you compile for debugging, or not, and will not change
if you insert debugging probes.
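That is exactly what bit-exact arithmetic buys a test harness: golden
values can be compared with no tolerance at all. A minimal sketch
(Double.doubleToLongBits is the standard exact comparison in Java; the
golden value below is just the well-known double result of 0.1 + 0.2):

    public class ExactTest {
        // Compare bit patterns, not values-within-epsilon: under
        // reproducible FP there is one right answer per test.
        static void assertExactlyEqual(double expected, double actual) {
            long e = Double.doubleToLongBits(expected);
            long a = Double.doubleToLongBits(actual);
            if (e != a)
                throw new AssertionError("expected bits 0x" + Long.toHexString(e)
                        + " but got 0x" + Long.toHexString(a));
        }

        public static void main(String[] args) {
            // Golden value recorded from a previous trusted run; under
            // bit-exact semantics it must never drift.
            assertExactlyEqual(0.30000000000000004, 0.1 + 0.2);
        }
    }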
Placing such an emphasis on testing seems a bit like admitting defeat
(from the point of view of "good design" and "program proof"), but in
my experience no amount of careful design and program proof (as if
anyone proved significant programs correct, with the notable exception
of the people at CLI) can protect you from the typical blizzard of
typos and thinkos committed by people working in a hurry. If it isn't
tested it doesn't work (no matter who designed it), and good testing
is difficult and expensive even without gratuitous compiler-introduced
variations.
David Chase