From: David Chase <firstname.lastname@example.org>
Date: 18 Jan 2002 21:06:49 -0500
Organization: The World : www.TheWorld.com : Since 1989
References: 02-01-015 02-01-029 02-01-054 02-01-069
Posted-Date: 18 Jan 2002 21:06:49 EST
Christian Bau wrote:
> You can look at rounding errors in two ways: Instead of producing f (x)
> you produce f (x) + eps, and you want eps small. Or instead of producing
> f (x) you produce f (x + delta), and you want delta small. So instead of
> almost getting the answer to the question you asked, you get an answer
> to almost the question you asked. The Java spec demands the first
> behaviour. Have a look at what happens if you demand the second
> behaviour: [...] So if you look at the errors the other way round,
> the picture changes.
If so, then yes, but
1) that is a peculiar way to look at the machine, and it is not even
described that way in the Intel documentation. The transcendental
functions are not described as "exactly SIN(x+some_epsilon)" --
they are described as "SIN(x), with a small error". I assume
they thought they knew what they were talking about.
2) that is not how I was taught (in numerical analysis courses
at Rice) to look at a machine. In particular, I was not taught
to reason about the correctness of algorithms expressed in that
sort of machine arithmetic. I'm not saying it's impossible,
just that it's not what I was taught, and I think my education
in this area was relatively mainstream.
I think you are proposing something rather unusual, and you have
not given me a good reason to buy into your unusual view of the
world. The view that motivates the design of fdlibm, on the other
hand, is so widely accepted that people talk in that way even when it
does not correspond to what they do (as is the case with the Intel
documentation mentioned above).
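To make the divergence between the two views concrete: for large x, a
delta that is tiny relative to x still spans many full periods of sine,
so an answer that is acceptable under the backward-error model can be
arbitrarily far from sin(x). A minimal C sketch (illustrative only, not
code from fdlibm or from any of the implementations discussed here):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = 1e22;              /* large argument (1e22 is exactly representable) */
        double delta = x * 0x1p-52;   /* ~1 ulp of x, i.e. about 2.2e6 in absolute terms */

        /* Forward-error view: the result should be sin(x) + eps, eps small.
           Backward-error view: sin(x + delta) is acceptable if delta is
           small relative to x.  Here delta spans roughly 350000 full
           periods of sine, so the two views permit completely different
           answers. */
        printf("sin(x)         = % .17g\n", sin(x));
        printf("sin(x + delta) = % .17g\n", sin(x + delta));
        return 0;
    }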
> And implementations that are a lot faster (like Pentium hardware, or
> the MacOS standard C library) while still getting decent results do
> just as well (the MacOS implementation calculates sin (x+eps) or
> cos (x+eps) within something like 0.7 ulp, and eps is less than
> x * 2^-100).
Problem is, the Pentium hardware isn't a lot faster. For small inputs
that require no range reduction, it is actually slower (or at least it
was on two different Pentiums, last time I benchmarked).
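A rough way to check this kind of claim is to time the library sin
against the raw x87 fsin instruction on a small argument, where an
fdlibm-style sin does no range reduction at all. The sketch below is
mine, assumes x86 with GCC-style inline assembly, and fsin_x87 is a
made-up helper name, not a standard function:

    #include <math.h>
    #include <stdio.h>
    #include <time.h>

    /* Raw x87 hardware sine; x86-specific, GCC-style inline assembly. */
    static double fsin_x87(double x) {
        double r;
        __asm__ volatile ("fsin" : "=t" (r) : "0" (x));
        return r;
    }

    int main(void) {
        enum { N = 10000000 };
        volatile double x = 0.5;     /* small: no range reduction needed */
        volatile double sink = 0.0;  /* keeps the loops from being optimized away */
        clock_t t0;

        t0 = clock();
        for (int i = 0; i < N; i++) sink += sin(x);
        printf("library sin: %.2fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

        t0 = clock();
        for (int i = 0; i < N; i++) sink += fsin_x87(x);
        printf("x87 fsin:    %.2fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);
        return 0;
    }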
> Or let me ask this question: The fdlibm functions for sine/cosine
> contain some huge tables. If one bit in these tables were incorrect,
> producing completely wrong results for x > 1e100 (but still in the range
> -1 to +1), would anyone notice?
I probably would have noticed. I noticed errors in VC++'s compilation
of fdlibm that showed up in the LSB of the log functions. And I
suspect the guys who wrote fdlibm would have noticed, too; I once
worked with them, and they cared quite a lot about getting the last
bit right.
First, recall that the fdlibm strategy is first range reduction, then
approximation. Any error in the range reduction tables would be
systematic across a whole range of inputs, not confined to isolated
point values.
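To make the two-stage structure concrete, here is a toy sketch of the
fdlibm shape. It is not fdlibm's code: the Cody-Waite style reduction
below is only trustworthy for modest |x|, whereas fdlibm's real
reduction (Payne-Hanek, driven by the big tables of bits of 2/pi)
handles arbitrary exponents, and the truncated Taylor kernels here
stand in for fdlibm's tuned minimax polynomials:

    #include <math.h>
    #include <stdio.h>

    static const double PIO2_HI = 1.57079632673412561417e+00; /* high bits of pi/2 */
    static const double PIO2_LO = 6.07710050650619224932e-11; /* next bits of pi/2 */

    static double toy_sin(double x) {
        /* Stage 1: range reduction.  Find n and y with
           x ~= n*(pi/2) + y and |y| <= pi/4. */
        double n = nearbyint(x * (2.0 / M_PI));
        double y = (x - n * PIO2_HI) - n * PIO2_LO;

        /* Stage 2: polynomial approximation on the reduced argument. */
        double y2 = y * y;
        double s = y * (1.0 + y2 * (-1.0/6 + y2 * (1.0/120 - y2 / 5040)));
        double c = 1.0 + y2 * (-0.5 + y2 * (1.0/24 - y2 / 720));

        switch ((long)n & 3) {        /* select the result by quadrant */
            case 0:  return  s;
            case 1:  return  c;
            case 2:  return -s;
            default: return -c;
        }
    }

    int main(void) {
        for (double x = 0.0; x < 20.0; x += 3.7)
            printf("x=%5.1f  toy=% .12f  libm=% .12f\n", x, toy_sin(x), sin(x));
        return 0;
    }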
Errors in the approximation after range reduction would be noticed on
small inputs. And yes, I have tested in that range in the past, and
now that I have a better test harness, it is a relatively trivial
matter to test there again. I just ran 100000 samples, uniform on PI
* (0 .. 2**360), and the error distribution was the same as it is on
the smaller range (i.e., good).
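Something like the following could serve as such a harness (a sketch,
not the harness described above; it uses long double sinl as the
reference, which assumes sinl carries meaningfully more precision than
double sin on the platform, and POSIX drand48 for sampling):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Error of sin(x) in ulps, judged against long double sinl(). */
    static double ulp_error(double x) {
        long double ref = sinl((long double)x);
        double got = sin(x);
        double ulp = nextafter(fabs((double)ref), HUGE_VAL) - fabs((double)ref);
        if (ulp == 0.0) ulp = 0x1p-1074;      /* guard: smallest subnormal */
        return (double)(fabsl((long double)got - ref) / ulp);
    }

    int main(void) {
        double worst = 0.0;
        for (int i = 0; i < 100000; i++) {
            /* sample uniformly on PI * (0 .. 2^360), as described above */
            double x = M_PI * ldexp(drand48(), 360);
            double e = ulp_error(x);
            if (e > worst) worst = e;
        }
        printf("worst observed error: %.3f ulp\n", worst);
        return 0;
    }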