Re: floating point accuracy, was Compiler bugs


Related articles
re: Compiler bugs (David Chase) (2002-01-03)
Re: Compiler bugs (Christian Bau) (2002-01-05)
Re: Compiler bugs (David Chase) (2002-01-14)
Re: floating point accuracy, was Compiler bugs (Christian Bau) (2002-01-17)
Re: floating point accuracy, was Compiler bugs (2002-01-18)
Re: floating point accuracy, was Compiler bugs (David Chase) (2002-01-18)
Re: floating point accuracy (2002-01-24)
Re: floating point accuracy, was Compiler bugs (Christian Bau) (2002-01-24)

From: Christian Bau <>
Newsgroups: comp.compilers
Date: 17 Jan 2002 00:30:32 -0500
Organization: Compilers Central
References: 02-01-015 02-01-029 02-01-054
Keywords: arithmetic, errors
Posted-Date: 17 Jan 2002 00:30:32 EST

David Chase wrote:

<about the Java specification's requirement that the sine and cosine
functions produce the infinitely precise result, rounded to one of the
two nearest floating point numbers>

> I'm a little curious about one thing -- does anyone, besides a
> specifications-and-exact-correctness weenie like myself, actually
> care about this level of detail in machine floating point?

You can look at rounding errors in two ways: Instead of producing f (x)
you produce f (x) + eps, and you want eps small. Or instead of producing
f (x) you produce f (x + delta), and you want delta small. So instead of
almost getting the answer to the question you asked, you get an answer
to almost the question you asked. The Java spec demands the first
behaviour. Have a look at what happens if you demand the second behaviour:
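The two views can be made concrete with a small Python sketch (the exact
value of math.sin(math.pi) depends on the platform's libm, but any good
one lands near 1.22e-16):

```python
import math

x = math.pi      # the double closest to pi; off from the real pi by ~1.2e-16
y = math.sin(x)  # a good libm returns roughly 1.22e-16, not 0

# Forward view (eps): y must be within a fraction of an ulp of the true
# sin(x), which is itself tiny, so the absolute error demanded is ~1e-32.
# Backward view (delta): even returning 0.0 would be perfect, because
# 0.0 == sin(x + delta) exactly with delta = (real pi) - x, about 1.2e-16,
# a relative perturbation of the argument of only ~4e-17.
```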

First, the Pentium hardware implementation of sin (x) becomes quite
excellent. The problem cases happen when x is close to pi and therefore
sin (x) is very close to 0, and for absurdly large x. In the first case,
the Pentium will calculate sin (x + delta) where delta is very small
compared to x (a relative perturbation of about 2^-66 instead of
2^-53); in the second case any result between -1 and +1 would be
"good". Then there is the cosine function: here the goal of calculating
cos (x + delta) where delta is very small cannot be achieved. For small
x, cos x is approximately 1 - x^2/2. If u is defined such that 1 + u is
the smallest floating point number greater than 1, then 1 - u/2 is the
largest floating point number < 1. Let x = sqrt (u) / 2; then cos x is
about 1 - u/8. The two nearest floating point numbers are 1 and
1 - u/2. But cos (0) = 1 and cos (sqrt (u)) is about 1 - u/2, and both
0 and sqrt (u) are quite far away from x -- so whichever of the two
candidate results you return, it is cos (x + delta) only for a delta
comparable to x itself.
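The impossibility can be checked numerically (a Python sketch;
math.nextafter needs Python 3.9+, and the final rounding of cos assumes
a reasonably accurate libm):

```python
import math

u = 2.0 ** -52                        # 1 + u is the smallest double above 1.0
below_one = math.nextafter(1.0, 0.0)  # largest double below 1.0
assert below_one == 1.0 - u / 2

x = math.sqrt(u) / 2                  # about 7.5e-9
c = math.cos(x)                       # true value is ~1 - u/8; rounds to 1.0

# c comes out exactly 1.0, but cos(t) == 1.0 only at t == 0 in this
# neighbourhood, so writing c as cos(x + delta) forces delta == -x:
# a delta that is small relative to x is impossible here.
```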

So if you look at the errors the other way round, the picture changes.

The whole IEEE standard is about quality: getting reliable results, so
that programs behave the way a non-expert programmer expects as often
as possible. Now what does the Java spec for sine and cosine achieve?
As an example, examine the behaviour of sin (x)*sin (x) + cos (x)*cos (x).
The results should always be close to 1. The Java spec doesn't
guarantee that the results are always exactly 1: even if sin (x) and
cos (x) are both within 1/2 ulp all the time, you can get relatively
large errors in the sum. And implementations that are a lot faster
(like the Pentium hardware, or the MacOS standard C library) while
still getting decent results do just as well (the MacOS implementation
calculates sin (x + delta) or cos (x + delta) to within something like
0.7 ulp, with delta less than x * 2^-100).
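A quick Python experiment shows the identity drifting by a few ulps even
on a libm whose sin and cos are individually very accurate:

```python
import math
import random

random.seed(12345)
worst = 0.0
for _ in range(100_000):
    x = random.uniform(0.0, 2.0 * math.pi)
    s = math.sin(x) ** 2 + math.cos(x) ** 2
    worst = max(worst, abs(s - 1.0))

# worst comes out as a small multiple of 2^-52 -- a few ulps of 1.0 --
# because the two squarings and the addition each round again, no
# matter how tightly sin and cos themselves are specified.
```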

Or let me ask this question: the fdlibm functions for sine/cosine
contain some huge tables. If one bit in those tables were incorrect,
producing completely wrong results for x > 1e100 (but still in the
range -1 to +1), would anyone notice?
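To see why nobody is likely to notice, look at how sparsely the doubles
near 1e100 sample the sine curve (a Python sketch; math.ulp needs
Python 3.9+):

```python
import math

x = 1e100
spacing = math.ulp(x)                # gap to the next double, about 1.9e84
periods = spacing / (2.0 * math.pi)  # full periods of sine inside one ulp

# periods is about 3e83: adjacent doubles near 1e100 are separated by
# astronomically many periods, so in the backward-error view *any*
# result in [-1, 1] is sin(x + delta) for some delta far below one ulp
# of x. Pinning down the specific value of sin(1e100) is what the
# extended bits-of-2/pi tables in fdlibm's argument reduction are for.
```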
