
Related articles:
re: Compiler bugs chase@world.std.com (David Chase) (2002-01-03)
Re: Compiler bugs christian.bau@cbau.freeserve.co.uk (Christian Bau) (2002-01-05)
Re: Compiler bugs chase@world.std.com (David Chase) (2002-01-14)
Re: floating point accuracy, was Compiler bugs christian.bau@cbau.freeserve.co.uk (Christian Bau) (2002-01-17)
Re: floating point accuracy, was Compiler bugs sos@zjod.net (2002-01-18)
Re: floating point accuracy, was Compiler bugs chase@world.std.com (David Chase) (2002-01-18)
Re: floating point accuracy, was Compiler bugs christian.bau@cbau.freeserve.co.uk (Christian Bau) (2002-01-24)

From: Christian Bau <christian.bau@cbau.freeserve.co.uk>
Newsgroups: comp.compilers
Date: 24 Jan 2002 14:51:02 -0500
Organization: Compilers Central
References: 02-01-015 02-01-029 02-01-054 02-01-069 02-01-087
Keywords: arithmetic
Posted-Date: 24 Jan 2002 14:51:02 EST

David Chase wrote:

>
> Christian Bau wrote:
>
> > You can look at rounding errors in two ways: Instead of producing f (x)
> > you produce f (x) + eps, and you want eps small. Or instead of producing
> > f (x) you produce f (x + delta), and you want delta small. So instead of
> > almost getting the answer to the question you asked, you get an answer
> > to almost the question you asked. The Java spec demands the first
> > behaviour. Have a look what happens if you demand the second behaviour:
>
> If so, then yes, but
>
> 1) that is a peculiar way to look at the machine, and it is not even
> described that way in the Intel documentation. The transcendental
> functions are not described as "exactly SIN(x+some_epsilon)" --
> they are described as "SIN(x), with a small error". I assume
> they thought they knew what they were talking about.
>
> 2) that is not how I was taught (in numerical analysis courses
> at Rice) to look at a machine. In particular, I was not taught
> to reason about the correctness of algorithms expressed in that
> sort of machine arithmetic. I'm not saying it's impossible,
> just that it's not what I was taught, and I think my education
> in this area was relatively mainstream.

There is the classical problem of calculating eigenvectors and
eigenvalues of a large matrix. It turns out to be extremely difficult
to calculate the eigenvectors of a matrix A with a small error, but you
can prove that a good algorithm will find the eigenvectors of a
matrix A' that is very close to A. Now if your matrix A didn't contain
exact values, then you couldn't expect a better result anyway, because
you didn't start with the matrix you wanted, but with one close to it.
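The two views can be made concrete on a much smaller example than
eigenvectors (a Python sketch, not from the original post; the choice of
sqrt and of x = 2.0 is mine). For a computed y = sqrt(x), the forward
error is the eps in y = sqrt(x) + eps, while the backward error is the
delta in y = sqrt(x + delta):

```python
import math
from decimal import Decimal, getcontext

# Forward vs. backward error for a single square root.
x = 2.0
y = math.sqrt(x)    # computed result; IEEE 754 sqrt is correctly rounded

# Forward view: y = sqrt(x) + eps. Estimate eps with 50-digit arithmetic.
getcontext().prec = 50
eps = float(Decimal(y) - Decimal(x).sqrt())

# Backward view: y = sqrt(x + delta), so delta = y*y - x
# (up to the rounding in computing y*y).
delta = y * y - x

print(abs(eps) < 1e-15)    # True: forward error is on the order of one ulp
print(abs(delta) < 1e-15)  # True: the perturbation of the input is also tiny
```

For a well-behaved function like sqrt both numbers are tiny; the point
of the eigenvector example is that for harder problems only the second
kind of bound may be achievable.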

Another example: sin (1.0e300). It doesn't give the sine of ten to the
three-hundredth power. The literal 1.0e300 is replaced by a floating
point number that is most likely more than +/- 10^240 away from ten to
the three-hundredth power. That means sin (1.0e300) doesn't ask the
question you wanted to ask, but a slightly different one. So if it then
answers a slightly different question again, not much additional harm
is done.
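The size of that representation error is easy to check with the standard
library (a Python sketch; math.ulp needs Python 3.9+). The gap between
adjacent doubles near 1.0e300 is astronomically larger than the period
2*pi of sine, so the rounded argument already lands on an essentially
arbitrary point of the sine curve:

```python
import math

# Spacing of adjacent doubles near 1.0e300: one unit in the last place.
gap = math.ulp(1.0e300)
print(gap > 1e283)   # True: the gap is roughly 1.5e284

# The representation error (up to half a gap) spans a vast number of
# full periods of sine, so sin(1.0e300) answers a different question.
periods = gap / (2 * math.pi)
print(periods > 1e283)  # True: more than 10^283 periods fit in one gap
```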

> I think you are proposing something rather unusual, and you have not
> given me a good reason to buy into your unusual view of the world.
> The view that motivates the design of fdlibm, on the other hand, is
> so accepted that people talk in that way even when it does not
> correspond to what they do (as is the case with the Pentium
> documentation).

Looking at problems in a different way is often a good idea. fdlibm
looks at a certain problem in a certain way; other ways of looking at
the same problem are equally valid. It turns out that for sin and cos
the precision required under fdlibm's view is excessive compared to
what is required if you look at the problem in the other way. But if
you look at log and exp, things are different.

Or look at cos (x) for |x| < 10^-10. fdlibm returns a result of 1.0,
which is perfectly fine, and you can't do any better anyway. But that
doesn't mean you got a useful result. Looking at it in a different
way, the result you got is exactly cos (0), so you have a rather huge
error in the input. Of course there is not much the library
implementor could have done; you just can't expect a useful result
from cos (1e-10).
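This case is easy to verify numerically (a Python sketch): the true
value cos(1e-10) = 1 - 5e-21 rounds to exactly 1.0 in double precision,
and 1.0 is also exactly cos(0):

```python
import math

x = 1.0e-10
y = math.cos(x)

# Forward view: the true value is 1 - x*x/2, about 1 - 5e-21, which
# rounds to 1.0, so 1.0 is the correctly rounded answer.
print(y == 1.0)            # True

# Backward view: 1.0 is exactly cos(0), so the answer corresponds to an
# input perturbed from 1e-10 all the way to 0 -- the entire input.
print(y == math.cos(0.0))  # True
```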
