

From: "Joseph D. Darcy" <darcy@CS.Berkeley.EDU>
Newsgroups: comp.compilers
Date: 1 Dec 1998 02:46:13 -0500
Organization: Compilers Central
References: 98-09-164 98-10-018 98-10-040 98-10-120 98-11-015 98-11-031 98-11-059 98-11-093
Keywords: arithmetic

Bruce Dawson <comments@cygnus-software.com> writes:

> > I'll have to partially retract my statement about nobody being happy
> > with the x87 - it doesn't implement double precision as badly as I had
> > feared, since the only unavoidable problem if you set the rounding to
> > double is the exponent range - which will rarely matter.

eggert@twinsun.com (Paul Eggert) wrote:

> Stick to your guns! The basic problem with x86 and strict `double' is
> that, even in 64-bit mode, the x86 doesn't round denormalized numbers
> properly. It simply rounds the mantissa at 53 bits, resulting in a
> double-rounding error. The proper behavior is to round at fewer bits.

The rounding used on the x86 is explicitly allowed by the IEEE 754

standard (section 4.3). The intention of the x86 design is to reduce

the occurrence of floating point exceptions and thereby generate the

correct numerical answer more often.
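The double-rounding effect Eggert describes can be reproduced with exact
integer arithmetic. The sketch below is my own illustration, not from
either post: it rounds the value 1 + 2^-53 + 2^-77 (scaled to an exact
integer) once to 53 significand bits, and separately first to 64 bits
(x87 extended) and then to 53; the two paths disagree.

```java
import java.math.BigInteger;

public class DoubleRounding {
    // Round a positive integer significand to p bits, round-to-nearest-even,
    // keeping the same binary scale (the dropped low bits come back as zeros).
    static BigInteger roundSig(BigInteger n, int p) {
        int drop = n.bitLength() - p;
        if (drop <= 0) return n;                     // already representable
        BigInteger q = n.shiftRight(drop);           // truncated significand
        BigInteger r = n.subtract(q.shiftLeft(drop)); // discarded low bits
        BigInteger half = BigInteger.ONE.shiftLeft(drop - 1);
        int cmp = r.compareTo(half);
        // Round up if above halfway, or exactly halfway with an odd quotient.
        if (cmp > 0 || (cmp == 0 && q.testBit(0))) q = q.add(BigInteger.ONE);
        return q.shiftLeft(drop);
    }

    public static void main(String[] args) {
        // x = 1 + 2^-53 + 2^-77, scaled by 2^77 so it is an exact integer.
        BigInteger x = BigInteger.ONE.shiftLeft(77)
                .add(BigInteger.ONE.shiftLeft(24))
                .add(BigInteger.ONE);
        BigInteger direct  = roundSig(x, 53);               // straight to double
        BigInteger twoStep = roundSig(roundSig(x, 64), 53); // extended, then double
        System.out.println(direct.equals(twoStep));         // prints "false"
    }
}
```

Rounding directly to 53 bits gives 1 + 2^-52 (the discarded tail is just
above halfway), while the 64-bit intermediate first discards the 2^-77
bit, leaving an exact halfway case that then rounds to even, i.e. to 1.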

> I've seen claims of efficient workarounds, but whenever I see details,
> it's clear that the methods are either incorrect or inefficient.

Roger Golliver of Intel has developed a refinement of the store-reload
technique that is both correct and efficient; its speed is comparable
to that of the plain store-reload idiom, which exhibits double
rounding. Using a floating point exception handling optimization,
Golliver's technique implements correctly rounded pure double
precision with a speed penalty of 2X to 4X. For

details, see the Java Grande documents

"Improving Java for Numerical Computation"

http://math.nist.gov/javanumerics/jgfnwg-01.html

or

"Making Java Work for High-End Computing"

http://www.javagrande.org/sc98/sc98grande.{ps,pdf}

(The latter has a few formatting errors absent from the former.)

> Most people don't care about the errors,

Such discrepancies occur very rarely in practice and are quite

unlikely to break a practical program.

> though, which is why the Java spec is being relaxed to allow
> x86-like behavior (and PowerPC multiply-add, too). For the vast
> majority of floating point applications, performance is more
> important than bit-for-bit compatibility, so it's easy to see why
> bit-for-bit compatibility is falling by the wayside.

The new JVM spec uses a bit in a method's descriptor to indicate which
of two floating point semantics the method uses:

1. strict Java 1.0 floating point, for bit-for-bit reproducibility

2. relaxed floating point, in which float and double values may carry
extended exponents in some contexts, to improve performance

Existing class files will have the latter (relaxed) semantics.
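At the language level, this per-method bit was ultimately exposed as
the strictfp modifier in Java 1.2. The sketch below is illustrative
(the method names are mine, not from the spec): an unmarked method gets
the relaxed default semantics, while strictfp demands bit-for-bit
IEEE 754 results.

```java
// Sketch of the two per-method floating point semantics as exposed by the
// strictfp modifier (Java 1.2+). Method names here are illustrative.
public class FpSemantics {
    // Default (relaxed) semantics: intermediate float/double values may
    // carry extended exponents on hardware such as the x87.
    static double defaultEval(double a, double b) {
        return a * b;
    }

    // strictfp marks the compiled method as strict: results must be
    // bit-for-bit identical to pure IEEE 754 double arithmetic everywhere.
    static strictfp double strictEval(double a, double b) {
        return a * b;
    }

    public static void main(String[] args) {
        // On hardware whose doubles are already pure IEEE 754 (e.g. SSE2),
        // the two modes agree; they can differ only where the extended
        // exponent range changes an overflow or underflow result.
        System.out.println(defaultEval(0.1, 0.2) == strictEval(0.1, 0.2));
    }
}
```

Note that the difference is observable only on hardware with extended
intermediates; on processors that evaluate double arithmetic exactly to
IEEE 754 double, the two modes produce identical bits.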

-Joe Darcy

darcy@cs.berkeley.edu
