From: David McQuillan <dmcq@fano.demon.co.uk>
Newsgroups: comp.compilers
Date: 21 Nov 1998 12:08:05 -0500
Organization: Home
References: 98-09-164 98-10-018 98-10-040 98-10-120 98-11-015 98-11-031 98-11-059 98-11-093
Keywords: arithmetic
eggert@twinsun.com (Paul Eggert) wrote:
> Stick to your guns! The basic problem with x86 and strict `double' is
> that, even in 64-bit mode, the x86 doesn't round denormalized numbers
> properly. It simply rounds the mantissa at 53 bits, resulting in a
> double-rounding error. The proper behavior is to round at fewer bits.
>
> I've seen claims of efficient workarounds, but whenever I see details,
> it's clear that the methods are either incorrect or inefficient. I'm
> not saying it's impossible, but I've never seen it done, and I believe
> it's infeasible to implement strict IEEE `double' arithmetic
> efficiently on the x86 architecture.
The trick of using fscale does always work, unlike storing and loading, which double-rounds denormalised numbers. I don't know what the performance is like; I guess it is fine on modern machines. The thing that really amused me, though, was looking this up in a 486 book which said fscale could be used as a fast alternative to multiplying or dividing by a power of two. However, it took twice as many cycles as a general multiply :)
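For anyone curious, here is my reading of how the scaling trick goes, a sketch under my own assumptions rather than anything spelled out in the thread: with the x87's precision control set to 53 bits, scale one operand down by 2^-15360 (the gap between double's and extended's minimum normal exponents) so that any result which would be a double denormal underflows in the extended format instead. The hardware then denormalises, i.e. rounds, exactly once at the right bit position; scaling back up by the same power of two, which is all fscale does, is exact, as is the final store. In C, with ldexpl standing in for fscale and assuming long double is the 80-bit format:

    #include <math.h>

    /* Sketch of the scaling workaround for a strict IEEE double
       multiply.  Assumes long double is the x87 80-bit format and the
       FPU precision control is set to 53 bits; strict_mul and the use
       of ldexpl instead of raw fscale are my own illustration. */
    #define K 15360  /* extended min exponent -16382 minus double's -1022 */

    double strict_mul(double x, double y)
    {
        /* A would-be double denormal now underflows in extended
           precision, so the hardware rounds it once, at the same
           effective precision a double denormal would have. */
        long double t = ldexpl((long double)x, -K) * (long double)y;
        /* Scaling by a power of two is exact, and t already carries no
           more precision than a double can hold, so the final store
           does not round a second time. */
        return (double)ldexpl(t, K);
    }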
> Most people don't care about the errors, though, which is why the Java
> spec is being relaxed to allow x86-like behavior (and PowerPC
> multiply-add, too). For the vast majority of floating point
> applications, performance is more important than bit-for-bit
> compatibility, so it's easy to see why bit-for-bit compatibility is
> falling by the wayside.
I would disagree with this, though not vehemently. The proper spec does allow some nice reasoning about what will happen, and, more to the point I think, different results on different machines mean support costs. I have before now quite happily cut out useful features simply because the documentation would get a bit more obscure, or because I thought people would start asking questions and taking up time. Not being sure what the machine will do with a program, and having that uncertainty written into the spec, gives me a nasty sour expression I'd rather not have.
--
David McQuillan