Re: inlining + optimization = nuisance bugs

From: Luddy Harrison <luddy@concmp.com>
Newsgroups: comp.compilers
Date: 22 Sep 1998 01:14:41 -0400
Organization: Compilers Central
Keywords: arithmetic

>> Chris F Clark <cfc@world.std.com> writes:
>>
>> However, you can have "too much" precision if you can't have it
>> universally. Take the following fragment of a BASIC derivative:
>>
>> 100 input a, b
>> 110 let x = a*a - b*b
>> 120 print x
>> 130 end


>> ... If your implementation keeps either a*a
>> or b*b in a register before performing the subtraction (and does not
>> store both values into memory), then it will get the wrong answer (it
>> will not get 0, which is the right answer according to both real
>> arithmetic and its computer approximation).


This is a good example.


In low precision, we have


      l1 = a*a        // low precision temp
      l2 = b*b        // low precision temp
      x = l1 - l2


In mixed precision, we have


      h1 = a*a // extended precision temp
      l2 = b*b
      x = h1 - l2


The difference between these results is h1 - l1, which is exactly the
roundoff incurred in converting a*a from extended to low precision.
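
To make this concrete, here is a quick C sketch (mine, not from
Chris's post), using float for the low precision and double as a
stand-in for the extended precision temporary; the printed value is
exactly the h1 - l1 roundoff term:

    #include <stdio.h>

    int main(void)
    {
        float a = 1.1f;             // 1.1 is not exactly representable
        double h1 = (double)a * a;  // product in higher precision: exact
        float  l1 = a * a;          // product rounded to low precision
        printf("h1 - l1 = %g\n", h1 - (double)l1);
        return 0;
    }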


As Chris points out, when we run the program with a = b, we get a
result of 0.0 from the low precision version, which looks very good.
But the floating point precision does not justify interpreting this
as an exact zero. All we can say is that it is a value near zero.
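
(To see how little the printed zero tells you, consider this small C
illustration, with literals of my own choosing: the two decimal
inputs below are different real numbers, yet they convert to the same
float, so the low precision program prints an exact 0 for a
difference that is really about 2.2e-8.)

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        float a = strtof("1.10000001", NULL);  // distinct real inputs ...
        float b = strtof("1.1", NULL);         // ... same nearest float
        printf("a == b: %d\n", a == b);        // prints 1
        printf("x = %g\n", a*a - b*b);         // prints 0
        return 0;
    }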


Now run the program with a = b + delta, where 0 < delta < sqrt(|h1 - l1|).
This time the mixed precision version gives a strictly more accurate
result than the low precision one; the low precision version
continues to give us a zero.


Run the program once more with a chosen so that a*a overflows to +inf
in low precision but does not overflow in extended precision. Again
the mixed precision version gives a strictly more accurate result
than the low precision one.
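
A sketch of this overflow case (the constant is mine; any a whose
square lands above FLT_MAX but below DBL_MAX will do):

    #include <stdio.h>

    int main(void)
    {
        float a = 3e20f;            // a*a = 9e40 exceeds FLT_MAX (~3.4e38)
        float lo = a * a;           // overflows to +inf in low precision
        double hi = (double)a * a;  // representable in extended precision
        printf("low:   %g\n", lo);  // inf
        printf("mixed: %g\n", hi);  // 9e+40
        return 0;
    }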


I would argue that if the programmer is surprised by any of these
outcomes, then the program has been written to assume strict equality
where it should assume only approximate equality. In other words, it
is attributing undue significance to the low-precision floating point
format.
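
In code, the repair is to make the approximate equality explicit
instead of testing the subtraction against an exact zero. One common
idiom, sketched in C (the scale factor and tolerance are my choices,
not anything prescribed by the example):

    #include <math.h>
    #include <float.h>

    // Treat x as "zero" if it is small relative to the magnitudes
    // of the quantities that produced it.
    int nearly_zero(double x, double a, double b)
    {
        double scale = fmax(a * a, b * b);
        return fabs(x) <= 4 * DBL_EPSILON * scale;
    }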


My reading of the programming manual for the PowerPC is that the
internal registers that hold the intermediate result for a
multiply-add instruction have extended precision. If this is true,
and if occasional use of extended precision is 'wrong', then there is
virtually no circumstance under which a compiler can correctly
generate the PowerPC fmadd and fmsub instructions, for example. That
strikes me as a very severe interpretation of floating point
semantics.
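
For what it is worth, C99 exposes exactly this fused behavior as
fma() in <math.h>: the product is not rounded before the addition,
and a single rounding happens at the end. A small sketch (constants
mine) of the classic consequence: fma can recover the rounding error
of an ordinary product, which would be impossible if the intermediate
result were rounded to working precision.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double a = 1.0 + 1.0 / (1 << 30);  // 1 + 2^-30, exact in double
        double p = a * a;                  // rounded product
        double e = fma(a, a, -p);          // exact residual a*a - p
        printf("p = %.17g\n", p);
        printf("e = %.17g\n", e);          // 2^-60: nonzero because p was rounded
        return 0;
    }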


-Luddy Harrison