Re: Arithmetic accuracy, was Compiler bugs

Related articles
re: Compiler bugs chase@world.std.com (David Chase) (2002-01-03)
Re: Compiler bugs christian.bau@cbau.freeserve.co.uk (Christian Bau) (2002-01-05)
Re: Compiler bugs chase@world.std.com (David Chase) (2002-01-14)
Re: Arithmetic accuracy, was Compiler bugs joachim_d@gmx.de (Joachim Durchholz) (2002-01-16)
Re: Arithmetic accuracy, was Compiler bugs nmm1@cus.cam.ac.uk (2002-01-18)
Re: Arithmetic accuracy, was Compiler bugs jgd@cix.co.uk (2002-01-24)
Re: Arithmetic accuracy, was Compiler bugs nmm1@cus.cam.ac.uk (2002-01-24)
From: nmm1@cus.cam.ac.uk (Nick Maclaren)
Newsgroups: comp.compilers
Date: 18 Jan 2002 21:02:12 -0500
Organization: University of Cambridge, England
References: 02-01-015 02-01-029 02-01-054 02-01-063
Keywords: errors, arithmetic
Posted-Date: 18 Jan 2002 21:02:11 EST

Joachim Durchholz <joachim.durchholz@gmx.de> wrote:
>David Chase wrote:
>
>> I'm a little curious about one thing -- does anyone, besides a
>> specifications- and-exact-correctness weenie like myself, actually
>> care about this level of detail in machine floating point?
>
>I can identify several groups of people who would be fussy about such
>things:
>
>Numeric algorithm implementors (because some algorithms depend
>crucially on such precision, or - even more relevant - that the
>round-off behaviour is predictable because the algorithm cancels them
>out).
>
>Anybody who uses this type of numeric library (typically scientific
>computations and simulation).
>
>People who are doing calculations for stuff that's safety-critical
>(bridge construction, building construction) - not because any serious
>problems are expected, but because these people cannot easily evaluate
>whether the deficit is serious or not. (Plus there are enough reliable
>applications around, so why switch to one with known deficits?)


Er, no. I am responding to your posting, but this isn't specific to
your points; it applies to other people's as well.


Firstly, numerical experts do NOT depend on such things, because they
know better. One of their main objections (sic) to consistent
floating-point is that it is extremely harmful in many important ways,
and it leads non-experts to believe that their answers are right
because they are the same on multiple systems. I see this delusion
regularly.


Any serious optimisation or parallel working will lead to variations
in the results, so demanding bitwise consistency rules all of it out.
For example, consider a trivial problem like accumulating a sum in
parallel (MPI_Reduce(...,MPI_SUM,...)): the order of the additions,
and hence the rounding, depends on how the work happens to be
distributed. This isn't just a minor slowdown; it means abandoning
ANY attempt at improving performance.
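
To make that concrete, here is a minimal sketch in plain C (no MPI
needed; the data values are arbitrary, chosen only so the difference
is visible at a glance). Summing the same numbers left-to-right and
in the pairwise order that a tree reduction across processes would
use gives legitimately different answers, because floating-point
addition is not associative:

    #include <stdio.h>

    int main(void)
    {
        /* Arbitrary data, chosen so the rounding differences show up
           in the first significant digit rather than the last bit. */
        double data[8] = { 1e16, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, -1e16 };

        /* Left-to-right, as a single serial loop would sum it. */
        double serial = 0.0;
        for (int i = 0; i < 8; i++)
            serial += data[i];

        /* Pairwise (tree) order, as a reduction over 8 processes might. */
        double p01 = data[0] + data[1], p23 = data[2] + data[3];
        double p45 = data[4] + data[5], p67 = data[6] + data[7];
        double tree = (p01 + p23) + (p45 + p67);

        /* With IEEE double this typically prints 0 and 4; the exact
           sum is 6.  Both results are "correct" to within rounding. */
        printf("serial = %g, tree = %g\n", serial, tree);
        return 0;
    }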


And, of course, it does not help to improve accuracy in the normal
case, where accuracy is constrained by the stability of the algorithm
and the quality of the input data. It merely hides the fact that the
answers are likely to be consistently wrong! What is needed for
numerical accuracy is a known set of mathematically consistent
properties of the arithmetic.
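
A standard illustration of that point (the coefficients below are
arbitrary): the two expressions for the smaller root of
x^2 - 2*b*x + c = 0 are algebraically identical and both perfectly
reproducible on any IEEE 754 system, yet one of them is consistently
wrong because of cancellation, and bitwise consistency across
machines does nothing to reveal it:

    #include <math.h>
    #include <stdio.h>

    /* Two algebraically equivalent formulas for the smaller root of
       x^2 - 2*b*x + c = 0.  Both are bit-for-bit reproducible on any
       IEEE 754 system; only one of them is accurate. */
    int main(void)
    {
        double b = 1.0e8, c = 1.0;
        double naive  = b - sqrt(b*b - c);        /* catastrophic cancellation */
        double stable = c / (b + sqrt(b*b - c));  /* rearranged to avoid it    */
        printf("naive  = %.17g\n", naive);   /* 0 here: no correct digits      */
        printf("stable = %.17g\n", stable);  /* ~5e-9, correct to ~16 digits   */
        return 0;
    }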


As an aside, Kahan does not seem to like probabilistic rounding,
because he (correctly) claims that it does not give a reliable
error analysis. However, he omits the points that (a) it exposes
this aspect of the arithmetic (which is good) and (b) it is actually
MORE accurate than round-to-nearest on several important classes of
operation. But while statisticians are happy to work with
probabilities, numerical analysts aren't, and most computer
scientists react with exorcisms!
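
For anyone who has not met it: probabilistic (stochastic) rounding
picks the representable value above or below the exact result with
probability proportional to the distance, so the rounding error has
zero mean. Below is a rough sketch in C - nothing like a real
hardware implementation, and using rand() purely for brevity - of
double-to-float rounding, applied to the classic case where it beats
round-to-nearest: accumulating many increments that are each smaller
than half an ulp of the running sum:

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Round a double to float, choosing the neighbouring float above
       or below with probability proportional to the distance, so the
       expected value of the result equals the argument. */
    static float round_stochastic(double x)
    {
        float f = (float)x;             /* round-to-nearest as a start */
        double fd = (double)f;
        if (fd == x)
            return f;                   /* exactly representable       */
        float lo, hi;
        if (fd < x) { lo = f; hi = nextafterf(f, INFINITY);  }
        else        { hi = f; lo = nextafterf(f, -INFINITY); }
        double p = (x - (double)lo) / ((double)hi - (double)lo);
        return ((double)rand() / RAND_MAX < p) ? hi : lo;
    }

    int main(void)
    {
        srand(12345);
        float nearest = 1.0f, stochastic = 1.0f;
        /* Add 1e-8 ten million times; the exact answer is 1.1.  Each
           increment is below half an ulp of the sum, so round-to-nearest
           discards every single one of them; stochastic rounding keeps
           them on average. */
        for (long i = 0; i < 10000000; i++) {
            nearest    = (float)((double)nearest + 1e-8);
            stochastic = round_stochastic((double)stochastic + 1e-8);
        }
        printf("nearest = %.7f, stochastic = %.7f\n", nearest, stochastic);
        return 0;
    }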


He (and his standard-bearer, Sun) is perfectly correct that, if you
need to know for certain whether arithmetic errors are causing
inaccuracies, then interval arithmetic is the solution. It is both
necessary and sufficient. But the problems of that approach are
another issue.
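
For completeness, the idea in miniature - not a usable library, and
with the usual caveat that the compiler must be told to respect the
rounding mode (strictly '#pragma STDC FENV_ACCESS ON', and in
practice something like -frounding-math). Each operation rounds its
lower bound down and its upper bound up, so the true result is
guaranteed to lie in the computed interval, and the interval's width
shows exactly how much the arithmetic errors could matter:

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    typedef struct { double lo, hi; } interval;

    /* Interval addition: round the lower bound down and the upper
       bound up, so the exact sum must lie inside the result. */
    static interval iadd(interval a, interval b)
    {
        interval r;
        int old = fegetround();
        fesetround(FE_DOWNWARD); r.lo = a.lo + b.lo;
        fesetround(FE_UPWARD);   r.hi = a.hi + b.hi;
        fesetround(old);
        return r;
    }

    int main(void)
    {
        /* A deliberately generous enclosure of the real number 1/10,
           which has no exact binary floating-point representation. */
        interval tenth = { nextafter(0.1, 0.0), nextafter(0.1, 1.0) };
        interval sum   = { 0.0, 0.0 };
        for (int i = 0; i < 10; i++)
            sum = iadd(sum, tenth);
        /* The true value 1.0 is guaranteed to lie in [sum.lo, sum.hi]. */
        printf("[%.17g, %.17g]\n", sum.lo, sum.hi);
        return 0;
    }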


You must remember the context in which IEEE 754 was written. There
were a lot of different arithmetics (not a problem in itself), some
with evil error-accumulation properties (the IBM 370 being the
classic example), others with INCONSISTENT arithmetic (e.g. several
CDCs), several with very limited ranges, and many with no reliable
error checking at all. IEEE 754 put some order into this chaos, but
many people believe that it went too far - LIA-1 is an attempt to
steer a middle course.


So the summary is that arithmetic needs a precise mathematical model,
which describes the operations, their invariants, their accuracy and
their error bounds (e.g. LIA-1, LIA-2 and LIA-3), but does NOT need
bit-for-bit 'precise' arithmetic. The latter does not help, and is
more harmful than beneficial.




Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email: nmm1@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679

