
Related articles
[3 earlier articles]
Re: Floating point constant question bill@amber.csd.harris.com (1994-03-29)
Re: Floating point constant question maydan@mti.mti.sgi.com (1994-03-29)
Re: Floating point constant question hbaker@netcom.com (1994-03-30)
Re: Floating point constant question conte@ece.scarolina.edu (1994-03-30)
Re: Floating point constant question chase@Think.COM (1994-03-30)
Re: Floating point constant question hbaker@netcom.com (1994-03-31)
Re: Floating point constant question przemek@rrdjazz.nist.gov (1994-03-31)

Newsgroups: comp.compilers
From: przemek@rrdjazz.nist.gov (Przemek Klosowski)
Keywords: arithmetic
Organization: U. of Maryland/NIST
References: 94-03-157 94-03-191
Date: Thu, 31 Mar 1994 20:21:13 GMT

chase@Think.COM (David Chase) writes:

> In general, I think this trend is nuts (yes, I'm aware that reordering can
> occur in Fortran as long as it does not disobey parentheses). I can
> tolerate accuracy-enhancing optimizations (such as use of fused-madd, or
> replacing div-imprecise with mult-precise-reciprocal) under the control of
> a flag, but if you are trying to ensure that your application will exhibit
> no bugs in the field, then you do not want to monkey with its behavior in
> any way, even if you are making it "better".
>
> In addition, verification of a compiler is made more difficult by these
> sorts of things. No longer is there a single right answer -- now there is
> a range of correct answers. Testing (to the same degree of confidence)
> becomes much more expensive. This is especially true if you put other
> behavior-"improving" optimizations under the control of the "-O11" flag.

On the other hand, if the result CAN depend on reordering, then perhaps it
is not well-defined in a numerical sense. What is lacking is not some
special ordering giving a 'blessed' result, but rather an error bound on
the result.
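A quick sketch (mine, not part of the original post) of why results can depend on reordering: IEEE 754 addition is not associative, so a compiler that reassociates a sum changes which rounding errors occur and can legitimately produce a different answer.

```python
# Floating-point addition is not associative: regrouping the "same"
# sum changes where rounding happens, so a reordering compiler can
# change the computed result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # evaluated left-to-right
right = a + (b + c)  # the same sum, reassociated

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

Neither grouping is the "right" answer; both are within rounding error of 0.6, which is exactly why an error bound is more meaningful than any one blessed ordering.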

Interval arithmetic is one method for obtaining such an error estimate; I
understand that it is currently impractical for production numerical work,
because of the speed penalty and because the intervals grow quickly. Still,
for the purposes of compiler certification, I propose that one could write
the program using an interval arithmetic algorithm and compare the
resulting intervals rather than the raw numbers.
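As an illustration of both the idea and the growth problem, here is a minimal interval class (my sketch, not from the post; a real implementation would also need directed rounding of each bound, which plain Python floats do not provide):

```python
class Interval:
    """Closed interval [lo, hi] tracking a value and its uncertainty.
    Sketch only: bounds are rounded to nearest, not outward, so the
    enclosure is not rigorous the way a real interval library's is."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product's extremes occur at some pair of endpoints.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"


# Cancellation makes intervals blow up: a 0.1% input uncertainty
# around 1.0 becomes a result interval spanning roughly [-1, 1].
x = Interval(0.999, 1.001)
y = (x - Interval(1.0)) * Interval(1000.0)
print(y)  # roughly [-1.0, 1.0]: the relative error is now unbounded
```

Comparing such intervals between two compilations (rather than comparing raw numbers bit-for-bit) would let a certifier accept any result the algorithm's own error bound permits.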

Actually, since I am responding to chase@think.com, I might ask you whether
massively parallel processors couldn't somehow make interval arithmetic
more palatable? After all, the interval computation should require at most
double the number of processors.

I had this idea after hearing a talk by Jack Dongarra, who complained that
most parallel algorithms no longer have the nice, predictable error bounds
provided by single-threaded algorithms, so some effort to produce error
estimates will be needed anyway.

--

przemek klosowski (przemek@rrdstrad.nist.gov)

Reactor Division (bldg. 235), E111

National Institute of Standards and Technology

Gaithersburg, MD 20899, USA

(301) 975 6249
