From: George Neuner <gneuner2@comcast.net>
Newsgroups: comp.compilers
Date: Thu, 24 Oct 2013 07:41:12 -0400
Organization: A noiseless patient Spider
References: 13-10-026
Keywords: arithmetic, optimize
Posted-Date: 24 Oct 2013 08:57:48 EDT

On Wed, 23 Oct 2013 13:47:10 -0700 (PDT), Abid <abidmuslim@gmail.com> wrote:

>Does back end compiler optimizations affect the floating point
>accuracy?

As John said, the answer is "sometimes".

FP hardware registers are usually wider than the compiler's in-memory storage formats (most often 32- and 64-bit IEEE). The registers typically carry a few to several extra bits of precision, which are lost when a value is stored to memory and then reloaded.

For maximum precision you want to keep intermediate values in registers as much as possible, minimizing stores to and reloads from memory.

For maximum consistency of results - across compiler modes or different FPUs - you generally must do the opposite: aggressively store and reload intermediate results before using them. [On many machines, clipping intermediate values can be done by moving them to and from an integer register without actually going all the way to memory. But however it is done, it slows the calculation.]

Floating point operations, in general, are not commutative: changing the order of evaluation of sub-expressions can change the results. Substitutions using mathematical equivalences can change the results because floating point numbers are only an approximation of real numbers - many real equivalences don't hold for FP math.

Using different FPUs may yield different results. From the back end POV this is important if there are multiple different hardware units to choose from. E.g., on the x86, the x87 FPU and the SIMD FPU have different register widths and some corresponding instructions round results differently. It is safe to do unrelated calculations on either unit, but you have to be extremely careful about using both to work on different parts of the same calculation.

>Is there any research work in this area?

There has been quite a lot of work aimed at improving FP in the middle (in the IR), but I'm not aware of any research specifically targeting the back end.

George

[I think you mean they're not associative. I don't know any situations where a+b != b+a, but lots where a+(b+c) != (a+b)+c -John]
