From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Newsgroups: comp.compilers
Date: Thu, 24 Oct 2013 02:56:06 +0000 (UTC)
Organization: Aioe.org NNTP Server
References: 13-10-026
Keywords: arithmetic, optimize
Posted-Date: 24 Oct 2013 08:56:09 EDT
Abid <abidmuslim@gmail.com> wrote:
(snip)
> Do back-end compiler optimizations affect floating-point
> accuracy? Is there any research work in this area?
As far as I know, the biggest cause of problems with optimization
is the extended precision of the registers in the Intel x87.
The original design of the 8087 was for a virtual stack that would
spill to memory on overflow and reload on underflow.
Not until the hardware was ready did anyone try to write the interrupt
routine to handle it, and it turned out not to be possible. The
required state information was either not available, could not be
restored, or both.
Compilers could write temporary intermediates to memory in the 80-bit
temporary-real format, but that is rare.
Next comes the complication of optimization. Common subexpression
elimination between statements, or in general keeping values in
registers, means that some values will retain extra precision, but you
(the programmer) don't know which ones. Even if you assign to a
variable and reload, the optimizer might keep it in a register. (Many
compilers have an option to force the store, but that can slow things
down.)
I don't know what research might have been done. The x87 has an
eight-element register stack. Research won't change that, though using
SSE avoids the problem (and the advantage) of extended precision.
The technology for doing optimization is reasonably well understood.
Knowing which ones you should do is a different problem.
-- glen