Subject: How to read/print fp numbers accurately

Newsgroups: comp.compilers
From: david@glance.ch (David Mosberger)
Keywords: design, Fortran, question
Organization: Compilers Central
Date: Wed, 22 Aug 90 05:22:37 GMT
Recently, two papers appeared in the ACM SIGPLAN '90 proceedings concerning
the optimal conversion of floating-point numbers from decimal scientific
notation to binary floating-point representation and back (Steele and
White, ``How to Print Floating-Point Numbers Accurately'', and Clinger,
``How to Read Floating Point Numbers Accurately''). ``Optimal'' is meant in
the sense of ``best approximation to the true binary/decimal value''. The
algorithms presented are quite elaborate, however: they require
multi-precision integers, extended-precision floating-point operations, or
both.
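
To make the notion of optimality concrete: for IEEE-754 single precision,
correctly rounded printing and reading guarantee that nine significant
decimal digits round-trip exactly. A minimal sketch of that check in C,
assuming a library whose conversions are correctly rounded on both sides
(strtof, the standard single-precision reader, is used here purely for
illustration):

    #include <stdio.h>
    #include <stdlib.h>

    /* Round-trip check: print x with 9 significant decimal digits and
       read it back.  With correctly rounded conversions in both
       directions, the original bit pattern is always recovered. */
    int roundtrips(float x)
    {
        char buf[32];
        sprintf(buf, "%.8e", x);        /* %.8e => 9 significant digits */
        return strtof(buf, NULL) == x;  /* exact equality expected */
    }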
I would like to know what the best is that one can achieve using single
precision floating point only. That is, to convert to or from single
precision floating-point numbers, the algorithm should use only single
precision floating-point operations (integers of the ``usual'' size may of
course be used as well). Is there an optimality criterion for such an
algorithm?
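
For concreteness, here is a sketch of the naive reader restricted to
single-precision operations (naive_atof is a hypothetical name; sign
handling and exponent syntax are deliberately omitted). Every
multiplication by 10.0f and every addition rounds, so errors of a few ulps
can accumulate; bounding or eliminating that error under this restriction
is exactly what the question asks:

    #include <ctype.h>

    /* Naive single-precision reader: digits are accumulated in a float
       and the decimal point is accounted for by repeated division by
       10.0f.  Each float operation rounds, so the result may differ
       from the correctly rounded value by a few ulps. */
    float naive_atof(const char *s)
    {
        float val = 0.0f;
        int exp10 = 0;

        while (isdigit((unsigned char)*s))
            val = val * 10.0f + (float)(*s++ - '0');  /* rounds each step */
        if (*s == '.') {
            s++;
            while (isdigit((unsigned char)*s)) {
                val = val * 10.0f + (float)(*s++ - '0');
                exp10--;               /* account for the decimal point */
            }
        }
        while (exp10 < 0) {            /* scale down, rounding again */
            val /= 10.0f;
            exp10++;
        }
        return val;
    }

Note that 10.0f is exactly representable, so each division is by an exact
constant, yet each division still rounds; the gap between this and a
correctly rounded result is the quantity an optimality criterion would
have to bound.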
David Mosberger Glance Ltd.
Software Engineer Gewerbestrasse 4
david@glance.ch 8162 Steinmaur
UUCP: {...}!{uunet,mcsun}!elava!david Switzerland
X.400: S=david;O=glance;P=switch;A=arCom;C=ch
BITNET: david@glance.ch or david at glance.ch