Re: Subtraction + comparison in one asm instruction?

"Jan C. =?iso-8859-1?Q?Vorbr=FCggen?=" <jvorbrueggen@mediasec.de>
13 Nov 2002 12:20:16 -0500

          From comp.compilers

Related articles
[12 earlier articles]
Re: Subtraction + comparison in one asm instruction? vincent+news@vinc17.org (Vincent Lefevre) (2002-09-14)
Re: Subtraction + comparison in one asm instruction? vbdis@aol.com (VBDis) (2002-09-19)
Re: Subtraction + comparison in one asm instruction? anton@mips.complang.tuwien.ac.at (Anton Ertl) (2002-09-19)
Re: Subtraction + comparison in one asm instruction? joachim_d@gmx.de (Joachim Durchholz) (2002-09-19)
Re: Subtraction + comparison in one asm instruction? sander@haldjas.folklore.ee (Sander Vesik) (2002-11-12)
Re: Subtraction + comparison in one asm instruction? sander@haldjas.folklore.ee (Sander Vesik) (2002-11-12)
Re: Subtraction + comparison in one asm instruction? jvorbrueggen@mediasec.de (Jan C. Vorbrüggen) (2002-11-13)

From: "Jan C. =?iso-8859-1?Q?Vorbr=FCggen?=" <jvorbrueggen@mediasec.de>
Newsgroups: comp.compilers
Date: 13 Nov 2002 12:20:16 -0500
Organization: MediaSec Technologies GmbH
References: 02-09-038 02-09-076 02-09-083 02-11-051
Keywords: arithmetic, design, optimize
Posted-Date: 13 Nov 2002 12:20:16 EST

> Well, then you obviously have never worked with anything where getting
> the same results repeatably for the same inputs is absolutely crucial.
> And one should not have to turn off optimisations to get well-defined
> results.


The results are still well-defined - they are just different from other
well-defined results. See John's comment below.


> > [This is an ancient sore point. In the IBM Fortran X compiler about
> > 30 years ago, the guy working on optimization set as a ground rule for
> > himself that any new code an optimization generated had to produce
> > bit-identical results to the old code, and he managed to get some pretty
> > amazing results anyway. But in general I concur that any code that
> > meets the language spec is valid, regardless of whether the results
> > are identical to some other code that also meets the language spec.
> > -John]


...snip...


> [I wouldn't disagree. I think the problem is that languages have too
> often been defined with more thought about making optimizations possible
> than to making numerical results good. -John]


One of the problems of language design, it seems to me, is that we don't,
and very often don't even _want_ to, think about specifying an algorithm
in the detail that would be required to get "bit-identical" results. Even
with almost everything on the market nowadays using two's complement integers
and some approximation of IEEE floating point (cf. Intel's extended internal
format), you will get slight differences in the output of a "portable"
program when you use different compilers on the same machine, the same
compiler (e.g., gcc) on different machines, and so on. Restricting this so
that "bit-identical" results are guaranteed is hard work at all levels -
for what benefit? Would you like to write your program in occam2, for
instance, which requires explicit parentheses _everywhere_ - i.e., an
unambiguous expression with no default/implied precedence rules?
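
To make the point concrete, here is a minimal C sketch of my own (not
taken from any of the code discussed here): the two groupings below are
mathematically identical, yet they round differently, so a compiler that
reassociates the sum - or that keeps the intermediate in Intel's extended
format - produces a different, but equally well-defined, answer.

    #include <stdio.h>

    int main(void)
    {
        double a = 1e16, b = -1e16, c = 1.0;
        /* Left grouping: a and b cancel exactly, then c is added. */
        double left  = (a + b) + c;   /* 1.0 */
        /* Right grouping: c is absorbed into b first; -1e16 + 1.0
           rounds back to -1e16, so the 1.0 is lost. */
        double right = a + (b + c);   /* 0.0 */
        /* With an x87 extended-precision intermediate, even the
           right grouping would come out as 1.0. */
        printf("left  = %g\n", left);
        printf("right = %g\n", right);
        return 0;
    }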


AFAIK, most "popular" languages (e.g., Fortran) have such precedence rules,
but require the compiler to honour parentheses that are explicitly written.
How often do you think use is made of this feature?
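
One place where honouring written parentheses really matters is in
error-free transformations such as Fast2Sum, where the "redundant"
grouping is exactly what recovers the rounding error. A small C sketch of
my own (C at default settings must likewise evaluate the grouping as
written; options such as gcc's -ffast-math remove that guarantee):

    #include <stdio.h>

    /* Fast2Sum: for |a| >= |b|, s is the rounded sum and err the exact
       rounding error - but only if the compiler evaluates the grouping
       as written. Real-number algebra would simplify err to zero. */
    static void fast2sum(double a, double b, double *s, double *err)
    {
        *s   = a + b;
        *err = b - (*s - a);
    }

    int main(void)
    {
        double s, err;
        fast2sum(1.0, 1e-20, &s, &err);
        printf("s = %.17g, err = %.17g\n", s, err);  /* err recovers 1e-20 */
        return 0;
    }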


Of course, array operations and parallel processing would then be anathema
to you.
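
The reason, sketched below in C (again my own illustration), is that an
array or parallel reduction associates the additions according to how the
work happens to be split, and each association order is a different
rounding sequence:

    #include <stdio.h>

    #define N 100000

    /* Straight left-to-right summation. */
    static double sum_serial(const double *x, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += x[i];
        return s;
    }

    /* The same numbers summed as two interleaved partial sums combined
       at the end - roughly what a vectorising compiler or a parallel
       reduction does. */
    static double sum_partials(const double *x, int n)
    {
        double even = 0.0, odd = 0.0;
        for (int i = 0; i < n; i += 2) even += x[i];
        for (int i = 1; i < n; i += 2) odd  += x[i];
        return even + odd;
    }

    int main(void)
    {
        static double x[N];
        for (int i = 0; i < N; i++)
            x[i] = (i % 2 == 0) ? 1.0 : 1e-16;
        /* Serially, each 1e-16 is rounded away against the growing sum;
           in the partial sums the tiny terms accumulate and survive. */
        printf("serial  : %.17g\n", sum_serial(x, N));
        printf("partials: %.17g\n", sum_partials(x, N));
        return 0;
    }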


As a sample case, read John Henning's article in (IEEE) Computer on SPEC
CPU2000 and how that benchmark handles deviations from the "ground truth"
output. A very difficult subject, I can tell you from personal experience
as the developer of one of the CFP2000 components: about half of my time on
that was spent making the code compile in various environments, and the
other half tweaking the algorithm - which is basically _very_ error tolerant
(it's image processing, and the input data has only ~6-7 bits of accuracy in
the best case) - so that it would "validate" on the different platforms.


The final straw was xlf90 at the highest optimization level - which, BTW, is
a good citizen and generates a warning that it will possibly change program
semantics in certain cases - producing values substantially greater than 1
for a dot product, and generating different results for the same dot product
computed in close proximity. I never found out quite _what_ was going wrong...
but I adapted the code to handle such cases more gracefully, and also argued
with John and the committee about how we could ensure that the different runs
did "substantially the same work" (see the article).


Jan

