Re: inlining + optimization = nuisance bugs

From: Luddy Harrison <luddy@concmp.com>
Newsgroups: comp.compilers
Date: 4 Oct 1998 01:07:22 -0400
Organization: Compilers Central
References: 98-09-164 98-10-001
Keywords: arithmetic, optimize, comment

Zalman Stern (zalman@netcom.com) writes:


> As many of us have said, the x86 architecture is flawed. Especially in
> that the precision flags don't handle the exponent range. Witness the
> whole Java issue. (Which I'm sure you'll dismiss out of hand. But I'm
> also sure that if the x86 was some low volume exotic people would have
> blown it off as unfit for Java in a heartbeat.)


I don't dismiss any of this out of hand. I see it as a terrible
problem: the floating point hardware which is supplied in many
processors is dreadfully incompatible with the correct implementation
of floating point, by one widely accepted definition of correct. This
is made much more complicated by the fact that many users of compilers
regard the compiler as a tool for programming a particular processor
(as opposed to those who view it as a tool for portable translation of
a fixed programming language semantics). Those who are trying to
program, say, a floating point DSP directly (but who don't want to
write asm), have a very different semantic model in their head than
those who have a venerable FORTRAN program which runs on many systems.
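
To make the mismatch concrete, here is a small C sketch of the
exponent-range point (the code is mine, not from the thread, but the
behavior is the classic x87 one): the precision-control flags shorten
the significand of intermediate results but not their 15-bit
exponent, so overflow still depends on where the compiler happens to
spill.

    #include <stdio.h>

    int main(void)
    {
        volatile double a = 1.0e308, b = 10.0;
        /* Kept in an 80-bit x87 register, a*b does not overflow
           (the 15-bit exponent absorbs 1e309) and the division
           recovers 1e308.  Rounded to a 64-bit double first, a*b
           is +inf and so is the quotient.  Which one you get can
           turn on the optimization level. */
        double r = a * b / b;
        printf("%g\n", r);
        return 0;
    }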


> : Luddy Harrison (luddy@concmp.com) writes:
> : (Is there any compiler shipping for the PowerPC family that, when given
> : only the -O flag, suppresses the generation of fmadd and fmsub? None
> : that I know of.)
>
> Every one of them I know of (Two from Apple, Metrowerks, xlc from IBM,
> gcc) provides an option to turn off fused multiply and add
> operations. Likewise for other architectures that provide this feature
> (e.g. MIPS-IV.) It allows compiler vendors to say "If you have sensitive
> code that depends on 'exact' IEEE-754, throw this switch and you'll get
> it." Which is a BIG selling point to some people.


Yes, this seems to be the state of the art. And there are usually
more options for suppressing the reassociation of floating point
addition and multiplication, and so on. Then there are options to
take you in the other direction, to permit stuff that is flatly
incorrect but fast. Perhaps it's just me, but I find this "command
line steering" to be really dreadful. When modules are linked
together, it is seldom possible to reliably check that all their
command-line knobs are compatible. And building a software system
under several compilers which require special command-line options is
a pain. It's a BIG selling point in the same way that a bilge pump is
a BIG selling point on a boat with a BIG hole in it.
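
(For anyone who hasn't been bitten by fmadd, the hazard is that the
fused operation rounds only once.  A hypothetical C illustration,
using C99's fma() to make the fusion explicit: x*x - y*y is exactly
zero when x == y if both products are rounded, but a compiler that
quietly emits fmadd computes fma(x, x, -(y*y)), so the subtraction
sees the unrounded x*x and the result is the rounding error of y*y.)

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1.0 + 0x1p-29;   /* x*x is inexact in 64 bits */
        double p = x * x;           /* product rounded to double */
        /* p - p is 0; the fused form yields 2^-58, the amount by
           which rounding perturbed the product. */
        printf("rounded: %g  fused: %g\n", p - p, fma(x, x, -p));
        return 0;
    }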


> : You design a circuit using resistors with a 10% tolerance. You build
> : and test it. It works. You (suddenly, unpredictably) replace the
> : resistors with new ones whose tolerance is 1%.


> You design a circuit which depends on matched components. The precise
> value of some parameter of the components (capacitance, resistance,
> whatever) can vary (say with temperature), but a set of components are
> guaranteed to vary in a correlated fashion.


I knew that was coming. (:


As I said in my first post, the IEEE 754 spec seems to offer up the
extended precision type precisely for the purpose of holding temporary
values (by which I understand the unnamed intermediate results of
expressions). It is becoming clear, however, that for the programmers
to whom precision matters the most, this type can't be used implicitly
or casually. More is less, more or less.
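
This, I take it, is where the nuisance bugs of the subject line come
from.  A sketch (hypothetical code, but a pattern anyone programming
the x87 will recognize): whether the comparison below fires can
depend on whether the first result was spilled to a 64-bit double
while the second still sits in an 80-bit register, and it is inlining
and optimization that decide the spills.

    #include <stdio.h>

    static double ratio(double a, double b) { return a / b; }

    int main(void)
    {
        double r = ratio(1.0, 3.0);  /* may be spilled: 64 bits    */
        if (r != ratio(1.0, 3.0))    /* may stay in ST(0): 80 bits */
            puts("same expression, two values");
        return 0;
    }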


So I guess we'll all be typing a lot of this:


cc -DONT_FROB_THE_PRECISION -FLOATING_POINT_NUMBERS_HAVE_NO_ALGEBRA foo.bar


(:
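
(The real spellings are scarcely better.  Assuming gcc and IBM's xlc
here; check your own compiler's manual:

    gcc -ffloat-store foo.c      # store intermediates as 64-bit doubles
    gcc -ffast-math   foo.c      # the "incorrect but fast" direction
    xlc -qfloat=nomaf foo.c      # suppress fused multiply-add

and every vendor spells them differently, which is rather the point.)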


-Luddy Harrison
[The x86 series suffers all over the place from wrong guesses about
the way that they would be programmed, e.g., segments which are OK for
Pascal programs with small objects but horrible for large C programs.
Shortly after the 8087 came out, I heard a talk by one of its
designers, and it was clear that he understood the issues well. But
in retrospect, they didn't expect it to be used
on systems with mainframe level performance (the original 8087 ran at
5MHz and took at least 25us to multiply) nor did they expect its code
to be generated by sophisticated compilers. In a simple PL/M or C
compiler that stores the result of every expression in memory and
doesn't do any register spilling or inter-expression optimization,
it's pretty easy to understand the programming model -- expressions
are computed to 80 bits, stored values to 64. But these days,
expressions are computed to 80 bits except when they're not, and I
think we all agree that's a pretty unpleasant environment in which to
try to get reliable answers. -John]

