Optimizations and Language Definitions email@example.com (Dale Worley) (1988-09-13)
Re: Optimizations and Language Definitions firstname.lastname@example.org (1988-09-20)
Re: Optimizations and Language Definitions email@example.com (Henry Spencer) (1988-09-22)
Date: Tue, 13 Sep 88 10:26:59 EDT
From: Dale Worley <firstname.lastname@example.org>
> From: email@example.com (Wendy Thrash)
> Moreover, I'm concerned about the application of arithmetic
> "identities" at compile time: if I write y = (x - 1.0) + 1.0;
> there's a very good reason for it, and I don't want the compiler
> to mess it up, no matter what it is allowed to do by the language
> definition. Please, at least honor my parentheses in
> floating-point computations.
Oh, God! Please, please, please, never say "I don't care what the
language definition says"! You will unleash a horde of one hundred
thousand dweebs who say "it works on *my* compiler, it should work
everywhere!". The ultimate example was the fellow who wanted (in C)
"&a + 1 == &b" when he declared "int a, b;"! (That is, the order of
storage allocation must exactly match the order in which the
declarations are written.) The only way to control this chaos is to
*absolutely* respect the language definition, both in terms of what it
guarantees, and in terms of what it does *not* guarantee.
But this still leaves problems for, particularly, numerical code.
Sometimes we want the order of evaluation to be very particularly
controlled, and other times we don't. There are times when applying
something as nasty as the distributive law to even F.P. calculations
is innocent, and others when applying something as "innocent" as the
associative law is disastrous. It would be nice if there were a
language in which the programmer could specify which details were
important and which weren't, but I haven't seen any. The ANSI C
committee (with its unary + operator) made a valiant effort, but was
shouted down. Oh, well.
[Your point about language definitions is very well taken. It's peculiar that
people who design languages in which numerical computations are performed seem
so often to fall prey to the traditional hacker misconceptions about floating
point arithmetic, e.g. "since floating point results are approximate, they're
not well defined," or "adding a little extra precision at random times won't