From: bobduff@world.std.com (Robert A Duff)
Newsgroups: comp.compilers,comp.dsp
Date: 22 Mar 1996 21:35:21 -0500
Organization: The World Public Access UNIX, Brookline, MA
References: 96-03-006 96-03-098 96-03-144
Keywords: architecture
Robert A Duff <bobduff@world.std.com> wrote:
>This is a language problem, IMHO...
John Carr <jfc@mit.edu> wrote:
>Unless you move away from a static typed language, the C model has a
>great advantage. If the result of an operation is defined to have
>infinite precision a long expression gains bits for every operator and
>the code to implement it is big and slow.
>
>To get the exact result for
>
> (x + y) * (a + b) / d
>
>where all variables are 32 bits requires 66 bit arithmetic. Addition
>adds 1 bit and multiplication doubles the number of bits. The quotient
>has the same number of bits as the dividend.
Yes, of course, the reason for these rules is efficiency -- but it comes
at the cost of correctness. If the result of the above expression
really does need 66 bits, then C will give the wrong answer. Ada is
not much better -- it will raise a run-time exception.
If x, y, a, b, and d really are 32-bit quantities, then I want the
right answer to the expression above.
On the other hand, if they're all "range 1..100", then clearly the above
can be done efficiently. So why not require the programmer to declare
that fact?
Suppose I want to average two numbers: "(x+y)/2". If x and y can really
be as large as 2**31-1, then I want double-precision arithmetic, rather
than a wrong answer or an overflow exception. If they can't be (i.e. if
I know this calculation can be done in 32 bits), I should say so:
declare x and y to be less than 2**30-1, and the result should be both
efficient and correct.
- Bob
--