From: pardo@cs.washington.edu (David Keppel)
Newsgroups: comp.compilers,comp.dsp
Date: 25 Mar 1996 21:56:10 -0500
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: 96-03-078 96-03-098 96-03-144
Keywords: architecture, types
>>>[Implementation-defined precision of numeric types.]
>Robert A Duff <bobduff@world.std.com> wrote:
>>This is a language problem, IMHO. ...
jfc@mit.edu (John Carr) writes:
>Unless you move away from a static typed language, the C model has a
>great advantage [: the problem is that it's slow otherwise because
>you have to use the most general implementation but the data values
>rarely demand it.]
I'd like to finesse two points here:
First, the language *could* have constructs like "integer that can
represent numbers in the range X to Y". In many cases the programmer
knows that range, and the static compiler could use a fixed-size
representation, albeit a representation that would vary from platform
to platform. "The problem" with C is that you have to toss a coin and
hope that a "long" holds 35 bits of precision.
Second, the performance issue is not whether the language is
statically-*typed*. The issue might be whether the *implementation*
is static. For example, you can construct an "arbitrary precision"
numeric package where the code is initially compiled to use ints.
When they overflow, the code is dynamically recompiled to use double
ints. Then quad ints, etc. The language, however, is still
statically typed. I think the confusion between static typing and
static implementation arises because, historically, it is
dynamically-typed languages that have received dynamic
implementations.
;-D on ( Just that type of guy ) Pardo
--