Re: Integers on 64-bit machines (Torben Ægidius Mogensen)
Wed, 04 Jul 2007 11:30:40 +0200

          From comp.compilers

Related articles
Integers on 64-bit machines (Denis Washington) (2007-07-02)
Re: Integers on 64-bit machines (2007-07-04)
Re: Integers on 64-bit machines (Marco van de Voort) (2007-07-04)
Re: Integers on 64-bit machines (Amit Gupta) (2007-07-05)
Re: Integers on 64-bit machines (Hans-Peter Diettrich) (2007-07-05)
Re: Integers on 64-bit machines (2007-07-05)
Re: Integers on 64-bit machines (Dmitry A. Kazakov) (2007-07-05)
Re: Integers on 64-bit machines (glen herrmannsfeldt) (2007-07-05)
[20 later articles]

From: (Torben Ægidius Mogensen)
Newsgroups: comp.compilers
Date: Wed, 04 Jul 2007 11:30:40 +0200
Organization: Department of Computer Science, University of Copenhagen
References: 07-07-007
Keywords: arithmetic, design, comment
Posted-Date: 04 Jul 2007 20:37:10 EDT

Denis Washington <> writes:

> I'm currently developing a little C-like programming language as a
> hobby project. After having implemented the basic integer
> types as known from Java/C# (with fixed sizes for each type), I
> thought a bit about 64-bit machines and wanted to ask: if you develop
> on a 64-bit machine, would it be preferable to still leave the
> standard integer type ("int") 32-bit, or would it be better to have
> "int" grow to 64 bit? In this case, I could have an
> architecture-dependent "int" type along with fixed-sized types like
> "int8", "int16", "int32" etc.
> What do you think?
> [I would make my int type the natural word size of the machine. If people
> want a particular size, they can certainly say so. -John]

I never really liked C's machine-dependent integer type. I prefer
integer types to have explicit fixed sizes (and a selection of those)
or be unbounded. However, I'm happy to allow the implementation to
use more bits than required, so an int16 could be implemented as a
32-bit integer on machines where operating on 16-bit entities is
difficult or costly.

Even better than a small fixed number of sizes (such as int8, int16,
int32 and int64) is to (like in Pascal) explicitly state the required
minimum and maximum values, so you have types like -10..10 or 0..255.
You would be guaranteed that all values in the interval would be
representable.  Ideally (as in Pascal), you would get errors if you
put a larger value into a variable than its type supports, but if you
are worried about performance, it would be acceptable to drop these
tests.  Many of them could be eliminated at compile time, though, as
can index checks.

In addition to explicitly bounded numbers, you could have an integer
type that is bounded only by the memory available to store it.  If you
just add a machine-dependent bounded integer type (as Pascal does),
people will tend to use it instead of the explicitly bounded types and
just make tacit assumptions about the range of values.

[PL/I let you specify how big all your integers needed to be, and I
can't say that part was a rousing success. -John]
