32-bit vs. 64-bit x86 Speed email@example.com (Jon Forrest) (2007-04-11)
Re: 32-bit vs. 64-bit x86 Speed firstname.lastname@example.org (glen herrmannsfeldt) (2007-04-13)
Re: 32-bit vs. 64-bit x86 Speed email@example.com (Marco van de Voort) (2007-04-13)
Re: 32-bit vs. 64-bit x86 Speed firstname.lastname@example.org (2007-04-13)
Re: 32-bit vs. 64-bit x86 Speed DrDiettrich1@aol.com (Hans-Peter Diettrich) (2007-04-13)
Re: 32-bit vs. 64-bit x86 Speed email@example.com (Ian Rogers) (2007-04-13)
Re: 32-bit vs. 64-bit x86 Speed firstname.lastname@example.org (Michael Meissner) (2007-04-13)
Re: 32-bit vs. 64-bit x86 Speed email@example.com (George Peter Staplin) (2007-04-13)
Re: 32-bit vs. 64-bit x86 Speed firstname.lastname@example.org (Michael Tiomkin) (2007-04-13)
Re: 32-bit vs. 64-bit x86 Speed email@example.com (Tony Finch) (2007-04-13)
Re: 32-bit vs. 64-bit x86 Speed firstname.lastname@example.org (2007-04-13)
[9 later articles]
From: Hans-Peter Diettrich <DrDiettrich1@aol.com>
Date: 13 Apr 2007 01:31:56 -0400
Jon Forrest wrote:
> Here's what I said:
> "One thing I've noticed about 64-bit computing in general is that it's
> being oversold. The **only** reason for running in 64-bit mode is if
> you need the additional address space.
IMO the address space is not a serious restriction on 32-bit
processors. It may seem convenient to have more than 4 GB of address
space available, but the limiting factor is the amount of physical RAM
available at runtime. As soon as a program uses more RAM than is
physically present, the system must start swapping to and from disk.
> Indeed, for some apps this is
> critical and 64-bit computing solves a real problem.
As long as there is a chance that large amounts of data cannot be
held in RAM, clever data management inside the application will
result in better performance than an unoptimized ("brute force" ;-)
program that simply makes use of the full virtual 64-bit address
space.
The only important difference between 32- and 64-bit machines is the
crossing of the 4 GB barrier, so that more than 4 GB of RAM *can* be
used. But the future must show how the typically *available* amount of
RAM on real machines will increase; most current machines are still
equipped with less than 4 GB of RAM. This is more a matter of memory
technology than of CPU technology. It's no problem to produce a 128-
or 1024-bit CPU with today's technology, but the memory designers are
bound to the actually available technology. In that respect, a change
from 2-D "chip" to 3-D "cube" technology would have a much bigger
impact on computing than a trivial change from 32 bits to 64.
> For apps that don't need the extra address space, the benefits of
> the additional registers in x86-64 are nearly undone by the need to
> move more bits around, so 32-bit and 64-bit modes are pretty much a
> push. When you add the additional difficulty of getting 64-bit
> drivers and what-not, I don't think it's worth messing with 64-bit
> computing for apps that don't need the address space."
This is more a problem of the OS than of the applications. Support for
32-bit programs will decrease in future versions of 64-bit systems;
new features will be built only into the 64-bit OS versions. For some
time we'll need *both* 32- and 64-bit applications, but the need for
32-bit programs will decrease along with the number of 32-bit systems
still in use.
As outlined above, code should not blindly rely on the availability of
a huge amount of RAM. Instead the assumed or actually available amount
of physical memory should be taken into account in the code,
regardless of the available address space.
> Let's say you're a Linux user who never needs to run programs that
> don't fit in 32-bits.
Never say never ;-)
> Would you run a 32-bit or a 64-bit version of Linux?
Every user will have to move to a 64-bit OS on a future machine, once
support for older systems (drivers!) vanishes.
> You compiler people probably have intimate knowledge of the ISA
> issues here so I'm interested in what you have to say.
IMO compilers are not much affected by the mere move from a 32-bit to
a 64-bit address space. A compiler can support the generation of 8-,
16-, 32-, and 64-bit code at the same time, from the same source code,
and for the same machine - as far as supported or required by the
given target machine.
Compilers instead have to keep abreast of the features of new
machines, which are not bound to the register size. I don't know how
much compilers are affected by the new technology, because some of the
new features may change or disappear as fast as they appeared. IMO the
impact is mostly a matter of optimization and scheduling for
multi-core and other parallel processing capabilities, which already
existed in the 32-bit world. I see a need for new or extended
programming languages with explicit support for parallel processing,
but I doubt that such languages will be accepted by the big market :-(
There actually *is* a need for 64-bit systems, because machines with
more than 4 GB of memory are feasible and affordable. The placement of
64-bit machines in the consumer market will do the rest, because most
users do not buy what they need, but instead must have whatever is
advertised as the "state of the art" ;-)