Re: 32-bit vs. 64-bit x86 Speed

haberg@math.su.se (Hans Aberg)
23 Apr 2007 07:52:58 -0400

          From comp.compilers


From: haberg@math.su.se (Hans Aberg)
Newsgroups: comp.compilers
Date: 23 Apr 2007 07:52:58 -0400
Organization: Virgo Supercluster
References: 07-04-031 07-04-038
Keywords: architecture
Posted-Date: 23 Apr 2007 07:52:58 EDT

  Hans-Peter Diettrich<DrDiettrich1@aol.com> wrote:


> IMO the address space is not serious restriction on 32 bit processors.
> It may seem convenient to have more than 4 GB of address space
> available, ...


Actually, 2 GiB = 2^31 is a common limitation per address space: the high
bit is left unused in order to avoid signed-int problems. (And in UNIX, a
process has several such regions: stack, free store, etc.)


> ...but the limiting factor is the amount of physical RAM,
> available at runtime.


Computing from the figures of the Macs since 1984, I got:
  CPU frequency doubles every three years
  RAM amount doubles every two years
  hard drive amount doubles every year


So the latest Mac OS X, version 10.5, to be released in October, is
designed to deal with programs using 8 GB of memory and more. And it seems
this much memory, and more, is going to be needed in programs dealing
heavily with graphics. (There is an interesting video for developers,
"State of the Union", that one can get if one signs up (for free) at
Apple.)


One can note that it is difficult to push CPU frequency further, due to
the amount of energy it consumes, which must then be transported away as
heat. Therefore, adding parallel processors seems to become important from
now on. Suppose the figures above hold, and one can double CPU frequency
every three years, and the number of CPUs every two years. Then this gives
five doublings in six years, or nearly a doubling in performance every
year (the figures aren't that exact in a prediction).


> As soon as a program uses more RAM than
> physically present, the system must start swapping from and to disk.


This only becomes a problem if the amount of memory actively used on a
computer exceeds the RAM, i.e., if the number of page-outs per time unit
is high. (On Mac OS X, which runs UNIX, with the developer package Xcode
installed, this can be checked by entering say 'vm_stat 10' at the
console, and then checking the last column from time to time.)


    Hans Aberg


