Re: 32-bit vs. 64-bit x86 Speed


Related articles
[12 earlier articles]
Re: 32-bit vs. 64-bit x86 Speed DrDiettrich1@aol.com (Hans-Peter Diettrich) (2007-04-14)
Re: 32-bit vs. 64-bit x86 Speed gah@ugcs.caltech.edu (glen herrmannsfeldt) (2007-04-18)
Re: 32-bit vs. 64-bit x86 Speed haberg@math.su.se (2007-04-23)
Re: 32-bit vs. 64-bit x86 Speed haberg@math.su.se (2007-04-23)
Re: 32-bit vs. 64-bit x86 Speed anton@mips.complang.tuwien.ac.at (2007-04-25)
Re: 32-bit vs. 64-bit x86 Speed haberg@math.su.se (2007-04-26)
Re: 32-bit vs. 64-bit x86 Speed haberg@math.su.se (2007-04-27)
Re: 32-bit vs. 64-bit x86 Speed jon@ffconsultancy.com (Jon Harrop) (2007-04-28)

From: haberg@math.su.se (Hans Aberg)
Newsgroups: comp.compilers
Date: 27 Apr 2007 11:30:05 -0400
Organization: Virgo Supercluster
References: 07-04-031 07-04-045 07-04-091 07-04-103 07-04-117
Keywords: architecture, performance
Posted-Date: 27 Apr 2007 11:30:05 EDT

johnl@iecc.com (John Levine) wrote:


> [In 32 bit mode, an Athlon 64 is basically a Pentium. He's saying that
> these chips run legacy 32 bit code faster than 32 bit chips do, which is
> not all that surprising if they can use the wide data paths for things
> like code fetches. -John]


Some stuff I recalled after my post:


It was mentioned in the Apple developers video "State of the Union"
that compiling for 64 bits is not always faster. Another source said
that recompiling 32-bit code for 64-bit machines did not grow the code
much, since int stays the same 32 bits, which John Levine says is
atypical for 64-bit models [private communication]; perhaps Apple has
decided this in order to avoid compatibility problems between 32- and
64-bit code.
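

For readers who want to check this on a given machine, here is a
minimal C program of my own (not from any of the sources above) that
prints the sizes in question. On a typical LP64 Unix system, int stays
32 bits while long and pointers grow to 64; under ILP32, all three are
32 bits.

    #include <stdio.h>

    int main(void)
    {
        /* sizeof counts bytes; multiply by 8 to show bits on
           common byte-addressed machines. */
        printf("int:   %d bits\n", (int)(sizeof(int) * 8));
        printf("long:  %d bits\n", (int)(sizeof(long) * 8));
        printf("void*: %d bits\n", (int)(sizeof(void *) * 8));
        return 0;
    }

Compiling with and without a 64-bit flag (for instance gcc -m64
versus gcc -m32, where supported) shows which data model is in use.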


The techniques mentioned in that video for gaining performance on
their personal computers were, on the hardware side:
H1. More parallel CPUs, as pushing CPU frequency no longer gives as much gain.
H2. More RAM.
H3. More powerful graphics cards.
As for hard drives, the upcoming Mac OS X, version 10.5, announced for
release in October, contains experimental ZFS support:
  http://en.wikipedia.org/wiki/ZFS
  http://www.sun.com/software/media/flash/demo_zfs_learningcenter/index.html
This will allow, for all practical purposes, unlimited secondary
storage as new hard drives come along. Though (in reply to a question)
the current 64-bit Mac models have a 16 GB primary memory (RAM) limit,
the video mentioned above discussed how performance for certain
software could be sped up by using 8 GB. So I would think that when
new CPU models come from Intel, there will be a much higher RAM limit.
Intel has announced a chip with 32 CPUs that can be shut down to
conserve energy, so many CPUs seem to be the push on the personal
computer side now (the maximum for Macs is 8).


As for software, H1 calls for better concurrent computer languages, as
traditional threading by hand is tricky. Apple has introduced some
technique for it, though unfortunately I do not recall exactly what it
was :-); perhaps some hierarchical model (as in environments). (From
the POSIX/UNIX standardization list, I recall it was decided to give
C/C++ threading semantics in order to be able to handle atomic code
properly, as optimization can otherwise disturb it; a sketch of the
problem is below.)
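

To illustrate that last point, here is a small C sketch of my own (not
from the standardization list): with no thread semantics in the
language, the optimizer may assume the flag below never changes inside
the waiting loop and read it only once, so a naive busy-wait can spin
forever; guarding the flag with a POSIX mutex forces the compiler to
reload it.

    #include <pthread.h>
    #include <stdio.h>

    static int done = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        /* ... do some work ... */
        pthread_mutex_lock(&lock);
        done = 1;                     /* publish the result */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        int finished = 0;

        pthread_create(&t, NULL, worker, NULL);
        /* Broken variant: "while (!done) ;" -- the load of done may
           be hoisted out of the loop by an optimizing compiler. */
        while (!finished) {
            pthread_mutex_lock(&lock);
            finished = done;          /* the lock acts as a barrier */
            pthread_mutex_unlock(&lock);
        }
        pthread_join(t, NULL);
        puts("worker finished");
        return 0;
    }

(Compile with something like cc -pthread; the busy-wait is of course
wasteful, but it keeps the example short.)
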
As for H3, graphics cards are now more like computers and can handle
more advanced programming; I am not sure how this will affect computer
languages. Perhaps just more high-level commands for generating
graphical effects. One technique mentioned in the video above, though,
was not to store intermediate graphical results but to recompute them
on the fly, as needed.
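

As a toy C illustration of that space-time trade-off (my own, not from
the video): an intermediate result can be stored in a table at the
cost of memory, or recomputed at each use at the cost of cycles; with
fast processors and a cheap computation, recomputing can win.

    #include <stdio.h>

    /* A hypothetical intermediate result: a 0..1 brightness ramp. */
    static double ramp(int x)
    {
        return x * (1.0 / 255.0);
    }

    int main(void)
    {
        double table[256];
        double sum_stored = 0.0, sum_recomputed = 0.0;
        int x;

        /* Stored: compute once into a table, read back later. */
        for (x = 0; x < 256; x++)
            table[x] = ramp(x);

        for (x = 0; x < 256; x++) {
            sum_stored     += table[x];  /* costs 2 KB of memory */
            sum_recomputed += ramp(x);   /* costs cycles, no table */
        }
        printf("%f %f\n", sum_stored, sum_recomputed);
        return 0;
    }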


    Hans Aberg

