Re: kickass optimizing compilers?

"dablick" <db@digital.com>
1 Feb 2004 12:35:15 -0500

          From comp.compilers

Newsgroups: comp.compilers
Organization: Hewlett-Packard Company
References: 04-01-044 04-01-082
Keywords: assembler
Posted-Date: 01 Feb 2004 12:35:15 EST

Regarding ASM jocks vs compilers...


Back in the days when machines were fairly simple, I could buy that ASM
jocks might, with great effort, be able to out-code compilers.


But so much has changed.


There's a running joke that what the RISC acronym really means is: Relegate
Important Stuff to Compilers! :-)


1) Computers can do more, and accordingly, so can compilers


Many optimization algorithms were too complex to run on slow machines
whose memory was measured in kilobytes; compilation would simply take
too long. Today more stuff gets thrown into compilers because
compile-time speed usually doesn't need to be at the forefront of a
compiler writer's concerns.


2) Machines have gotten WAY more complex


How well you can write assembler code is a function of how well you
can model the machine in your brain. It wasn't hard to understand all
the things you needed to know to code in PDP-11 macro.


The "search space" for the optimal code for something was fairly small
and the rules fairly easy: in most cases it was "least # of
instructions".
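To caricature that old cost model in a few lines: when every instruction costs the same, "optimization" is just picking the shortest of a handful of candidate sequences. (A toy sketch; the instruction sequences below are illustrative, not real PDP-11 tuning data.)

```python
def pick_shortest(candidates):
    """With every instruction costing 1, the best sequence is simply the shortest."""
    return min(candidates, key=len)

# Two ways to compute R0 = R0 * 2 on a PDP-11-like machine
# (sequences are illustrative, not authoritative PDP-11 data):
double_r0 = [
    ["ASL R0"],                    # arithmetic shift left: 1 instruction
    ["MOV R0, R1", "ADD R1, R0"],  # copy then add: 2 instructions
]

print(pick_shortest(double_r0))  # -> ['ASL R0']
```

The whole "search" fits in one `min()` call, which is the point: a careful human could do this in their head.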


Ah... those were the good ole' days.


But look at something like Itanium. WAY more instructions. Complex
rules for multi-issue, predication, rotating register sets.


Frankly, I find it very hard to believe that many humans could out-do
a compiler in the all-important area of code scheduling for the
Itanium. You not only have to understand the instruction set, you
have to understand each CPU model's particular implementation. You not
only have to know how many ways there are to do a multiplication, but
also how many cycles each of them takes.
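As a sketch of why per-model cycle counts matter, consider two ways to multiply by 10 — a hardware multiply versus a shift-and-add sequence — on two hypothetical CPU models with different latencies. (The cycle numbers below are invented for illustration; they are not real Itanium data.)

```python
# Hypothetical per-model cycle costs for computing x * 10.
# All numbers are invented for illustration, not real Itanium latencies.
COSTS = {
    "model_A": {"mul": 5, "shift_add": 3},  # slow multiplier: shift+add wins
    "model_B": {"mul": 2, "shift_add": 3},  # fast multiplier: plain mul wins
}

def mul10_mul(x):
    return x * 10                # single hardware multiply

def mul10_shift_add(x):
    return (x << 3) + (x << 1)   # 8x + 2x = 10x via two shifts and an add

def best_for(model):
    c = COSTS[model]
    return "mul" if c["mul"] <= c["shift_add"] else "shift_add"

# Both sequences compute the same value...
assert mul10_mul(7) == mul10_shift_add(7) == 70
# ...but the cheapest one differs per CPU model:
print(best_for("model_A"), best_for("model_B"))  # -> shift_add mul
```

The arithmetic never changes; only the cost table does — and the human has to carry one such table in their head per CPU model.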


And if, by some stroke of genius you managed to come up with the
optimal schedule for one implementation... is that good enough?


Lots of software has to run well on a number of different machines
within the same architecture. So not only do you have to know the
intricate details of "a" particular implementation; in real life, you
really need to understand how the various trade-offs play out across
the entire architectural family.
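One way a compiler copes with this is to tune for a blended cost across the whole family rather than for any single model — roughly, pick the sequence whose worst-case cost over all implementations is lowest. A toy sketch, again with invented cycle numbers:

```python
# Toy family-wide selection: choose the code sequence that minimizes the
# worst-case cycle cost across every CPU model in the family.
# All cycle numbers are invented for illustration.
FAMILY_COSTS = {
    "mul":       {"model_A": 5, "model_B": 2, "model_C": 4},
    "shift_add": {"model_A": 3, "model_B": 3, "model_C": 3},
}

def best_for_family(costs):
    """Minimize the worst case over the family (a minimax choice)."""
    return min(costs, key=lambda seq: max(costs[seq].values()))

# 'mul' is best on model_B alone, but 'shift_add' never costs more than 3
# cycles anywhere in the family, so it is the safer family-wide choice:
print(best_for_family(FAMILY_COSTS))  # -> shift_add
```

The winning choice for the family can lose on every individual model's home turf, which is exactly the trade-off a hand-coder tuning for one box never has to face — and a compiler faces every time.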


So... again... while it may have been possible for assembler jocks to
do a good job in the old days of simple machines... I'm sorry.... I
don't think they can compete with compilers these days. The problem
is just way too hard for humans now.

