Re: Folk Theorem: Assemblers are superior to Compilers

leichter@thorium.rutgers.edu
Wed, 27 Oct 1993 16:34:05 GMT

          From comp.compilers

Related articles
Folk Theorem: Assemblers are superior to Compilers elliottm@csulb.edu (1993-10-24)
Re: Folk Theorem: Assemblers are superior to Compilers dmartin@andy.bgsu.edu (1993-10-26)
Re: Folk Theorem: Assemblers are superior to Compilers napi@cs.indiana.edu (Mohd Hanafiah Abdullah) (1993-10-26)
Folk Theorem: Assemblers are superior to Compilers ssimmons@convex.com (1993-10-27)
Re: Folk Theorem: Assemblers are superior to Compilers vick@wotangate.sc.ti.com (1993-10-27)
Re: Folk Theorem: Assemblers are superior to Compilers leichter@thorium.rutgers.edu (1993-10-27)
Re: Folk Theorem: Assemblers are superior to Compilers cliffc@rice.edu (1993-10-27)
Re: Folk Theorem: Assemblers are superior to Compilers macrakis@osf.org (1993-10-27)
Re: Folk Theorem: Assemblers are superior to Compilers amn@ubik.demon.co.uk (1993-10-27)
Re: Folk Theorem: Assemblers are superior to Compilers synaptx!thymus!daveg@uunet.UU.NET (Dave Gillespie) (1993-10-27)
Re: Folk Theorem: Assemblers are superior to Compilers winikoff@munta.cs.mu.OZ.AU (1993-10-28)
Re: Folk Theorem: Assemblers are superior to Compilers prener@watson.ibm.com (1993-10-28)
[22 later articles]

Newsgroups: comp.compilers
From: leichter@thorium.rutgers.edu
Keywords: assembler, optimize
Organization: Compilers Central
References: 93-10-114
Date: Wed, 27 Oct 1993 16:34:05 GMT

napi@cs.indiana.edu (Mohd Hanafiah Abdullah) writes:
| Modern optimizing compilers have been designed to be excellent at "big
| catches" code improvement via global optimizations using well-researched
| techniques and algorithms; not to mention the myriad local optimization
| techniques, like peephole optimization, that have been employed since
| the early days of compiled languages.
|
| A good comparison may be a well-written chess program versus a human. The
| human can make remarkably clever moves and deceptions against the opponent.
| Unfortunately, these manipulations are relatively shallow and narrow in
| scope, although highly intelligent. The dumb computer, meanwhile, can be
| made to look ahead many steps accurately in picking the best move, which a
| human will find virtually impossible to do. Plus, powerful AI learning
| algorithms that are programmed into the computer may have numerous board
| configurations to choose from and improve upon, stored in its reliable and
| relatively large memory.


Up to here, I agree with what's been said - though I have my doubts about
the "powerful AI learning techniques". The strongest chess programs today
rely on brute force. They don't learn; they don't have to. And, in fact,
it is worth keeping in mind that, for all the immense compute power
available, the best human beings *still* beat the best programs.
Analogous things hold true for compilers vs. human assembler writers.


| Certain programming techniques that assembly language programmers tend
| to overlook, or find too hard to implement cost-effectively and
| virtually error-free, are:


This is a bad set of examples.


| o Register allocation (by graph coloring maybe)


In my experience, human beings tend to be extremely good about this. As
in the case of programs, of course, they approach the problem entirely
differently from the way a computer would approach it. It would be
crazy to first write the code, then try to apply graph coloring (or
whatever) to it by hand. Instead, good human coders start off with a
high-level understanding of where they are going and what the resources
available to them are, and structure the entire solution around that.
What goes in the registers is the important DATA, thought of abstractly.
Humans can do this to the extent that they can correctly understand what
the important data in an algorithm really is. There are limits to that
understanding, but within those limits humans have much more freedom to
consider alternative designs than compilers do: Humans start with an input
and output specification, while compilers have to start with a bunch of
code subject to the semantics of the language, and some of the constraints
imposed by the semantics are irrelevant - but the compiler can't know
that. (This is, of course, one of the arguments for functional
languages: they impose almost no irrelevant constraints. In practice,
however, the compilers for these languages don't do all that well,
despite years of effort.)
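
For the curious, here is a minimal sketch in C of the greedy flavor of
coloring a compiler might apply. The interference matrix and register
counts below are invented for illustration, and this is a plain greedy
pass, not a full Chaitin-style simplify/spill allocator:

    #include <stdio.h>

    #define NVREG 6    /* virtual registers in the toy program */
    #define NPHYS 3    /* physical registers available         */

    /* interfere[i][j] != 0 means virtual registers i and j are
       live at the same time and must not share a physical
       register.  (Toy data, made up for the example.)          */
    static const int interfere[NVREG][NVREG] = {
        { 0, 1, 1, 0, 0, 0 },
        { 1, 0, 1, 1, 0, 0 },
        { 1, 1, 0, 1, 0, 0 },
        { 0, 1, 1, 0, 1, 0 },
        { 0, 0, 0, 1, 0, 1 },
        { 0, 0, 0, 0, 1, 0 },
    };

    int main(void)
    {
        int color[NVREG];
        int i, j, c;

        for (i = 0; i < NVREG; i++) {
            int used[NPHYS] = { 0 };

            /* Mark colors already taken by interfering neighbors. */
            for (j = 0; j < i; j++)
                if (interfere[i][j] && color[j] >= 0)
                    used[color[j]] = 1;

            /* Take the lowest free color; -1 means spill. */
            color[i] = -1;
            for (c = 0; c < NPHYS; c++)
                if (!used[c]) { color[i] = c; break; }

            if (color[i] >= 0)
                printf("v%d -> r%d\n", i, color[i]);
            else
                printf("v%d -> spill\n", i);
        }
        return 0;
    }

The machine grinds through the interference matrix mechanically; the
human never builds such a matrix at all, because the register
assignment falls out of the overall design.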


| o Data flow analysis for optimizing common subexpressions, loop optimizations
| (e.g., fast array referencing and loop invariants), copy/constant
| propagation, dead code elimination, inter-procedural register allocation,
| and so forth. Also, constant folding and stack height reduction.


Again, most of these are trivial for a human being IF DONE DURING CODE
DESIGN. For example, no decent assembler coder would ever write any dead
code to begin with! Dead code elimination is important in compilers only
because other optimizations tend to produce dead code. A human applying
an analogous optimization would discard the dead code "in his head".
Similarly, copy/constant propagation is, to a degree, a side-effect of
the ease of having multiple names for things - important for clear
exposition in a high-level language. A good assembler programmer thinks
about the DATA, not the names, and will usually do this automatically.
And so on.
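
To make that concrete, here's a contrived C fragment (names invented
for the example) showing the sort of thing a high-level coder writes
for clarity, and what's effectively left once the compiler has done
copy propagation, constant folding, and dead code elimination - the
assembler coder simply writes the second form to begin with:

    /* Written for clarity: extra names, a named constant,
       and a computation whose result is never used.        */
    int area(int w)
    {
        int height = w;        /* copy, for readable naming      */
        int scale  = 4;        /* named constant                 */
        int unused = w * w;    /* dead: the result is never used */
        return height * scale;
    }

    /* After copy propagation (height -> w), constant folding,
       and dead code elimination (unused goes away), the
       compiler is effectively left with:                       */
    int area_opt(int w)
    {
        return w * 4;
    }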


| o A recursive programming style, which is powerful for solving many
| programming problems. Scheme compilers can convert recursion into loops
| using the tail-recursion technique.


Where do you think this technique came from to begin with? The standard
set of macros used in programming RSTS/E code in PDP-11 assembler had a
CALL instruction (which became a JSR) and a CALLX instruction (which
became a JMP). It was common, unexceptional practice to use CALLX when
you knew the next thing after a CALL would be a RET. Note that, unlike a
compiler, a human would not likely write CALL; RET and then optimize it
to CALLX; he'd just write CALLX.


It's true that doing this conversion gets more complex when you have a lot
of stuff in local variables on the stack, but assembler code rarely does
things that way.
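
In C terms (a made-up example), the transformation looks like this: the
recursive call in gcd_rec is the last thing done, so a tail-call-
eliminating compiler can turn it into the loop in gcd_loop, exactly as
CALLX turned a CALL/RET pair into a plain JMP:

    #include <stdio.h>

    /* Tail-recursive form: the recursive call is the last thing
       done, so no state needs to survive it.                    */
    static unsigned gcd_rec(unsigned a, unsigned b)
    {
        if (b == 0)
            return a;
        return gcd_rec(b, a % b);   /* tail call: the JSR can be a JMP */
    }

    /* The loop a tail-call-optimizing compiler effectively emits:
       the "call" becomes a jump back to the top with new
       arguments in place.                                         */
    static unsigned gcd_loop(unsigned a, unsigned b)
    {
        while (b != 0) {
            unsigned t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    int main(void)
    {
        printf("%u %u\n", gcd_rec(1071, 462), gcd_loop(1071, 462));
        return 0;
    }

And, like the CALLX programmer, a human coding this in assembler would
likely write the gcd_loop form directly.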


| These are huge and complex algorithms that compiler writers take months or
| years to implement, in order to make code faster and smaller, reliably and
| quickly. I don't see how one could do the above in any assembly language
| every time he/she writes a software program.


Again, you are looking at it wrong. No human being would use the
algorithms a compiler uses for optimization, any more than a human chess
player would try to learn how to do alpha/beta pruning and hash searching.
Good coders use entirely different approaches. If compilers could use
those approaches, in addition to being able to apply brute force as they
do now, we'd all be using some super-high-level functional languages and
languages even at the level of C or Lisp or Prolog would have been long
forgotten. (If AI research were at the point where chess programs could
use the techniques humans do in addition to the techniques they do use,
they'd be far and away stronger than any human chess player by now, too.)


No human chess player in the world could defeat even a 1960's-era chess
program in a tournament if required to play 500 games straight through
with no breaks. For today's programs, the number would probably be 10
rather than 500.


No compiler can touch the best hand-coders for fairly small pieces of code
that do complex things under messy conditions (e.g., on hairy machines
like 80x86's). No human being can touch a good compiler when optimizing
large amounts of "boring" code that has been written for clarity and human
understanding, rather than tuned for a particular architecture. No
human being can match the ability of optimizing compilers to keep the code
optimized as it changes or as it's moved among different types of
hardware.


-- Jerry
--

