Object code optimization


          From comp.compilers


From: compilers@ima.UUCP
Newsgroups: mod.compilers
Date: 10 Jan 86 15:46:00 GMT
Article-I.D.: ima.136300043
Posted: Fri Jan 10 10:46:00 1986
Date-Received: 12 Jan 86 13:52:45 GMT

[from kurt at ALLEGRA/FLUKE (Kurt Guntheroth)]


Organization: John Fluke Mfg. Co., Inc., Everett, WA

1. When I first learned about optimization, I thought the ideal
optimizer would work in the following way: Parse the source into
trees/dags/IL/whatever and reorder and simplify the trees to the
optimal equivalent program. Then generate code by any reasonably good
technique. The code would be almost perfect since it was generated
from an optimal program. Now I find out that this doesn't work too
well. Real machines are so un-orthogonal that you inevitably
de-optimize the code when you generate machine instructions. People
seem to be concentrating more and more on the machine language,
generating machine instructions simply and then optimizing the
instructions by performing translations permitted by some rule set
(grammars, tables, etc.). People like Fraser swear by this method.
Other people I have talked to say that almost any optimization might
de-optimize code for a given processor by making it more difficult to
generate some instruction sequence. What do you practitioners
consider the 'right' way to do things? It seems that optimizations
involving moving code would be much more difficult if you do them on
machine instructions (the grammars don't handle it too well). What
about this, oh gurus?
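To make the instruction-level approach concrete, here is a minimal sketch of rule-driven peephole rewriting, in modern Python purely for illustration. The instruction names and the two rules are invented for this sketch; they are not taken from Fraser's actual optimizer or any real code generator:

```python
# A sketch of table-driven peephole optimization: generate naive
# instructions, then repeatedly rewrite windows of them using a rule
# table until no rule applies. Instructions are (opcode, operand)
# tuples; the rules below are hypothetical examples.

RULES = [
    # (pattern, replacement): each pattern is a window of instructions
    # matched literally against the code.
    ((("push", "A"), ("pop", "A")), ()),             # push x; pop x -> nothing
    ((("load", "0"),), (("clear", None),)),          # load #0     -> clear
]

def peephole(code):
    """Slide each rule's window over the code until a fixed point."""
    changed = True
    while changed:
        changed = False
        for pat, rep in RULES:
            n = len(pat)
            for i in range(len(code) - n + 1):
                if tuple(code[i:i + n]) == pat:
                    code[i:i + n] = list(rep)        # splice in replacement
                    changed = True
                    break
            if changed:
                break                                # restart from rule 0
    return code
```

Note that this sliding-window model illustrates the post's closing worry: a transformation that moves code across basic blocks has no natural expression as a local pattern match, which is exactly why code motion is awkward in the grammar/table style.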


2. I once considered generating code for the 6502. This miserable
little processor doesn't even have a 16-bit register, so you must form
and move addresses one byte at a time (ugh). I suspect the problems of
generating code for the 6502 must be similar to the problems of
generating good code for a segmented machine like the x86, where the
low address byte is like an offset and the high address byte is like a
segment. Any comments?
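As an aside, "form addresses one byte at a time" can be sketched concretely: a 16-bit address add on the 6502 is done as two 8-bit adds chained through the carry flag (the classic CLC / ADC low byte / ADC high byte sequence). The helper below is a hypothetical illustration of that arithmetic in Python, not 6502 code:

```python
# Simulate a 16-bit address add as the 6502 must do it: add the low
# bytes, propagate the carry, then add the high bytes. All values are
# masked to 8 bits to mirror the 8-bit registers.

def add16_bytewise(addr, offset):
    lo = addr & 0xFF
    hi = (addr >> 8) & 0xFF
    lo_sum = lo + (offset & 0xFF)
    carry = lo_sum >> 8                              # carry out of the low byte
    hi_sum = (hi + ((offset >> 8) & 0xFF) + carry) & 0xFF
    return (hi_sum << 8) | (lo_sum & 0xFF)
```

The analogy to segmented addressing is visible here: the high byte plays the role of a coarse "segment" that only changes when the low-byte "offset" arithmetic overflows.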


3. How do I get the GNU compiler tools? Ideally, are they small enough
to post to Usenet's net.sources? Also, is there a public domain S/SL
compiler? S/SL is a tool from U of Toronto that builds table-driven,
ad hoc, top-down compilers. That may sound strange, but it seems to
combine many of the nice intuitive features of ad hoc recursive descent
compilers with the speed and compactness of table-driven parsers.
Building the S/SL translator is not difficult (especially compared to
writing an LALR parser generator), but I haven't done it yet and I am
interested to hear whether somebody else has.
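For readers who have not seen the table-driven top-down style, its flavor can be sketched as a flat opcode table walked by a tiny interpreter. The opcodes and the toy grammar below (nested parentheses around an 'x') are invented for illustration in Python; real S/SL programs and their tables are considerably richer:

```python
# A toy table-driven top-down parser. The "grammar program" is a flat
# list of opcodes; a small interpreter walks it with a program counter
# and a call stack, instead of hand-written recursive procedures.
#
# Grammar sketched: expr -> "x" | "(" expr ")"
# Opcodes (hypothetical): ("alt", t, a) jump to address a if the next
# token is t; ("tok", t) consume terminal t; ("call", a) recurse into
# address a; ("ret",) return.

TABLE = [
    ("alt", "(", 3),   # 0: if next token is '(', take the nested form
    ("tok", "x"),      # 1: otherwise expr is just 'x'
    ("ret",),          # 2
    ("tok", "("),      # 3: nested form: '(' expr ')'
    ("call", 0),       # 4
    ("tok", ")"),      # 5
    ("ret",),          # 6
]

def parse(tokens):
    """Return True iff tokens form exactly one expr."""
    toks = list(tokens)
    pc, stack = 0, []
    while True:
        op = TABLE[pc]
        if op[0] == "alt":
            pc = op[2] if toks and toks[0] == op[1] else pc + 1
        elif op[0] == "tok":
            if not toks or toks.pop(0) != op[1]:
                return False                       # syntax error
            pc += 1
        elif op[0] == "call":
            stack.append(pc + 1)
            pc = op[1]
        elif op[0] == "ret":
            if not stack:
                return not toks                    # accept only if input used up
            pc = stack.pop()
```

The appeal the post describes is visible even at this scale: the control flow reads like hand-written recursive descent, but the grammar itself is compact data that a generator can emit and a fixed interpreter can walk.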


4. Has anybody seen good books on actually implementing good
optimization? I have Wulf's book on BLISS -- I mean any other ones.
I can find literature about optimization, but it is generally at a very
abstract level. I would like to steal implementation-level ideas if I
can, instead of reinventing all the wheels in the world.
Kurt Guntheroth
John Fluke Mfg. Co., Inc.
{uw-beaver,decvax!microsof,ucbvax!lbl-csam,allegra,ssc-vax}!fluke!kurt


[I haven't seen any books, but there have been many articles on machine-
specific optimizations. Look at the November 1980 issue of the IBM Journal
of R+D, or at the various compiler construction conference proceedings
published by Sigplan. -John]




