Re: Object code optimization

From: compilers@ima.UUCP
Newsgroups: mod.compilers
Date: 13 Jan 86 22:05:00 GMT
Article-I.D.: ima.136300055
Posted: Mon Jan 13 17:05:00 1986

[from Chris Torek <harvard!mimsy.umd.edu!gymble!chris>]


Like yourself, I `thought the ideal optimizer would ... parse the
source into trees/dags/IL/whatever and reorder and simplify [these]
to the optimal equivalent program'. If I understand correctly,
the problem with this is that the compiler's back end has to do
some really strange things for some of what the front end might
generate. But if this is the case, then it seems to me the *real*
problem is a lack of communication from the back end to the front
end. What is needed is a way for the code generator to tell the
optimizer, `x is not good', `y is very good', `z is only worthwhile
under the following conditions', etc.
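
Just to make that concrete, here is a rough sketch in C of what such
a back-end-to-front-end hint table might look like. All of the names
(the pattern tags, the weights, the `target_hints' table) are invented
for illustration and are not any real compiler's interface; the point
is only that the code generator could hand the optimizer a list of
patterns it cares about, each with a weight and a condition.

#include <stdio.h>

/* Tree patterns the back end knows something about. */
enum pattern {
    DIV_BY_POW2,        /* integer divide by a power of two  */
    COUNT_DOWN_LOOP,    /* loop that counts down to zero     */
    GENERAL_MULTIPLY    /* multiply by a non-constant        */
};

struct hint {
    enum pattern what;
    int          weight;    /* positive: encourage; negative: avoid   */
    const char  *when;      /* condition the optimizer must establish */
};

/* What the back end for one target (say, a small micro with no
 * divide instruction) might export to the optimizer. */
static const struct hint target_hints[] = {
    { DIV_BY_POW2,      +4, "dividend provably non-negative" },
    { COUNT_DOWN_LOOP,  +2, "trip count known at loop entry" },
    { GENERAL_MULTIPLY, -3, "neither operand constant"       },
};

int main(void)
{
    int i, n = (int)(sizeof target_hints / sizeof target_hints[0]);

    for (i = 0; i < n; i++)
        printf("pattern %d: weight %+d when %s\n",
               (int)target_hints[i].what,
               target_hints[i].weight,
               target_hints[i].when);
    return 0;
}

The optimizer would consult a table like this while reordering its
trees, instead of having the target's quirks wired into it.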


Maybe a weighted production-system-style database that is interpreted
by the front end during optimization would be the way to do this.
Then you could say `if there is repeated division by a power of
two inside a loop, try to determine whether the dividend is positive,
even if you have to put an ``if'' around the outside and make two
loops', or `loops that count down to zero are twice as good as
those that count up to a constant', or whatever. (The former lets
you directly use arithmetic shifts instead of integer divides on
two's complement machines---this is extremely important on many
microprocessors---and the latter allows use of the `sob' style VAX
instructions or the NS32000 count-to-zero instructions.)
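
As a hand-worked example of those two rules, here is the before and
after in C (the function names are made up, and a real optimizer
would of course do this on its intermediate form, not on source).
The sign test on `step' is hoisted into an `if' around two copies of
the loop; only the non-negative copy may use the shift, because an
arithmetic right shift of a negative dividend rounds toward minus
infinity rather than toward zero on a two's complement machine; and
both copies count down to zero.

#include <stdio.h>

/* Before: the sign of the dividend (i * step) depends only on the
 * loop-invariant `step', yet a naive code generator emits a full
 * integer divide on every iteration. */
void fill(int *b, int n, int step)
{
    int i;

    for (i = 0; i < n; i++)
        b[i] = (i * step) / 8;
}

/* After: the sign test is hoisted into an `if' around two copies of
 * the loop.  The non-negative copy uses an arithmetic shift, which
 * is only safe because the dividend is known to be >= 0 there; both
 * copies count down to zero so the loop can close with a single
 * subtract-and-branch instruction (the VAX `sob' instructions, or
 * the NS32000 count-down instructions). */
void fill_split(int *b, int n, int step)
{
    int i;

    if (step >= 0) {
        for (i = n; --i >= 0; )
            b[i] = (i * step) >> 3;
    } else {
        for (i = n; --i >= 0; )
            b[i] = (i * step) / 8;
    }
}

/* Quick check that the two versions agree for both signs of `step'. */
int main(void)
{
    int a[8], b[8], i, step, ok = 1;

    for (step = -5; step <= 5; step += 10) {
        fill(a, 8, step);
        fill_split(b, 8, step);
        for (i = 0; i < 8; i++)
            ok = ok && a[i] == b[i];
    }
    printf("results agree: %s\n", ok ? "yes" : "no");
    return 0;
}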


Of course, with enough rules this might produce the world's slowest
compiler. . . . :-)
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251)
UUCP: seismo!umcp-cs!chris
CSNet: chris@umcp-cs ARPA: chris@mimsy.umd.edu


[My understanding of the PL.8 compiler from IBM is that it first compiles
into an extremely low-level intermediate code, then does machine-independent
optimizations such as common subexpression elimination and loop-invariant
code motion, and finally does register assignment and instruction selection.
The intermediate code has an infinite number of registers and very simple
operations, so that real machine instructions usually subsume several
intermediate instructions. I don't know how they deal with giving hints to
the intermediate code generator so that it can use fancy real machine
instructions, but this was part of the 801 RISC project, which had a machine
with very few "clever" instructions. They get superb code, but the
compiler is so slow that I gather nothing less than a 370/168 will do.
-John]
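
To give a flavor of that scheme, here is a toy sketch in C (not IBM's
code, and much simplified): a low-level three-address intermediate
code with as many virtual registers as it likes, and a local
common-subexpression pass that notices when an operation repeats an
earlier one and reuses the earlier result. Register assignment and
instruction selection would follow afterwards and are not shown.

#include <stdio.h>

#define NREG 64

struct insn {
    char op;     /* '+', '*', ...                    */
    int  a, b;   /* source virtual register numbers  */
    int  d;      /* destination virtual register     */
};

/* A tiny basic block: t5 and t6 recompute t3 and t4. */
static struct insn block[] = {
    { '+', 1, 2, 3 },   /* t3 = t1 + t2                        */
    { '*', 3, 3, 4 },   /* t4 = t3 * t3                        */
    { '+', 1, 2, 5 },   /* t5 = t1 + t2  -- redundant with t3  */
    { '*', 5, 5, 6 },   /* t6 = t5 * t5  -- redundant once t5
                           has been folded into t3             */
};

int main(void)
{
    int alias[NREG];    /* maps a folded register to its survivor */
    int i, j, n = (int)(sizeof block / sizeof block[0]);

    for (i = 0; i < NREG; i++)
        alias[i] = i;

    for (i = 0; i < n; i++) {
        struct insn *p = &block[i];

        p->a = alias[p->a];             /* rewrite operands first */
        p->b = alias[p->b];

        /* Look for an identical, still-live earlier instruction. */
        for (j = 0; j < i; j++) {
            struct insn *q = &block[j];
            if (alias[q->d] == q->d &&
                q->op == p->op && q->a == p->a && q->b == p->b)
                break;
        }
        if (j < i) {
            alias[p->d] = block[j].d;   /* reuse its result, drop this one */
            printf("t%d folded into t%d\n", p->d, block[j].d);
        } else {
            printf("t%d = t%d %c t%d\n", p->d, p->a, p->op, p->b);
        }
    }
    return 0;
}

Running it prints the two surviving instructions and reports that t5
and t6 were folded away, which is the effect an unlimited supply of
virtual registers makes cheap to express.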




