Summary on Source Level Optimization (long)


Newsgroups: comp.compilers
From: icsu8209@ming.cs.montana.edu (Glassy)
Keywords: optimize
Organization: Compilers Central
Date: Tue, 11 Sep 90 17:53:04 -0600

Here is a summary of replies I received to my questions about source-level
optimization (SLO). I've quite ruthlessly paraphrased a number of the
messages I received (paraphrasing indicated by a '[' preceding the hacked
text). Any misquotes and inaccuracies may safely and certainly be
attributed to me, and not to the people I'm paraphrasing.


A large thank-you to all who responded, including bright@Data-IO.COM (Walter
Bright), who gave some good clues on how to do IR optimization...


Thanks,
Lou Glassy (icsu8209@caesar.cs.montana.edu)


-----------------------------------------------------------


[A book related to SLO is _Writing Efficient Programs_ by Jon Bentley.]


(ccs010@castor.ucdavis.edu (Clifford Mather))


---------------------------------------------------------------


[Most optimizers for current parallel machines and supercomputers work at
the source level. (Parafrase, from the University of Illinois, is one.)]


[They generally take in Fortran and spit out annotated Fortran code.]


[These (vectorizing) optimizations tend to operate at a very high level;
e.g., they often need to gather information about array subscripts.
A small illustration follows below.]


(Mark Streich <streich@boulder.Colorado.EDU> )


[Similar observation from (Jim Davies (jrbd@craycos.com)), who also gives
as a reference:
        D. Padua and M. Wolfe, "Advanced Compiler Optimizations
        for Supercomputers", CACM Vol. 29, No. 12, Dec. 1986.]
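
[To make the subscript point concrete, here is a small C illustration of
my own (not the posters'): before running a loop's iterations in
parallel, a vectorizer must analyze the array subscripts to prove the
iterations independent.]

    /* Example mine -- dependence analysis on subscripts,
       assuming a, b, and c do not overlap in memory. */
    void vec_demo(double *a, double *b, double *c, int n)
    {
        int i;

        /* Subscripts never overlap across iterations:
           independent, hence vectorizable. */
        for (i = 0; i < n; i++)
            a[i] = b[i] + c[i];

        /* Iteration i reads the value iteration i-1 wrote
           (subscripts i and i-1 overlap): a true dependence,
           so this loop cannot be vectorized as written. */
        for (i = 1; i < n; i++)
            a[i] = a[i-1] + b[i];
    }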


------------------------------------------------------------------


[SLO has been explored extensively in existing compilers. However,
IR optimization will remain dominant because:


[---Almost all optimization opportunities seen by compilers today are
created by compiler-generated code. The most commonly optimized
expressions are those generated for addressing arrays. Many optimizers
don't bother with user-written loop-invariant code, etc., for fear of
breaking user code. (See the sketch below.)


[---SLO of the obvious sort (dead-code elimination, common-subexpression
elimination, loop-invariant code motion) almost never applies to
well-written user code.


[The assumption that IR optimizers are wholly machine-dependent is
incorrect. Generally, about 90% of the optimizations done at the IR
level are machine-independent, the only machine-dependent parts being
peephole optimizations, operator strength reductions, exploitation of
machine idiosyncrasies, etc. The bulk of the work done by optimizers
(data-flow analysis) is 100% machine-independent.]


(Bob Blainey <bjb@csri.toronto.edu>)
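
[An editorial sketch (example mine, not Blainey's) of the
compiler-generated code the points above refer to. For a source loop
that clears one row of a matrix, the addressing arithmetic the compiler
generates is exactly where loop-invariant code motion and strength
reduction pay off:]

    void clear_row(int a[][100], int i, int n)
    {
        int j;

        /* As written: each iteration conceptually computes
           base(a) + i*sizeof(row) + j*sizeof(int).  The i*rowsize
           part is loop-invariant and the j*sizeof(int) part is an
           induction expression -- both compiler-generated. */
        for (j = 0; j < n; j++)
            a[i][j] = 0;

        /* After loop-invariant code motion and strength reduction,
           the effect is roughly: */
        {
            int *p = &a[i][0];      /* invariant part hoisted */
            int *end = p + n;
            while (p < end)
                *p++ = 0;           /* multiply replaced by pointer bump */
        }
    }
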
---------------------------------------------------------------------


1. [SLO isn't the only avenue to portable optimization.
Some approaches to object-code-level optimization -are- portable.


2. [Most of the payoff for CSE, code motion, and strength reduction comes
from reducing the cost of compiler- (not programmer-) generated
operations. (See the example below.)]


(Bill Immerman (immerman@mitre.org) )
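
[An editorial example (mine) of that second point: in the statement
a[i][j] = a[i][j] + b[i][j], the programmer wrote nothing redundant,
yet the compiler computes the address of a[i][j] twice, once for the
load and once for the store. IR-level CSE folds the compiler-generated
computations together:]

    void add_elem(int a[][100], int b[][100], int i, int j)
    {
        /* Before CSE (conceptually): the address of a[i][j] is
           computed for the load and again for the store. */
        a[i][j] = a[i][j] + b[i][j];

        /* After CSE, the effect is roughly: */
        {
            int *pa = &a[i][j];     /* shared address computation */
            int *pb = &b[i][j];
            *pa = *pa + *pb;
        }
    }
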
-----------------------------------------------------------------------------


[In-line expansion of function calls or macros is one type of SLO.


[Research on function inlining is going on at Rice University. The
basic approach is to replace every function call in Fortran code with
the equivalent code from the called function. They're generating *huge*
functions, and several of the compilers they ran the resulting code
through were not happy about it. (A small before/after sketch follows
below.)


[A problem with SLO is maintaining consistency with the original code.
How do you make sure that the code the optimizer produces looks like
something related to the input code?]


(Scott Storkel <sstorkel@us.oracle.com>)
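
[A small editorial before/after sketch of inline expansion, in C rather
than Fortran (names hypothetical, example mine):]

    double sq(double x) { return x * x; }

    /* before inlining */
    double norm2(double a, double b)
    {
        return sq(a) + sq(b);
    }

    /* after inlining: each call replaced by the function body */
    double norm2_inlined(double a, double b)
    {
        return (a * a) + (b * b);
    }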


-----------------------------------------------------------------------




I have one [source-level optimizer] which I wrote for JOVIAL J-73 code
here at the GE Corporate R&D Center. The way I did it was to parse the
source code, build an intermediate representation, optimize that
representation, and then pretty-print the intermediate representation
as JOVIAL.


Is it too slow? Well, that depends on the application. You should
only optimize programs that are going to be run multiple times (or
are endless loops). It is faster than a full compile.
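
[An editorial skeleton of that pipeline in C (all names are
hypothetical, and the phases are stubbed out; each would be a large
module in the real tool):]

    #include <stdio.h>

    typedef struct Node Node;       /* opaque IR node type */

    /* The three phases, stubbed out here. */
    static Node *parse(FILE *src)   { (void)src; return NULL; }  /* source -> IR */
    static void optimize(Node *ir)  { (void)ir; }                /* improve the IR */
    static void unparse(Node *ir, FILE *out) { (void)ir; (void)out; } /* IR -> source */

    int main(void)
    {
        Node *ir = parse(stdin);    /* source code in */
        optimize(ir);               /* transform the IR */
        unparse(ir, stdout);        /* pretty-print source back out */
        return 0;
    }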


Advantages:
It can be done without mucking with the compiler, which was our
problem: the JOVIAL compiler was not available with sources.


It can be used if you're delivering source to others.


Disadvantages:
You need to know the source language well to work around all its quirks;
i.e., you come darn close to writing the compiler.


For the same language, the optimizations might differ depending on which
compiler you use. I've got a pair of C compilers for which declaring a
local variable "register" improves the resulting code on one and
pessimizes it on the other. (An example follows below.)


Reply-To: hammondr@sungod.crd.ge.com (Richard A Hammond)
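
[An editorial example of Hammond's `register' point (the compilers in
question are unnamed; the code is mine):]

    long sum(int *v, int n)
    {
        /* On one compiler the hint keeps i in a register and speeds
           up the loop; on another it overrides the compiler's own,
           better allocation decision and slows the code down. */
        register int i;
        long s = 0;

        for (i = 0; i < n; i++)
            s += v[i];
        return s;
    }
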
-------------------------------------------------------------------------


Reply-To: Cliff Click <cliffc@rice.edu>


Here at Rice we're using SLO for a variety of optimizations, while
leaving CSE, DCE, and register allocation to the low-level "back end".
It seems to work pretty well, but we're not worried about speed of
compilation (it's a research compiler). Also, AT&T's C++ compiler
produces C code as its output.


The source-level transformations we're doing make the output quite
unreadable, even for relatively simple files. For purposes of
debugging, the compiler's output source code isn't much better than IR.


cliffc@owlnet.rice.edu
------------------------------------------------------------------------
Reply-To: pardo@cs.washington.edu (David Keppel)


Today, many people use C as an IR. A recent Ph.D. thesis from Yale (I
forget the reference, though) uses essentially LISP as the IR.


One problem with SLO -- conceptually, anyway -- is that profitability
analysis gets harder. A second problem is that you are limited to the
operators of the source language. Since C is high-fructose assembler,
you're pretty safe, but in many HLLs, operations such as `+' have
complex semantics, including overflow behavior, that make
transformations more difficult (if nothing else because the operator
has more attributes to check).
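
[An editorial illustration (mine): an optimizer would like to rewrite
(a + b) - b as plain a. With C's unsigned arithmetic, which wraps, the
rewrite is always valid; in a language where `+' raises an exception on
overflow, it is not, because the original may trap while plain a never
does.]

    /* Valid rewrite in C: unsigned + wraps modulo 2^N, so
       (a + b) - b == a for all inputs. */
    unsigned id(unsigned a, unsigned b)
    {
        return (a + b) - b;     /* an optimizer may fold this to a */
    }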


I think Kennedy at Rice is doing work on Fortran S->S optimizers. See
also the extensive European literature on partial evaluation. Most of
the evaluators are LISP->LISP translators.
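
[An editorial sketch of partial evaluation, in C rather than LISP
(example mine): specialize a general function to an argument that is
known ahead of time.]

    /* general: computes x^n */
    int power(int x, int n)
    {
        int r = 1;
        while (n-- > 0)
            r *= x;
        return r;
    }

    /* power partially evaluated with n == 3: the loop has been
       unrolled away before run time */
    int cube(int x)
    {
        return x * x * x;
    }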


[pardo included a long bibliography, which I'll gladly send
to anyone who'd like to peek at it... --Lou]
------------------------------------------------------------------------------


[SLO may not (in the general case) be very useful. Just as it's hard
to re-compress already compressed data, two levels of optimization
(source level + compiler) would be redundant. Since a good compiler
can do tricks that the programmer can't (register scoping, for
example), reasonably good source code and a good optimizing compiler
may be a better bet.]


(Dan Breslau (uunet!codex!dan))


[From icsu8209@ming.cs.montana.edu (Glassy)]
--

