Date: Fri, 3 Nov 89 16:28:57 EST
From: madd@world.std.com (jim frost)
References: <1989Oct30.230405.1148@esegue.segue.boston.ma.us>
>My question for this group is: What
>CAN we say? How much of the postulated "magic" optimizer exists? Can
>exist? Might exist? For instance, are there any optimization techniques
>that can examine the "liveness" of variables in order to fold them into
>the same storage area, and thus minimize -- not just improve, but minimize --
>the memory used for variable storage? This is a common trick of assembly
>language programmers, but something that is not in general possible in a
>system developed as a set of independent modules.
You ask the damnedest questions.
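For a concrete sketch of the folding being asked about (the function is
invented purely for illustration): with live-range information a
compiler can see that `a' is dead before `b' is born, so the two can be
folded into a single register or stack slot.

    int f(int x)
    {
        int a, b;
        a = x * x;      /* `a' becomes live here...            */
        x = a + 1;      /* ...and is dead after this last use, */
        b = x << 2;     /* so `b', born here, can share        */
        return b - 1;   /* `a's storage                        */
    }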
In my experience the "modularity" of a program has little to do with
its speed and compactness, but it greatly improves implementation and
debugging times. It is also much easier to make a complete
implementation change within a module than in code which has outside
functions dependent on its implementation.
"Modularity" and data-hiding do eat up space, but in a stack-oriented
environment it's not much space since the stack is continually
re-used. Generally you loose very little but gain a lot of
readability. Well-designed systems will have few globals which can be
re-used to save space; if you have a lot of them, you're not designing
right and it shouldn't be up to the compiler to fix design problems.
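To make the stack-reuse point concrete (a hand-made sketch; whether the
slots actually overlap depends on the compiler), locals in disjoint
scopes are never live at the same time, so a compiler is free to
overlap them, and either way the space comes back on return:

    #include <string.h>

    void process(int mode, char *out)
    {
        if (mode == 0) {
            char tmp1[512];                  /* live only in this block   */
            memset(tmp1, 'a', sizeof tmp1);
            memcpy(out, tmp1, sizeof tmp1);
        } else {
            char tmp2[512];                  /* free to share tmp1's slot */
            memset(tmp2, 'b', sizeof tmp2);
            memcpy(out, tmp2, sizeof tmp2);
        }
    }   /* stack space is re-used by the next call either way */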
Now to optimizing compilers.
I've used both completely unoptimizing and highly optimizing C
compilers. What I expect a highly optimizing compiler to do is:
* find the most efficient way to code particular constructs,
  including the most efficient data-flow implementation. A
  compiler ought to be able to do this analysis across whole
  program segments, while a programmer usually cannot keep
  that much in mind. State-of-the-art compilers such as the
  MIPS compiler (and increasingly the GNU compiler) bear out
  this opinion.
* eliminate unused code and variables, with warnings.
* perform minor algorithmic changes to improve performance,
  especially when dealing with unusual hardware
  characteristics (e.g. on some systems shifting left by one
  bit is slower than multiplying by two). The kind of
  optimization I'm talking about here is something like
  strength-reduction and loop-unrolling; see the sketch just
  after this list.
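As a rough illustration of those last two transformations
(hand-written here; the function names, the unroll factor, and the
shift rewrite are my own sketch, not the output of any particular
compiler), a compiler handed the first loop might effectively produce
the second:

    /* original: multiply inside the loop */
    long sum8(const int *a, int n)
    {
        long s = 0;
        int i;
        for (i = 0; i < n; i++)
            s += (long)a[i] * 8;
        return s;
    }

    /* after strength-reduction (the multiply becomes a shift,
       equivalent for the usual two's-complement case) and 4-way
       loop-unrolling; the trailing loop mops up when n isn't a
       multiple of four */
    long sum8_unrolled(const int *a, int n)
    {
        long s = 0;
        int i;
        for (i = 0; i + 4 <= n; i += 4) {
            s += (long)a[i]     << 3;
            s += (long)a[i + 1] << 3;
            s += (long)a[i + 2] << 3;
            s += (long)a[i + 3] << 3;
        }
        for (; i < n; i++)
            s += (long)a[i] << 3;
        return s;
    }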
As it is, most people who say that they "can't" program in a clean,
simple manner because of performance problems aren't using the right
solutions to the problem. In years past you really did have to get
rid of that last instruction to make everything fit; nowadays you
don't usually have quite that much to worry about and can concentrate
on finding a good algorithm for your performance and memory limits.
Usually this does not mean writing twisted code.
Basically, a perfect optimizing compiler should be able to make the
best possible object code for your algorithm on the given hardware,
but it's up to you to make the algorithm good. Strength-reduction is
about the farthest I want a compiler to go in making my code "better",
and even there I'd like it to tell me what it did so I'll know to do
it that way in the future. Compilers should hide hardware
differences, not make up for poor style.
jim frost
software tool & die "The World" Public Access Unix for the '90s
madd@std.com +1 617-739-WRLD 24hrs {3,12,24}00bps