Subject: Re: Compilers 2000
From: firstname.lastname@example.org (Preston Briggs)
Organization: Rice University, Houston
Date: Tue, 2 Oct 90 01:12:14 GMT
Jeff Prothero <email@example.com> writes:
>Anyone want to offer comments on what is Right
>and Wrong with the compiler field today, and what compilers will look like
>ten or twenty years from now?
It's easy to name wrong things. We can start with mistakes I make,
and assume they generalize.
- Don't know enough of the literature. This comes from reading too
shallowly and missing important sources. (Some are hard to find, like
early POPL proceedings, but should be pursued.)
- This leads to massive wheel re-invention. Cockiness comes into play
here too, along with shallow reading.
"Those idiots! They missed this obvious improvement!"
Well, sometimes they did, but usually it's an old rejected idea, probably
already documented somewhere else. With thought, or experimentation, or
further reading, the reasons behind particular decisions often become
clear. If not, write it up!
- Ignoring algorithmic efficiency in favor of simple implementation. This
is tough, and I suppose we can make good arguments for simple and clean;
but the reality is that compilers have to deal with large programs, large
routines, large basic blocks, and complex expressions. Nowadays we tend
to have large memories and fast machines, but that just means the linear
parts get less important (but should not be ignored!). O(n^2) or worse is
still scary, and a good deal of research effort goes into reducing it.
(Taken another way: It's possible to imagine all sorts of optimizations.
We can dream up things all day. It's much more interesting to actually
implement them and measure the results.)
- Testing, particularly with large test cases to expose the algorithmic
problems (performance bugs).
- Documentation, including publishing good and bad ideas of all sorts
(presumably labeled "good idea" and "mistake"). How can we expect
each other to be well read if there's nothing to read?
We should be as proud of publishing a "mistake" (with an explanation
of why it's wrong) as we are of our latest brain-storm.
- Study a compiler, in depth, for a long time. Fix all the bugs
(correctness, performance, code quality).
A story I like about IBM... (Perhaps from Scarborough and Kolsky.) As
part of a project to enhance the VS Fortran compiler, IBM collected a
large test suite. They compiled the suite and examined the code for
*every* inner loop and (by hand) wrote the best assembly code they could
for each loop. Then they worked on their optimizer until it produced code
of equal quality for each loop.
For the future...
(Disclaimer: This is a pretty low-level and pretty narrow view.
I don't know enough to comment in other areas, and perhaps not even this one.)
Good optimizing compilers will do dependence analysis (even for scalar
machines). Dependence analysis tells us about array accesses in loops.
With it, you can get speedups of 3 on today's workstations by keeping
portions of arrays in registers.
Good compilers will worry about the entire memory hierarchy: disk, main
memory, 1 or more levels of cache, and registers. The main tool will be
dependence analysis.
As the compilers improve, architectures will evolve towards what can be
handled in the compiler. RISC is the easy example and VLIW another, but
dependence analysis will change things again.
Preston Briggs looking for the great leap forward