From: terryg@uswest.net (G-Man)
Newsgroups: comp.compilers
Date: 4 Aug 1998 22:15:56 -0400
Organization: Compilers Central
References: 98-08-011 98-08-023
Keywords: C++, performance
bob.morgan@digital.com (Bob Morgan) wrote:
>1) One of the fallacies of high-level languages is that high-level
>optimizations can be applied without low-level optimizations. My
>experience with higher-level languages is that the high-level
>optimizations (vectorization, for example) are of little use without
>the low-level optimizations to clean up the code. Thus a C++ compiler
>needs all of the optimizations of a C compiler, together with the OO
>optimizations specific to C++.
Although I agree with the sentiment, I have to point out that
vectorization is actually a very low-level optimization. In all Cray
(now SGI) compilers, vectorization is performed in or just before code
generation. A better example would be inlining, which leaves a real
mess (especially in Fortran) unless it is cleaned up by lower-level
optimizations.
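To make that concrete, here is a toy C++ sketch of my own (nothing
from Bob's post, and the names are made up). Once the call is inlined,
the temporary and the per-iteration argument copies are dead weight,
and it takes the classic low-level passes (copy propagation, dead code
elimination) to turn the result into a tight, vectorizable loop:

  #include <cstddef>
  #include <cstdio>

  static double scale_and_add(double s, double x, double acc) {
      double t = s * x;   // after inlining, 't' is a dead temporary
      return acc + t;     // once copy propagation folds it into the add
  }

  double weighted_sum(const double* x, std::size_t n, double s) {
      double acc = 0.0;
      for (std::size_t i = 0; i < n; ++i)
          acc = scale_and_add(s, x[i], acc);  // one call per element
      return acc;                             // until inlining kicks in
  }

  int main() {
      double v[4] = {1, 2, 3, 4};
      std::printf("%g\n", weighted_sum(v, 4, 0.5));  // prints 5
  }

Inline that call without the cleanup passes and you get correct but
bloated code; with them, the loop collapses to one multiply-add per
element.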
>A side comment: have you ever wondered when the technology was
>available to build an optimizing compiler as effective as the
>compilers available today? I claim that all the technology needed to
>build a parallelizing, vectorizing compiler for RISC computers was
>available in 1972 (well, maybe not trace scheduling). At that point
>instruction scheduling was known, parallelization was known (ILLIAC
>IV), optimization was known, register allocation was known, etc.
I suspect you're going to get a lot of comments on this statement.
We now have to deal with multi-CPU parallelism with and without shared
memory, vectorization, multiple functional units, register renaming,
all kinds and flavors of cache memory, speculative execution,
memory/processor speed imbalances, shared resource competition...
Optimizing for all of these at once is really more of an art than a
science. We may know how to do each portion quite well, but coming up
with an optimal overall solution is exceptionally difficult, and not
nearly enough research is devoted to moving this from a rather arcane
art to an actual science. IMHO. <g>
RISC, by the way, stands for 'Really Invented by Seymour Cray'. :)
Your claim that all the technology existed in 1972 to build such a
compiler is true, but it would be blown away by compilers using modern
methods. Dependence analysis, as one example, was not much more than
simple pattern matching until Banerjee/Wolfe.
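For a flavor of what "more than pattern matching" means, here is the
classic GCD test in toy C++ form. This is my own illustration, and the
GCD test is the simpler cousin of the Banerjee test, not Banerjee's
test itself. For a write to a[c1*i + k1] and a read of a[c2*i + k2] in
the same loop, a dependence needs an integer solution to
c1*i - c2*j = k2 - k1, which exists only if gcd(c1, c2) divides
k2 - k1:

  #include <cstdio>
  #include <numeric>   // std::gcd (C++17)

  // True if a dependence is possible; false proves independence,
  // which licenses vectorization of the loop.
  bool gcd_test(int c1, int k1, int c2, int k2) {
      int g = std::gcd(c1, c2);
      return g == 0 ? (k2 - k1) == 0 : (k2 - k1) % g == 0;
  }

  int main() {
      // for (i) a[2*i] = ... a[2*i + 1] ...: gcd(2,2) = 2 does not
      // divide 1, so the accesses provably never overlap.
      std::printf("dependence possible: %s\n",
                  gcd_test(2, 0, 2, 1) ? "yes" : "no");  // prints "no"
  }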
Outer loop vectorization was not developed until the early 1990s
(Viet Ngo).
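The usual example looks something like this (again, a toy of my own; I
can't speak for Ngo's actual formulation). The inner loop carries a
recurrence and can't be vectorized as written, but the outer loop's
iterations are independent, so interchanging them exposes a stride-1,
dependence-free inner loop:

  #include <cstdio>
  #include <vector>

  // Inner loop carries a recurrence on i: not vectorizable as written.
  void prefix_by_column(std::vector<double>& a, int rows, int cols) {
      for (int j = 0; j < cols; ++j)        // independent iterations
          for (int i = 1; i < rows; ++i)    // loop-carried dependence
              a[i * cols + j] += a[(i - 1) * cols + j];
  }

  // What an outer-loop vectorizer effectively produces: the j loop is
  // now innermost, stride-1, and dependence-free, so it vectorizes.
  void prefix_by_column_vec(std::vector<double>& a, int rows, int cols) {
      for (int i = 1; i < rows; ++i)
          for (int j = 0; j < cols; ++j)
              a[i * cols + j] += a[(i - 1) * cols + j];
  }

  int main() {
      std::vector<double> a(4 * 3, 1.0), b(a);
      prefix_by_column(a, 4, 3);
      prefix_by_column_vec(b, 4, 3);
      std::printf("%s\n", a == b ? "same result" : "bug");
  }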
We're developing new optimization technology at Cray/SGI this very
moment that could support a half-dozen PhD theses, and I seriously
doubt that we're alone in developing innovative new ways of getting
that critical loop to run "one clock faster".
Speaking only for myself... I'm supposed to say that, I think.
-- Terry Greyzck
terryg@uswest.net