Related articles:
different results with different f77 optimizers keyes@chem.bu.edu (Tom Keyes) (1995-09-24)
Re: different results with different f77 optimizers ah739@cleveland.Freenet.Edu (1995-09-29)
Re: different results with different f77 optimizers roberson@hamer.ibd.nrc.ca (1995-09-30)
Re: different results with different f77 optimizers cliffc@ami.sps.mot.com (1995-10-05)
Re: different results with different f77 optimizers preston@tera.com (1995-10-06)
Newsgroups: comp.sys.sgi.apps,comp.compilers
From: preston@tera.com (Preston Briggs)
Keywords: Fortran, optimize
Organization: Tera Computer Company, Seattle, WA
References: 95-09-146 95-10-026
Date: Fri, 6 Oct 1995 00:27:42 GMT
>See CMG Transactions (CMG=Computer Measurement Group)
>number 52, Spring 1986, article "Do Fortran Compilers Really Optimize?"
>by Dr. David S. Lindsay.
>He compares optimized and unoptimized versions of Fortran code
>specifically coded to be optimizable, on commercial compilers.
>Quote from his Conclusion:
>"1. Compiler optimization of generated code is haphazard at best.
"Haphazard" is surely the wrong word. We used to say "unstable",
appealing to the numeric sense of the word. That is, the
effectiveness of a particular optimizer on a particular piece of code
isn't readily predictable. One might hope that a better optimizer
would be more stable. (Of course, I suppose no optimization under any
circumstances might count as some sort of trivial stability.)
> 2. Compiler optimization is _not_ a mature technology
Surely he's right in that we continue to make progress, both in theory
and in implementation.
> 3. Trade, technical, and academic sources have no basis for their
> dogma about how all optimizing compilers do so-and-so.
He didn't state the dogma, so it's hard to call him on it, but I
think it's a misleading statement. Technical and academic people
both know what's done and what's doable; sometimes because they
built the things, other times because they do just the sorts of
testing he advocates.
> 4. Empirical tests are _badly_ needed to verify vendors' claims of
> their compilers' generation of optimized code.
I agree. NULLSTONE represents one example. I suppose Lindsay's work
is another (I haven't seen it). I also began such an effort when I
was at Rice, though NULLSTONE seems much more extensive and complete.
Interested parties can grab a copy, such as it is, via anonymous ftp
from cs.rice.edu in the directory public/preston/eval.
> 5. Empirical tests are probably also the only way to improve
> optimization technology.
I disagree. You can test all you want, and complain about the results
all you like, but someone has to solve the problems. Empirical tests
didn't invent data-flow analysis, dependence analysis, graph-coloring
register allocation, etc.
> 6. Academics should have their students run tests, not just
> learn techniques."
They do, of course. Though I agree with the sentiment. Even more
than simply running tests, I encourage everyone to look at the code
emitted by their compilers. Do it regularly. Do it for the test
suites mentioned above; do it for your own favorite tests.
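
[A minimal sketch of that advice, assuming a Unix-like system with gcc
installed; the file and function names are mine, and any compiler with
a flag like -S that stops after emitting assembly works the same way.]

```shell
# Write a small test function, then compare what the compiler
# emits with and without optimization.
cat > sum.c <<'EOF'
double sum(const double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}
EOF
gcc -O0 -S -o sum_O0.s sum.c    # unoptimized assembly
gcc -O2 -S -o sum_O2.s sum.c    # optimized assembly
diff sum_O0.s sum_O2.s | head -20   # inspect what the optimizer changed
```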
Preston Briggs
--