Re: Optimizing Across && And ||

ryg@summit.novell.com
Fri, 3 Mar 1995 14:10:33 GMT

          From comp.compilers

Related articles
[4 earlier articles]
Re: Optimizing Across && And || whalley@fork.cs.fsu.edu (David Whalley) (1995-02-22)
Re: Optimizing Across && And || bart@cs.uoregon.edu (1995-02-23)
Re: Optimizing Across && And || bill@amber.ssd.csd.harris.com (1995-02-24)
Re: Optimizing Across && And || glew@ichips.intel.com (1995-02-27)
Re: Optimizing Across && And || bill@amber.ssd.csd.harris.com (1995-02-28)
Re: Optimizing Across && And || bill@amber.ssd.csd.harris.com (1995-02-28)
Re: Optimizing Across && And || ryg@summit.novell.com (1995-03-03)
Re: Optimizing Across && And || leichter@zodiac.rutgers.edu (1995-03-07)
Re: Optimizing Across && And || preston@tera.com (1995-03-08)
Re: Optimizing Across && And || pardo@cs.washington.edu (1995-03-13)
Re: Optimizing Across && And || chase@centerline.com (1995-03-14)
Re: Optimizing Across && And || jqb@netcom.com (1995-03-15)
Re: Optimizing Across && And || cdg@nullstone.com (1995-03-20)
[2 later articles]

Newsgroups: comp.compilers
From: ryg@summit.novell.com
Keywords: C, optimize
Organization: Compilers Central
References: 95-02-179
Date: Fri, 3 Mar 1995 14:10:33 GMT

This thread has turned into: why are compilers not optimizing this
particular sequence? The thread would have been at its best if these
sequences came from real problems that people have, as in: hey, why does
this simple program take so long? Well, let's see... this loop index is not
even in a register; this compiler must be out of its mind... Instead,
posters take a "total quality" type of approach, expecting compilers to
optimize everything that can be optimized. I think this is unfortunate.
Just as I had to accept the idea that I will not live long enough to
enjoy every beautiful beach in the world, I also had to accept the idea
that I will not live long enough to implement every possible optimization.
Both make me sorry ;-)


Bart Massey <bart@cs.uoregon.edu> wrote:
> Bill Leonard <Bill.Leonard@mail.csd.harris.com> wrote:
> > How likely is this to occur in real code?
> [...]
> I suspect that this sort of thing occurs often in macro expansions.
> [...]
> in which case there are some serious opportunities for
> optimization here. Of course, you'll now insist that I go find
> examples of this sort of putc() usage, in real programs, in
> places where it matters... Unfortunately, no one pays me
> enough to chase this any further :-).


So don't you think it's unfair on your part to complain to the people who
do verify which opportunities exist and which don't? If you don't know that
such opportunities exist, why are you so sure they should take priority?
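
To make the putc() point concrete, here is a rough sketch of how a single
macro call inside a short-circuit condition expands into the kind of nested
branches this thread is about. Everything below (PUTC, flushbuf, struct buf,
emit) is made up for illustration; it is not any real libc's putc() definition.

    #include <stdio.h>   /* for EOF */

    /* Hypothetical buffered stream and slow path, for illustration only. */
    struct buf { int cnt; char *ptr; };

    static int flushbuf(int c, struct buf *b)   /* stand-in for the real flush */
    {
        (void)b;
        return (unsigned char)c;
    }

    /* Simplified putc-style macro: the fast path stores into the buffer,
       the slow path calls the flush routine. */
    #define PUTC(c, b) \
        (--(b)->cnt >= 0 ? (int)(unsigned char)(*(b)->ptr++ = (char)(c)) \
                         : flushbuf((c), (b)))

    /* After preprocessing, the && below contains a nested ?: whose result
       is immediately compared against EOF.  The compiler sees a chain of
       conditional branches feeding one another -- exactly the sort of
       sequence where optimizing across && and || could rethread or
       collapse tests. */
    int emit(struct buf *b, int c, int more)
    {
        if (more && PUTC(c, b) != EOF)
            return 1;
        return 0;
    }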


Preston Briggs wrote:


> It looks to me like you're doing things right.
> I'd say the blame rests with the compilers.


> Back when I was still at Rice, I wrote a large collection of test
> cases, similar in spirit to the ones you have discussed, along with a
> driver that would allow me to test C compilers in a black box fashion.
> Friends around the net helped me refine the tests and collect a lot of
> numbers on a lot of machines.


[Stuff deleted]


> The results were very disappointing. I found no available compiler
> (free, cheap, or expensive) that approached what I had ignorantly
> considered "the state of the art."


> Talking to various people about this, I've come across several
> possible explanations:


> C compilers don't work that hard, traditionally.
> More interesting would be a similar set of tests on Fortran
> compilers.


> Nobody in the industry reads recently published compiler papers.
>                        ^^^^^ or understands
>                        ^^^^^^^^^^^ or believes


More papers are written that apply only to Fortran code. The main
differences are the language (no pointers) and the type of applications
(nice loops, iterating forever over nice uniform arrays). There's not much
you can do with that type of stuff for C programs, which are mostly a bunch
of flaky branches with not much between them. It's a different problem.
"Branch Prediction for Free" by Ball and Larus is a paper that industry
people have certainly read and believed.
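
A hedged illustration of the contrast (both fragments are invented here, not
taken from any benchmark or paper): the first loop is the Fortran-style
kernel that most published loop optimizations target; the second is the
branchy, pointer-heavy C style with almost nothing between the tests.

    /* Fortran-style kernel: a counted loop over uniform arrays, where
       dependence analysis, unrolling, software pipelining, etc. pay off. */
    void scale(double *a, const double *b, int n, double s)
    {
        int i;
        for (i = 0; i < n; i++)
            a[i] = s * b[i];
    }

    struct node { int key; struct node *next; };

    /* Typical C-style code: pointer chasing and short-circuit tests,
       with almost no straight-line computation between the branches. */
    struct node *find(struct node *p, int key)
    {
        while (p != NULL && p->key != key)
            p = p->next;
        return p;
    }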


where only "reads" is a criticism of the industry.
"Understands" and "believes" are meant to suggest
possible weaknesses in the papers.


None of the things I test really matter. The only things that
matter these days are software pipelining and cache management.


Writing optimizers is expensive and time consuming.
Everything I tested was fairly old (had to be, since they were
released products). How could I expect them to incorporate
recent research results?


Probably all of these excuses contribute, in different proportions for
different compilers, to the actual explanation.


BTW, after I began talking to people about my results, some of the
compiler people at SGI put me in touch with Chris Glaeser (the
originator of this thread). He has written a similar package that's
commercially available from his company Nullstone. Here "similar"
means "similar in spirit." His is much more complete in every
dimension: documentation, correctness, test coverage, etc.


That's that "total quality" type approach. Making sure that optimization
apply to all possible combinations of types and operators is not the best
way to go. If some form of constant propagation does not work on unsigned
char op long casted to (void *), and this is discovered only by nullstone
but not by any benchmark, not even by programs ;-), then it's ok by me and
I'd rather spend my time improving scheduling or branch elimination.
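
For what it's worth, a sketch of the kind of narrow test, specific to one
type combination, that I have in mind (invented here, not an actual
Nullstone case):

    /* Constant propagation across mixed types: unsigned char + long, with
       the result cast to (void *).  On typical implementations the whole
       function folds to "return 1" at compile time (integer-to-pointer
       conversion is implementation-defined, which is exactly why this is
       a test-suite corner rather than something real programs rely on). */
    int f(void)
    {
        unsigned char c = 3;
        long l = 4;
        void *p = (void *)(c + l);   /* usual arithmetic conversions: (long)7 */
        return p == (void *)7;
    }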


--
speaking for myself at most,
Robert Geva
Custom made optimizations
Novell, Unix systems group
(908) 522 6078
ryg@summit.novell.com
--

