From: email@example.com (Robert Geva)
Date: Sat, 15 Apr 1995 04:06:57 GMT
Preston Briggs quotes:
> >Gabriel's point about the "New Jersey" school is that it goes for simplicity
> >in the implementation, even if that imposes either complexity or errors on
> >the users of that implementation. For a compiler, that corresponds to
> >oddball special requirements to get good code
> >I see nothing in Powell's suggestions that suggest either. His argument for
> >choosing simple optimization strategies is that he can get them right, not
> >primarily that it will let him finish his compiler faster. He also claims
> >that he has chosen strategies that work with "natural", well-written code,
> >rather than with odd-ball cases.
> OK, here's a specific example. I don't see the words "strength
> reduction" in the paper, but there's one paragraph that mentions
> "loop induction analysis." Now, it doesn't explicitly say that
> nothing except FOR-loop indices is considered, but he does say that
> only limited classes of FOR-loop indices are considered. After
> reading it, I come away with the (perhaps wrong) idea that he only does
> strength reduction on FOR-loop indices where all the uses are
> multiplied by the same constant value.
> So, if you address 2 arrays of different-sized structures, or 2 arrays
> of different shape, or look at an array in a WHILE loop (imagine
> you're writing something exotic like quicksort), then he won't do
> strength reduction. Doesn't seem very general to me. Seems like
> he'll soon have users trained to use FOR loops in unnatural places.
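[The transformation being discussed can be sketched in C as a before/after
pair; the names and the 16-byte element size are mine, for illustration only.
Strength reduction replaces the per-iteration multiply hidden in the array
index with a pointer temp bumped by a constant each trip around the loop.]

```c
/* Hypothetical 16-byte record; the index a[i] hides a multiply by 16. */
struct rec { int key; int pad[3]; };

/* Before: each access computes a + i*sizeof(struct rec). */
int sum_indexed(struct rec *a, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i].key;                 /* address = a + i*16 */
    return s;
}

/* After strength reduction: the multiply is gone; an induction
   temp p advances by a constant 16 bytes per iteration. */
int sum_reduced(struct rec *a, int n)
{
    int s = 0;
    struct rec *p = a;                 /* the introduced temp */
    for (int i = 0; i < n; i++, p++)
        s += p->key;
    return s;
}
```

[Both functions compute the same sum; the point of the quoted paragraph is
that a compiler which only recognizes FOR-loop indices would perform this
rewrite for the loop above but not for the same access pattern inside a
WHILE loop.]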
I agree that while loops should be optimized as well as for loops.
However, strength reduction when the loop accesses structs of
different sizes is problematic. It may be more general to perform the
strength reduction on these loops, but will it improve performance?
It is very likely to introduce temps, and the overhead can cost more
than the expected performance gain. The tradeoff is, IMHO, target specific.
For one, it depends on the available addressing modes. For the Pentium,
I voted against the more general implementation in favor of faster executables.
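[A minimal C sketch of the tradeoff just described; the struct names and
sizes are hypothetical. Reducing both multiplies in a loop over two arrays
of different-sized elements costs one pointer temp per array, and on x86
the win depends on what the addressing modes give you for free.]

```c
struct small { int k; };               /* 4-byte element  */
struct big   { int k; int pad[7]; };   /* 32-byte element */

int dot_keys(struct small *a, struct big *b, int n)
{
    int s = 0;
    struct small *pa = a;              /* temp 1: steps by 4 bytes  */
    struct big   *pb = b;              /* temp 2: steps by 32 bytes */
    for (int i = 0; i < n; i++, pa++, pb++)
        s += pa->k * pb->k;
    /* On x86, a[i].k needs no reduction at all: the scaled
       addressing mode (base + index*scale) handles scales of
       1, 2, 4, and 8 directly.  The 32-byte stride of b[i] is
       not a legal scale, so it takes either a multiply/shift
       or the extra temp pb, with the register pressure that
       implies.  That is the target-specific tradeoff above. */
    return s;
}
```

[On a register-starved target like the Pentium, spending two registers on
induction temps can easily cost more than the multiplies it removes.]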
> >>Everybody always likes the magic phrase "best, simple" (almost as much
> >>as the words "linear" and "optimal"). Why not say "I did the best I
> >>could" or "I built what I could decipher" versus trying to make a
> >>virtue out of doing a shoddy job.
> >Because that's not at all what Powell set out to do.
> I'm sorry; I didn't mean Powell here. I meant "everybody" who's
> sprung the little "best, simple" phrase on me as an excuse for doing a
> poor job. I didn't really want to write an attack on Powell or his
> compiler; I only wanted to dispel some of the golden glow around the
> good old days when life was simple, engineering was straightforward,
> and programmers had good posture.
The general approach may be more elegant, but it is not necessarily better.
For one, recognizing a larger set of cases as opportunities for an optimization
is sometimes just the easier way out; it may indicate that your costing
function is not tuned finely enough. Making an optimization as general as
possible is a strange goal. The goal of optimization is to generate faster
code, and that sometimes means you optimize fewer cases.
Custom made optimizations
Novell, Unix systems group
(908) 522 6078