From: email@example.com (Preston Briggs)
Date: Thu, 30 Mar 1995 18:24:35 GMT
>I don't entirely disagree with many of Preston Briggs's comments - for one
>thing, I cited Powell's paper as someone else's arguments, not mine - but I
>think Mr. Briggs overstates the case in his response.
Oh, probably so.
>Gabriel's point about the "New Jersey" school is that it goes for simplicity
>in the implementation, even if that imposes either complexity or errors on
>the users of that implementation. For a compiler, that corresponds to
>oddball special requirements to get good code.
>I see nothing in Powell's suggestions that suggests either. His argument for
>choosing simple optimization strategies is that he can get them right, not
>primarily that it will let him finish his compiler faster. He also claims
>that he has chosen strategies that work with "natural", well-written code,
>rather than with odd-ball cases.
OK, here's a specific example. I don't see the words "strength
reduction" in the paper, but there's one paragraph that mentions
"loop induction analysis." Now, it doesn't explicitly say that
nothing except FOR-loop indices are considered, but he does say that
only limited classes of FOR-loop indices are considered. After
reading it, I come away with the (perhaps wrong) idea that he only does
strength reduction on FOR-loop indices where all the uses are
multiplied by the same constant value.
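To make the transformation concrete, here is a hypothetical sketch (my own code, not Powell's) of the kind of FOR-loop case that would qualify under that reading: the index is multiplied by a single constant at every use, so the compiler can replace the multiply with an incremented induction variable.

```c
/* Hypothetical illustration of strength reduction on a FOR-loop index.
   sum_original is what the programmer writes; sum_reduced shows the
   code a strength-reducing compiler would effectively generate. */

int sum_original(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i * 3];        /* multiply by the constant 3 on every use */
    return s;
}

int sum_reduced(const int *a, int n) {
    int s = 0;
    int t = 0;                /* induction variable: t == i * 3 throughout */
    for (int i = 0; i < n; i++) {
        s += a[t];            /* multiply eliminated */
        t += 3;               /* strength-reduced update: add, not multiply */
    }
    return s;
}
```

Both versions compute the same sum; the point is that the restricted analysis only fires when every use of the index is scaled by the same constant, as here.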
So, if you address 2 arrays of different-sized structures, or 2 arrays
of different shape, or look at an array in a WHILE loop (imagine
you're writing something exotic like quicksort), then he won't do
strength reduction. Doesn't seem very general to me. Seems like
he'll soon have users trained to use FOR loops in unnatural places.
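The WHILE-loop case might look like this (a hypothetical sketch; the function and the 3-word record layout are my invention): the index steps by one exactly as a FOR index would, so the i * 3 multiply is just as reducible, but an analysis that only examines FOR-loop headers would leave it alone.

```c
/* Scan an array of records, each 3 ints wide, for the first record
   whose key (the first int) is negative. The i * 3 scaling is a
   textbook strength-reduction candidate, but it sits in a WHILE loop,
   outside the restricted FOR-loop-only analysis discussed above. */
int find_first_negative(const int *a, int n) {
    int i = 0;
    while (i < n && a[i * 3] >= 0)   /* multiply survives under the
                                        restricted analysis */
        i++;
    return i;   /* index of first negative-keyed record, or n if none */
}
```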
Looking at his section on "Effects of Optimization", he notes that
strength reduction had the least effect of any of his optimizations
and, in fairness, wonders if the restrictions on its implementation
hampered its usefulness. I'd guess "yes" although he thinks not.
Part of the problem here may be the limited benchmark suite (as Powell
>that's very different
>from deliberately pushing complexity onto the compiler's users - as many C
>compilers of the same era did, requiring programmers to decide what to put in
>registers, move invariant code out of loops, avoid subscripting in favor of
>pointer hacking, and so on. Powell doesn't propose any of this.
You're right-- definitely a nice thing. On the other hand, we always
see posts from people pointing out that compilers need not do any of
this stuff since "well written" programs don't offer opportunities for
optimization. It's probably true, and probably because the programmers
have been trained, in best Skinnerian fashion, by their compilers.
>>Everybody always likes the magic phrase "best, simple" (almost as much
>>as the words "linear" and "optimal"). Why not say "I did the best I
>>could" or "I built what I could decipher" versus trying to make a
>>virtue out of doing a shoddy job.
>Because that's not at all what Powell set out to do.
I'm sorry; I didn't mean Powell here. I meant "everybody" who's
sprung the little "best, simple" phrase on me as an excuse for doing a
poor job. I didn't really want to write an attack on Powell or his
compiler; I only wanted to dispel some of the golden glow around the
good old days when life was simple, engineering was straightforward,
and programmers had good posture.
I have this sort of philosophy about optimizers. I didn't invent it,
of course; I inherited most of it from my advisor, who got most of it
from his advisor, etc (though perhaps it was distorted in transmission).
The basic idea is that there are only a few distinct optimizations.
If you handle these few, in the most general way possible, then you'll
get good code.
The alternative approach is that there are dozens and dozens of
optimizations: each distinct, each machine dependent, and each working
only on some unusual special case.
People who subscribe to the 2nd view don't always implement all the
optimizations they can imagine because, in their view, the special
cases they attack don't arise often enough to justify the effort.
People who subscribe to the 1st view build their few optimizations and
are finished. Since they're only doing a few distinct optimizations,
they can work hard to do the best job possible on those few.