From: email@example.com (Andy Glew)
Organization: Intel Corp., Hillsboro, Oregon
Date: Thu, 9 Nov 1995 08:48:39 GMT
>From: firstname.lastname@example.org (Thomas Womack)
>Are present-day compilers constrained in any way by the amount
>of time that people are prepared to wait for a compile normally?
I hope that Intel's compiler team - who do a great job - doesn't mind
if I say that I, as an architect, sometimes wish that they spent less
time worrying about compile time, and more about getting even better
code.
>Do there exist 'release' compilers which will, given the right
>optimisation parameters, spend a week on a P6 compiling a program
>but produce noticeably better code than normal, 'development'
>compilers?
Dammit, I wish that we had such a compiler!!!! I would gladly spend a
week optimizing to get better release numbers.
Now, admittedly, there are guys who spend days figuring out the best
combinations of compiler switches - but each compilation is "human
scale". I know of several optimizations that were scaled back because
they took too long.
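To make the flag-hunting concrete, here is a toy sketch (Python; the switch names and the benchmark() scoring model are invented stand-ins for "compile with these flags and time the result") of exhaustively searching switch combinations - the 2^n blowup is exactly why each individual compile has to stay human scale:

```python
import itertools

# Hypothetical optimization switches; real compilers have many more.
SWITCHES = ["-funroll-loops", "-finline", "-fschedule"]

def benchmark(flags):
    """Stand-in for 'compile with these flags and time the result'.
    A toy model: inlining helps most, unrolling only helps alongside it."""
    score = 0.0
    if "-finline" in flags:
        score += 3.0
    if "-funroll-loops" in flags and "-finline" in flags:
        score += 1.0
    if "-fschedule" in flags:
        score += 0.5
    return score

def best_flag_set(switches):
    """Try every on/off combination -- 2^n 'compiles' in all."""
    best, best_score = (), float("-inf")
    for r in range(len(switches) + 1):
        for combo in itertools.combinations(switches, r):
            s = benchmark(combo)
            if s > best_score:
                best, best_score = combo, s
    return best
```

With the toy model above, the search lands on all three switches enabled; with a real compiler, each call to benchmark() is a full build and run, which is why people spend days on it.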
In fact, one of the most difficult-to-detect problems in working
with a compiler is recognizing when an arbitrary limit, like "turn off
basic block optimizations if there are more than 2000 basic blocks in
a procedure", has been exceeded - especially when the limit is not
necessarily available as a command line switch, and when the limit
was chosen to bound compile time on some old processor and should
now be scaled up considerably.
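A sketch of that failure mode (Python; the 2000-block threshold is from the example above, the pass itself and its interface are invented). The fix is the warning: silently skipping the pass is what makes the limit so hard to detect from outside:

```python
import warnings

BASIC_BLOCK_LIMIT = 2000  # arbitrary cap, perhaps tuned for an old host CPU

def optimize_procedure(blocks, limit=BASIC_BLOCK_LIMIT):
    """Run basic-block optimization unless the procedure exceeds the cap.
    Returns (blocks, ran) so the caller can tell whether the pass ran."""
    if len(blocks) > limit:
        warnings.warn(
            "basic block optimization skipped: %d blocks > limit %d"
            % (len(blocks), limit))
        return blocks, False
    optimized = list(blocks)  # stand-in for the real optimization pass
    return optimized, True
```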
This touches on a pet peeve of mine: compilers are really "slow
real-time programs" - but their algorithms take absolutely no account
of real time, except in arbitrary scale factors.
I would love to have compilers that:
(a) Used profiling feedback to guide how much time to spend on
optimizing a particular routine. I.e. they not only would use
profiling feedback to decide what optimizations to apply, but they
would also use profiling feedback to decide how much time to spend
optimizing a particular piece of code.
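A minimal sketch of (a), assuming the simplest possible policy - split the optimization-time budget across routines in proportion to profile weight (the routine names and profile numbers are made up):

```python
def allocate_budget(profile, total_seconds):
    """Split an optimization-time budget across routines in proportion
    to how much execution time the profile says each one consumes."""
    total_weight = sum(profile.values())
    return {name: total_seconds * weight / total_weight
            for name, weight in profile.items()}

# Invented profile: 90% of runtime in the inner loop.
profile = {"inner_loop": 90.0, "setup": 9.0, "error_path": 1.0}
budget = allocate_budget(profile, 1800)  # a 30-minute budget
```

The point is only that the same feedback that already steers *which* optimizations to apply could also steer *how long* to spend applying them.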
(b) Used "compilation time" as a command line parameter. I.e. I wish
I could say "take as long as 30 minutes to optimize, but not much
longer" - and have the compiler quickly produce quick and dirty
code, and then refine it a few times by applying more and more
expensive optimizations - rather than me, myself, figuring out by
hand what compiler switches to turn on and off in order to properly
balance compilation time and execution speed.
Andy "Krazy" Glew, email@example.com, Intel,
M/S JF1-19, 5200 NE Elam Young Pkwy, Hillsboro, Oregon 97124-6497.
Place URGENT in email subject line for mail filter prioritization.