Related articles:
Grammars for future languages schinz@guano.alphanet.ch (1995-10-22)
Death by error checks. ianr@lsl.co.uk (1995-11-21)
Re: Grammars for future languages sethml@dice.ugcs.caltech.edu (1995-11-21)
Death by error checks. dave@occl-cam.demon.co.uk (Dave Lloyd) (1995-11-27)
Re: Death by error checks. jgj@ssd.hcsc.com (1995-11-28)
Re: Death by error checks. sethml@dice.ugcs.caltech.edu (1995-11-28)
Re: Death by error checks. Terry_Madsen@mindlink.bc.ca (1995-11-30)
Re: Death by error checks. cliffc@ami.sps.mot.com (1995-11-30)
Re: Death by error checks. sethml@avarice.ugcs.caltech.edu (1995-12-09)
Re: Death by error checks. veeru@hpclearf.cup.hp.com (Veeru Mehta) (1995-12-17)
Re: Death by error checks. hbaker@netcom.com (1995-12-19)
Newsgroups: comp.compilers
From: cliffc@ami.sps.mot.com (Cliff Click)
Keywords: optimize
Organization: none
References: 95-10-103 95-11-192 95-11-251
Date: Thu, 30 Nov 1995 22:22:41 GMT
About profile-based optimization:
In general I agree with you, but I think you're missing some points.
Terry_Madsen@mindlink.bc.ca (Terry Madsen) writes:
> Anyway, for data-processing code of the "2 megabytes and no loops" sort,
> this is at best a very time-consuming process: build one night, run all
> the profiling seed code, then build it again.
Yes, profiling is slower & more complex.
> This is assuming that the seed code can adequately exercise all the code
> of interest enough times to make the profiler (if it exists) notice
> something it considers significant.
If the seed data runs for a long time and never touches some code, then
that code isn't time-consuming in the final product (or you've got lousy
seed data!), and feedback optimization isn't important for it.
> What if ... a program change causes the usage pattern to change, and [an
> object] to change, that you didn't touch at the source level? Many
> places ... test and *approve* object (executable) code: a changed object
> is an untested one.
Don't recompile the object because the profile data changed; recompile only
when the _source_ changes. Use the profile approach at the bitter end: one
run (no profiling optimizations) to gather the profile, then recompile, then
test. If testing shows a "profile only" bug, you can ship the non-profiled
code (with a perhaps-still-lurking bug!) or debug the profiled version.
It's a software engineering issue, not a profile-based optimization issue:
your software process should handle this.
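
To make that cycle concrete, here is a minimal sketch. It assumes a
modern GCC-style profile-feedback interface (-fprofile-generate /
-fprofile-use); those flags and the file names are illustrative
assumptions, not anything the original posts refer to.

    /*
     * report.c -- the kind of data-dependent branch the bitter-end
     * profile pass informs.  The cycle described above would look
     * roughly like this (GCC-style flags, assumed for illustration):
     *
     *   cc -O2 -fprofile-generate report.c -o report   (instrumented build)
     *   ./report < seed.data                           (records branch counts)
     *   cc -O2 -fprofile-use report.c -o report        (profile-fed rebuild)
     *
     * The source never changes between the two compiles, so a
     * "profile only" failure is an optimizer problem, not a source
     * regression.
     */
    #include <stdio.h>
    #include <stdlib.h>

    /* Whether comment records are rare or common depends on the data,
       not the source; only the profiling run can tell the compiler
       which path to lay out as the fall-through. */
    static long process(const char *line)
    {
        if (line[0] == '#')
            return 0;                      /* comment record: skip */
        return strtol(line, NULL, 10);     /* data record: accumulate */
    }

    int main(void)
    {
        char buf[256];
        long sum = 0;
        while (fgets(buf, sizeof buf, stdin))
            sum += process(buf);
        printf("total = %ld\n", sum);
        return 0;
    }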
> I fail to see why being able to profile code and feed the results back to
> a second compile is a "better" way to tell the compiler something that
> the programmer knows in the first place.
It's not a "better" way: it's another way.
(1) Not all branches, especially in an optimizing compiler, are explicit
in the source code. These branches CANNOT be annotated by the programmer
(see the sketch after this list).
(2) Humans generally pick only SOME branches to annotate with frequency
information; many they don't care about, or don't know enough to choose.
(3) Humans are sometimes (not always!) wrong about frequency choices.
(4) More common than (3), humans are often _unaware_ of which branches
are frequently executed and have predictable direction.
On the other hand, seed data can be bad and "mispredict" branches in
real codes.
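
To make points (1) and (2) concrete, here is a small sketch. GCC's
later __builtin_expect extension stands in for "human annotation"; it
is an assumption used purely for illustration, not something available
to the posters here.

    #include <stddef.h>

    /* The programmer can annotate the one branch he has an opinion
       about -- and it only helps if his guess is right (points 3 and 4).
       The loop-trip test, and any checks the compiler inserts on its
       own, never appear in the source at all (point 1), so only profile
       feedback can say how they usually go. */
    long lookup(const long *table, size_t n, long key)
    {
        for (size_t i = 0; i < n; i++) {              /* implicit trip-count branch */
            if (__builtin_expect(table[i] == key, 0)) /* programmer's guess: hits are rare */
                return (long)i;
        }
        return -1;                                    /* not found */
    }

Note that the annotation covers exactly one branch; every other branch
is left to the compiler's static guesses or to profile feedback.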
What really happens is that the compile/profile/compile cycle is a pain,
so it really isn't done except by the very few who have critical
performance needs, and by benchmark bashers.
Cliff
--
Cliff Click Compiler Researcher & Designer
RISC Software, Motorola PowerPC Compilers
cliffc@risc.sps.mot.com (512) 891-7240