Re: Death by error checks.

From: sethml@avarice.ugcs.caltech.edu (Seth M. LaForge)
Newsgroups: comp.compilers
Date: 9 Dec 1995 19:27:11 -0500
Organization: California Institute of Technology
References: 95-10-103 95-11-192 95-11-251
Keywords: optimize, performance

Terry Madsen <Terry_Madsen@mindlink.bc.ca> wrote:
>sethml@dice.ugcs.caltech.edu (Seth M. LaForge) writes:
>> [ Profile-based optimization ]
>
>Here's another problem: what if, late in a project, a program change alters
>the usage pattern, and with it a DLL that you never touched at the source
>level? Many places don't test source code, they test and *approve* object
>(executable) code: a changed object is an untested one.
>Putting object code optimization out of the programmer's reach, with the
>risk of something changing two days before ship, won't fly; given the
>choice between profiling-based optimization and none at all, I'd choose the
>latter for this reason alone.


I don't see this as a problem. Worst case, you don't recompile and
you lose a few % performance. Moreover, I suspect most changes in the
use of a library don't change the paths taken all that significantly.
Any changes to client code that are made two days before ship had
better be pretty minor changes anyway.
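
To make this concrete, here's a minimal sketch (routine and names
invented) of the sort of code the thread is about: a library call whose
source is mostly error checks that almost never fire. With profile
feedback the compiler can learn that and lay the rare branches out of
line, with no source changes at all; with gcc the cycle is roughly
"compile with -fprofile-arcs, run a representative workload, recompile
with -fbranch-probabilities" (option names per gcc's documentation;
other compilers differ).

    #include <stddef.h>
    #include <errno.h>

    /* Hypothetical library routine: copy n bytes into a caller-supplied
       buffer.  Nearly every call succeeds, but the source is dominated
       by error checks. */
    int buf_copy(char *dst, size_t dstlen, const char *src, size_t n)
    {
        if (dst == NULL || src == NULL)    /* almost never taken */
            return EINVAL;
        if (n > dstlen)                    /* almost never taken */
            return ERANGE;
        while (n-- > 0)                    /* the actual work */
            *dst++ = *src++;
        return 0;
    }

On any realistic run the two error branches are essentially never
taken, so the recompiled library falls straight through to the copy
loop and moves the error returns out of the busy part of the
instruction stream.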


Consider that most programs are tested mostly in debug builds and
shipped as optimized builds. That is a far bigger change to the
generated code than re-profiling and recompiling a library. It all
comes down to how much faith you have in the correctness of your
compiler; you'd better be pretty confident in it, or it's time to
switch to a different compiler.


>Most importantly, regardless of the environment, I fail to see why being
>able to profile code and feed the results back to a second compile is a
>"better" way to tell the compiler something that the programmer knows in
>the first place. This has been a bit of a peeve of mine: the claim that
>profilers know which branches are taken better than the programmers who
>wrote the code. Even if this is the case for "algorithmic" branches,
>profiling makes for a clumsy way to build a large product, and leaves the
>error-checking branches no better than if they'd been explicitly specified
>in the source.


I agree that having the programmer specify branch prediction would in
many cases produce better code. However, no existing C code has these
annotations, and most projects involve old code, or code written by
others. It's very nice to be able to get a few percent of speedup from
accurate branch prediction without having to rewrite the code. The
programmer might not even be able to predict the branches, especially
when writing library-type code, since the library's author can't know
how any particular client will use it.
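
For comparison, here is what programmer-supplied prediction might look
like at the source level. UNLIKELY is a purely illustrative annotation,
not any existing compiler's syntax; the point is only that the author
of a routine like buf_copy above could say in the source what the
profile would otherwise discover:

    #include <stddef.h>
    #include <errno.h>

    /* UNLIKELY is a hypothetical hint meaning "assume this condition is
       false when arranging the code"; one way to define it is sketched
       further down.  The fallback here just keeps this example
       self-contained. */
    #ifndef UNLIKELY
    #define UNLIKELY(cond) (cond)
    #endif

    int buf_copy(char *dst, size_t dstlen, const char *src, size_t n)
    {
        if (UNLIKELY(dst == NULL || src == NULL))
            return EINVAL;
        if (UNLIKELY(n > dstlen))
            return ERANGE;
        while (n-- > 0)
            *dst++ = *src++;
        return 0;
    }

That handles the error checks the programmer knew about, but it says
nothing about the "algorithmic" branches, and it has to be added by
hand to every piece of old or third-party code, which is exactly where
profile feedback earns its keep.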


>Since many compilers already claim to have internal heuristics to decide
>whether a branch should be compiled as faster-taken or faster-not-taken (on
>machines where it makes a difference), it seems as if a user directive
>could be made an additional (overriding) parameter to this decision
>process.


The biggest problem I see with this approach is that it requires the
compiler to extend the C language in some way. #pragmas have many
inherent problems; this is the reason gcc avoids using them. Any
method is going to add some degree of unportability to the program.
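
To show where the unportability creeps in, here is one common way to
paper over it: wrap the compiler-specific hint in a macro that
degenerates to the bare expression everywhere else. Everything here is
a sketch; HAVE_BRANCH_HINTS and __predict_false are invented names
standing in for whatever extension or #pragma a particular compiler
might offer.

    /* Sketch only: HAVE_BRANCH_HINTS and __predict_false are invented
       names, not real configuration macros or extensions.  Where the
       extension exists the hint is passed through; elsewhere it costs
       nothing but still compiles. */
    #ifdef HAVE_BRANCH_HINTS
    #define UNLIKELY(cond) __predict_false(cond)
    #else
    #define UNLIKELY(cond) (cond)
    #endif

The program stays compilable everywhere, but the hints are dead weight
on every compiler that doesn't know the extension, and every project
has to agree on the macro, which is the kind of creeping unportability
being weighed against a few percent of speed.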


I think profile-based optimizations are a fine thing, and they will
become more common as memory becomes more of a bottleneck and branch
prediction becomes more important. On the other hand, processors are
making compiler-generated branch prediction a little less important:
the PowerPC 604, for instance, keeps a branch-history cache and
dynamically predicts each branch from what it has done in the past.
Check out Motorola's home page for details; it's pretty cool.


Seth
--

