From: ianr@lsl.co.uk (Ian Ringrose)
Newsgroups: comp.compilers
Date: 16 Dec 1995 23:55:39 -0500
Organization: Compilers Central
References: 95-10-103
Keywords: C, optimize, summary
======== My post ========
I have a problem, a big one (over 1E6 lines of 'c'). The coding
standard says that all functions must return a status (of type int),
and must be called with code of the form:
status = f(....);
if (status != NORMAL)
{
/* stack error message include file name and line number */
return status;
}
The above checking code is in a macro, so it can be changed; the method
of error checking cannot be changed, as we have too much code that
uses it.
What I would like to do is give the 'c' compiler a hint, in the
macro, that the 'if' should be predicted not taken. Are there any
'c' compilers that let me give such a hint? (We build on Unix systems
from Sun, DEC, IBM, and HP.)
======== "comp.lang.c.moderated" <clc@solon.com> =======
I am unaware of a way; if you use gcc, which you may wish to, I bet you
there is *some* -f flag for this. Probably.
ME -> (I checked the gcc docs and could not find one, but that does
not mean much)
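Later gcc releases do provide exactly this kind of hint via the
__builtin_expect extension; below is a minimal sketch of the checking
macro built on it, where NORMAL and stack_error() are stand-ins for
whatever the real macro uses, not names from the actual code:
    /* Sketch only: __builtin_expect is a gcc extension; on other
       compilers the hint quietly disappears. */
    #ifdef __GNUC__
    #define UNLIKELY(cond) __builtin_expect((cond), 0)
    #else
    #define UNLIKELY(cond) (cond)
    #endif

    #define CHECK(call)                                       \
        do {                                                  \
            int status_ = (call);                             \
            if (UNLIKELY(status_ != NORMAL)) {                \
                stack_error(__FILE__, __LINE__, status_);     \
                return status_;                               \
            }                                                 \
        } while (0)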
====== mikey@ontek.com (Mike Lee) =====
You can try
| status = f(....);
| if (status == NORMAL)
| ;
| else
| {
| /* stack error message include file name and line number */
| return status;
| }
but you'll have to look at the assembler output from each
compiler to see if it makes a difference.
mikey
====== Cliff Click <cliffc@risc.sps.mot.com> ====
Sigh. Looks like a silly standard to me. Whatever happened to
setjmp/longjmp, which efficiently handle the unwinding semantics?
Or simply call "abort", and use the debugger for the stack trace?
Do you really need a stack traceback in a _running_ program?
ME -> Would setjmp/longjmp be faster?
-> The standard was written 5 years ago, as was a lot of the
-> code.
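For reference, a minimal sketch of the setjmp/longjmp style Cliff is
suggesting; error_env, BAD_STATUS, worker and top_level are placeholder
names, not from the real code:
    #include <setjmp.h>

    #define NORMAL     0          /* placeholder status codes */
    #define BAD_STATUS 1

    static jmp_buf error_env;     /* one recovery point per call tree */

    int worker(void)
    {
        if (/* failure detected */ 0)
            longjmp(error_env, BAD_STATUS);  /* unwind in one jump; value must be non-zero */
        return NORMAL;
    }

    int top_level(void)
    {
        int status = setjmp(error_env);   /* returns 0 on the initial call */
        if (status != 0)
            return status;                /* some routine below us failed */
        return worker();                  /* no per-call status checking needed */
    }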
> What I would like to do, is give the 'c' compiler a hint in the
> macro, that the 'if' should be predicted not taken. Are there any
> 'c' compilers that let me give the hint. (We build on Unix systems
> from Sun, Dec, IBM, and HP.)
Rumor has it that some compilers would trigger off of key ID names
in tests, including "fail", "error" or "errorno". I personally
don't know of any that do, but you might fool around a little.
A more common one might be testing a pointer against NULL (assume
not NULL), but again this probably isn't very common.
Ok, last suggestion: switch to a C++ compiler (still using your
old C code). Then use C++ exceptions to get the desired behavior.
Cliff
====== "Barton C. Massey" <bart@cirl.uoregon.edu> =====
Have you tried the cheap stuff, like
if (result == NORMAL) {} else {
<error handle>
}
====== Dave Lloyd <dave@occl-cam.demon.co.uk> =======
> What I would like to do, is give the 'c' compiler a hint in the
> macro, that the 'if' should be predicted not taken. Are there any
This will only help on processors with directable static branch
prediction. What would be more useful is for the compiler to move
the body of the if clause out of line, which improves cache/page
coherence; in the default case all that remains in the hot path is a
not-taken conditional branch. If it weren't for the macro, you could
do this manually with a fair chance of success using explicit jumps,
and hope that the compiler optimises an if clause containing a single
branch into a conditional branch. There are tools which attempt to do
this sort of thing at the link level, based on run-time statistics
(e.g., LXOPT under OS/2).
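A rough sketch of that manual, out-of-line layout, with the error path
pushed to the end of the function so the fall-through is the common case
(NORMAL and error_path are placeholder names; f() stands for the callee
being checked):
    extern int f(void);

    int g(void)
    {
        int status;

        status = f();
        if (status != NORMAL)
            goto error_path;       /* fall-through is the common case */
        /* ... rest of the straight-line, non-error code ... */
        return NORMAL;

    error_path:
        /* cold code, out of the hot path: stack the error message
           (file name and line number) here */
        return status;
    }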
====== "Nicholas C. Weaver" <nweaver@cs.berkeley.edu> ======
Well, this (as suggested by someone; I forget who) would probably help
by simply improving code locality/compactness/size, in this way:
#define functionStart() int lineNum
status = f(....);
if (status != NORMAL) {
    lineNum = __LINE__;  /* or whatever identifies the failing call */
    goto Cleanup;
}
#define functionEnd() Cleanup: /* Print your garbage */ return status
ME -> this would mean changing most functions (> 40K of them) to
-> add the "functionEnd"
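To make that cost concrete, here is a sketch of what each rewritten
function would look like under this scheme (functionStart/functionEnd as
defined above; f1 and f2 are hypothetical callees):
    int g(void)
    {
        int status;
        functionStart();           /* expands to: int lineNum; */

        status = f1();
        if (status != NORMAL) { lineNum = __LINE__; goto Cleanup; }

        status = f2();
        if (status != NORMAL) { lineNum = __LINE__; goto Cleanup; }

        return NORMAL;

        functionEnd();             /* expands to: Cleanup: ... return status; */
    }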
======= Gerd Bleher <gerdb@hpbidrd.bbn.hp.com> ======
Have you ever tried profile based optimization (PBO) on HP machines?
The manual says about when to use PBO:
"[...] Applications that are branch-intensive. The operations performed
in such applications are highly dependent on the input data."
("Programming on HP-UX" p. 6-4)
PBO happens in three steps. First you have to compile an instrumented
version of your program. Then you run your program with typical input
data. In your case that would be something that causes *no* errors so
that no error handlers are called. Finally you have to recompile (or
is it only relinking?) your program with the collected profile
information.
======== derek@knosof.co.uk (Derek M Jones) =======
Compilers that target processors containing branch prediction, or
those that simply move infrequently used code somewhere else
so it does not fill the cache, usually rely on execution profiles
generated at runtime. Compiler writers, myself included, don't usually
'trust' the programmer to have a good idea of what the code is actually
doing. I guess this is a special case.
On a software engineering note: does your company use
any tools to check that the coding standards mentioned above are
followed in practice? (Start of advert.) If not, can I suggest that
you look at OSPC (Open Systems Portability Checker)? It checks for
a variety of portability and standards conformance problems. It is
also possible to input new company coding standards into its database.
========= sethml@dice.ugcs.caltech.edu (Seth M. LaForge) =======
I don't know of any compilers to which you can explicitly give such a
hint, but HP's compiler will do something better: profile-based
optimization. Just compile your program with the appropriate
profiling turned on and run it for a while. It dumps information about
how often each branch is taken or not taken into a file. You then
feed this file to the compiler when recompiling your program, and it
uses the profile information to optimize branches. I've heard that
there are other compilers which will do this - perhaps SGI's?
========= jgj@ssd.hcsc.com (Jeff Jackson) =======
Harris Computer Systems' compiler would need no hint. A guard test on
a return is assumed to be testing for an exceptional condition, so the
path to the return is assumed not to be normally taken. This is used to
set the static prediction bit in the instruction and also to guide
global instruction scheduling (though your if-body probably has no
instructions to speculatively schedule, our compiler would try to
speculatively schedule instructions from the predicted-taken side
before those from the predicted-not-taken side of the if, if there were
any to schedule). Someday, I would like to use such static
predictions to guide the profitability analysis of other optimizations
such as partial redundancy elimination.
========== hagmanti@cps.msu.edu (Timothy C Hagman) ======
On the Suns, at least under Solaris 2.4, you could look at
-xO[1|2|3|4|5]
    5         Generate the highest level of optimization.
              Use optimization algorithms that take more
              compilation time or that do not have as high
              a certainty of improving execution time.
              Optimization at this level is more likely to
              improve performance if it is done with
              profile feedback.
-xprofile=p
    (Solaris 2.x) Collect data for a profile or use a
    profile to optimize.
    collect   Collect and save execution frequency data for
              later use by the optimizer.
Hope this helps,
========== ale@hal.com (Dale Johannesen) ========
Wow, this coding standard almost makes me think longjmp is a good idea.
This sort of code responds well to profile-based optimization. The
idea is that you compile and run your program once, and collect
information about which code is executed most often. Then you
compile it again and the compiler uses the information when doing
its optimizations. One of the things it normally does is make better
branch predictions.
I think that recent releases from at least two of the vendors you
mention support such a mechanism, but you'll have to look through your
documentation to figure out how to do it.
============ Terry_Madsen@mindlink.bc.ca (Terry Madsen) ========
sethml@dice.ugcs.caltech.edu (Seth M. LaForge) writes:
> ... HP's compiler will do something better: profile-based
> optimization.
Unfortunately there are many compilers which do not do this. M$ products
on x86 DOS/Windows, for example, and Mr Ringrose's, from the sound of his
posting. Anyway, for data-processing code of the "2 megabytes and no
loops" sort, this is at best a very time-consuming process: build one
night, run all the profiling seed code, then build it again. This is
assuming that the seed code can adequately exercise all the code of
interest enough times to make the profiler (if it exists) notice something
it considers significant.
Here's another problem: what if, late in a project, a program change causes
the usage pattern to change, and with it a DLL that you didn't touch at
the source level? Many places don't test source code; they test and
*approve* object (executable) code: a changed object is an untested one.
Putting object code optimization out of the programmer's reach, with the
risk of something changing two days before ship, won't fly; given the
choice between profiling-based optimization and none at all, I'd choose the
latter for this reason alone.
Most importantly, regardless of the environment, I fail to see why being
able to profile code and feed the results back to a second compile is a
"better" way to tell the compiler something that the programmer knows in
the first place. This has been a bit of a peeve of mine: the claim that
profilers know which branches are taken better than the programmers who
wrote the code. Even if this is the case for "algorithmic" branches,
profiling makes for a clumsy way to build a large product, and leaves the
error-checking branches no better than if they'd been explicitly specified
in the source. For a lot of "real-world" applications (at least the
consumer market ones), specifying which way the code is expected to go
could be good enough to eliminate the need (if not the desire) to do
profile optimizing at all.
Since many compilers already claim to have internal heuristics to decide
whether a branch should be compiled as faster-taken or faster-not-taken (on
machines where it makes a difference), it seems as if a user directive
could be made an additional (overriding) parameter to this decision
process. For reasons such as the product-control issue noted above, it may
be at least as useful to specify "optimize it for this condition" as to
specify a trace and allow automatic optimization.
========= cliffc@ami.sps.mot.com (Cliff Click) ========
About profile-based optimization:
In general I agree with you, but I think you're missing some points.
Terry_Madsen@mindlink.bc.ca (Terry Madsen) writes:
> Anyway, for data-processing code of the "2 megabytes and no loops" sort,
> this is at best a very time-consuming process: build one night, run all
> the profiling seed code, then build it again.
Yes, profiling is slower & more complex.
> This is assuming that the seed code can adequately exercise all the code
> of interest enough times to make the profiler (if it exists) notice
> something it considers significant.
If the seed data runs for very long and doesn't touch some code, then
that code isn't time-consuming in the final product (or you've got lousy
seed data!), and feedback optimization isn't important to it.
> What if ... a program change causes the usage pattern to change, and [an
> object] to change, that you didn't touch at the source level? Many
> places ... test and *approve* object (executable) code: a changed object
> is an untested one.
Don't recompile the object because the profile data changes, recompile only
when the _source_ changes. Use the profile approach at the bitter end: one
run (no profiling optimizations) to profile, then recompile, then test. If
testing shows a "profile only" bug, you can ship the non-profiled code
(with a perhaps lurking bug!) or debug the profiled version. It's a
software engineering issue, not a profile-based optimization issue: your
software process should handle this.
> I fail to see why being able to profile code and feed the results back to
> a second compile is a "better" way to tell the compiler something that
> the programmer knows in the first place.
It's not a "better" way: it's another way.
(1) Not all branches, especially in an optimizing compiler, are explicit
in the source code. These branches CANNOT be human annotated.
(2) Humans generally pick only SOME branches to announce frequency on;
many they don't care about or don't know enough to choose.
(3) Humans sometimes (not always!) are wrong about frequency choices.
(4) More common than (3), humans are often _unaware_ of which branches
are frequently executed and have predictable direction.
On the other hand, seed data can be bad and "mispredict" branches in
real codes.
What really happens is that the compile/profile/compile cycle is a pain,
and it really isn't done except by the very few who have critical
performance needs, and by benchmark bashers.
--