Related articles
Validating performance optimizations  ym@dodgeit.com (2008-01-21)
Re: Validating performance optimizations  egor.pasko@gmail.com (2008-01-27)
From: ym@dodgeit.com
Newsgroups: comp.compilers
Date: Mon, 21 Jan 2008 14:52:36 -0800 (PST)
Organization: Compilers Central
Keywords: optimize, testing, question
Posted-Date: 21 Jan 2008 23:49:20 EST
I've been thinking lately about how a compiler's performance
optimizations could be tested. Unlike functional testing, it doesn't
seem as straightforward...
Do you simply ask the compiler to attempt to apply optimization X and
test for it? Is there any value in exercising the code that implements
X in isolation, given that in non-trivial code it is usually the output
of some earlier transformation that ends up breaking a later one?
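
To make the first option concrete, here is roughly the kind of check I
have in mind, as a minimal sketch: compile a tiny kernel and
pattern-match the emitted assembly for evidence that the optimization
fired. The toolchain (gcc on x86, AT&T syntax), the Python harness, and
the patterns are all assumptions on my part, not anyone's actual test
suite:

# A minimal sketch of a "did optimization X fire?" test: compile a tiny
# kernel and pattern-match the emitted assembly.  The toolchain (gcc on
# x86, AT&T syntax) and the patterns below are assumptions.
import os
import re
import subprocess
import tempfile

SRC = "int f(void) { int a = 6; int b = 7; return a * b; }\n"

def compile_to_asm(src, flags=("-O2",)):
    """Compile a C snippet and return the generated assembly as text."""
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as tmp:
        tmp.write(src)
        path = tmp.name
    try:
        # -S asks for assembly; -o - sends it to stdout for inspection.
        result = subprocess.run(["gcc", "-S", "-o", "-", *flags, path],
                                capture_output=True, text=True, check=True)
        return result.stdout
    finally:
        os.unlink(path)

if __name__ == "__main__":
    asm = compile_to_asm(SRC)
    # If constant folding fired, 42 shows up as an immediate and no
    # multiply instruction survives.
    assert re.search(r"\$42\b", asm), "expected the folded constant 42"
    assert not re.search(r"\bimull?\b", asm), "multiply should have been folded"
    print("constant folding appears to have fired")

It is brittle, of course (instruction selection and syntax differences
break the patterns), but it does exercise the whole pipeline rather
than one pass in isolation, which seems closer to what actually breaks
in real code.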
Or do we, for a given test program, work out what the optimal output
should be, run the compiler at maximum optimization, and then check how
close it comes? How do we define optimal output? Are we looking for
(close to) optimally scheduled instructions (not an easy thing to
determine, I know), or do we define optimal in terms of higher-level
constructs such as inlining, loop unrolling, common subexpression
elimination, constant propagation, loop jamming and distribution, etc.?
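
The closest cheap approximation to "distance from optimal output" I can
think of is a hand-derived instruction budget per test. A rough sketch,
reusing the compile_to_asm helper above; the kernels and budgets are
invented purely for illustration:

# A crude stand-in for "distance from optimal output": for each kernel we
# record a hand-derived instruction budget and report how far over it the
# compiler lands at maximum optimization.  Kernels and budgets are made up;
# compile_to_asm is the helper from the previous sketch.
def count_insns(asm):
    """Count non-directive, non-label, non-comment lines as instructions."""
    count = 0
    for raw in asm.splitlines():
        line = raw.strip()
        if (line
                and not line.startswith(".")
                and not line.startswith("#")
                and not line.endswith(":")):
            count += 1
    return count

KERNELS = [
    # (name, source, hand-derived instruction budget)
    ("fold", "int f(void) { return 6 * 7; }", 2),
    ("cse",  "int g(int x) { return (x * x) + (x * x); }", 4),
]

if __name__ == "__main__":
    for name, src, budget in KERNELS:
        insns = count_insns(compile_to_asm(src))
        verdict = "ok" if insns <= budget else "over budget"
        print(f"{name}: {insns} instructions (budget {budget}) -> {verdict}")

A static count like this ignores scheduling and everything dynamic, so
it is a proxy at best; defining optimal in terms of the higher-level
constructs probably still means checking for each of them separately.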
Then comes the problem of actually testing for some of these things.
It seems like tedious work to say the least... identifying which
instructions correspond to which source-level constructs (in the
absence of accurate listing files), following branches if necessary,
etc.
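
One thing that might take some of the tedium out of it, in the absence
of listing files: with -g the compiler already annotates its assembly
output with source line information (gcc emits .loc directives), so a
script can at least group instructions by the source line they came
from before anyone starts following branches by hand. Another rough
sketch under the same assumptions as above:

# A sketch of mapping instructions back to source lines without a listing
# file, by reading the .loc directives gcc emits in its assembly when -g
# is given.  Uses compile_to_asm from the first sketch; everything else
# here is an assumption for illustration.
import re
from collections import defaultdict

def insns_by_source_line(asm):
    """Group instructions under the line number of the most recent .loc."""
    groups = defaultdict(list)
    current_line = None
    for raw in asm.splitlines():
        line = raw.strip()
        loc = re.match(r"\.loc\s+\d+\s+(\d+)", line)
        if loc:
            current_line = int(loc.group(1))
        elif (line and not line.startswith(".")
                  and not line.startswith("#")
                  and not line.endswith(":")):
            groups[current_line].append(line)
    return groups

if __name__ == "__main__":
    asm = compile_to_asm("int h(int x) { int y = x + 1; return y * y; }",
                         flags=("-O2", "-g"))
    groups = insns_by_source_line(asm)
    for lineno in sorted(k for k in groups if k is not None):
        print(f"source line {lineno}:")
        for insn in groups[lineno]:
            print(f"    {insn}")

The mapping gets fuzzy as soon as instructions are hoisted, sunk, or
merged across lines, which is presumably where the real tedium starts.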
Does anyone have any thoughts on this?