Subject: Validating performance optimizations
From: firstname.lastname@example.org
Date: Mon, 21 Jan 2008 14:52:36 -0800 (PST)
Posted-Date: 21 Jan 2008 23:49:20 EST
Keywords: optimize, testing, question
Followup: Re: Validating performance optimizations, email@example.com (2008-01-27)
I've been thinking lately about how a compiler could be tested for
performance optimizations. Unlike functional testing, it doesn't seem
straightforward.
Do you simply ask the compiler to attempt to apply optimization X and
test for it? Is there any value in exercising the code that implements
X in isolation, given that it is usually the output of some earlier
transformation that breaks a later transformation in the pipeline?
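One concrete version of "ask the compiler to apply X and test for it" is what tools like LLVM's FileCheck automate: compile a tiny input at a given optimization level and pattern-match the emitted assembly for the optimization's fingerprint. A minimal sketch in Python, assuming gcc is on PATH; the flags and the `$12` pattern are illustrative, not a portable test:

```python
import os
import re
import subprocess
import tempfile

def compile_to_asm(source, flags=("-O2",)):
    """Compile a C snippet to assembly text (assumes gcc is on PATH)."""
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            ["gcc", *flags, "-S", "-o", "-", path],
            capture_output=True, text=True, check=True)
        return result.stdout
    finally:
        os.unlink(path)

def optimization_fired(asm_text, pattern):
    """Heuristic check: did the optimization leave its fingerprint?"""
    return re.search(pattern, asm_text) is not None

# Example: if constant folding fired, 3 * 4 should appear as the
# immediate $12 rather than as a multiply instruction (x86 AT&T syntax):
# asm = compile_to_asm("int f(void) { return 3 * 4; }")
# print(optimization_fired(asm, r"\$12"))
```

The weakness, of course, is that such a test asserts a fingerprint of one implementation rather than a property of the output, which is exactly the tension the question raises.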
Or, do we, for a given test program, think about the optimal output,
and let the compiler work on it at maximum opt, then check to see how
close we are to the optimal output? How do we define optimal output?
Are we looking for (close to) optimally scheduled instructions (not an
easy thing to determine, I know), or do we define optimal in terms of
higher-level constructs such as inlining, loop unrolling, common
subexpression elimination, constant propagation, loop jamming, and so on?
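For constant propagation and folding in particular, "optimal output" is easy to state: a compile-time-constant subexpression should survive as a single literal. A toy folding pass over Python's own expression ASTs shows the transformation being tested; this is purely illustrative, since a real compiler does this over its IR rather than source trees:

```python
import ast
import operator

# Map AST operator nodes to Python functions (illustrative subset).
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.FloorDiv: operator.floordiv}

class ConstantFolder(ast.NodeTransformer):
    """Fold binary operations whose operands are both literals."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            value = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

def fold_constants(source):
    tree = ast.parse(source, mode="eval")
    return ast.unparse(ConstantFolder().visit(tree))

print(fold_constants("2 * 3 + x"))  # -> 6 + x
```

A test at this level checks the property ("no foldable subexpression remains") rather than any particular instruction sequence, which sidesteps the scheduling question entirely.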
Then comes the problem of actually testing for some of these things.
It seems like tedious work to say the least... identifying which
instructions correspond to which source-level constructs (in the
absence of accurate listing files), following branches if necessary,
and so on.
Does anyone have any thoughts on this?
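On the instruction-to-source mapping problem raised above: when accurate listing files are missing, the debug line table can substitute. A hedged sketch that groups assembly lines by the `.loc` directives gcc and clang emit with `-g -S`; the parsing here is deliberately naive (labels are treated as instructions, and only the line number field is read):

```python
from collections import defaultdict

def group_by_source_line(asm_text):
    """Group assembly instructions by the source line recorded in
    the .loc directives of the debug line table (gcc/clang -g -S)."""
    groups = defaultdict(list)
    current = None
    for line in asm_text.splitlines():
        stripped = line.strip()
        if stripped.startswith(".loc"):
            # Directive shape: .loc <file-index> <line> <column> ...
            current = int(stripped.split()[2])
        elif stripped and not stripped.startswith(".") and current is not None:
            groups[current].append(stripped)
    return dict(groups)
```

Given such a grouping, one could at least count instructions attributed to a loop body before and after unrolling, though optimizations that move or merge code make the line table itself approximate.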