Related articles
Reordering of functions plfriko@yahoo.de (Tim Frink) (2008-02-18)
Re: Reordering of functions nkim@odmsemi.com (Nikolai Kim) (2008-02-18)
Re: Reordering of functions bisqwit@iki.fi (Joel Yliluoma) (2008-02-19)
Re: Reordering of functions cfc@shell01.TheWorld.com (Chris F Clark) (2008-02-19)
Re: Reordering of functions gah@ugcs.caltech.edu (glen herrmannsfeldt) (2008-02-20)
Re: Reordering of functions plfriko@yahoo.de (Tim Frink) (2008-02-21)
Re: Reordering of functions plfriko@yahoo.de (Tim Frink) (2008-02-21)
Re: Reordering of functions plfriko@yahoo.de (Tim Frink) (2008-02-21)
Re: Reordering of functions gah@ugcs.caltech.edu (glen herrmannsfeldt) (2008-02-24)
Re: Reordering of functions cfc@shell01.TheWorld.com (Chris F Clark) (2008-02-24)
Re: Reordering of functions Jan.Vorbrueggen@thomson.net (Jan Vorbrüggen) (2008-02-25)
Re: Reordering of functions gneuner2@comcast.net (George Neuner) (2008-02-25)
From: Tim Frink <plfriko@yahoo.de>
Newsgroups: comp.compilers
Date: Thu, 21 Feb 2008 09:36:08 +0100
Organization: CS Department, University of Dortmund, Germany
References: 08-02-051 08-02-055
Keywords: optimize
Posted-Date: 24 Feb 2008 00:38:41 EST
> Maybe. You've set up a straw man where you have tried to deny all
> ways that changing the order of function calls might affect
> performance and then asked whether performance will remain unchanged.
> However, even in your denial you have left holes open: what about
> machines that execute a limited number of instructions out of order
> and use register renaming? That's a cache-like effect (just like
> prefetching and branch prediction are). Reordering the function
> calls can influence all of those.
I agree with you on that. That's obvious. But I was actually talking
about the rearrangement of functions in memory while the order of
function invocations remains the same.
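To make that concrete, here is a small C sketch (my own illustration, not
taken from the earlier posts): with GCC you can compile with
-ffunction-sections, or give the functions explicit section names as below,
and then let the linker (through a linker script or section sorting) place
the two function bodies in either order in the final image. The call
sequence of the program is unaffected by that placement; only addresses,
alignment and cache behaviour change. The section names are illustrative.

/* Two functions whose machine code can be moved around in the image
 * without touching the call sequence.  A linker script that collects
 * .text.first and .text.second in a chosen order decides the layout. */

#include <stdio.h>

__attribute__((section(".text.first")))
int first(int x)  { return x + 1; }

__attribute__((section(".text.second")))
int second(int x) { return first(x) * 2; }

int main(void)
{
    /* The invocation order is always second() -> first(),
     * regardless of where the linker places the two bodies. */
    printf("%d\n", second(20));
    return 0;
}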
> Finally, some of the effects are hard to quantify and/or predict. When I
> was working on the Alpha optimizer, there were some optimizations we
> specifically turned on (and off) in specific orders because they had an
> unfortunate interaction on one of the benchmarks we were measuring
> against. The right combination of optimizations yielded a fortuitous
> alignment of a critical loop, which was lost if we did (a little) more
> optimization until we got significantly more optimizations enabled and
> could remove enough code that the loop was always aligned.
Finding the right combination of optimizations that yields good
results for a set of different programs seems to be an unsolvable
problem. How do compiler designers decide which combinations/sequences
of optimizations are promising enough to be bundled together into an
optimization level (like gcc's -O)?
Is this a trial-and-error approach, assisted by some experience?
One "solution" I know is the ACOVEA project which uses an genetic
algorithm to find the "best" gcc options for a particular program
under test. But this is a particular case where you want to achieve
best results for one application.
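Just to sketch the idea (this is my own toy illustration, not ACOVEA's
actual code, and "testprog.c" is a placeholder name): enumerate a few extra
gcc flags on top of -O2, build the program under test with each combination,
time a run, and keep the fastest. ACOVEA replaces the exhaustive enumeration
with a genetic algorithm that mutates and recombines flag sets, but the
fitness evaluation is essentially the same.

/* Brute-force search over a handful of gcc flag combinations.
 * Each bit of 'mask' switches one extra flag on or off.
 * Assumes a POSIX system (system(), clock_gettime). */

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static const char *extra_flags[] = {
    "-funroll-loops",
    "-fomit-frame-pointer",
    "-finline-functions",
};
#define NFLAGS (sizeof extra_flags / sizeof extra_flags[0])

int main(void)
{
    char cmd[512];
    double best = 1e30;
    unsigned best_mask = 0;

    for (unsigned mask = 0; mask < (1u << NFLAGS); mask++) {
        int len = snprintf(cmd, sizeof cmd, "gcc -O2 -o testprog testprog.c");
        for (unsigned i = 0; i < NFLAGS; i++)
            if (mask & (1u << i))
                len += snprintf(cmd + len, sizeof cmd - len, " %s", extra_flags[i]);

        if (system(cmd) != 0)              /* skip combinations that fail to build */
            continue;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (system("./testprog > /dev/null") != 0)
            continue;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("mask %u: %.3f s\n", mask, secs);
        if (secs < best) { best = secs; best_mask = mask; }
    }

    printf("best mask: %u (%.3f s)\n", best_mask, best);
    return 0;
}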
Regards,
Tim