Code selection trade-offs — comp.compilers thread:
  Tim Frink (2008-05-12)
  Re: Björn Franke (2008-05-15)
  Re: IndianTechie (2008-05-23)
  Re: Walter Banks (2008-05-24)
Date: Fri, 23 May 2008 02:47:06 -0700 (PDT)
Posted-Date: 24 May 2008 16:49:32 EDT
On May 13, 12:37 am, Tim Frink <plfr...@yahoo.de> wrote:
> Do you know of any works where profiling information/ static program
> analysis is used as a heuristic to control code selection, i.e. the
> (profiling) execution counts of particular code structures are
> influencing the choice of instructions used to translate the source
> code into assembler?
I know of things done the other way round, i.e. information collected
at execution time is used to modify the generated code so that it runs
faster.
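As an illustration of that feedback loop, here is a toy sketch (my own, not any production system; the threshold, names, and the x*8 example are all assumptions): a run-time counter that swaps in a specialized code variant once a function turns hot.

```python
# Toy sketch of run-time feedback driving code selection.
# HOT_THRESHOLD is an assumed tuning parameter.
HOT_THRESHOLD = 100

def make_profiled(generic, specialized):
    """Dispatch to `generic` until the call count crosses
    HOT_THRESHOLD, then switch permanently to `specialized`."""
    state = {"count": 0, "impl": generic}
    def wrapper(*args):
        state["count"] += 1
        if state["count"] == HOT_THRESHOLD:
            state["impl"] = specialized  # re-select code for the hot path
        return state["impl"](*args)
    return wrapper

# Example: a compact generic multiply vs. a shift for the x*8 case.
generic = lambda x: x * 8
specialized = lambda x: x << 3
mul8 = make_profiled(generic, specialized)
```

A real system would of course recompile or patch machine code rather than swap a Python closure, but the control structure is the same.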
> I want to leave the issue open how the execution counts are
> gained. This might happen by translating the source code once into
> machine code, performing profiling, and then transforming the
> profiling data back into the source code.

HP's profile-based optimization does something similar, except that it
doesn't convert anything back to source code. It just generates a
profile that is used to improve execution speed on the next run.
[This technique was disallowed when measuring the performance of HP
systems for the SPEC benchmarks.] Refer to optimization level +O4 at:-

> Or, as an alternative, a static program
> analysis could be performed to extract information about the execution
> frequency of the high-level code constructs.
This is an analysis of the frequency of machine code / three-address
form, which can be extended to the function or basic-block level. Why
would it matter whether we are dealing with basic blocks composed of
machine code or of high-level source constructs?
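For the static side, a common textbook heuristic (my example, not something from the thread; LOOP_FACTOR and the cutoff are assumed values) estimates a block's execution frequency from its loop-nesting depth and selects a code variant accordingly:

```python
# Static frequency estimate: assume each enclosing loop multiplies a
# block's execution count by LOOP_FACTOR (an assumed average trip count).
LOOP_FACTOR = 10

def estimated_frequency(nesting_depth):
    """Estimated execution count of a block at the given loop depth."""
    return LOOP_FACTOR ** nesting_depth

def select_variant(nesting_depth, hot_cutoff=100):
    """Pick the speed-optimized variant for blocks estimated hot,
    the size-optimized variant otherwise."""
    if estimated_frequency(nesting_depth) >= hot_cutoff:
        return "speed"
    return "size"
```

This works identically whether the "blocks" are machine-code basic blocks or source-level constructs, which is the point of the question above.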
> I'm curious if you can imagine a case where this would make sense. I
> assume that this must be a case for a trade-off where you produce
> either slower and smaller code for the frequently executed code
> sections or some specialized code that increases the code size but
> leads to a shorter overall code execution time.
Yes, the latter is what is done, i.e. stubs are inserted into the
generated code, and they perform work at run time. A list of related
publications can be found at:-
Trace-based optimization is the closest to what I am referring to.
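The size/speed trade-off in the original question can also be sketched as a budgeted selection (my own framing, not a quoted algorithm; the greedy strategy, names, and numbers are assumptions): spend a code-size budget upgrading the hottest blocks to the fast-but-large variant first.

```python
# Greedy sketch: given profiled execution counts, upgrade the hottest
# blocks to their fast-but-large code variant while the extra bytes
# still fit in a code-size budget.

def select_under_budget(blocks, budget):
    """blocks: list of (name, exec_count, small_size, fast_size).
    Returns a {name: 'fast' | 'small'} selection."""
    choice = {name: "small" for name, *_ in blocks}
    for name, count, small, fast in sorted(blocks, key=lambda b: -b[1]):
        extra = fast - small  # size cost of the fast variant
        if extra <= budget:
            choice[name] = "fast"
            budget -= extra
    return choice
```

With a tight budget only the most frequently executed blocks get the large variant, which matches the trade-off described above.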