Code selection trade-offs firstname.lastname@example.org (Tim Frink) (2008-05-12)
Re: Code selection trade-offs email@example.com (Björn Franke) (2008-05-15)
Re: Code selection trade-offs firstname.lastname@example.org (IndianTechie) (2008-05-23)
Re: Code selection trade-offs email@example.com (Walter Banks) (2008-05-24)
From: Tim Frink <firstname.lastname@example.org>
Date: 12 May 2008 19:37:20 GMT
Posted-Date: 12 May 2008 21:22:56 EDT
Do you know of any work where profiling information or static program
analysis is used as a heuristic to control code selection, i.e. where
the (profiled) execution counts of particular code structures
influence the choice of instructions used to translate the source
code into assembly?
I want to leave open how the execution counts are obtained. This
might happen by translating the source code into machine code,
performing profiling, and then mapping the profiling data back onto
the source code. Alternatively, a static program analysis could be
performed to estimate the execution frequency of the high-level code
constructs.
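To make the "map profiling data back onto the source" step concrete, here is a minimal sketch of what I have in mind: machine-level execution counts keyed by instruction address are aggregated onto source lines through a line table (as a compiler's debug info would provide). All names, addresses, and counts below are illustrative, not taken from any real toolchain.

```python
# Sketch: fold per-address execution counts back onto source lines.
# addr_counts: {instruction address: times executed}
# line_table:  {instruction address: source line number} (debug info)
def counts_per_source_line(addr_counts, line_table):
    per_line = {}
    for addr, n in addr_counts.items():
        line = line_table.get(addr)
        if line is not None:
            per_line[line] = per_line.get(line, 0) + n
    return per_line

# Hypothetical profile: one cold setup instruction, two hot loop-body
# instructions that both map to source line 7.
addr_counts = {0x100: 1, 0x104: 100000, 0x108: 100000}
line_table  = {0x100: 3, 0x104: 7, 0x108: 7}
print(counts_per_source_line(addr_counts, line_table))
# {3: 1, 7: 200000}
```

The same aggregation would work with basic-block identifiers instead of addresses, if the profiler reports counts at that granularity.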
I'm curious whether you can imagine a case where this would make
sense. I assume it must come down to a trade-off where, for the
frequently executed code sections, you produce either slower but
smaller code, or specialized code that increases the code size but
leads to a shorter overall execution time.
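To illustrate the trade-off I mean, here is a toy sketch of a code selector: for each block it chooses between a compact instruction sequence (say, a call into a shared helper) and a larger but faster specialized sequence, using the profiled execution count as the heuristic. Everything here (the classes, the threshold, the size/cycle numbers) is made up for illustration and not any real compiler's model.

```python
# Toy profile-guided code selection: pick the fast-but-large sequence
# only for blocks that the profile says are executed often enough.
from dataclasses import dataclass

@dataclass
class Seq:
    size: int    # bytes of emitted machine code
    cycles: int  # cost per execution

@dataclass
class Block:
    name: str
    count: int   # profiled execution count
    small: Seq   # compact sequence (e.g. a helper call)
    fast: Seq    # specialized sequence (e.g. inlined/unrolled)

def select(blocks, hot_threshold):
    """Choose a sequence per block based on its execution count."""
    return {b.name: ("fast" if b.count >= hot_threshold else "small")
            for b in blocks}

blocks = [
    Block("init", count=1,      small=Seq(8, 20), fast=Seq(32, 5)),
    Block("loop", count=100000, small=Seq(8, 20), fast=Seq(32, 5)),
]
print(select(blocks, hot_threshold=1000))
# {'init': 'small', 'loop': 'fast'}
```

A real code selector would of course weigh estimated total cycles saved against bytes added rather than use a bare threshold, but the shape of the decision is the same.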