|Profile-driven optimization email@example.com (Tim Frink) (2008-01-29)|
|Re: Profile-driven optimization firstname.lastname@example.org (Gene) (2008-01-31)|
|Re: Profile-driven optimization email@example.com (=?ISO-8859-1?Q?Pertti_Kellom=E4ki?=) (2008-02-01)|
|Re: Profile-driven optimization firstname.lastname@example.org (M Wolfe) (2008-02-02)|
|Re: Profile-driven optimization email@example.com (Pasi Ojala) (2008-02-08)|
|From:||Pasi Ojala <firstname.lastname@example.org>|
|Date:||Fri, 8 Feb 2008 17:01:47 +0000 (UTC)|
|Organization:||Tampere University of Technology|
|Posted-Date:||10 Feb 2008 17:28:18 EST|
On 2008-02-01, Pertti Kellomäki <email@example.com> wrote:
> So the trick that e.g. the Multiflow compiler used is to guess based
> on the profiling information which basic blocks will usually be
> executed in sequence, and do instruction scheduling (ordering of
> instructions) on this larger region.
We have a DSP with a 3-stage pipeline, and the backend
optimizer/parallelizer/scheduler can use profiling information for
just that. For each conditional jump, either the fall-through or the
jump target is preferred when moving instructions across basic blocks.
It does help if you have representative test data.
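The idea above can be sketched in a few lines. This is a hypothetical
illustration, not our actual backend: given profiled edge counts, each
conditional branch's hotter successor is treated as the preferred
direction, and blocks are chained greedily into traces that a scheduler
could then treat as one larger region. The block names and counts are
made up.

```python
def pick_traces(succs, edge_counts, entry):
    """Greedily grow traces by always following the hotter edge.

    succs: block -> list of successor blocks
    edge_counts: (block, succ) -> profiled execution count
    entry: name of the entry block
    """
    placed = set()
    traces = []
    worklist = [entry]
    while worklist:
        b = worklist.pop(0)
        if b in placed:
            continue
        trace = []
        while b is not None and b not in placed:
            trace.append(b)
            placed.add(b)
            nxt, best = None, -1
            for s in succs.get(b, []):
                c = edge_counts.get((b, s), 0)
                worklist.append(s)      # keep the colder side for later
                if c > best:
                    best, nxt = c, s
            b = nxt                     # follow the preferred successor
        if trace:
            traces.append(trace)
    return traces

# Example CFG: A branches to B (hot) or C (cold); both rejoin at D.
succs = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
counts = {("A", "B"): 90, ("A", "C"): 10,
          ("B", "D"): 90, ("C", "D"): 10}
print(pick_traces(succs, counts, "A"))
# hot trace A-B-D comes out first; the cold block C forms its own trace
```

With representative profile data, the hot path A-B-D becomes one
scheduling region, which is exactly where cross-block instruction
motion pays off.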
The main problem is that as code modules get larger, generating
suitable profiling information gets proportionally harder.