Related articles:
  Native/VM languages      borophyll@gmail.com (2008-08-25)
  Re: Native/VM languages  marcov@stack.nl (Marco van de Voort) (2008-08-27)
  Re: Native/VM languages  ldv@mail.com (ldv@mail.com) (2008-08-27)
  Re: Native/VM languages  jeremy.wright@microfocus.com (Jeremy Wright) (2008-08-28)
  Re: Native/VM languages  gah@ugcs.caltech.edu (glen herrmannsfeldt) (2008-08-28)
  Re: Native/VM languages  torbenm@pc-003.diku.dk (2008-08-29)
  Re: Native/VM languages  cr88192@hotmail.com (cr88192) (2008-08-30)
  Re: Native/VM languages  gah@ugcs.caltech.edu (glen herrmannsfeldt) (2008-09-03)
From: "cr88192" <cr88192@hotmail.com>
Newsgroups: comp.compilers
Date: Sat, 30 Aug 2008 15:04:10 +1000
Organization: Saipan Datacom
References: 08-08-070 08-08-089 08-08-096
Keywords: optimize
Posted-Date: 30 Aug 2008 16:35:24 EDT
"Torben "Fgidius" Mogensen" <torbenm@pc-003.diku.dk> wrote in message
> Jeremy Wright <jeremy.wright@microfocus.com> writes:
>
>> 2. Profile-directed feedback is a very powerful optimisation. Dynamic
>> compilation does it automatically
>
> I can't see this being "automatic" in any normal sense of the word.
> It requires an effort to collect and analyse profile data and a good
> deal of insight to exploit it wisely.
>
>> and does it particularly well because
>> the data set used for the profiling is the live run. The standard
>> "compile ; run collecting data; recompile with PDF" cycle can suffer
>> from artefacts in the data set used to "train" the PDF.
>
> So can dynamic run-time profiling: A program does not have a constant
> load during its runtime, so information you collect during the first
> half of the execution may be completely wrong in the second half. The
> problem is that predicting future behaviour from past behaviour is not
> always easy (and certainly not perfect). In a sense, information
> collected during the same run is late: It only talks about the past,
> and you want to compile for the future. Profiling complete executions
> lets you analyse how usage changes over the execution time of the
> program and (in theory) use this to make several variants of the code
> for different phases of execution and know when to switch to new
> versions.
>
> But all this is about how well you can do in the limit, and that isn't
> really interesting. What you want is to know how you get the most
> optimisation with a given effort. And I doubt dynamic profile
> gathering is the best approach for this.
Just me wondering here:
How much would all this offer over much simpler optimizations, such as
just detecting general architecture features, running comparative
micro-benchmarks (mostly between multiple possible ways of compiling
things, ...), and then dynamically adjusting the low-level code
generation as a result?
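To make the feature-detection step concrete, here is a minimal sketch
assuming GCC or Clang on x86 (<cpuid.h> and its bit_* constants are
compiler-provided; which features are worth probing is entirely up to
the code generator, SSE2/SSE4.1 here are just examples):

/* Sketch: probe host CPU features once at startup (GCC/Clang, x86). */
#include <cpuid.h>
#include <stdio.h>

static int has_sse2, has_sse41;

static void probe_cpu(void)
{
    unsigned eax, ebx, ecx, edx;
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        has_sse2  = (edx & bit_SSE2)   != 0;  /* leaf 1, EDX bit 26 */
        has_sse41 = (ecx & bit_SSE4_1) != 0;  /* leaf 1, ECX bit 19 */
    }
}

int main(void)
{
    probe_cpu();
    printf("sse2=%d sse4.1=%d\n", has_sse2, has_sse41);
    return 0;
}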
An example would be that the compiler could have several possible code
sequences for operations like dot-product, cross-product, and so on. It
can first see which ones will work on a given processor ("does the
processor have a dot-product instruction?", ...), and then run a few
benchmarks ("is it faster to use x87 or SSE for this?", "is instruction
sequence A or B faster?", ...). Potentially it could experiment with a
few other things as well, such as register-allocation algorithms,
register-vs-memory performance, ...
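And a crude sketch of the benchmark-and-dispatch part (all names here
are mine, invented for illustration: the "alternative sequence" is just
a 4-way unrolled loop standing in for an SSE variant, and clock() is a
coarse stand-in for whatever fine-grained timer a real runtime would
use):

/* Sketch: pick the faster of two dot-product variants at startup and
   dispatch through a function pointer thereafter. */
#include <stdio.h>
#include <time.h>

#define N 4096
static float a[N], b[N];

static float dot_scalar(const float *x, const float *y, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += x[i] * y[i];
    return s;
}

static float dot_unrolled(const float *x, const float *y, int n)
{
    /* Stand-in for an SSE/alternative code sequence: 4-way unrolled. */
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        s0 += x[i]   * y[i];
        s1 += x[i+1] * y[i+1];
        s2 += x[i+2] * y[i+2];
        s3 += x[i+3] * y[i+3];
    }
    for (; i < n; i++)
        s0 += x[i] * y[i];
    return s0 + s1 + s2 + s3;
}

static float (*dot)(const float *, const float *, int);

static double time_variant(float (*f)(const float *, const float *, int))
{
    volatile float sink = 0;  /* keep the calls from being elided */
    clock_t t0 = clock();
    for (int r = 0; r < 10000; r++)
        sink += f(a, b, N);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    for (int i = 0; i < N; i++) { a[i] = i * 0.5f; b[i] = i * 0.25f; }

    /* Install whichever variant benchmarked faster on this machine. */
    dot = time_variant(dot_scalar) <= time_variant(dot_unrolled)
            ? dot_scalar : dot_unrolled;

    printf("dot(a,b) = %f\n", dot(a, b, N));
    return 0;
}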
I think this would work fairly well for "most" cases, and my personal
suspicion is that full-on profiler-based tweaking is unlikely to deliver
enough of an improvement over this (5-10%?...) to be a major practical
concern.
However, all this is still much more likely to deliver better
performance than purely static optimization (in particular when dealing
with the gain or loss of performance-impacting features, ...).
However, as noted, this would likely require either full run-time
compilation, or distributing programs as some kind of bytecode or other
IL.