|Native/VM languages firstname.lastname@example.org (2008-08-25)|
|Re: Native/VM languages email@example.com (Marco van de Voort) (2008-08-27)|
|Re: Native/VM languages firstname.lastname@example.org (email@example.com) (2008-08-27)|
|Re: Native/VM languages firstname.lastname@example.org (Jeremy Wright) (2008-08-28)|
|Re: Native/VM languages email@example.com (glen herrmannsfeldt) (2008-08-28)|
|Re: Native/VM languages firstname.lastname@example.org (2008-08-29)|
|Re: optimizing with feedback, was Native/VM languages email@example.com (Jeremy Wright) (2008-08-29)|
|Re: Native/VM languages firstname.lastname@example.org (cr88192) (2008-08-30)|
|Re: Native/VM languages email@example.com (glen herrmannsfeldt) (2008-09-03)|
|From:||glen herrmannsfeldt <firstname.lastname@example.org>|
|Date:||Thu, 28 Aug 2008 19:54:17 -0800|
|Keywords:||code, optimize, comment|
|Posted-Date:||28 Aug 2008 23:50:46 EDT|
Jeremy Wright wrote:
> 1. dynamic compilers can generate code for precisely the chipset
> being used. Most compilers generate code aimed at the common subset of
> slightly different models, with scheduling that is hopefully good on
> all, but not necessarily optimal on any.
I have thought about this one for a while.
The great invention of IBM's S/360 was an architecture that stayed
consistent over a wide range of speeds and memory sizes. Continuing
through z/Architecture, it has done amazingly well.
But RISC, and even more so VLIW, relies on compilers generating code
tuned to the specific implementation, negating the advantage of a
common architecture. Distributing source and expecting each user to
compile it is not reasonable.
It seems that it should be possible to use an intermediate code,
specific to each overall architecture but not specialized for the
individual implementation. It could then be used to generate the
optimal code for the specific processor at install time, or possibly
at run time. The intermediate form would not be quite as universal
as, for example, JVM bytecode, but would still allow for good code as
features are added later to an individual sub-architecture.
> 2. profile directed feedback is a very powerful optimisation. Dynamic
> compilation does this automatically and does it particularly well
> because the data set used for the profiling is the live run. The
> standard "compile; run collecting data; recompile with PDF" cycle can
> suffer from artefacts in the data set used to "train" the PDF.
How about instead the ability to save profile information after
running a program, or cumulatively after multiple runs, and then use
that for a static recompilation?
[The intermediate code plan sounds a lot like the S/38 and AS/400.]