|Native/VM languages firstname.lastname@example.org (2008-08-25)|
Date: Wed, 27 Aug 2008 09:44:27 -0700 (PDT)
Posted-Date: 28 Aug 2008 10:07:49 EDT
On Aug 26, 11:40 am, boroph...@gmail.com wrote:
> While I know that it is possible for a language (such as Java) to be
> compiled both natively and to bytecode to be run on a VM, I have read
> that over time, a natively compiled Java program (AOT-compiled) will
> be less efficient than the same JIT-compiled Java program (http://
> While AOT-compiled code will have a faster start-up time and smaller
> memory footprint than JIT compiled code, once the program has been
> running for some time the JIT code will have better performance. The
> article argues that this is because the JIT compiler can optimize
> routines to a level beyond what can be achieved using static
> compilation, by using run-time knowledge.
> The gist of the article is that JITted code will have better
> performance than native code, even C/C++, but gives no figures to
> indicate how much exactly. Does anyone have any solid figures/stats
> on what percentage performance increase can be achieved by JIT code?
> I want to know whether the benefits are significant. I have yet to
> be convinced that it is worthwhile, because you have to consider the
> 'warm-up' performance degradation, which is not always acceptable.
My take on this is that a JIT compiler capable of _fully_ utilizing
the information available only on the end user system (execution
profile, cache behavior, hardware, etc.) is a very sophisticated piece
of software. Therefore, its development and maintenance require much
more engineering resources than the creation and support of a highly
optimizing AOT compiler _of the same robustness_.
Moreover, C/C++/Fortran compilers that can take an application's
execution profile as input have been around for decades. With such
information, an AOT compiler can generate versions of hot methods
optimized for specific hardware features, such as SSE2.
So I would say that the gist of the article is correct, but only in
the limiting case where:
1. The super-JIT-compiler engineering team has access to an unlimited
pool of talented compiler/runtime engineers who also happen to be
great team players
2. The performance of that super-JIT-compiler versus AOT is measured
on systems with excess CPU and memory resources
3. The startup time and initial response of the applications used for
performance testing are ignored
Now, (2) and (3) are perfectly valid for server-side (enterprise)
apps. Just look at the system configurations for the reported
SPECjbb2005 results (http://www.spec.org/jbb2005/results/). But for,
say, EEMBC Grinderbench (http://www.grinderbench.com/), which tests
Java ME CDC/CLDC performance on cellphones, PDAs, and such, the
reverse is true.