Register Based Interpreter Model firstname.lastname@example.org (Avatar) (2005-11-01)
Re: Register Based Interpreter Model email@example.com (2005-11-02)
JIT-compiling bytecode firstname.lastname@example.org (Lauri Alanko) (2005-11-04)
Re: JIT-compiling bytecode email@example.com (Eliot Miranda) (2005-11-08)

From: Eliot Miranda <firstname.lastname@example.org>
Date: 8 Nov 2005 23:40:41 -0500
References: 05-11-016 05-11-025 05-11-040
Posted-Date: 08 Nov 2005 23:40:41 EST
Lauri Alanko wrote:
>>Some authors argue that a register machine is more complicated but a
>>better starting point for jitting.
> The idea of jitting _bytecode_ seems rather strange to me in the first
> place. Bytecode is a _target_ language, after all, designed to be
> efficiently interpreted by a VM with only straightforward
> modifications (threading, symbol resolution), but no real syntactic
> analysis. It is not a particularly good intermediate language from
> which to translate to native machine code. For example, bytecode fixes
> the evaluation order completely, so a jit-compiler needs to go through
> hoops to reorder the computations optimally for the target platform.
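The fixed-evaluation-order point can be seen concretely in any stack bytecode; Python's `dis` module is used here purely as a convenient illustration, not as the system under discussion:

```python
import dis

# The compiler emits one fixed sequence: load a, load b, multiply,
# load c, load d, multiply, add. The two products are independent,
# but a JIT starting from this bytecode must rediscover that fact
# before it can reorder them.
def f(a, b, c, d):
    return a * b + c * d

for ins in dis.get_instructions(f):
    print(ins.opname, ins.argval)
```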
It's likely that if your performance concerns are such that you need
to pay attention to processor-specific instruction-ordering issues
then you wouldn't be using a jitted language implementation anyway.
Modern out-of-order processors do the reordering themselves, making
effort spent on determining the best static order an increasingly
dubious investment.
But one really important factor in many bytecoded language
implementations is the requirement for low pause times and interactive
performance. In such systems the last thing you can afford is having
a low-level optimizer spend lots of effort trying to perform low-level
optimizations. Far better to have the smarts at a higher-level in the
system, and have the optimizer target bytecode. One then gets a nice
trade-off in portability. One concentrates on high-level
optimizations (inlining, closure elimination, unboxing, (virtual)
register allocation) and rely on a relatively naive
bytecode-to-native-code generator to generate acceptably fast code
with little effort, and little latency.
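A "relatively naive bytecode-to-native-code generator" of the kind meant here can be as simple as template expansion: each bytecode maps to a fixed instruction sequence, with no scheduling or peephole work at all. A minimal sketch (the opcode names and x86-64 templates are made up for illustration):

```python
# Hypothetical stack-machine opcodes mapped to fixed x86-64 snippets.
# No register allocation, no reordering: one bytecode, one template.
TEMPLATES = {
    "push": "    pushq ${arg}",
    "add":  "    popq %rax\n    addq %rax, (%rsp)",
    "mul":  "    popq %rax\n    imulq (%rsp), %rax\n    movq %rax, (%rsp)",
}

def translate(bytecode):
    out = []
    for op, *args in bytecode:
        out.append(TEMPLATES[op].format(arg=args[0] if args else ""))
    return "\n".join(out)

# (2 * 3) + 4, as a stack program
asm = translate([("push", 2), ("push", 3), ("mul",), ("push", 4), ("add",)])
print(asm)
```

Such a translator is fast and has almost no latency, which is exactly the property wanted when low pause times matter more than peak code quality.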
One advantage is that bytecode, unlike native code, can persist
between system runs. So one can pre-optimize a system such that it
starts up "hot". Current JIT architectures need time to identify and
optimize hot code, giving slow startup while the system warms up.
Another is that performance-critical but infrequent code (e.g.
finalization processing) can be optimized with the right system
architecture. If optimization is pushed down into the VM, it is
extremely unlikely ever to consider infrequently run code worth
optimizing, since it chooses what to optimize based on simple
metrics. But a higher-level optimizer can use additional criteria,
such as process priorities or programmer annotations, to choose what
to optimize.
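The contrast between the two policies can be sketched in a few lines (the threshold, names, and annotation mechanism here are all hypothetical):

```python
# A VM-level policy sees only a simple metric such as an invocation
# counter; a higher-level policy can also honour a programmer
# annotation marking rarely run but performance-critical code
# (e.g. finalization processing).
HOT_THRESHOLD = 1000

def vm_policy(invocations):
    # The VM's view: simple counter metric only.
    return invocations >= HOT_THRESHOLD

def high_level_policy(invocations, annotated_critical=False):
    # Higher level: an annotation can force optimization of code
    # that a counter alone would never select.
    return annotated_critical or invocations >= HOT_THRESHOLD
```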
> To my mind, if a VM is supposed to support jitting, the code should be
> stored in some higher-level format, which is then jitted at run-time
> to a target language (native code or interpreted bytecode).
Eliot Miranda Smalltalk - Scene not herd