Related articles
Register Based Interpreter Model acampbellb@hotmail.com (Avatar) (2005-11-01)
Re: Register Based Interpreter Model christian.mueller@aktivanet.de (2005-11-02)
JIT-compiling bytecode la@iki.fi (Lauri Alanko) (2005-11-04)
Re: JIT-compiling bytecode eliotm@pacbell.net (Eliot Miranda) (2005-11-08)
From: Eliot Miranda <eliotm@pacbell.net>
Newsgroups: comp.compilers
Date: 8 Nov 2005 23:40:41 -0500
Organization: SBC http://yahoo.sbc.com
References: 05-11-016 05-11-025 05-11-040
Keywords: interpreter, performance
Posted-Date: 08 Nov 2005 23:40:41 EST
Lauri Alanko wrote:
> <christian.mueller@aktivanet.de> wrote:
>>Some authors argue that a register machine is more complicated but a
>>better starting point for jitting.
>
>
> The idea of jitting _bytecode_ seems rather strange to me in the first
> place. Bytecode is a _target_ language, after all, designed to be
> efficiently interpreted by a VM with only straightforward
> modifications (threading, symbol resolution), but no real syntactic
> analysis. It is not a particularly good intermediate language from
> which to translate to native machine code. For example, bytecode fixes
> the evaluation order completely, so a jit-compiler needs to go through
> hoops to reorder the computations optimally for the target platform.
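(For concreteness, the quoted "fixed evaluation order" point can be seen
with a minimal sketch using Python's dis module; Python here is only a
stand-in for stack bytecode in general, not an example from either
poster.)

    # Minimal sketch: stack bytecode pins down evaluation order. The
    # compiler has already decided that f(), g() and h() are called in
    # exactly this sequence, and a JIT sees only that linearised form.
    import dis

    def expr(f, g, h):
        return f() + g() * h()

    dis.dis(expr)
    # The disassembly lists the calls to f, g and h in that fixed order;
    # a native-code generator that wants a different schedule must first
    # prove the reordering safe by recovering the expression structure.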
It's likely that if your performance concerns are such that you need
to pay attention to processor-specific instruction-ordering issues
then you wouldn't be using a jitted language implementation anyway.
Modern out-of-order processors reorder instructions at run time
anyway, which makes expending compile-time effort on finding the best
order an increasingly dubious proposition.
But one really important factor in many bytecoded language
implementations is the requirement for low pause times and interactive
performance. In such systems the last thing you can afford is having
a low-level optimizer spend lots of effort trying to perform low-level
optimizations. Far better to have the smarts at a higher level in the
system, and have the optimizer target bytecode. One then gets a nice
trade-off in portability: one concentrates on high-level
optimizations (inlining, closure elimination, unboxing, (virtual)
register allocation) and relies on a relatively naive
bytecode-to-native-code generator to produce acceptably fast code
with little effort and little latency.
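(To sketch what "relatively naive" can mean here, a toy in Python with
invented opcodes and templates, not any particular VM's scheme: each
bytecode simply expands into a fixed instruction template, and all the
cleverness stays in whatever produced the bytecode.)

    # Toy template-expanding code generator: one fixed template per
    # bytecode, no scheduling, no low-level register allocation.
    TEMPLATES = {
        "PUSH": lambda arg: [f"mov  eax, {arg}", "push eax"],
        "MUL":  lambda arg: ["pop  ebx", "pop  eax", "imul eax, ebx", "push eax"],
        "ADD":  lambda arg: ["pop  ebx", "pop  eax", "add  eax, ebx", "push eax"],
        "RET":  lambda arg: ["pop  eax", "ret"],
    }

    def translate(bytecode):
        """Expand each (opcode, argument) pair into its template."""
        native = []
        for opcode, arg in bytecode:
            native.extend(TEMPLATES[opcode](arg))
        return native

    # "2 + 3 * 4", emitted in exactly the order the bytecode dictates.
    program = [("PUSH", 2), ("PUSH", 3), ("PUSH", 4),
               ("MUL", None), ("ADD", None), ("RET", None)]
    print("\n".join(translate(program)))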
One advantage is that the bytecode can persist between system runs,
unlike native code. So one can pre-optimize a system such that it
starts up "hot". Current JIT architectures require some time to
optimize code, giving slow startup and slow warm-up.
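(A small sketch of the persistence point, again with Python standing in
for a bytecoded system: compile once, write the code object out, and
later runs load the already-compiled form instead of recompiling.)

    # Sketch: bytecode persists between runs; JIT-generated native
    # code normally does not.
    import marshal

    source = "def hot_path(x):\n    return x * x + 1\n"
    code = compile(source, "hot_path.py", "exec")

    with open("hot_path.bin", "wb") as f:
        marshal.dump(code, f)        # saved across system runs

    with open("hot_path.bin", "rb") as f:
        restored = marshal.load(f)   # no recompilation at startup

    ns = {}
    exec(restored, ns)
    print(ns["hot_path"](7))         # prints 50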
Another is that performance-critical but infrequent code (e.g.
finalization processing) can be optimized with the right system
architecture. If optimization is pushed down into the VM, it's
extremely unlikely ever to consider infrequently run code worth
optimizing, as it chooses what code to optimize based on simple
metrics. But a higher-level optimizer can use additional criteria,
such as process priorities or programmer annotations, to choose what
to optimize.
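(A hypothetical sketch of that last point, with every name invented for
illustration: an in-VM policy that only counts invocations will never
pick the finalizer, while a higher-level policy that also sees
annotations or process priority can.)

    # In-VM heuristic: optimize only what runs often.
    def vm_policy(method):
        return method["invocations"] > 10_000

    # Higher-level policy: may also honour annotations and scheduling
    # information, catching rarely run but latency-critical code.
    def high_level_policy(method, process_priority):
        return (method["invocations"] > 10_000
                or method.get("annotated_hot", False)
                or process_priority == "realtime")

    finalizer = {"invocations": 12, "annotated_hot": True}
    print(vm_policy(finalizer))                    # False: never optimized
    print(high_level_policy(finalizer, "normal"))  # True: chosen up front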
> To my mind, if a VM is supposed to support jitting, the code should be
> stored in some higher-level format, which is then jitted at run-time
> to a target language (native code or interpreted bytecode).
--
_______________,,,^..^,,,____________________________
Eliot Miranda Smalltalk - Scene not herd