Requirements for Just-in-time Compilation email@example.com (jason petrone) (2001-04-22)
Re: Requirements for Just-in-time Compilation firstname.lastname@example.org (2001-04-26)
Re: Requirements for Just-in-time Compilation email@example.com (2001-04-26)
Re: Requirements for Just-in-time Compilation firstname.lastname@example.org (2001-04-26)
Re: Requirements for Just-in-time Compilation email@example.com (jason petrone) (2001-04-29)
Re: Requirements for Just-in-time Compilation Brian.Inglis@SystematicSw.ab.ca (2001-04-30)

Date: 26 Apr 2001 21:08:42 -0400
Posted-Date: 26 Apr 2001 21:08:42 EDT
Jason Petrone <firstname.lastname@example.org> wrote:
>Since speed of compilation is an issue, should the compiler also do
>machine code generation?
What is the purpose of JIT compilation: quick initial debugging (as
with an interpreted language), optimization of code for a specific
dataset (e.g., strength-reducing *2^n to <<n), or client-side
execution of a non-platform-specific program encoding?
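[A minimal sketch of the strength reduction mentioned above, in
Python for illustration; the helper name is invented. A JIT that
sees a multiply by a constant power of two can emit a shift instead:]

```python
def compile_multiply(n):
    """Return a function computing x * n, using a shift when n is a power of two."""
    if n > 0 and n & (n - 1) == 0:      # n is a power of two
        shift = n.bit_length() - 1
        return lambda x: x << shift      # *2^n --> <<n
    return lambda x: x * n               # general case: keep the multiply

times8 = compile_multiply(8)
print(times8(5))   # 40, computed with a shift rather than a multiply
```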
For the first, it seems one might want something more like an
interpreter. For the second and third, it seems one would want a
text-to-bytecode compiler that could take time to mark dependencies,
value uses [for register allocation and cache optimization],
etc., while the JIT compiler (assembler?) would translate bytecode to
machine code.
>How difficult is it to write a just-in-time compiler in comparison to
>a normal compiler?
For debugging uses, a JIT compiler should be simpler (no need to
support optimization, especially strong optimizations), though one
might want the JIT compiler to have two threads, decode and execution,
so that decoding stays active while the execution thread waits for a
function or basic block to be translated from readable text into
run-time structures, waits on I/O, or while another processor is
available.
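[A hedged sketch of that two-thread scheme: a decode thread
translates "source" blocks into runnable form and queues them, while
the execution thread runs whatever translation is ready. The block
list and names are invented for illustration:]

```python
import queue
import threading

source_blocks = ["1 + 2", "3 * 4", "10 - 5"]   # stand-in for readable text
ready = queue.Queue()

def decode():
    # Translation thread: compile each block and hand it off.
    for text in source_blocks:
        ready.put(compile(text, "<block>", "eval"))
    ready.put(None)                              # end-of-stream marker

threading.Thread(target=decode).start()

results = []
while (code := ready.get()) is not None:         # execution thread (main)
    results.append(eval(code))
print(results)   # [3, 12, 5]
```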
A JIT compiler might also be more complex in that compilation would
tend to occur in program-flow order rather than by file placement.
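[One way to picture program-flow-order compilation, sketched in
Python with invented names: each function starts as a stub that
compiles itself on first call, so translation happens in the order
the program reaches code, not in file order:]

```python
compiled_order = []   # records the order in which functions get compiled

def lazy(name, source):
    state = {}
    def stub(*args):
        if "fn" not in state:            # first call: compile now
            compiled_order.append(name)
            ns = {}
            exec(source, ns)
            state["fn"] = ns[name]
        return state["fn"](*args)
    return stub

add = lazy("add", "def add(a, b): return a + b")
mul = lazy("mul", "def mul(a, b): return a * b")

print(mul(2, 3))        # mul is compiled first because it runs first
print(add(1, 2))
print(compiled_order)   # ['mul', 'add']
```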
>Also, is making such a compiler retargetable a lofty goal? It seems
>to me that supporting multiple architectures would require making
>extra passes, and would slow things down.
Multi-platform support would probably not be that expensive for a
byte-code to machine code translator if high optimization was not
needed. A RISC-like opcode structure should map well onto most RISC
machines with little difficulty, and CISC operations can generally be
generated without great difficulty from RISCops. (For multi-platform
support or strong optimization one would probably desire including
hints in the bytecode.)
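[A small sketch of translating a RISC-like three-address bytecode
operation-by-operation, with no global optimization; the opcode names
and tuple encoding are invented for illustration:]

```python
def translate(bytecode):
    """Translate (op, dst, src1, src2) tuples into a function over a register file."""
    ops = {"add": lambda a, b: a + b,
           "sub": lambda a, b: a - b,
           "shl": lambda a, b: a << b}
    def run(regs):
        # Each bytecode op maps directly to one host operation.
        for op, dst, s1, s2 in bytecode:
            regs[dst] = ops[op](regs[s1], regs[s2])
        return regs
    return run

prog = translate([("add", 2, 0, 1), ("shl", 3, 2, 0)])
print(prog([1, 4, 0, 0]))   # r2 = r0+r1 = 5; r3 = r2<<r0 = 10
```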
Transmeta's 'code morphing' is effectively a complex JIT byte code
translator that does a quick translation for initial execution and then
performs scheduling optimizations (and power consumption
optimizations?) on more frequently used code.
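[The two-tier idea behind that can be sketched as a hot-count
scheme: run a quick translation first, and once a block's call count
crosses a threshold, swap in a re-translated version. The threshold
and both implementations here are invented stand-ins:]

```python
HOT = 3
counts, impl = {}, {}

def quick(name, fast_fn, slow_fn):
    counts[name], impl[name] = 0, slow_fn   # start with the quick translation
    def call(*args):
        counts[name] += 1
        if counts[name] == HOT:
            impl[name] = fast_fn            # hot: swap in the optimized version
        return impl[name](*args)
    return call

square = quick("square",
               lambda x: x * x,                       # "optimized" tier
               lambda x: sum(x for _ in range(x)))    # quick-and-dirty tier
print([square(4) for _ in range(5)])   # all 16; re-translated after the 3rd call
```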
Hopefully, someone more knowledgeable will respond to your questions.
Paul A. Clayton