Re: CIL question

From: Paolo Molaro <>
Newsgroups: comp.compilers
Date: 28 Jan 2006 15:15:04 -0500
Organization: [Infostrada]
References: 06-01-065 06-01-068 06-01-076
Keywords: interpreter, code
Posted-Date: 28 Jan 2006 15:15:04 EST

On 2006-01-26, Stefan Mandel <> wrote:
> Paolo Molaro wrote:
>> Can you quote the exact words you read and the page in the standard?
>> I don't remember anything to that effect. Instead, the standard in
>> several places explicitly mentions both interpretation and JIT
>> compilation as execution methods.
> It was a third party comment. The author wanted to distinguish the Java
> VM and the .NET VM. I have found similar statements from other third
> party people.

The correct differentiation is, IMHO:
Java bytecode is more efficiently interpretable than IL bytecode.
It is also more easily jittable, since it's relatively simpler
(though harder to verify). "More easily jittable" doesn't mean that
the generated code is of higher quality; it's just that an IL JIT
needs to implement many more constructs to cover the full specs. It's
also true that an advanced Java JIT will have to implement those same
constructs to enable some optimizations (tail calls, struct locals for
escape analysis, etc.).
I think there is an important point to consider, though: easy
interpretation is unimportant, even on embedded systems where it was a
win some years ago. See also the recent developments in the ARM
architecture that provide instructions to speed up jitted (or
ahead-of-time compiled) code, now that the Jazelle instructions for
interpreters are obsolete.
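To illustrate why typed stack opcodes interpret more cheaply, here is
a minimal sketch (made-up opcode names, not real JVM or CIL
encodings): a Java-style dispatch knows the operand type from the
opcode itself, while an IL-style generic "add" must inspect its
operands on every execution.

```python
def run_typed(code, stack):
    """Java-style: the opcode itself carries the operand type."""
    for op in code:
        if op == "iadd":                    # 32-bit integer add, type known statically
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & 0xFFFFFFFF)
        elif op == "fadd":                  # float add, separate opcode
            b, a = stack.pop(), stack.pop()
            stack.append(float(a) + float(b))
    return stack

def run_untyped(code, stack):
    """IL-style: one generic 'add'; the interpreter discovers types at run time."""
    for op in code:
        if op == "add":
            b, a = stack.pop(), stack.pop()
            # per-instruction type check that the typed dispatch avoids
            if isinstance(a, int) and isinstance(b, int):
                stack.append((a + b) & 0xFFFFFFFF)
            else:
                stack.append(float(a) + float(b))
    return stack

print(run_typed(["iadd"], [2, 3]))      # [5]
print(run_untyped(["add"], [2.0, 3]))   # [5.0]
```

A real CIL interpreter can recover the types with a verification-style
pre-pass, but that is exactly the extra work the Java encoding avoids.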

> Most optimizing compilers translate the source language to an
> intermediate representation with an infinite number of registers
> (e.g. SSA code). Optimization and register allocation should be
> quite efficient with this IR. I also think that the Java VM as well
> as the .NET VM internally work (in JIT mode) on SSA-based
> intermediate representations.

Since it's trivial to convert from stack code to a register-based
representation, it's better to save on the disk-space requirements.
There are also other issues: a register-based bytecode would in many
cases require changes to match the architecture of the JIT and of the
target cpu. I don't think it makes sense to use a bytecode that
forces a particular JIT implementation. Do you use an SSA-based
representation? If so, is it basic SSA or one of the many variants
that have been developed over the years? Which one of those do you
want to freeze in an on-disk format? They all have different ways to
assign registers, they use not only phi-functions, etc. It's better
not to decide upfront on a register representation, but to have a
compact one that can be easily translated to the one used by the JIT.
BTW, MS has (or used to have) a compiler that emits SSA annotations
together with IL code. You can see this in older docs that talk about
some additional bytecodes and method headers. Apparently they have
abandoned this approach, probably for the same reasons I explained
above.
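The "trivial" stack-to-register conversion can be sketched in a few
lines: abstractly simulate the operand stack, naming each slot as a
fresh virtual register, and emit three-address code. Opcode names
below are invented for illustration, not actual IL.

```python
def stack_to_registers(code):
    """Translate stack bytecode into register-based three-address code."""
    stack = []          # holds virtual-register names, not values
    out = []
    tmp = 0             # counter for fresh virtual registers (unbounded supply)
    for op, *args in code:
        if op == "ldc":                     # push a constant
            reg = f"r{tmp}"; tmp += 1
            out.append(f"{reg} = {args[0]}")
            stack.append(reg)
        elif op == "ldloc":                 # push a local variable
            reg = f"r{tmp}"; tmp += 1
            out.append(f"{reg} = {args[0]}")
            stack.append(reg)
        elif op == "add":                   # pop two operands, push the sum
            b, a = stack.pop(), stack.pop()
            reg = f"r{tmp}"; tmp += 1
            out.append(f"{reg} = {a} + {b}")
            stack.append(reg)
        elif op == "stloc":                 # pop into a local variable
            out.append(f"{args[0]} = {stack.pop()}")
    return out

# x = a + b + 1, expressed as stack code:
prog = [("ldloc", "a"), ("ldloc", "b"), ("add",),
        ("ldc", 1), ("add",), ("stloc", "x")]
for line in stack_to_registers(prog):
    print(line)
# prints: r0 = a / r1 = b / r2 = r0 + r1 / r3 = 1 / r4 = r2 + r3 / x = r4
```

One linear pass yields a register form with as many virtual registers
as the JIT wants, which is why the compact stack encoding on disk
costs the JIT essentially nothing.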


-- 
debian/rules  Monkeys do it better
