bytes code vs. machine code in the multi-core ring  email@example.com (Ralph Boland) (2008-02-23)
Re: bytes code vs. machine code in the multi-core ring  DrDiettrich1@aol.com (Hans-Peter Diettrich) (2008-02-24)

From: Hans-Peter Diettrich <DrDiettrich1@aol.com>
Date: Sun, 24 Feb 2008 09:55:41 +0100
Posted-Date: 24 Feb 2008 12:26:15 EST
Ralph Boland wrote:
> 2) some byte code interpreters compile to native code
IMO this approach should not be ignored. Since byte code cannot assume
anything about the target machine, e.g. with regard to the number of
registers, such a translation is desirable, and it is feasible using
the same optimization techniques that are used for native code
generation in general.
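To make the target-independence point concrete, here is a minimal sketch of a stack-based bytecode loop (the opcode names are hypothetical, not from any real VM): the bytecode itself never names machine registers, so a later translator is free to map stack slots onto however many registers the real machine offers.

```python
# Minimal stack-based bytecode interpreter. The bytecode makes no
# assumption about the target's register count; a native-code
# translator could assign stack slots to real registers afterwards.
PUSH, ADD, MUL = 0, 1, 2          # hypothetical opcodes

def interpret(code):
    stack = []
    i = 0
    while i < len(code):
        op = code[i]
        if op == PUSH:
            i += 1
            stack.append(code[i])  # operand follows the opcode
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        i += 1
    return stack[-1]

# (2 + 3) * 4
program = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL]
```

Running `interpret(program)` yields 20; a translator would optimize exactly this dependency structure when emitting native code.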
> Interpreters that do JIT translation to native code will be less
> effective because it will become slower to perform this task when
> generating multi-core native code. This suggests that the byte code
> approach is going to lose performance ground.
The decisions can be built into the byte code compiler, just like in
any other compiler. The JITer can possibly be more efficient, since it
knows the actual target machine. Other coprocessors, like MMX or
graphics processing hardware, deserve consideration already in the
programming language, not only in the compiler.
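The JIT advantage mentioned above can be sketched as follows (feature names and probing are purely illustrative stand-ins, not a real API): a JIT can query the machine it actually runs on and pick an implementation accordingly, which a target-agnostic bytecode compiler cannot do ahead of time.

```python
# Illustrative sketch: runtime selection of a code path based on
# detected hardware features. Real JITs would probe via CPUID-style
# mechanisms and emit machine code; here we just pick a Python function.
def jit_dot(features):
    """Return a dot-product routine chosen for the detected hardware."""
    if "simd" in features:
        def dot(xs, ys):
            # Stand-in for a vectorized code path; in this sketch both
            # variants compute the same result, only the selection matters.
            return sum(x * y for x, y in zip(xs, ys))
        dot.variant = "simd"
    else:
        def dot(xs, ys):
            total = 0
            for x, y in zip(xs, ys):
                total += x * y
            return total
        dot.variant = "scalar"
    return dot
```

For example, `jit_dot({"simd"})` and `jit_dot(set())` return different "compiled" routines for the same source-level operation.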
> Perhaps the solution is to somehow group byte code into packets that
> can be executed on a single processor and have the interpreter do
> packet- processor scheduling. (I don't buy this idea though.)
Just like native code compilers do ;-)
It's up to the actual CPU to distribute execution over multiple
cores, ALUs etc., based only on the dependencies in the presented
sequence of instructions.
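That dependency-driven dispatch can be sketched in a few lines (register names and the one-cycle latency are assumptions for illustration): an instruction can issue once every register it reads has been produced, and independent instructions fall into the same issue group.

```python
# Sketch of dependency-based scheduling, roughly as a superscalar core
# sees it: each instruction is (dest_register, set_of_source_registers),
# and its issue cycle is bounded only by when its inputs become ready.
def issue_cycles(instrs):
    """Return the earliest issue cycle for each instruction."""
    ready = {}                       # register -> cycle its value is available
    cycles = []
    for dest, srcs in instrs:
        cycle = max((ready.get(r, 0) for r in srcs), default=0)
        cycles.append(cycle)
        ready[dest] = cycle + 1      # assume one-cycle latency
    return cycles

# r1 and r2 are independent and can issue together; r3 must wait for both.
prog = [("r1", set()), ("r2", set()), ("r3", {"r1", "r2"})]
```

Here `issue_cycles(prog)` gives `[0, 0, 1]`: the first two instructions run in parallel on separate ALUs, the third in the next cycle.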