Byte code vs. machine code in the multi-core ring


From: Ralph Boland <rpboland@gmail.com>
Newsgroups: comp.compilers
Date: Sat, 23 Feb 2008 18:01:27 -0800 (PST)
Organization: Compilers Central
Keywords: code, architecture, question
Posted-Date: 24 Feb 2008 00:41:27 EST

Today, many applications are based on machine code generated from code
written in languages such as C and C++, while others are based on byte
code generated from code written in languages such as Java and
Smalltalk (yeah, Smalltalk!). (For simplicity I will ignore matters
such as:

          1) for some languages, both byte code and machine code
             compilers exist;
          2) some byte code interpreters compile to native code
             dynamically.)


One can debate the pros and cons of each approach, but that is not the
issue I want to discuss here.


Advances in hardware are imposing the reality of parallel processing
in the form of multi-core processors. Both byte code and native code
based compilers are forced to face this issue. What I want to know
is: will this transition to multi-core narrow or widen the performance
gap between the byte code and native code approaches (assuming native
code is actually faster)?


Some thoughts:


It seems to me that byte code compilers and the virtual machines they
target must be modified to incorporate knowledge of multi-core, with
the possible reward that the byte code approach will gain performance
ground on the native code approach. Some may debate this point,
arguing instead that the byte code interpreter is where the knowledge
of multi-core should be embedded. There must be research going on in
this arena. Any news or results?
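

To make concrete what I mean: today a byte code program typically has
to express its parallelism explicitly, and the VM merely maps threads
onto the cores. Below is a minimal Java sketch of that arrangement;
the class name and the toy summing workload are my own illustration,
not taken from any real system.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            // The VM tells us how many cores exist; the byte code
            // itself has to decide how to use them.
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            final int n = 10000000;
            int chunk = n / cores;
            List<Future<Long>> parts = new ArrayList<Future<Long>>();
            for (int c = 0; c < cores; c++) {
                final int lo = c * chunk;
                final int hi = (c == cores - 1) ? n : lo + chunk;
                // One chunk of work per core; the VM and OS decide
                // which core each worker thread actually runs on.
                parts.add(pool.submit(new Callable<Long>() {
                    public Long call() {
                        long s = 0;
                        for (int i = lo; i < hi; i++) s += i;
                        return s;
                    }
                }));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get();
            pool.shutdown();
            System.out.println("sum 0.." + (n - 1) + " = " + total);
        }
    }

None of the parallelism here comes from the byte code compiler or the
VM; it is all spelled out by hand, which is exactly the situation I
wonder whether multi-core will change.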


Interpreters that do JIT translation to native code will be less
effective because the translation becomes slower when it has to
generate multi-core native code. This suggests that the byte code
approach is going to lose performance ground.


Perhaps the solution is to somehow group byte code into packets that
can each be executed on a single processor and have the interpreter do
packet-to-processor scheduling. (I don't buy this idea though.)
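

For what it is worth, here is a rough Java sketch of what I mean by
packet-to-processor scheduling, just to pin the idea down. The Packet
interface is hypothetical; it stands in for whatever unit of byte code
the interpreter decides can safely run to completion on one processor.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Hypothetical: a packet is a group of byte code that the
    // interpreter has determined can run independently on a single
    // processor.
    interface Packet {
        void interpret();  // run this packet's byte code on the calling thread
    }

    public class PacketScheduler {
        private final ExecutorService workers;

        public PacketScheduler() {
            // One interpreter worker per core; the OS maps the worker
            // threads onto the cores.
            int cores = Runtime.getRuntime().availableProcessors();
            workers = Executors.newFixedThreadPool(cores);
        }

        // Hand a packet to whichever worker becomes free next.
        public Future<?> schedule(final Packet p) {
            return workers.submit(new Runnable() {
                public void run() {
                    p.interpret();
                }
            });
        }

        public void shutdown() {
            workers.shutdown();
        }
    }

Even in this toy form the hard part is apparent: deciding where a
packet ends, and what to do when one packet needs a result from
another, is just the sort of dependence analysis a parallelizing
native code compiler must do anyway. (Hence my skepticism.)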


There are issues other than performance to consider in the byte code
vs. native code debate. How are those issues affected by the
transition to multi-core?


There are doubtless many issues here that I haven't even dreamed of.
Feel free to bring them forward.


Ralph Boland

