Re: JIT machine code generation


From: pardo@cs.washington.edu (David Keppel)
Newsgroups: comp.compilers
Date: 31 Jul 1997 00:34:14 -0400
Organization: Computer Science & Engineering, U of Washington, Seattle
References: 97-07-119
Keywords: Java, code

<SRINIVASANP@phibred.com> wrote:
>[Where is JIT code stored and how is it invoked?]


The details are machine-dependent because of variations in how you
dynamically generate code, ensure instruction cache consistency, and
"promote" data to code. _Typically_, however, I believe that code is
stored in the heap, usually by allocating a very large block of memory
from the heap and then performing manual memory management of the code
fragments within that block of memory.
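

To make that concrete, here's a minimal sketch in C of the "one big
block, managed by hand" approach. Everything named here is an
assumption rather than any particular system's API: mmap() with
MAP_ANONYMOUS stands in for "ask the OS for memory you may execute,"
and __builtin___clear_cache() (a GCC-style builtin) stands in for
whatever your machine needs to keep the instruction cache consistent:

    /* Sketch: carve code fragments out of one large
     * writable+executable block.  The machine- and OS-dependent
     * details are hidden behind mmap() and the cache-flush
     * builtin; substitute your system's equivalents. */
    #include <string.h>
    #include <sys/mman.h>

    static unsigned char *code_arena;          /* the big block */
    static size_t arena_used;
    static const size_t arena_size = 1 << 20;  /* 1 MB */

    unsigned char *emit_fragment(const unsigned char *bytes, size_t n)
    {
        if (code_arena == NULL) {
            void *p = mmap(NULL, arena_size,
                           PROT_READ | PROT_WRITE | PROT_EXEC,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                return NULL;
            code_arena = p;
        }
        if (arena_used + n > arena_size)
            return NULL;                       /* arena exhausted */
        unsigned char *frag = code_arena + arena_used;
        memcpy(frag, bytes, n);                /* data becomes code */
        arena_used += n;
        /* keep the i-cache consistent with the freshly written code */
        __builtin___clear_cache((char *)frag, (char *)(frag + n));
        return frag;
    }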


For systems that compile a function at a time, the transfer is often
of the form "(*ptr)(...)" and the callee may have other calls embedded
in it directly, with stub calls for routines that weren't compiled at
the time the callee was compiled. For systems that do more
fine-grained compilation, a direct jump is more commonly used with
some assembly hair to make sure the system is sane when control passes
out of the dynamically-compiled code.
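

Continuing the sketch, the "(*ptr)(...)" transfer for the
function-at-a-time case might look like the following. The struct
routine record and the int(int) signature are made up for
illustration, and emit_fragment() is the hypothetical allocator from
the sketch above; compiling on first call stands in here for the
stub-call mechanism:

    #include <stddef.h>

    typedef int (*jit_fn)(int);

    extern unsigned char *emit_fragment(const unsigned char *, size_t);

    /* Hypothetical per-routine record: code stays NULL until the
     * routine has been compiled. */
    struct routine {
        jit_fn code;
        const unsigned char *bytes;   /* machine code to install */
        size_t len;
    };

    /* The "(*ptr)(...)" transfer.  An uncompiled callee gets
     * compiled on first use, which is the job a stub call does in
     * the systems described above.  (The object-pointer-to-
     * function-pointer cast is formally nonportable but is the
     * standard JIT idiom.) */
    int call_routine(struct routine *r, int arg)
    {
        if (r->code == NULL)
            r->code = (jit_fn)emit_fragment(r->bytes, r->len);
        return (*r->code)(arg);
    }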


The classic reading on this (terse but full of information if you read
it a couple of times) is Deutsch & Schiffman's paper on their
Smalltalk-80 implementation. The classic paper on JIT optimization is
Cathy May's Mimic paper (you didn't ask about optimization, but it's
such a good paper I feel compelled to mention it). You may (to blow
my own horn) find some interesting implementation information in the
Shade papers, including code allocation and invocation, and there are
more references in my simulation and tracing web page. Here are the
details:


%A L. Peter Deutsch
%A Allan M. Schiffman
%T Efficient Implementation of the Smalltalk-80 System
%J 11th Annual Symposium on Principles of Programming Languages
(POPL-11)
%D January 1984
%P 297-302


%A Cathy May
%T Mimic: A Fast S/370 Simulator
%J Proceedings of the ACM SIGPLAN 1987 Symposium on Interpreters and
Interpretive Techniques; SIGPLAN Notices
%V 22
%N 6
%C St. Paul, MN
%D June 1987
%P 1-13


You may (to blow my own horn) also find interesting information in the
Shade TR, SIGMETRICS paper and in Conte & Gimarc's Shade book chapter.
The SIGMETRICS paper is the best short reading; the TR has a bunch of
implementation details; the book chapter has the best bibliography and
most recent results. Here's the SIGMETRICS reference:


%A Bob Cmelik
%A David Keppel
%T Shade: A Fast Instruction-Set Simulator for Execution Profiling
%J Proceedings of the 1994 ACM SIGMETRICS Conference
on the Measurement and Modeling of Computer Systems
%D May 1994
%P 128-137
%X http://www.cs.washington.edu/research/compiler/papers.d/shade.html
%X ftp://ftp.cs.washington.edu/pub/pardo/shade.ps.Z


The simulation and tracing web page is at


http://www.cs.washington.edu/homes/pardo/sim.d/index.html


A key thing to remember in all of this is that it's not tricky, so if
you just guess you're likely to guess right. Many systems use fairly
straightforward implementations of most things because the
straightforward approach actually works pretty well.


;-D on ( On the straight and forward ) Pardo
--

