Related articles
Mixing virtual and real machine code in an interpreter graham@pact.srf.ac.uk (1994-03-16)
Re: Mixing virtual and real machine code in an interpreter sastdr@unx.sas.com (1994-03-21)
Re: Mixing virtual and real machine code in an interpreter pardo@cs.washington.edu (1994-03-22)
Re: Mixing virtual and real machine code in an interpreter nickh@harlequin.co.uk (1994-03-22)
Re: Mixing virtual and real machine code in an interpreter sdm7g@elvis.med.virginia.edu (Steven D. Majewski) (1994-03-23)
Re: Mixing virtual and real machine code in an interpreter sosic@kurango.cit.gu.edu.au (1994-03-30)

Newsgroups: comp.compilers
From: pardo@cs.washington.edu (David Keppel)
Keywords: interpreter, bibliography
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: 94-03-039
Date: Tue, 22 Mar 1994 01:40:22 GMT
graham@pact.srf.ac.uk (Graham Matthews) writes:
>[Performance figures for compiling to native machine code (n-code) vs.
> compiling to interpreted virtual machine code (v-code)?]
There was a `comp.compilers' discussion on this a few years back -- I
don't recall the message numbers but it was March or April of '91 and
the Subject: line was "Portable Fast Direct Threaded Code".
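For anyone who missed that thread, here is a minimal sketch of what direct
threading looks like (my own toy example, not code from the discussion).
It assumes GCC's labels-as-values extension: each cell of the v-code holds
the address of the handler that implements it, so dispatch is a single
indirect jump rather than a fetch/decode/branch sequence.

    /* Toy direct-threaded interpreter; computes 2 + 3 and prints it.
       Hypothetical opcodes; the long<->pointer casts assume a long can
       hold a pointer, which is fine for a sketch. */
    #include <stdio.h>

    int main(void)
    {
        static void *program[] = {
            &&op_push, (void *)2L,
            &&op_push, (void *)3L,
            &&op_add,
            &&op_print,
            &&op_halt
        };
        long  stack[16];
        long *sp = stack;
        void **pc = program;

        goto **pc++;                          /* dispatch the first opcode */

    op_push:  *sp++ = (long)*pc++;    goto **pc++;
    op_add:   sp--;  sp[-1] += sp[0]; goto **pc++;
    op_print: printf("%ld\n", sp[-1]); goto **pc++;
    op_halt:  return 0;
    }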
The short answer is that it depends -- a lot. The basic factors are:
- How much time do you spend in each "primitive" (v-code operation)?
- What are the opportunities for inter-primitive optimization?
- What is the cost of dispatching?
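To make those factors concrete, here is the same toy machine with plain
switch dispatch (again my own sketch, with made-up opcodes); the comments
mark where each of the three costs shows up:

    /* Toy switch-dispatched v-code interpreter (hypothetical opcodes). */
    enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

    long run(const long *code)
    {
        long stack[64], *sp = stack;
        for (;;) {
            switch (*code++) {        /* dispatch: a fetch, a decode, and a
                                         branch, paid on every v-code op */
            case OP_PUSH:
                *sp++ = *code++;      /* primitive work: trivially cheap here */
                break;
            case OP_ADD:
                sp--; sp[-1] += sp[0];   /* operands bounce through the v-stack,
                                            so there is no register allocation
                                            across primitives */
                break;
            case OP_MUL:
                sp--; sp[-1] *= sp[0];
                break;
            case OP_HALT:
                return sp[-1];
            }
        }
    }

    /* e.g. run((const long[]){ OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT })
       returns 5. */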
Native code execution lets you optimize between v-code primitives and
eliminate most of the cost of dispatching between primitives. However, if
each v-code instruction is "inherently" expensive, then that part of the
time dwarfs the possible improvements from better dispatching and
optimization.
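For comparison, here is roughly what compiling that little v-code sequence
(PUSH 2, PUSH 3, ADD, HALT) to n-code buys you, written as C rather than
real machine code: the dispatch is gone entirely, and the intermediate
values can live in registers or, as here, fold away to a constant.

    /* Native-code equivalent of PUSH 2; PUSH 3; ADD; HALT.  No dispatch,
       no v-stack traffic; any reasonable compiler reduces this to
       "return 5". */
    long run_compiled(void)
    {
        long t0 = 2;      /* was: PUSH 2 */
        long t1 = 3;      /* was: PUSH 3 */
        return t0 + t1;   /* was: ADD; HALT */
    }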
For example, PostScript(tm) as it is normally used (page description
language) spends nearly all of its time in the primitives (draw line, draw
poly, etc.) and there isn't a lot of opportunity to optimize between
primitives. Thus, you could compile PostScript all day long and (in
normal cases) it wouldn't save you much. On the other hand, cross-machine
simulators typically have small primitives and lots of room for
optimization between primitives (e.g. improved register allocation) and
the dispatch costs are relatively high compared to the cost of executing
each primitive, so an order of magnitude performance improvement over
well-optimized interpreter code is possible.
Some of my favorite references will soon be in the related work section
of the following paper (which is different from the Shade TR!):
%A Robert F. Cmelik
%A David Keppel
%T Shade: A Fast Instruction-Set Simulator for Execution Profiling
%J SIGMETRICS '94 (to appear)
%D 1994
But I'll point especially at:
%A Thomas Pittman
%T Two-Level Hybrid Interpreter/Native Code Execution for Combined
Space-Time Program Efficiency
%D 1987
%J Proceedings of the ACM SIGPLAN '87 Symposium on Interpreters and
Interpretive Techniques
%P 150-152
One fun story is in Brooks' _The Mythical Man-Month_. I don't have my
copy handy but as I recall, one of the OS/360 designers observed that
users are relatively slow and commands are typically expensive, so the
command interpreter was itself rewritten in v-code, cutting the total size
in a much-needed place and affecting the performance not a whit. See the
following for details, but I don't recall the page number.
%A Frederick P. Brooks, Jr.
%T The Mythical Man-Month
%I Addison-Wesley Publishing Company
%D 1975
Finally, a related technique is to virtualize some machine code operations
by proceduralizing them. In essence, you're un-inlining! See:
%A Christopher W. Fraser
%A Eugene W. Myers
%A Alan L. Wendt
%T Analyzing and Compressing Assembly Code
%J Proceedings of the ACM SIGPLAN 1984 Symposium on Compiler
Construction
%J SIGPLAN Notices
%V 19
%N 6
%D June 1984
%P 117-121
You could, in principle, also use this technique to derive (better) v-code
primitives automagically (starting from either n-code or v-code).
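Here is a sketch of the idea in C (my own toy example, not the
Fraser/Myers/Wendt algorithm itself): a repeated instruction sequence is
factored out into a procedure, each former site shrinks to a call, and
sufficiently aggressive out-lining leaves you with something that looks a
lot like a set of derived v-code primitives.

    struct rec { long count, sum, dirty; };

    /* Before un-inlining: the same three-operation sequence is repeated
       inline at every site, so the program carries many copies of it. */
    void touch_inlined(struct rec *p, struct rec *q, long n, long w)
    {
        p->count += n;  p->sum += n * w;  p->dirty = 1;
        q->count += n;  q->sum += n * w;  q->dirty = 1;
    }

    /* After un-inlining: one copy of the sequence, and each former site
       becomes a call.  The out-lined procedure is, in effect, a new
       primitive. */
    static void bump(struct rec *r, long n, long w)
    {
        r->count += n;
        r->sum   += n * w;
        r->dirty  = 1;
    }

    void touch_outlined(struct rec *p, struct rec *q, long n, long w)
    {
        bump(p, n, w);
        bump(q, n, w);
    }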
;-D on ( *Really* virtual machine code ) Pardo
--