Re: Virtual Machine - Tiny Basic

"Brent Benson" <Brent.Benson@oracle.com>
27 Aug 2000 22:19:46 -0400

          From comp.compilers

Related articles
Virtual Machine - Tiny Basic hrm@terra.com.pe (Hugo Rozas M.) (2000-08-20)
Re: Virtual Machine - Tiny Basic Brent.Benson@oracle.com (Brent Benson) (2000-08-27)
Re: Virtual Machine - Tiny Basic ceco@jupiter.com (Tzvetan Mikov) (2000-09-08)
Re: Virtual Machine - Tiny Basic stephan@pcrm.win.tue.nl (2000-09-09)
Newsgroups: comp.compilers
Organization: Compilers Central
References: 00-08-100
Keywords: interpreter, performance, comment

"Hugo Rozas M." <hrm@terra.com.pe> wrote:
>
> I am making a tiny BASIC interpreter and I have seen that
> there are two ways to do that:
>
> 1- The first one is to read the source and generate intermediate
> code; then an interpreter reads and executes the tokenized
> form.
>
> 2- The second one is to generate some kind of pseudo-Assembler,
> then a virtual machines reads and executes this.
>
> It seems that the second method (VM) is actually the most popular.
> My question is why? When I generate a pseudo-assembler that needs
> to be interpreted and executed by the VM, am I not creating another
> layer that will slow down execution?


As with most performance issues, the performance of an interpreter
depends largely on the particular input data. For "straight-line"
programs like simple scripts, where each line of code is executed only
once, a direct interpreter (type 1) can often perform better. On the
other hand, programs that contain frequently called subroutines and
loops almost always perform better in a well-written, VM-based
interpreter (type 2).


The reason for this is that much of the time spent interpreting code
is spent scanning and parsing the input. In a direct interpreter the
scanning and parsing needs to be performed every time a piece of code
is executed. In a VM-based approach the scanning and parsing is done
only once when the code is compiled into the input for the VM. If the
system is well designed, the input to the VM can be executed very
efficiently with no more scanning and parsing (e.g., byte codes in a
buffer). In addition, simple optimizations can be performed at VM
translation time that pay off every time the code is executed.
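The contrast can be sketched in a few lines of Python. This is a
hypothetical postfix-expression interpreter, not Tiny Basic itself:
the direct version re-tokenizes the source on every call, while the
VM-style version tokenizes once into a compact instruction list that
can be executed any number of times.

```python
# Minimal sketch (hypothetical language, not Tiny Basic): evaluate
# postfix expressions like "1 2 + x +" either directly or via a VM.

def run_direct(src, env):
    """Type 1: scan and parse the source text on every execution."""
    stack = []
    for tok in src.split():          # tokenizing repeats on each call
        if tok.isdigit():
            stack.append(int(tok))
        elif tok == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(env[tok])   # variable lookup
    return stack.pop()

def compile_to_code(src):
    """Translate once into (opcode, operand) pairs -- the VM's input."""
    code = []
    for tok in src.split():
        if tok.isdigit():
            code.append(('PUSH', int(tok)))
        elif tok == '+':
            code.append(('ADD', None))
        else:
            code.append(('LOAD', tok))
    return code

def run_vm(code, env):
    """Type 2: execute pre-compiled code; no scanning or parsing here."""
    stack = []
    for op, arg in code:
        if op == 'PUSH':
            stack.append(arg)
        elif op == 'LOAD':
            stack.append(env[arg])
        else:  # ADD
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()
```

If this expression sits inside a loop that runs a million times,
compile_to_code runs once, while run_direct would repeat the
tokenizing work on every iteration.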


Some modern interpreter systems (like Sun's HotSpot Java virtual
machine) attempt to take advantage of this performance dichotomy by
doing less translation for most code, and optimizing and further
translating frequently executed code (hotspots), hopefully spending
the limited CPU resources in a highly efficient fashion.
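A toy sketch of that counter-triggered tiering, with an assumed
threshold of 10 calls: the routine runs via the slow path until it
proves hot, then switches to a pre-compiled form. Here Python's
built-in compile() stands in for real native code generation.

```python
# Toy illustration of hotspot-style tiered execution (not HotSpot's
# actual mechanism): count calls, then tier up past a threshold.

HOT_THRESHOLD = 10  # hypothetical tier-up point

class Routine:
    def __init__(self, expr_src):
        self.src = expr_src      # e.g. "x * x + 1"
        self.count = 0
        self.compiled = None     # filled in once the routine is hot

    def call(self, x):
        self.count += 1
        if self.compiled is None and self.count >= HOT_THRESHOLD:
            # "JIT" step: compile() stands in for native code gen
            self.compiled = compile(self.src, '<hot>', 'eval')
        if self.compiled is not None:
            return eval(self.compiled, {'x': x})   # fast path
        return eval(self.src, {'x': x})            # cold path: re-parse

r = Routine('x * x + 1')
results = [r.call(i) for i in range(20)]
# After 10 calls, every call reuses the pre-compiled code object.
```

The payoff has the same shape as in the VM case: translation cost is
paid once, but only for the routines whose call counts show it will
be amortized.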


-Brent
[That's a swell technique. The HP3000 APL system used it in about
1978. -John]

