Related articles
Virtual Machine - Tiny Basic hrm@terra.com.pe (Hugo Rozas M.) (2000-08-20)
Re: Virtual Machine - Tiny Basic Brent.Benson@oracle.com (Brent Benson) (2000-08-27)
Re: Virtual Machine - Tiny Basic ceco@jupiter.com (Tzvetan Mikov) (2000-09-08)
Re: Virtual Machine - Tiny Basic stephan@pcrm.win.tue.nl (2000-09-09)
From: "Tzvetan Mikov" <ceco@jupiter.com>
Newsgroups: comp.compilers
Date: 8 Sep 2000 01:57:33 -0400
Organization: @Work Internet powered by @Home Network
References: 00-08-100 00-08-121
Keywords: interpreter
Brent Benson <Brent.Benson@oracle.com> wrote:
>[...] The reason for this is that much of the time spent interpreting code
> is spent scanning and parsing the input. In a direct interpreter the
> scanning and parsing needs to be performed every time a piece of code
> is executed.
A direct interpreter doesn't have to parse every time. As Hugo
suggested, the source can be tokenized just once (or even converted to
an AST) and then interpreted very efficiently. Is there any evidence
showing that this is inherently slower than interpreting VM
instructions? Actually, it seems to me that the VM approach itself
should be slower: instead of operating directly on high-level
constructs and executing each one at once, the interpreter has to
process lots of simple VM instructions, so more of the time is spent
decoding them. It also has to maintain the VM state (registers,
stack, etc.), which is entirely redundant with respect to the
interpreted program.
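The contrast being argued over can be sketched concretely. The snippet below (purely illustrative; the node tags, opcodes, and function names are my own assumptions, not from the thread) evaluates the same expression, 1 + 2 * 3, first by walking an AST directly and then by dispatching stack-machine bytecode one instruction at a time:

```python
# Direct (AST-walking) interpretation: parse once into a tree,
# then execute each high-level construct in a single visit.
# Node shapes (illustrative): ("num", n), ("add", l, r), ("mul", l, r)
def eval_ast(node):
    tag = node[0]
    if tag == "num":
        return node[1]
    left = eval_ast(node[1])
    right = eval_ast(node[2])
    return left + right if tag == "add" else left * right

ast = ("add", ("num", 1), ("mul", ("num", 2), ("num", 3)))

# VM interpretation: the same expression compiled to simple
# stack-machine instructions, each of which must be fetched
# and decoded separately, with an explicit stack maintained.
PUSH, ADD, MUL = 0, 1, 2

def run_vm(code):
    stack = []
    pc = 0
    while pc < len(code):
        op = code[pc]          # fetch/decode overhead per instruction
        if op == PUSH:
            pc += 1
            stack.append(code[pc])
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:  # MUL
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        pc += 1
    return stack[0]

bytecode = [PUSH, 1, PUSH, 2, PUSH, 3, MUL, ADD]

print(eval_ast(ast), run_vm(bytecode))  # both yield 7
```

The one AST "add" node becomes four VM instructions here, which is the decoding overhead the post points at; in practice the answer also depends on dispatch technique and cache behavior, so neither form is inherently faster.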
Tzvetan