Related articles
Efficient bytecode design and interpretation  mg169780@zodiac.mimuw.edu.pl (Michal Gajda) (2001-05-22)
Re: Efficient bytecode design and interpretation  anton@mips.complang.tuwien.ac.at (2001-05-29)
Re: Efficient bytecode design and interpretation  jonm@fishwife.cis.upenn.edu (2001-05-30)
Re: Efficient bytecode design and interpretation  loewis@informatik.hu-berlin.de (Martin von Loewis) (2001-05-30)
Re: Efficient bytecode design and interpretation  eugene@datapower.com (Eugene Kuznetsov) (2001-05-30)
Re: Efficient bytecode design and interpretation  korek@icm.edu.pl (2001-05-31)
Re: Efficient bytecode design and interpretation  usenet.2001-05-30@bagley.org (Doug Bagley) (2001-05-31)
Re: Efficient bytecode design and interpretation  anton@mips.complang.tuwien.ac.at (2001-06-03)
Re: Efficient bytecode design and interpretation  anton@mips.complang.tuwien.ac.at (2001-06-03)
From: "Eugene Kuznetsov" <eugene@datapower.com>
Newsgroups: comp.compilers
Date: 30 May 2001 00:03:15 -0400
Organization: Compilers Central
References: 01-05-068
Keywords: interpreter
Posted-Date: 30 May 2001 00:03:15 EDT
> [It's been discussed before. My suggestion is that unlike the design for
> a physical architecture, there's little added cost to providing zillions
> of operators, and the more each operator does the fewer times you go
> through the interpreter loop, so a CISC design likely wins. You might also
That's true up to a point -- there are two break points, roughly at the
L1 and L2 instruction cache sizes. Crossing either one makes a huge
difference, and can cancel out the advantage of having very many opcodes.
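
To make the trade-off concrete, here is a minimal sketch of a switch-dispatched
bytecode loop in C. It is not from the original post; the opcode set and the
fused OP_ADD_MUL instruction are invented for illustration. Fusing a common
sequence into one "CISC-style" opcode cuts the number of trips through the
dispatch loop, but every extra case also grows the dispatch code that has to
stay resident in the instruction cache.

    #include <stdio.h>

    /* Hypothetical opcode set, invented for illustration. */
    enum { OP_PUSH, OP_ADD, OP_MUL, OP_ADD_MUL, OP_HALT };

    /* Minimal switch-dispatched interpreter: every trip through the loop
     * pays for an opcode fetch, the switch dispatch, and a branch back.
     * The fused OP_ADD_MUL does the work of OP_ADD + OP_MUL in one trip,
     * but each extra case enlarges the loop body competing for I-cache. */
    static long run(const int *pc)
    {
        long stack[64];
        int sp = 0;
        for (;;) {
            switch (*pc++) {
            case OP_PUSH:
                stack[sp++] = *pc++;
                break;
            case OP_ADD:
                sp--; stack[sp - 1] += stack[sp];
                break;
            case OP_MUL:
                sp--; stack[sp - 1] *= stack[sp];
                break;
            case OP_ADD_MUL:   /* (a + b) * c in a single dispatch */
                sp -= 2;
                stack[sp - 1] = (stack[sp - 1] + stack[sp]) * stack[sp + 1];
                break;
            case OP_HALT:
                return stack[sp - 1];
            }
        }
    }

    int main(void)
    {
        /* (2 + 3) * 4 with plain opcodes and with the fused opcode. */
        int plain[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                        OP_PUSH, 4, OP_MUL, OP_HALT };
        int fused[] = { OP_PUSH, 2, OP_PUSH, 3, OP_PUSH, 4,
                        OP_ADD_MUL, OP_HALT };
        printf("%ld %ld\n", run(plain), run(fused));   /* prints: 20 20 */
        return 0;
    }

Both programs compute (2 + 3) * 4, but the fused version reaches OP_HALT in
fewer dispatches; scaling this idea to hundreds of fused opcodes is where the
dispatch code starts pressing against the L1 and L2 instruction caches.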
\\ Eugene Kuznetsov
\\ eugene@datapower.com
\\ DataPower Technology, Inc.