Re: virtual machine efficiency



From: vbdis@aol.com (VBDis)
Newsgroups: comp.compilers
Date: 30 Dec 2004 01:02:08 -0500
Organization: AOL Bertelsmann Online GmbH & Co. KG http://www.germany.aol.com
References: 04-12-151
Keywords: VM, performance
Posted-Date: 30 Dec 2004 01:02:08 EST

"ed_davis2" <ed_davis2@yahoo.com> schreibt:
>Is there a way I can have both fast loading of operands, and small
>byte-code size?


Some ideas:


1) Align arguments to word boundaries.


You can align the arguments to the next word boundary, e.g. by
inserting NOP instruction bytes, or by SKIP1 through SKIPn
instructions that immediately "jump" forward over the remaining
padding bytes to the next word boundary.


It would also be possible to have multiple opcodes for each
argument-taking instruction, with each variant encoding a specific
increment of the PC to the next word boundary before fetching the
argument from that address. This moves the calculation of the
increment from run time (interpretation) to "compile" time, when the
code is laid out in memory.


But I assume that processing these instructions, or calculating the
address of the next word boundary, or even constructing the value from
multiple byte fetches, will take much more time than reading a
misaligned argument from memory. The excess data from the added fetch
typically will still be available (in the cache) when the next
instruction is read, so that IMO the penalty of a misaligned read is
negligible, particularly for almost-sequential access to the data.


2) Separate arrays for byte code and word arguments.


I also thought of separate code and argument areas, as John suggests,
but then all jumps and calls must specify the addresses or indices in
both areas, which again increases the memory usage and the time needed
to maintain the duplicate position counters. In addition, more delays
may occur when fetching data from different (non-sequential)
addresses.


I know that on some machines misaligned reads are expensive, but does
anybody know of a concrete machine where reading a misaligned "word"
definitely takes longer than reading the corresponding number of
individual bytes?


DoDi

