Re: Bytecode Image Questions



Related articles
Bytecode Image Questions chris@noreply.nospam.tkdsoftware.com (news.tkdsoftware.com) (2004-09-21)
Re: Bytecode Image Questions nick.roberts@acm.org (Nick Roberts) (2004-09-25)
From: Nick Roberts <nick.roberts@acm.org>
Newsgroups: comp.compilers
Date: 25 Sep 2004 13:07:13 -0400
Organization: Compilers Central
References: 04-09-124
Keywords: storage, design
Posted-Date: 25 Sep 2004 13:07:13 EDT

news.tkdsoftware.com wrote:


> Does anyone have any input on the best way to layout the memory
> addressing scheme for a bytecode stack machine system? Is there any
> benefit in having a dual-stack system where the code and literal
> resource pool (such as string table, external library table, and
> export table references) are stored in a memory block and a second
> memory stack is used for all stack operations? Reason I ask is that
> some past code generators I've used have specified a maximum data
> segment size of 32k. What does this typically imply?


You seem to have omitted some vital information. What is your target
machine? What is your application domain? Is speed or compactness the
higher priority? What is the bytecode (if it is not one of your own
design), and what is the source language?


I think good general advice would be to lay things out in memory, as
far as possible, the same way a compiler-emitted machine code program
would be laid out. This means using native data types (with their
recommended alignments), native data structure mechanisms (e.g. the
machine's push and pop instructions), native addressing modes, and so
on.
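For instance, a minimal C sketch of a value cell laid out with native
types and natural alignments (the names and layout are mine, purely
illustrative, not a prescription):

    #include <stdint.h>

    /* Hypothetical tagged value cell for a bytecode VM, built from
       native types at their natural alignments so that ordinary host
       loads and stores apply directly. */
    typedef struct {
        uint8_t tag;      /* discriminates the union below */
        union {
            int64_t  i;   /* native integer */
            double   f;   /* native float */
            void    *p;   /* native pointer width and alignment */
        } u;
    } vm_cell;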


To my knowledge, the benefit (and only use) of dual stacks is for
returning values of a dynamically sized type: such values are returned
on the secondary stack; the primary stack is used for everything else,
as normal.
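A minimal sketch of the idea (all names are hypothetical, and there is
no overflow checking):

    #include <stddef.h>
    #include <string.h>

    /* Secondary stack for dynamically sized return values; the
       primary (native) stack carries frames and fixed-size values as
       usual. */
    static unsigned char sec_stack[64 * 1024];
    static size_t sec_top = 0;

    /* Callee: push a variable-length result, hand back its mark. */
    size_t return_dynamic(const void *data, size_t len) {
        size_t at = sec_top;
        memcpy(sec_stack + at, data, len);
        sec_top += len;
        return at;
    }

    /* Caller: release the value by restoring the mark. */
    void pop_dynamic(size_t at) {
        sec_top = at;
    }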


I suspect a maximum data segment size of 32 KiB implies a 16-bit
program: a signed 16-bit offset can reach at most 2^15 = 32768 bytes,
so all data references fit in a single 16-bit register or immediate.


On machines with a memory hierarchy, where cache speed is much higher
than main memory speed, it will be important to pay attention to cache
usage. This issue can be exacerbated by the fact that the bytecode
creates at least one extra working set for the cache to deal
with. Sometimes this can shift the balance of considerations towards
heavy stack use (in preference to heap use); sometimes it can actually
shift the balance the other way (using dynamic allocations on the heap
to reduce the total address space footprint, and maybe to allow
coloured allocation).
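By coloured allocation I mean choosing heap addresses so that hot
objects map to distinct cache sets. A rough sketch, with the cache
geometry assumed (64-byte lines, 512 sets) and the padding leaked for
brevity:

    #include <stdint.h>
    #include <stdlib.h>

    #define LINE 64   /* assumed cache line size */
    #define SETS 512  /* assumed number of cache sets */

    /* Return a block whose address maps to cache set `colour`, so
       that two hot objects given different colours cannot evict each
       other in a direct-mapped cache. A real allocator would keep
       the base pointer around for free(). */
    void *alloc_coloured(size_t size, unsigned colour) {
        unsigned char *base = malloc(size + LINE * SETS);
        if (!base) return NULL;
        uintptr_t have = (uintptr_t)base % (LINE * SETS);
        uintptr_t want = (uintptr_t)(colour % SETS) * LINE;
        return base + (want + LINE * SETS - have) % (LINE * SETS);
    }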


Since the bytecode is (usually) just data from the machine's point of
view, you have a lot more flexibility in where you put it. It can be
dynamically loaded in chunks (scattered onto a heap), for example. It
can be generated on the fly, moved around (e.g. to improve cache
usage), refactored on the fly (e.g. switching variables between heap
and stack depending on frequency of access), and generally bent to
your will.
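As an illustration of chunked loading (the image format and names are
invented for the sketch):

    #include <stdio.h>
    #include <stdlib.h>

    /* Load one chunk of bytecode into its own heap block, on demand.
       Because bytecode is plain data, the blocks can later be moved,
       regrouped by call frequency, or rewritten in place without
       upsetting the host CPU. */
    unsigned char *load_chunk(FILE *image, long offset, size_t len) {
        unsigned char *code = malloc(len);
        if (!code) return NULL;
        if (fseek(image, offset, SEEK_SET) != 0 ||
            fread(code, 1, len, image) != len) {
            free(code);
            return NULL;
        }
        return code;  /* the interpreter dispatches over this buffer */
    }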


Transformation of bytecode into directly executable code can also work
well, sometimes. Typically, you change simple things (e.g. X := X+1)
into some directly corresponding machine code (e.g. INC R0) and
everything else into calls to library routines. This technique can get
tricky if you have self-referential or dynamically generated code
(especially for a target machine with a pipelined processor), and
loading on-the-fly can suffer from noticeable latencies. But it can
have a dramatic effect on the execution speed of code that is (quite)
speed critical. I'm not a big fan of JIT compilation, by the way.
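To make the idea concrete, here is a minimal x86-64 Linux sketch (the
bytecode ops and the library routine are invented; the encodings are
the standard ones, but a real translator needs far more care,
particularly around instruction caches and writable-executable
memory):

    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>

    enum { OP_INC, OP_CALL_LIB, OP_RET };  /* hypothetical bytecode */

    void lib_routine(void) { /* stands in for "everything else" */ }

    /* OP_INC becomes the directly corresponding `inc rax`;
       OP_CALL_LIB becomes `mov rax, imm64; call rax` into the
       library; OP_RET becomes `ret`. */
    void *translate(const uint8_t *bc, size_t n) {
        uint8_t *buf = mmap(NULL, 4096,
                            PROT_READ | PROT_WRITE | PROT_EXEC,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return NULL;
        uint8_t *p = buf;
        for (size_t i = 0; i < n; i++) {
            switch (bc[i]) {
            case OP_INC:                     /* 48 FF C0 */
                *p++ = 0x48; *p++ = 0xFF; *p++ = 0xC0;
                break;
            case OP_CALL_LIB: {              /* 48 B8 imm64; FF D0 */
                uint64_t a = (uint64_t)(uintptr_t)lib_routine;
                *p++ = 0x48; *p++ = 0xB8;
                memcpy(p, &a, 8); p += 8;
                *p++ = 0xFF; *p++ = 0xD0;
                break;
            }
            case OP_RET:                     /* C3 */
                *p++ = 0xC3;
                break;
            }
        }
        return buf;  /* callable as a function */
    }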


I'm beginning to waffle, so I'll stop here. Ask if you want me to waffle
any more.
--
Nick Roberts

