|Portable Fast Direct Threaded Code eliot@.cs.qmw.ac.uk (Eliot Miranda) (1991-03-28)|
|Re: Portable Fast Direct Threaded Code Tom.Lane@G.GP.CS.CMU.EDU (1991-04-01)|
|Re: Portable Fast Direct Threaded Code firstname.lastname@example.org (1991-04-02)|
|Re: Portable Fast Direct Threaded Code email@example.com (1991-04-02)|
|Re: Portable Fast Direct Threaded Code vestal@SRC.Honeywell.COM (1991-04-03)|
|Re: Portable Fast Direct Threaded Code firstname.lastname@example.org (1991-04-04)|
|Re: Portable Fast Direct Threaded Code email@example.com (1991-04-04)|
|Re: Portable Fast Direct Threaded Code firstname.lastname@example.org (1991-04-08)|
|From:||email@example.com (Bob Pendleton)|
|Date:||Mon, 8 Apr 91 13:48:00 MDT|
In article <firstname.lastname@example.org> you write:
> A program in threaded code is just an array of addresses, possibly
> interspersed with operands. So the fragment
> c := a + b
> becomes something like
> address of 'load'
> address of 'a'
> address of 'load'
> address of 'b'
> address of '+'
> address of 'store'
> address of 'c'
> This implies a very simple virtual stack machine - you can get more clever
> by implementing a virtual register machine.
About 10 years ago I was working on a Lisp compiler that compiled to
threaded code. I was trying to get small code and still have some
performance. (Since I wanted to run the code on a Z80 or 8080, small was
important. My how things change :-)
I found that the 3 most common operations in threaded code were load,
store, and execute. So I put those operations with the operands. This
made the operands look rather like classes with load, store, and
execute as virtual functions. If you let the evaluate operation
subsume the load and execute operations, the threaded code for
c := a + b;
becomes
address of 'a.evaluate()'
address of 'b.evaluate()'
address of '+'
address of 'c.store()'
and the code for
g := F(x, y);
becomes
address of 'x.evaluate()'
address of 'y.evaluate()'
address of 'F.evaluate()'
address of 'g.store()'
This is much smaller than the original version of the threaded code.
Later, while working on a homebrew version of FORTH, I gave up on
threaded code completely. I found, like most who have explored it,
that symbolic execution of RPN code is a nice way to generate machine
code. The machine code runs much faster than threaded code, and,
even on an 8080, was only about 25% bigger than the threaded code.
email@example.com or decwrl!esunix!bpendlet or utah-cs!esunix!bpendlet
[The DEC PDP-11 Fortran compiler did something similar, writing load routines
for commonly used variables. -John]