Re: Compiler or interpreter?

"BGB / cr88192" <cr88192@hotmail.com>
Fri, 18 Jun 2010 07:27:59 -0700

          From comp.compilers

Related articles
Compiler or interpreter? gah@ugcs.caltech.edu (glen herrmannsfeldt) (2010-06-13)
Re: Compiler or interpreter? anton@mips.complang.tuwien.ac.at (2010-06-15)
Re: Compiler or interpreter? gah@ugcs.caltech.edu (glen herrmannsfeldt) (2010-06-16)
Re: Compiler or interpreter? gah@ugcs.caltech.edu (glen herrmannsfeldt) (2010-06-17)
Re: Compiler or interpreter? cr88192@hotmail.com (BGB / cr88192) (2010-06-18)
Re: Compiler or interpreter? paul.biggar@gmail.com (Paul Biggar) (2010-06-18)
Re: Compiler or interpreter? aek@bitsavers.org (Al Kossow) (2010-06-18)
Re: Compiler or interpreter? gah@ugcs.caltech.edu (glen herrmannsfeldt) (2010-06-18)
Re: Compiler or interpreter? cr88192@hotmail.com (BGB / cr88192) (2010-06-19)
Re: Compiler or interpreter? gah@ugcs.caltech.edu (glen herrmannsfeldt) (2010-06-19)
Re: Compiler or interpreter? cr88192@hotmail.com (BGB / cr88192) (2010-06-20)
From: "BGB / cr88192" <cr88192@hotmail.com>
Newsgroups: comp.compilers
Date: Fri, 18 Jun 2010 07:27:59 -0700
Organization: albasani.net
References: 10-06-032 10-06-038
Keywords: interpreter, design
Posted-Date: 18 Jun 2010 11:09:42 EDT

> glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote:
> (snip)
>
>> More specific to the above mentioned discussion, is that many of
>> the early DEC systems used threaded code. Does threaded code
>> count as compiler or interpreter?
>
> And our moderator wrote:
>
>> [Seems to me that if you disqualify compilers because they generate
>> calls to runtime libraries, you'll have precious few compilers left.
>> Having hacked a certain amount on the old Unix threaded code Fortran
>> system, the compiler was definitely a compiler, parsed the source
>> code, made a symbol table, and generated object code. The stuff that
>> the threaded code called was an interpreter. -John]
>
> Still trying for a productive answer to an admittedly unproductive
> question, and following the discussion in the wikipedia entry
> for "Threaded_code", they make the distinction of direct threaded,
> indirect threaded, and subroutine threaded.
>
> Even more, it seems that:
>
> "Ertl's most recent tests show that direct threading is the
> fastest threading model on Xeon, Opteron, and Athlon processors;
> indirect threading is the fastest threading model on Pentium M
> processors; and subroutine threading is the fastest threading
> model on Pentium 4, Pentium III, and PPC processors."
>
> It isn't so obvious how you distinguish subroutine threaded code from
> other calls to the runtime library. (Not counting explicit calls such
> as to intrinsic functions, and even though such functions might be
> expanded inline.)
>
> Fortran READ and WRITE statements, the ** operator, and complex
> division, are often implemented as subroutine calls, though with no
> explicit call syntax. Some systems will generate subroutine calls for
> all floating point operations, or even larger integer operations. And
> then there is subroutine threaded code with subroutine calls for all
> operations.
>
> So where do you draw a line between "true compiler" and "just an
> interpreter?"
>


odd...


some of my early-on compilers produced this sort of "threaded code".
at the time, I had not thought much of it...


later on I ended up writing an interpreter, and using something vaguely
similar (lots of function pointers placed into "opcode" structures) to
essentially break past a prior performance bottleneck (what I had called the
"switch limit", whereby nearly the entire running time in the interpreter
ends up going into massive switch tables).
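something like the following is what I mean by the switch-based
strategy (a minimal sketch; the toy opcode set here is invented for
illustration):

enum { OP_PUSH, OP_ADD, OP_RET };

/* classic "read an opcode, dispatch via switch, read args, execute";
   with a large opcode set, the switch dispatch itself tends to
   dominate the running time */
int run_switch(const unsigned char *pc, int *stack)
{
    int sp = 0;
    for (;;) {
        switch (*pc++) {
        case OP_PUSH:             /* push an immediate byte */
            stack[sp++] = *pc++;
            break;
        case OP_ADD:              /* pop two values, push their sum */
            sp--;
            stack[sp - 1] += stack[sp];
            break;
        case OP_RET:              /* return the top of the stack */
            return stack[sp - 1];
        }
    }
}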


oddly, a linked list of structs, each containing a function pointer
(and typically pre-decoded arguments), can be somewhat faster than the
"read an opcode word, dispatch via switch, read-in args, execute"
strategy.


the result is that an interpreter which pre-decodes the bytecode into
such a linked-list structure can quite possibly run somewhat faster than
one which does not (although I have not benchmarked these strategies in
isolation, so I am not certain whether this is "necessarily" the case,
or just how it turned out in the cases I tested).
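for comparison, a sketch of the pre-decoded form of the same toy
instruction set (again, the names are invented): each node holds a
function pointer plus its pre-decoded argument, and dispatch is just
one indirect call per node, with no central switch:

struct vm_op {
    struct vm_op *(*fn)(struct vm_op *, int *, int *);
    int arg;                   /* pre-decoded immediate, if any */
    struct vm_op *next;        /* next instruction in the list */
};

static struct vm_op *op_push(struct vm_op *op, int *stack, int *sp)
{
    stack[(*sp)++] = op->arg;
    return op->next;
}

static struct vm_op *op_add(struct vm_op *op, int *stack, int *sp)
{
    (*sp)--;
    stack[*sp - 1] += stack[*sp];
    return op->next;
}

static struct vm_op *op_ret(struct vm_op *op, int *stack, int *sp)
{
    (void)op; (void)stack; (void)sp;
    return NULL;               /* NULL terminates the run */
}

int run_threaded(struct vm_op *op, int *stack)
{
    int sp = 0;
    while (op)                 /* one indirect call per instruction */
        op = op->fn(op, stack, &sp);
    return stack[sp - 1];
}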




now, as for me, I see a number of levels between compilers and
interpreters:
a pure interpreter, which can be implemented entirely in C and doesn't go
anywhere near machine code;
an "impure interpreter", which may use some assembler, and possibly limited
run-time code generation, but otherwise runs the code in a C based
interpreter (in my case, these "impure" parts are mostly related to things
like glueing together native and interpreted code);
an "impure compiler", which would generally be what is termed here threaded
code;
a "pure compiler", which produces (almost entirely) self-contained code.




the reason I say "almost entirely" above is because, as noted, there are
very few compilers which don't generate (at least some) hidden API calls in
some cases.


one may think, for example, "well, I am using C, there are no hidden
API calls", but even typical C compilers (such as GCC) may generate
hidden internal calls under a variety of conditions (for example,
long-long division or modulo on 32-bit x86, and IIRC variable-length
arrays and some other misc operations; although, OTOH, many apparent
API calls are actually built right into the compiler).
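for example, something as innocent-looking as this will, when compiled
by GCC for 32-bit x86, turn into a hidden call to "__divdi3" in libgcc
(there being no single x86 instruction for a full 64-by-64 bit divide):

long long div64(long long a, long long b)
{
    /* plain C on the surface; with "gcc -m32" this division is
       lowered to a call to __divdi3 rather than to inline code */
    return a / b;
}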




my C compiler generates a lot of hidden API calls as well (typically
for extensions):
most operations involving 128-bit integers or floats (likewise "long
double", since my compiler supports long double but doesn't typically
use the FPU directly; also float16, which mostly comes down to
load/store operations);
many operations on geometric vectors or complex numbers (the "_Complex"
type);
pretty much anything involving the built-in dynamic types... (not
typically used by default in C due to semantic complexities).
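for illustration (this is not my compiler's actual runtime, and the
names here are invented), the sort of helper that such hidden calls
resolve to might look like:

typedef struct { unsigned long long lo, hi; } u128;

/* 128-bit add built from two 64-bit halves; the comparison
   recovers the carry out of the low word */
u128 u128_add(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);
    return r;
}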


...




my framework actually has a system I call "XMeta", which is a slight
twist on API calls:
the name of the called function is actually an encoded request (handled
via a special name-mangling scheme), which typically causes code hooked
up to the linker to produce code to fulfill the request.
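to give a rough idea (purely hypothetical; the actual mangling scheme
is not shown here, so the "__xm_..." names below are made up):

struct Point { int x, y; };

/* stand-in for what the linker hook would generate to satisfy the
   request "load the int field 'x' of a Point" */
int __xm_load_i_Point_x(void *obj)
{
    return ((struct Point *)obj)->x;
}

int get_x(void *p)
{
    /* to the C compiler this is just an external call; at link
       time the hook recognizes the encoded name and could, ideally,
       satisfy it with a single "mov" instead */
    return __xm_load_i_Point_x(p);
}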


for example, my object system is set up to be accessed this way
(theoretically; it is not exactly well tested), whereby the requests are
forwarded to code in the object system, which produces the code to
access the objects (as-is, IIRC, it actually just uses the "cheap trick"
of redirecting the calls to the C-level ABI for the object system, so at
the moment this interface is less efficient than generating direct C ABI
calls, since it involves added internal saving/restoring of registers,
whereas a C ABI call would only involve spilling whatever is held in
scratch registers).


the theory, though, is that this could "short-circuit" some of the logic
in the object system, potentially reducing simple operations, such as a
fixed-field load or store, to maybe a single "mov" instruction or a
"lea; mov" pair or similar, but at the moment this is not done. (note:
all of this is mostly intended for Java or C# or similar; it is
currently assumed that C code will just use the API to access the OO
facilities.)




but, there is a bit of merit to this:
in many places in the code generator, performing an explicit function
call is awkward (typically in, say, something like composing the
arguments for another function call).


since most of the "XMeta" handlers behave more like machine instructions
(typically only changing registers they are specifically told to change,
...), calling them is a lot cheaper, as one only has to make sure that
the stack pointer is pointing somewhere "sensible".


something loosely similar, but typically used within the main codegen,
is to preserve all registers but pass/return values on the stack.
usually, in the compiler, this is wrapped in a wrapper function which
stuffs the argument registers onto the stack and loads the return value
from the stack; XMeta instead handles both ends, since it is assumed
that whoever fills the request will generate code specifically for the
requested operation, including fetching the arguments from the indicated
registers and putting the return value into the requested register.


note that a previously considered feature was to also allow passing an
argument to requests which indicates (in simplified form) the state of
the register allocator at that point (essentially a bit-mask of which
registers may need to be preserved, or which registers are free to use
without preserving them). however, thus far this feature is not well
supported (a lot of code doesn't pass this argument, and most handlers
ignore it).


(side note: XMeta uses a mix of fixed-position, named, and variable
arguments, sort of like Common Lisp or similar).




a few times, I have been tempted to add a pseudo-opcode for XMeta calls
directly to the assembler, which would simplify using it (no need to
manually compose the requests). but, thus far I have held off, as it
goes against some of my personal policies (it is a fairly "blatant"
extension). I guess I could add it "if needed", but not necessarily use
it in most cases (I like to at least pretend that my assembler is
source-compatible with NASM or YASM or similar).

