Re: RFC: project directions...

"BGB / cr88192" <cr88192@hotmail.com>
Sat, 3 Oct 2009 13:01:26 -0700

          From comp.compilers

Related articles
RFC: project directions... cr88192@hotmail.com (BGB / cr88192) (2009-09-14)
Re: RFC: project directions... jgd@cix.compulink.co.uk (2009-09-19)
Re: RFC: project directions... cr88192@hotmail.com (BGB / cr88192) (2009-09-21)
Re: RFC: project directions... jgd@cix.compulink.co.uk (2009-09-26)
Re: RFC: project directions... cr88192@hotmail.com (BGB / cr88192) (2009-09-26)
Re: RFC: project directions... jgd@cix.compulink.co.uk (2009-10-03)
Re: RFC: project directions... cr88192@hotmail.com (BGB / cr88192) (2009-10-03)
| List of all articles for this month |
From: "BGB / cr88192" <cr88192@hotmail.com>
Newsgroups: comp.compilers
Date: Sat, 3 Oct 2009 13:01:26 -0700
Organization: albasani.net
References: 09-09-078 09-10-001
Keywords: code, interpreter
Posted-Date: 04 Oct 2009 12:33:27 EDT

<jgd@cix.compulink.co.uk> wrote in message news:09-10-001@comp.compilers...
> cr88192@hotmail.com (BGB / cr88192) wrote:
>
>> > It's only one example, but my main employment is in maintaining
>> > build systems for large apps on a wide range of platforms ...
>> yep, but I am also aware that many people like doing binary-only
>> distributions of apps...
>
> The apps I'm working on are, indeed, closed-source binary-only
> distribution.
>


yes, ok.


my thinking, though, is that by reducing the amount that has to be
recompiled for each target, some time and effort can be saved. granted, I
realize this is not free either.


for example, the "core" of the app can be rebuilt for each target, but if
the app uses a lot of loadable extension libraries, ... these could be left
in a more "common" format and handled via other means.


granted, this is where in a lot of cases people would start considering the
likes of a scripting language, or something like Java...




>> the bytecode is still in the design stage.
>>
>> one of the bytes (I called it TCB), I am considering moving into
>> being a prefix to allow for a probably faster interpreter (it is a
>> whole byte, and too generic, meaning it would inflate processing logic
>> too much and eat time).
>
> Interpreter? If you want the result to run at decent speed on modern
> processors, surely some kind of translation of bytecode to native code
> is necessary? There are plenty of options about the stage at which
> this should be done, ranging from install-time to JIT, but simple
> interpretation is usually a problem these days.


JIT is an option, but an interpreter is simpler.


my current thinking is that one can have a generic interpreter for ease of
portability, and a JIT for speedup on specific known/supported
architectures.




>> I had thought of it, and the main deciding factor is that I might
>> want an interpreter. this rules out more abstract forms, such as
>> TAC+SSA, ... since these can't be efficiently interpreted AFAIK.
>
> The interpreter can be a useful bootstrap tool, but won't do for any
> kind of serious applications. See the history of Java.
>


yep, I am aware of this.


I wasn't claiming an interpreter would be the only option, only that it
would be useful to keep one easily available as an option.




>> however, attempting to support all of this in a single codegen
>> (actual x86 and a bytecode format) could get ugly, so I am left
>> wondering if forking would be better advised
>
> Split the problem up. Your actual compiler, which gets involved with the
> complexities of programming languages generates bytecode. Back-ends
> generate machine code. LLVM is a good model to look at.
>


re-engineering my compiler is something I have attempted at times, but
have generally failed at. part of the problem is that a big chunk of the
"architecture" has solidified in a less-than-optimal way (basically, the
lower end is large, complicated, and fairly painful to add new
functionality to).




for now, as for the issue of the bytecode, I came up with a much lazier
solution:
I compile code for 32-bit x86;
I then interpret the 32-bit x86.


the compiler then does not have to do anything particularly "new".
similarly, x86 also abstracts the compiler somewhat from the interpreter,
and the interpreter from the compiler (I could just use GCC or MSVC to
produce code for it as well...).




granted, x86 is far more horrid than most sorts of bytecode around
(especially when one starts trying to implement the thing, and far more of
the "horror" becomes glaringly obvious), but oh well...




my main emphasis then is on a "subset" of x86, where I include mostly
userspace features but leave out things specific to an OS (most such
features are either partly or fully omitted). similarly, unlike an
emulator, there will be no real attempt to simulate hardware.


instead, the interpreter will implement kernel-level facilities (and, as
well, my VM framework in general could be treated as if it were the
kernel).
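

as a rough sketch of the idea (the 0x80 vector, the register convention,
and all names here are made up for illustration, not my actual code):

#include <stdint.h>
#include <stdio.h>

/* hypothetical guest CPU state (illustration only) */
typedef struct { uint32_t eax, ebx; } guest_cpu;

/* when interpreted code hits an INT instruction, the VM itself plays
   "kernel": the request is serviced directly in the host, with no
   simulated interrupt controller or mode switch involved */
static void vm_service_int(guest_cpu *cpu, uint8_t vector)
{
    if (vector != 0x80) { cpu->eax = (uint32_t)-1; return; }
    switch (cpu->eax) {
    case 1:                      /* made-up "write char" service */
        putchar((int)cpu->ebx);
        cpu->eax = 0;
        break;
    default:
        cpu->eax = (uint32_t)-1; /* unknown service number */
        break;
    }
}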






internally, the interpreter uses some parts from my disassembler, and so
essentially "decodes" the x86 into its own internal format (currently
struct-based, and this process is likely to be SLOW...). for now, I am
interpreting this directly, but I have ideas for how this format could be
"cooked" some and converted into an internal bytecode, or potentially
JITted; currently, though, I am focusing more on making the thing work
than on making it fast.
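

to give the rough flavor of the "decode into a struct, then interpret the
struct" approach, here is a toy, self-contained sketch (all names are
hypothetical, and the opcode coverage is trivially small compared to the
real thing):

#include <stdint.h>
#include <stdio.h>

typedef enum { XOP_NOP, XOP_INC_R32, XOP_HLT, XOP_BAD } x86_op;

typedef struct {
    x86_op op;       /* decoded operation */
    int    reg;      /* register index for INC r32 */
    int    nbytes;   /* decoded instruction length */
} x86_insn;

typedef struct {
    uint32_t regs[8];  /* EAX..EDI */
    uint32_t eip;
    int      halted;
} x86_cpu;

/* "decode": classify the raw bytes into the struct-based form */
static x86_insn decode(const uint8_t *p)
{
    x86_insn in = { XOP_BAD, 0, 1 };
    if (*p == 0x90)                    in.op = XOP_NOP;
    else if (*p >= 0x40 && *p <= 0x47) { in.op = XOP_INC_R32; in.reg = *p - 0x40; }
    else if (*p == 0xF4)               in.op = XOP_HLT;
    return in;
}

/* "interpret": dispatch on the decoded struct */
static void step(x86_cpu *cpu, const uint8_t *mem)
{
    x86_insn in = decode(mem + cpu->eip);
    cpu->eip += in.nbytes;
    switch (in.op) {
    case XOP_NOP:     break;
    case XOP_INC_R32: cpu->regs[in.reg]++; break; /* flags ignored here */
    case XOP_HLT:     cpu->halted = 1; break;
    default:          cpu->halted = 1; break;     /* undecodable byte */
    }
}

int main(void)
{
    uint8_t mem[] = { 0x40, 0x40, 0x90, 0xF4 };   /* inc eax; inc eax; nop; hlt */
    x86_cpu cpu = { {0}, 0, 0 };
    while (!cpu.halted)
        step(&cpu, mem);
    printf("eax = %u\n", cpu.regs[0]);            /* prints "eax = 2" */
    return 0;
}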


note: I initially started with a more "direct" strategy, namely nested
switch statements, but came to the opinion that, although likely faster, it
would be unreasonably painful/inflexible (it was turning rather horrible
rather quickly). as-is, there is a vague similarity between the interpreter
internals and my compiler internals (I suspect I am operating at a
"similar" level of abstraction).




currently, I have a 486-like subset of the ISA (although it includes many
newer 64-bit features as well and "could" potentially simulate long mode;
at the same time, 16-bit facilities are largely incomplete, so RM / PM16
(real mode / 16-bit protected mode) are not likely to work entirely
correctly). note that interpreted code would not be able to change CPU
modes (this much is not implemented).


I am now in the process of implementing x87 FPU functionality (almost
entirely simulated, thus far mostly via integer code for technical
reasons), and SSE/SSE2 support is also likely (eventually).
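

to illustrate what "via integer code" means here (only the general flavor,
not my actual code; this handles normal doubles only, glossing over
denormals and NaNs, and the x87's 80-bit extended format adds further
pain):

#include <stdint.h>
#include <string.h>

/* an IEEE-754 double unpacked into plain integer fields */
typedef struct { uint64_t frac; int exp; int sign; } soft_f64;

static soft_f64 f64_unpack(double d)
{
    uint64_t bits; soft_f64 v;
    memcpy(&bits, &d, sizeof bits);               /* bit-level access */
    v.sign = (int)(bits >> 63);
    v.exp  = (int)((bits >> 52) & 0x7FF) - 1023;  /* unbias exponent */
    v.frac = (bits & 0x000FFFFFFFFFFFFFULL)
           | (1ULL << 52);                        /* restore implicit 1 */
    return v;
}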


I can probably fairly safely leave out MMX, and just sort of pretend it
never existed.




as a current downside, this x86 will run in its own simulated address space,
which will add a little complexity to things like code and data sharing (I
can thunk the code, but non-mutually-addressable structures are a potential
limiting factor).


luckily, shared memory is not so difficult, and I already have some ideas
for how to do this.
I had previously designed some mechanisms for "position-independent" shared
memory (basically, memory which operates equivalently regardless of
physical address or native pointer size, although it is necessary to
standardize on a particular endianness, for which I chose little-endian).


simple description: in-heap pointers are fixed-size and position-relative
(however, a pointer can't point to itself, since this special case is used
to encode NULL).
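

in code form, the idea is roughly this (a minimal sketch with hypothetical
names; 32-bit offsets are assumed, so the heap must stay under 2GB, and
note how a zero offset, which would otherwise mean "points at itself", is
reclaimed to encode NULL):

#include <stddef.h>
#include <stdint.h>

/* a self-relative in-heap pointer: a signed offset from the address
   of the field itself, so the heap can be mapped anywhere */
typedef int32_t rptr32;

static void rptr32_set(rptr32 *field, void *target)
{
    if (target == NULL)
        *field = 0;   /* self-reference is reclaimed to encode NULL */
    else
        *field = (rptr32)((intptr_t)target - (intptr_t)field);
}

static void *rptr32_get(rptr32 *field)
{
    if (*field == 0)
        return NULL;
    return (void *)((intptr_t)field + *field);
}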


using something like this would be more or less necessary for sharing a
heap between 32- and 64-bit code (the other main option having been a
scheme like that in the EFI ByteCode, but this option is not available to
x86 machine code, and would not address disjoint address spaces).




note:
this can be used alongside my "wide-pointer" mechanism, mostly because that
mechanism is expensive (16 bytes per pointer), and being able to have
shared memory full of 32-bit pointers is preferable to 128-bit pointers.


granted, a 64-bit scheme could also be considered (as a means of
"compressing" the 128-bit space, which is probably overkill). note that
both would be based around segmented addressing (with the 64-bit case
"indexing" the segment and shaving some bits off the offset; I am
considering 16:48 or 24:40 splits).
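

for the 16:48 case, the packing itself is trivial (illustrative names):

#include <stdint.h>

/* hypothetical 16:48 segmented pointer: a 16-bit segment index and a
   48-bit offset packed into a single 64-bit value */
typedef uint64_t segptr64;

static segptr64 segptr_pack(uint16_t seg, uint64_t off)
    { return ((uint64_t)seg << 48) | (off & 0xFFFFFFFFFFFFULL); }

static uint16_t segptr_seg(segptr64 p) { return (uint16_t)(p >> 48); }
static uint64_t segptr_off(segptr64 p) { return p & 0xFFFFFFFFFFFFULL; }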


the 32-bit + shared-memory scheme is much cheaper (both space- and
performance-wise), and likewise does not require strict adherence to an API
in order to work.




well, it is all rather ugly, but it "should" hopefully work ok.




> --
> John Dallman jgd@cix.co.uk
> "C++ - the FORTRAN of the early 21st century."
> [In many applications, an interpreter is plenty fast, particularly if
> the individual operations are relatively high level. Look at perl and
> python, for example. -John]


yes, granted...


sadly, x86 is not at all high-level (as one notices while writing code to
twiddle the bits in "eflags"), and so the overhead of an interpreter for
x86 machine code is not exactly likely to be small...
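

for example, even a single 32-bit ADD drags in this sort of flags
computation (a sketch of the general idea, not my actual code; AF is
omitted for brevity):

#include <stdint.h>

#define FL_CF 0x0001u
#define FL_PF 0x0004u
#define FL_ZF 0x0040u
#define FL_SF 0x0080u
#define FL_OF 0x0800u

/* perform a 32-bit ADD and update CF/PF/ZF/SF/OF in a simulated EFLAGS */
static uint32_t add32_flags(uint32_t a, uint32_t b, uint32_t *eflags)
{
    uint32_t r = a + b;
    uint32_t fl = *eflags & ~(FL_CF | FL_PF | FL_ZF | FL_SF | FL_OF);
    uint32_t p;

    if (r < a)                               fl |= FL_CF; /* unsigned carry */
    if (r == 0)                              fl |= FL_ZF;
    if (r & 0x80000000u)                     fl |= FL_SF;
    if ((~(a ^ b) & (a ^ r)) & 0x80000000u)  fl |= FL_OF; /* signed overflow */

    p = r & 0xFF;                  /* PF: even parity of the low byte */
    p ^= p >> 4; p ^= p >> 2; p ^= p >> 1;
    if (!(p & 1))                            fl |= FL_PF;

    *eflags = fl;
    return r;
}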




my bytecode would have been somewhat higher-level than this, but would
also have been much more work to implement, and would not have some of the
"cool" possibilities implied by interpreting x86...


granted, an x86 interpreter is not exactly "easy" either...


and, as an odd bit of trivia, it all leads to another case where my
average coding rate approaches about 2 kloc (2000 lines of code) per day,
which apparently I have been pulling off since I started beating this
together...

