Re: compiler back-end development?

"BGB / cr88192" <>
Tue, 14 Jul 2009 20:09:06 -0700


Newsgroups: comp.compilers
References: 09-07-013 09-07-015 09-07-020 09-07-031 09-07-036
Keywords: C, practice
Posted-Date: 15 Jul 2009 19:23:12 EDT

"Ian Rogers" <> wrote in message
> FWIW, MRP [1] is in the process of supporting x86-64 as well as
> currently support PPC32/64 and IA32. MRP is written in Java, which
> gives a somewhat cleaner code base and the other usual productivity
> gains from a managed language. It also has non-optimizing support for
> Win32 but not yet Win64. The compiler back-end is based on iburg and a
> linear scan register allocator. It currently not only hosts a Java
> platform but a binary translator (supporting ARM, x86 and PPC) with
> some early work on a Parrot VM too.

is it worth the cost?...

then again, since apparently they are their own JVM, it is maybe not as much of an issue...

(my current "implementation strategy" for Java is to compile it to native
code, and skip the JVM step, but alas this does not work for Java-ByteCode,
for which I have a partial interpreter which I never really bothered to
finish...).

as for 'iburg', I am aware of this, but have not used it personally (it is
one of those tools, along with flex/yacc/..., which I generally avoid as
this would risk creating an unnecessary external dependency...).

so, in my case, pretty much my entire codegen was written by hand.

oh well, for my compiler at least, I have it now producing "code I would be
running on Win64", but technically, I don't use the Win64 ABI internally,
and so would have to write the machinery for the appropriate thunks...

sadly, the only ABI-glue machinery I seem to have in place is for the SysV
calling convention, so I have to go about writing Win64 ABI support...
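
FWIW, the core of such a thunk is mostly a register shuffle plus reserving the Win64 shadow space. a rough sketch of generating one (a hypothetical illustration, not my actual codegen; all names here are made up, and stack arguments are ignored by capping at 4 integer args):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* first 4 integer/pointer argument registers under each convention */
static const char *sysv_iregs[]  = { "rdi", "rsi", "rdx", "rcx" };
static const char *win64_iregs[] = { "rcx", "rdx", "r8",  "r9"  };

/* Emit pseudo-assembly for a SysV->Win64 thunk taking 'nargs' (<= 4)
 * integer args; returns the number of chars written. Args are moved in
 * reverse order so that e.g. rdx (SysV arg 2) is read before being
 * overwritten as Win64 arg 1. */
int emit_sysv_to_win64_thunk(char *buf, size_t len,
                             const char *target, int nargs)
{
    int n = 0;
    for (int i = nargs - 1; i >= 0; i--)
        n += snprintf(buf + n, len - n, "mov %s, %s\n",
                      win64_iregs[i], sysv_iregs[i]);
    /* 32 bytes of Win64 shadow space, plus 8 to keep rsp 16-byte
     * aligned at the call */
    n += snprintf(buf + n, len - n,
                  "sub rsp, 40\ncall %s\nadd rsp, 40\nret\n", target);
    return n;
}
```

going the other direction (Win64->SysV) is the same shuffle in reverse, minus the shadow space.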

granted, my use of a custom calling convention is reasonable for dynamically
compiled code (my main use case), but would not be so ideal for static
compilation, as it would require the use of a custom linker.

so, "one of these days", my compiler may need to support the SysV and Win64
calling conventions internally as well (vs. via the use of linker-generated
thunks...).

I guess another relevant issue is how to allow mixing the SysV and Win64
calling conventions in the same app, since neither uses name mangling by
default...

I guess one option could be to start associating calling conventions with
file formats, and using a sort of "non-default mangling".

for example, when loading ELF64 objects on Win64, the loader will assume
that ELF64==SysV, and then rename the functions accordingly; whereas for
Linux it could do the reverse.

this would be skipped for already mangled names, since GCC and MSVC use
different name-mangling schemes...
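
a rough sketch of what such a rename-on-load could look like (hypothetical, the `__sysv_` prefix here is made up; GCC's "_Z" and MSVC's "?" prefixes are the real C++ mangling markers):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* When loading an ELF64 object on a Win64 host, assume plain C symbols
 * use the SysV convention and tag them with a marker prefix; names
 * already mangled by GCC ("_Z...") or MSVC ("?...") are left alone,
 * since the mangling scheme itself already identifies their origin. */
void tag_symbol(char *out, size_t len, const char *name)
{
    if (name[0] == '?' || strncmp(name, "_Z", 2) == 0)
        snprintf(out, len, "%s", name);          /* already mangled: keep */
    else
        snprintf(out, len, "__sysv_%s", name);   /* assume SysV, tag it */
}
```

the native loader on Linux would do the mirror-image of this, tagging Win64-convention symbols instead.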

as is, the use of calling convention is controlled via defines, and no real
extra effort is made to allow different conventions to be in use at the same
time (so, trying to load a Linux ELF64 object on Win64 will not likely have
the intended behavior...).
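
the define-based selection amounts to something like this (a sketch; the macro names are invented, not my actual ones), which is exactly why only one convention can be live at a time:

```c
/* One calling convention is baked in at build time; objects compiled
 * for the other convention cannot be mixed into the same build. */
#define CC_SYSV  1
#define CC_WIN64 2

#if defined(_WIN64)
#  define BGBCC_NATIVE_CC CC_WIN64
#else
#  define BGBCC_NATIVE_CC CC_SYSV
#endif
```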

of course, due to other issues (MSVCRT vs glibc, ...), it may well not be
worth the effort (the code would not likely work anyways...).
