Re: binary-compiler from Apple?


Related articles
binary-compiler from Apple? taihou@iss.nus.sg (1992-04-07)
Re: binary-compiler from Apple? pardo@cs.washington.edu (1992-04-08)
Re: binary-compiler from Apple? henry@zoo.toronto.edu (1992-04-09)
Re: binary-compiler from Apple? henry@zoo.toronto.edu (1992-04-17)

Newsgroups: comp.compilers
From: henry@zoo.toronto.edu (Henry Spencer)
Keywords: translator, architecture
Organization: U of Toronto Zoology
References: 92-04-030
Date: Fri, 17 Apr 1992 23:43:12 GMT

This is a follow-on to my contribution of a week ago on translating
binaries from machine X to machine Y... Hillel Markowitz (of AT&T), who
has investigated some less-favorable cases than the ones I was familiar
with, sent me private mail describing some issues I'd overlooked or
slighted. With his permission, here's a longer set of notes on
significant issues in such translation, combining our comments. (I take
full responsibility for errors in the final wording.)


The main problem is performance, with a significant secondary issue being
ill-documented interfaces.


For good performance, the machines should have the same byte order, the
"source" machine should have substantially fewer registers than the
"object" machine, the source machine should not have condition codes, and
they should share a common floating-point format. Running out of
registers, translating byte order or floating-point format, or emulating
condition codes involves major performance loss. Note that there may be
floating-point condition codes even if there are no integer condition
codes. Note also that if you translate floating-point formats, you may
lose range or precision in the translation, and things like roundoff
behavior will differ too; floating-point code will usually give slightly
different answers, and may even malfunction badly.
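
To make the condition-code point concrete, here is a rough sketch in C of
what a translator (or emulator) ends up doing for a single "subtract and
set flags" instruction when the object machine has no condition codes;
the struct and names are invented purely for illustration:

#include <stdint.h>

struct cc_state {
    int n, z, c, v;              /* negative, zero, carry, overflow */
};

/* "Subtract and set flags": the flag computation that the source
 * machine's hardware did for free must be spelled out explicitly. */
static uint32_t emulate_subs(uint32_t a, uint32_t b, struct cc_state *cc)
{
    uint32_t r = a - b;

    cc->n = (int32_t)r < 0;
    cc->z = (r == 0);
    cc->c = (a >= b);                             /* carry = no borrow */
    cc->v = (int)(((a ^ b) & (a ^ r)) >> 31);     /* signed overflow */
    return r;
}

Most instructions never have their flags looked at, which is why decent
translators try to prove that and skip the extra work; the sketch shows
the cost when they can't.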


Mismatches in memory model can be troublesome; you'd really like addresses
on the source machine to be usable as-is on the object machine. One area
where nobody has found a good solution (that I know of) is mapping branch
addresses that the program computes in complex ways; the usual brute-force
solution is a table, one entry per possible source-machine branch target
(that's one per instruction!), used to map computed branch targets into
the real targets.
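
Here is a rough sketch of that brute-force table in C; the base address,
instruction size, and names are all invented for illustration:

#include <stdint.h>
#include <stdlib.h>

#define SRC_TEXT_BASE  0x1000u
#define SRC_TEXT_SIZE  0x8000u       /* size of the source text segment */
#define SRC_INSN_SIZE  4u            /* fixed-length source instructions */

/* One entry per possible source-machine instruction address, giving the
 * corresponding address in the translated code (filled in at translate
 * time; NULL means "no instruction started here"). */
static void *branch_map[SRC_TEXT_SIZE / SRC_INSN_SIZE];

/* Called by translated code whenever the original program computed a
 * branch target at run time, e.g. a jump through a register. */
static void *map_branch(uint32_t src_addr)
{
    uint32_t index = (src_addr - SRC_TEXT_BASE) / SRC_INSN_SIZE;

    if (src_addr < SRC_TEXT_BASE
        || index >= SRC_TEXT_SIZE / SRC_INSN_SIZE
        || branch_map[index] == NULL)
        abort();                      /* branch into unmapped code */
    return branch_map[index];
}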


Remember that the "machine" which user programs run on includes not just
the instruction set, but also the memory management, system calls, and
signal/interrupt mechanism. That last, in particular, is often poorly
documented because system subroutines look after the low-level details.
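
As a purely illustrative sketch of what the system-call part of the
"machine" looks like to a translator, here is a dispatch layer that maps
invented source-system call numbers and an assumed register convention
onto host calls (emulated addresses are assumed usable as-is, per the
memory-model point above):

#include <unistd.h>
#include <errno.h>

struct emu_cpu {
    long reg[16];            /* source-machine register file */
};

#define SRC_SYS_exit   1     /* source-system call numbers (invented) */
#define SRC_SYS_read   3
#define SRC_SYS_write  4

/* Translated code traps here instead of issuing a real system call. */
static long emu_syscall(struct emu_cpu *cpu)
{
    long nr = cpu->reg[0];   /* call number in r0, args in r1..r3 */

    switch (nr) {
    case SRC_SYS_exit:
        _exit((int)cpu->reg[1]);
    case SRC_SYS_read:
        return read((int)cpu->reg[1], (void *)cpu->reg[2],
                    (size_t)cpu->reg[3]);
    case SRC_SYS_write:
        return write((int)cpu->reg[1], (void *)cpu->reg[2],
                     (size_t)cpu->reg[3]);
    default:
        return -ENOSYS;      /* unimplemented source-system call */
    }
}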


Emulating the source system's filesystem can be problematic. Even on Unix
systems, a lot of programs (or the system subroutines they call) know what
a directory looks like, and that has changed over time. If the source
system was seriously non-Unix-like, it's much worse.
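
For instance (names and layout purely illustrative), a translator for an
old Unix-like source system might have to synthesize V7-style 16-byte
directory entries from the host's readdir() for programs that insist on
read()ing directories themselves:

#include <dirent.h>
#include <string.h>
#include <stdint.h>

struct old_direct {               /* what the old binary expects */
    uint16_t d_ino;
    char     d_name[14];          /* not necessarily NUL-terminated */
};

/* Produce the next old-style entry; returns bytes "read", 0 at end. */
static int fake_dir_read(DIR *dp, struct old_direct *out)
{
    struct dirent *de = readdir(dp);

    if (de == NULL)
        return 0;
    out->d_ino = (uint16_t)de->d_ino;
    memset(out->d_name, 0, sizeof out->d_name);
    strncpy(out->d_name, de->d_name, sizeof out->d_name);
    return (int)sizeof *out;
}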


Overlays, dynamic allocation for code, shared libraries or memory, etc.
add whole new dimensions to the problem.


If you're trying to mix-and-match old and new code, replacing emulated
modules with rewritten ones, calling conventions will often be different
enough to need significant translation. On the other hand, if you *can*
mix-and-match, you may be able to deal with performance problems in
critical programs even when the emulation can't be improved further. Not
to mention the advantages of being able to convert things a piece at a
time rather than having to port or reimplement the whole mess.
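
As an illustration of the glue involved, here is a sketch of a thunk
that lets emulated code call a natively rewritten routine; the source
calling convention assumed here (arguments on the emulated stack, stack
pointer in register 15, result in register 0) and all the names are
invented:

#include <stdint.h>
#include <string.h>

struct emu_cpu {
    uint32_t reg[16];
    uint8_t *mem;                 /* base of the emulated address space */
};

/* Native replacement for an old, frequently-called routine. */
extern int native_strcmp(const char *a, const char *b);

/* The translator redirects the old routine's entry point here. */
static void thunk_strcmp(struct emu_cpu *cpu)
{
    uint32_t sp = cpu->reg[15];
    uint32_t arg0, arg1;

    memcpy(&arg0, cpu->mem + sp, 4);       /* unpack emulated arguments */
    memcpy(&arg1, cpu->mem + sp + 4, 4);
    cpu->reg[0] = (uint32_t)native_strcmp((char *)(cpu->mem + arg0),
                                          (char *)(cpu->mem + arg1));
}
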
--
Henry Spencer @ U of Toronto Zoology, henry@zoo.toronto.edu utzoo!henry