From: Mark Welter <email@example.com>
Date: 26 Aug 2001 00:46:39 -0400
Posted-Date: 26 Aug 2001 00:46:39 EDT
Bageshri Sathe wrote:
> I am working on a problem of partitioning the executable code for a
> limited memory embedded system (non-parallel) which has neither an OS
> nor any paging hardware. There are separate memories for code and
> data. A large function may not fit in the code memory, so overlays
> cannot be used. ...
This sort of problem used to occur 45 years ago on mainframes, 30
years ago on minis, ... and I'm sure it will strike again in the
coming decades. All of the research that I am aware of focussed on
procedure ordering within memory images. Unless you are looking for a
research project, you might consider one of the following approaches
to making a system fit within memory.
Follow John's advice. Messing with overlays has always been very
effort intensive to achieve acceptable results, but has often been the
only practicable solution. Thrashing is always a potential problem,
and excessively large program segments need to be cleaned up. Using
an execution profiler is sometimes helpful for these issues.
You could write or find an interpreter for a fairly low level
language. This will slow your system down by a factor of roughly two
to ten. If you have a bunch of code already written, you may
be able to interpret the machine/assembly language itself. If you can
manage to reserve a register or two that will not be used in the
interpreted code, some machines are quite efficient at interpreting
their own instruction set.
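To make the interpreter idea concrete, here is a minimal sketch of a
stack-machine byte-code interpreter. The opcodes and encoding are
invented for illustration; a real system would interpret whatever
low-level language (or subset of the native instruction set) fits the
code memory budget.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Invented byte-code opcodes for a tiny stack machine.
enum Op : uint8_t { PUSH, ADD, MUL, HALT };

// Run a byte-code program and return the top of stack at HALT.
int interpret(const std::vector<uint8_t>& code) {
    std::vector<int> stack;
    std::size_t pc = 0;                     // program counter
    for (;;) {
        switch (code[pc++]) {
        case PUSH:                          // next byte is an immediate
            stack.push_back(code[pc++]);
            break;
        case ADD: {
            int b = stack.back(); stack.pop_back();
            stack.back() += b;
            break;
        }
        case MUL: {
            int b = stack.back(); stack.pop_back();
            stack.back() *= b;
            break;
        }
        case HALT:
            return stack.back();
        }
    }
}
```

The space win comes from the byte code being denser than native
instructions; the two-to-ten-times slowdown is the dispatch loop above
running for every interpreted instruction.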
Another approach is to simulate virtual memory in software. I
heard of this being done in the implementation of a social sciences
statistical package at the University of Michigan in the early '70s. I
didn't work on this code, so my recollection of how it was actually
implemented may differ in some details. The idea is to break the code
up into pages and to run off-page references through a transfer
vector. Pages not present transfer to a "page fault" routine which
maps in the page. While it's not relevant to your problem, the system
I mentioned did similar things for data references. These "software
VM" systems are very target architecture dependent, so whether such a
system is practical depends on this as well as how much control you
have over code generation.
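The transfer-vector mechanism can be sketched as follows. All names
and the "load" mechanism are invented for illustration (a real system
would copy code from backing store into the resident code area); the
point is that a slot for a non-resident page initially points at a
fault stub, which maps the page in, patches the slot, and retries.

```cpp
// A one-entry "transfer vector" for software virtual memory.
// Off-page calls always go through the vector, so only the vector
// entry has to change when a page is mapped in or out.

using PageFn = void (*)();

void real_routine();    // the routine living on a swappable "page"
void fault_stub();      // stands in until that page is resident

PageFn transfer_vector[1] = { fault_stub };

int fault_count = 0;    // instrumentation for this sketch
int calls_serviced = 0;

void load_page() {
    // In a real system: copy the page's code into memory here.
    transfer_vector[0] = real_routine;   // patch the vector slot
}

void fault_stub() {
    ++fault_count;      // "page fault": the target was not resident
    load_page();
    transfer_vector[0]();                // retry through the vector
}

void real_routine() { ++calls_serviced; }

// What a compiled off-page call site looks like:
void call_routine() { transfer_vector[0](); }
```

After the first call faults and patches the slot, subsequent calls pay
only one extra indirect jump, which is why the scheme is workable when
the working set fits.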
If you are dealing with C-style C++ code, you may be able to do a
partial software VM scheme by defining a "transfer" class that has all
of your "extern" functions as virtual members. Diving in and modifying
the vtable at execution time would give you the ability to map in and
out individual routines. You'd want the routines to be position
independent code of course.
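A portable stand-in for the vtable-patching idea might look like the
sketch below: the "extern" entry points are virtual members of a
transfer class, and swapping the object behind the dispatch pointer
plays the role of patching vtable slots (editing the vtable itself at
run time is compiler-specific and not sketched here). Class and
function names are invented for illustration.

```cpp
#include <memory>
#include <utility>

// Every cross-module entry point is a virtual member of Transfer.
struct Transfer {
    virtual int compute(int x) = 0;
    virtual ~Transfer() = default;
};

// Resident holds the routines once they are "mapped in".
struct Resident : Transfer {
    int compute(int x) override { return x * 2; }  // the real routine
};

extern std::unique_ptr<Transfer> dispatch;
int map_in_count = 0;   // instrumentation for this sketch

// Swapped stands in while the routines are mapped out: it maps the
// real code in, replaces itself in the dispatch pointer, and forwards.
struct Swapped : Transfer {
    int compute(int x) override {
        ++map_in_count;                  // simulate mapping code in
        auto resident = std::make_unique<Resident>();
        int r = resident->compute(x);
        dispatch = std::move(resident);  // future calls go direct
        return r;
    }
};

std::unique_ptr<Transfer> dispatch = std::make_unique<Swapped>();

// Call sites always dispatch through the pointer:
int call_compute(int x) { return dispatch->compute(x); }
```

As the post notes, the routines being mapped in and out would need to
be position independent, since they may land at different addresses on
each mapping.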