Newsgroups: comp.compilers
From: Frank Deutschmann <fhd@Panix.Com>
Keywords: OOP, question, optimize
Organization: Compilers Central
Date: Fri, 26 Feb 1993 05:46:55 GMT
I recently posted the following article to comp.object, but I think it
would be interesting to see some comp.compilers discussion on the subject.
The thread began with someone posting an example of two objects with
mutually tail-recursive methods; the interesting aspect is whether
tail-recursion optimization can be performed in a message-passing
environment.
cjmchale@dsg.cs.tcd.ie (Ciaran McHale) writes:
> Instead of relying on your system to optimise mutual tail recursion,
> you might be able to rewrite your code to do:
> [...example as while loop...]
Yes, recursive code can always be rewritten iteratively, but some problems
are better expressed (read: more understandable) recursively.
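For concreteness, a minimal sketch of the kind of rewrite in question (the
even/odd example and the names are purely illustrative, not the code from
the original thread):

    #include <iostream>

    bool is_odd(unsigned n);                      // forward declaration

    bool is_even(unsigned n) {
        return n == 0 ? true : is_odd(n - 1);     // tail call to is_odd
    }

    bool is_odd(unsigned n) {
        return n == 0 ? false : is_even(n - 1);   // tail call back to is_even
    }

    // The same computation rewritten as a loop; it needs no tail-call
    // optimization to run in constant stack space.
    bool is_even_loop(unsigned n) {
        bool even = true;
        while (n > 0) {
            even = !even;
            --n;
        }
        return even;
    }

    int main() {
        std::cout << is_even(10) << ' ' << is_even_loop(10) << '\n';   // 1 1
    }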
> I think you are jumping the gun a bit. Firstly, (unless my site didn't
> receive some articles) nobody here has argued that such mutual
> tail-recursion is needed often in practice.
Tail recursion is more common than you might think -- in C, any function
call that is the last thing a function does before returning can be
compiled into a jump, with potentially large speedups. Just take a look at
some code to see how often function invocations immediately precede
returns (and note also that some compilers may be able to re-order source
statements so that the call can be optimised into a jump). As you mention
in your post, delegation in particular leads to tail-recursive functions.
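For example (a hypothetical fragment; the function names are mine), the
call to g below is the last thing f does, so a compiler is free to compile
it as a jump that reuses f's stack frame:

    #include <stdio.h>

    static long g(long n, long acc) {
        if (n == 0)
            return acc;
        return g(n - 1, acc + n);   /* self tail call: may become a jump */
    }

    static long f(long n) {
        return g(n, 0);             /* call immediately before the return */
    }

    int main(void) {
        /* runs in constant stack space only if the compiler actually
           performs the optimization (typically at higher -O levels) */
        printf("%ld\n", f(10000));
        return 0;
    }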
scottb@saturn.jpl.nasa.gov (Scott Burleigh) writes:
>I think the problem that this posting brought up is more fundamental than
>compiler optimization. It is a problem in the semantics of object-oriented
>language design. When we say that a given line of code indicates that "a
>message is sent to object B" we are kidding ourselves; what is really
>happening is that a function associated with object B is being called.
Yes, this is exactly my point with the RPC and migrating objects issue.
What we are seeing is that method invocation via function call is not the
same as method invocation via message passing.
C++ shows this quite clearly: we get radically different execution
behavior if (using standard C++ features) we have two objects with
mutually tail-recursive methods located in the same address space, versus
the same two objects located in different address spaces.
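A minimal sketch of the local case (the class and method names are mine,
not from the original comp.object example):

    #include <iostream>

    class B;                          // forward declaration

    class A {
    public:
        void ping(B& b, long n);
    };

    class B {
    public:
        void pong(A& a, long n);
    };

    void A::ping(B& b, long n) {
        if (n == 0) return;
        b.pong(*this, n - 1);         // tail call: last action before return
    }

    void B::pong(A& a, long n) {
        if (n == 0) return;
        a.ping(*this, n - 1);         // tail call back into A
    }

    int main() {
        A a;
        B b;
        a.ping(b, 1000000);           // constant stack only if the compiler
                                      // turns the mutual tail calls into jumps;
                                      // otherwise this recurses a million deep
        std::cout << "done\n";
    }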
In the same address space case, the compiler optimizes out the function
calls, and the program runs in constant stack space. But in the remote
object case, the objects do method invocation via (RPC) message passing,
and the compiler's tail-recursion optimizations do not help (the call to
the RPC function is probably a jump, but that doesn't help in the large
sense). If we don't have threads, we deadlock immediately on the recursion
from object 2 back to object 1 (as object 1's process is blocked waiting
for the RPC to return). (This is, in effect, a crude default case of the
mutex preventing mutual recursion that Ciaran's post mentions.) If we do have
threads (and on method invocation/message receipt we create a new thread),
we create threads and leave them sleeping until we run out of resources
(effectively, threads have become a substitute for stack frames, but we
can't optimize this case to run in constant space).
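The thread-per-invocation case can be sketched like this (a toy model, not
real RPC: each "message" is handled on a new thread and the sender sleeps
until the reply comes back, so each level of the recursion ties up one
live thread; compile with -pthread on POSIX systems):

    #include <iostream>
    #include <thread>

    static void ping(long n);              // forward declarations
    static void pong(long n);

    static void ping(long n) {
        if (n == 0) return;
        std::thread t([n] { pong(n - 1); });   // "send" the message
        t.join();                              // sleep until the "reply"
    }

    static void pong(long n) {
        if (n == 0) return;
        std::thread t([n] { ping(n - 1); });
        t.join();
    }

    int main() {
        ping(1000);        // needs ~1000 simultaneously live threads; a large
                           // enough n exhausts thread resources, not stack
        std::cout << "done\n";
    }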
The cause of the difference in behavior is clearly the dichotomy between
pure function calls and message passing/handling. However, after thinking
about this in more depth, I see no way to make the RPC behave like a true
function call.
>I suggest that the solution to the problem is to implement genuine message
>exchange in place of the ersatz. When we want object A to send a message
>to object B, it should SEND A MESSAGE to object B even if by some
>mischance objects A and B happen to reside in the same address space; A
>should resist the temptation to muck with B directly, show it some
>respect, and simply route a message to it through some disinterested third
>party.
True, sending messages in all cases is certainly cleaner and more
consistent (it behaves the same whether the object is local or remote),
but, after further investigation, I cannot see a clear way to do the
equivalent optimization for the message-passing system. (In other words,
I'm at a loss to figure out how to get a message-passing system, like
Smalltalk, to execute tail-recursive code in constant space/resources. If
Smalltalk really is capable of this optimization, I would be very
interested to hear the details!)
But note that in a message-passing system, you are subject to the RPC case
whether your objects are local or not (i.e., with threads you eventually
run out of resources; without threads you deadlock). In neither case does
Stephan get the desired behavior.
In any case, to do the optimization, I believe it is clear that you need a
global view of the code -- this optimization cannot be implemented in a
message-passing switch/layer -- as you need to know whether the message
send is the final job prior to the return/reply.
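To illustrate the property the optimizer would have to see (a hypothetical
fragment, with names of my own): in forward() the reply to the caller is
exactly the reply from the send, so the send is in tail position; in
adjust() the result is used afterwards, so the caller's activation (or
thread) must stay alive:

    #include <iostream>

    struct Server {
        long lookup(long key) { return key * 2; }  // stands in for a remote call
    };

    long forward(Server& s, long key) {
        return s.lookup(key);        // tail position: the reply could be routed
                                     // straight back to the original caller
    }

    long adjust(Server& s, long key) {
        return s.lookup(key) + 1;    // not tail position: work remains after
                                     // the send, so the frame must survive
    }

    int main() {
        Server s;
        std::cout << forward(s, 21) << ' ' << adjust(s, 21) << '\n';   // 42 43
    }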
Just to carry this discussion a little further: this says to me that OO
environments really need to be based upon message passing; using function
invocation to approximate messages is not good enough, as it leads to
different behavior in different circumstances. While it is true that
mutually tail-recursive functions are not the most common thing, they do
exist, and, as systems get larger and we head toward more distributed
systems with object-migration support, this issue is likely to become more
serious. I would be extremely interested to learn whether it is possible
to handle tail recursion in constant space/resources in a message-passing
environment.
-frank
--
EMail: fhd@panix.com, Phone: 1 - 212 / 765 - 2050