Re: Representing Closures in C

"Tommy Thorn" <>
25 Jul 2006 00:39:32 -0400

          From comp.compilers

Related articles
Representing Closures in C (Johan Tibell) (2006-07-21)
Re: Representing Closures in C (2006-07-21)
Re: Representing Closures in C (Tommy Thorn) (2006-07-21)
Re: Representing Closures in C (Johan Tibell) (2006-07-22)
Re: Representing Closures in C (SM Ryan) (2006-07-23)
Re: Representing Closures in C (2006-07-23)
Re: Representing Closures in C (Tommy Thorn) (2006-07-25)
Re: Representing Closures in C (Tommy Thorn) (2006-07-25)
Re: Representing Closures in C (2006-07-25)

From: "Tommy Thorn" <>
Newsgroups: comp.compilers
Date: 25 Jul 2006 00:39:32 -0400
References: 06-07-051 06-07-058
Keywords: functional
Posted-Date: 25 Jul 2006 00:39:32 EDT

Johan Tibell wrote:
> I threw together a crude and probably incorrect interpreter tonight.
> This is probably not very efficient (if you know of a better way
> please tell me) but it also allows expressions to share the "tail" of
> the environment.

Well, that really depends on how much work you're willing to do.
Without some degree of "compilation" variants of this or explicit
rewriting (as I suggested) are about the only solutions.

Beware, though, that a naive dictionary approach suffers from a space
leak! (Imagine, for example, a pair of mutually tail-recursive
functions that keep growing the argument list.) You need to
[occasionally] prune shadowed duplicates out of the dictionary to
avoid this.
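To make the leak and the fix concrete, here is a minimal sketch of a
linked-list environment with a pruning pass. The `Env` layout, the use
of `int` as a stand-in for a real value type, and the helper names are
all assumptions for illustration, not code from the interpreter under
discussion:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical environment: a linked list of (name, value) bindings,
   innermost binding first. */
typedef struct Env {
    const char *name;
    int value;               /* stand-in for a real Value type */
    struct Env *next;
} Env;

Env *env_bind(Env *env, const char *name, int value) {
    Env *e = malloc(sizeof *e);
    e->name = name;
    e->value = value;
    e->next = env;
    return e;
}

int env_lookup(Env *env, const char *name) {
    for (; env; env = env->next)
        if (strcmp(env->name, name) == 0)
            return env->value;
    abort();                 /* unbound variable */
}

/* Keep only the innermost binding of each name.  Without something
   like this, tail-recursive calls that rebind the same parameters
   grow the list without bound -- the space leak.  After pruning each
   name occurs once, so list order no longer affects lookup. */
Env *env_prune(Env *env) {
    Env *kept = NULL;
    for (Env *e = env; e; e = e->next) {
        int seen = 0;
        for (Env *k = kept; k; k = k->next)
            if (strcmp(k->name, e->name) == 0) { seen = 1; break; }
        if (!seen)
            kept = env_bind(kept, e->name, e->value);
    }
    return kept;
}
```

A real interpreter would run the prune at tail calls (or let a garbage
collector handle it), but the quadratic loop above shows the idea.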

> It might be
> better to store top level functions (which I have not implemented yet)
> in a hashtable so they can be looked up more efficiently.

IMO that would be like supercharging a Geo Metro (or a Trabant).

You're better off spending the energy on a fundamentally better model.
Check out Xavier Leroy's ZINC model. (You may also want to know the
classic texts, such as John C. Reynolds' "Definitional interpreters
for higher-order programming languages", but ZINC is a more efficient
model.)

> The source code is available here, it is quite short.

Your APP fails to verify that it's actually applying a function.
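One common way to add that check is to tag every runtime value and have
APP refuse anything that isn't a closure. The tag names, the `Value`
layout, and the constructors below are assumptions sketched for
illustration, not the original interpreter's representation:

```c
#include <assert.h>
#include <stdlib.h>

typedef enum { V_INT, V_CLOSURE } Tag;

typedef struct Value {
    Tag tag;
    union {
        int n;
        struct {
            struct Value *(*code)(struct Value *env, struct Value *arg);
            struct Value *env;
        } clo;
    } u;
} Value;

Value *mk_int(int n) {
    Value *v = malloc(sizeof *v);
    v->tag = V_INT;
    v->u.n = n;
    return v;
}

static Value *id_code(Value *env, Value *arg) {
    (void)env;               /* identity ignores its environment */
    return arg;
}

Value *mk_id(void) {         /* the closure for \x.x */
    Value *v = malloc(sizeof *v);
    v->tag = V_CLOSURE;
    v->u.clo.code = id_code;
    v->u.clo.env = NULL;
    return v;
}

/* APP with the missing check: applying a non-function is a type
   error, not a crash somewhere deep in the evaluator. */
Value *apply(Value *f, Value *arg) {
    if (f->tag != V_CLOSURE)
        abort();             /* "applied a non-function" */
    return f->u.clo.code(f->u.clo.env, arg);
}
```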

> I've read parts of Simon Peyton Jones's book but I'm unsure about how
> much overlap there is between writing an interpreter for non-strict and
> strict languages. In particular, do I need graph reduction?

There is an *IMMENSE* overlap between the implementations of strict
and lazy functional languages, though highly optimized implementations
naturally have different trade-offs. This was not obvious at all in
the beginning, with the former coming from the environment model and
the latter from graph reduction/supercombinators. At the end of the
day, however, the best implementations used the exact same ideas and
concepts. The difference is that the "reduction" in "graph reduction"
is the updating of the closure with its result, which is not done for
strict languages.
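That update step can be sketched in a few lines. The `Thunk` layout
and helper names below are assumptions for illustration: a suspended
computation is run at most once, and the closure record is then
overwritten ("updated") with the result. A strict implementation would
build the same closure record but never perform the overwrite, because
the argument was already evaluated.

```c
#include <assert.h>
#include <stdlib.h>

/* A hypothetical thunk: a suspended computation plus a cache slot. */
typedef struct Thunk {
    int evaluated;                  /* has the update happened yet? */
    int (*compute)(void *payload);  /* the suspended computation */
    void *payload;
    int value;                      /* result, valid once evaluated */
} Thunk;

static int force_count = 0;         /* instrumentation for the demo */

int force(Thunk *t) {
    if (!t->evaluated) {
        t->value = t->compute(t->payload);  /* run at most once */
        t->evaluated = 1;                   /* the "update" step */
    }
    return t->value;
}

static int expensive(void *p) {
    force_count++;
    return *(int *)p * 2;
}

Thunk *suspend(int *arg) {
    Thunk *t = malloc(sizeof *t);
    t->evaluated = 0;
    t->compute = expensive;
    t->payload = arg;
    return t;
}
```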

IMO, Peyton Jones' books are outstandingly readable, and appropriate
reading even if you're ultimately more interested in strict languages.

> Interesting approach, I've never actually thought about performing
> beta-substitution in that way.

It is actually the most obvious implementation when you're coming from
a graph view. It's inefficient, sure, but not as bad as you might
think if your allocation is fast (as in incrementing a pointer).
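"As in incrementing a pointer" means a bump allocator: grab memory by
advancing an index into a preallocated region. The fixed-size heap and
helper name below are assumptions for a minimal sketch; a real system
would back this with a garbage collector rather than returning NULL on
exhaustion:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical bump allocator over a fixed region.  Allocation is a
   round-up, a bounds check, and a pointer increment -- cheap enough
   that copying subterms during substitution is tolerable. */
static unsigned char heap[1 << 16];
static size_t heap_top = 0;

void *bump_alloc(size_t n) {
    n = (n + 7) & ~(size_t)7;          /* keep 8-byte alignment */
    if (heap_top + n > sizeof heap)
        return NULL;                   /* a real system would GC here */
    void *p = heap + heap_top;
    heap_top += n;
    return p;
}
```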

> > (*) Be careful to handle names correctly, eg. ((\x.\x.e1) e2) = \x.e1,
> > not \x.(e1[e2/x])
> I think my interpreter handles this correctly. Is this a problem with
> both strict and non-strict interpreters?

Yes (this problem is known as "name capture").
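The shadowing half of the problem can be shown with a tiny
substitution function that stops at any lambda rebinding the variable,
so ((\x.\x.e1) e2) yields \x.e1. The `Term` representation and helper
names are assumptions for illustration; note this sketch handles only
shadowing, and full capture avoidance would also need alpha-renaming
of binders whose names occur free in the substituted term:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef enum { T_VAR, T_LAM, T_APP } Kind;

typedef struct Term {
    Kind kind;
    const char *name;        /* T_VAR: the variable; T_LAM: the binder */
    struct Term *a, *b;      /* T_LAM: body in a; T_APP: fn in a, arg in b */
} Term;

Term *mk(Kind k, const char *n, Term *a, Term *b) {
    Term *t = malloc(sizeof *t);
    t->kind = k;
    t->name = n;
    t->a = a;
    t->b = b;
    return t;
}

/* subst(t, x, s) = t[s/x], but a lambda that rebinds x shadows it,
   so substitution must not descend into that body. */
Term *subst(Term *t, const char *x, Term *s) {
    switch (t->kind) {
    case T_VAR:
        return strcmp(t->name, x) == 0 ? s : t;
    case T_LAM:
        if (strcmp(t->name, x) == 0)
            return t;                    /* x is shadowed: stop here */
        return mk(T_LAM, t->name, subst(t->a, x, s), NULL);
    case T_APP:
        return mk(T_APP, NULL, subst(t->a, x, s), subst(t->b, x, s));
    }
    return t;
}
```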

