Date: Thu, 5 Feb 2009 17:01:41 +1000
References: 09-01-055 09-02-001 09-02-007 09-02-010
Posted-Date: 07 Feb 2009 09:40:58 EST
"George Neuner" <firstname.lastname@example.org> wrote in message
> On Mon, 02 Feb 2009 14:05:48 +0100, Michael Schuerig
> <email@example.com> wrote:
>>George Neuner wrote:
>>> Most texts cover basic GC and creating pointer/reference maps for your
>>> structured types. That works well provided the maps are simple and
>>> fast to decode (e.g., byte or word maps rather than bits). A slightly
>>> faster, but more complex, method is to generate a customized scanning
>>> function for each type that knows where to find embedded references
>>> ... not all texts mention this method.
>>Could you be a bit more specific about the texts, please? AFAIR, neither
>>the Dragon Book, nor Jones/Lins, "Garbage Collection", nor
>>Scott, "Programming Language Pragmatics", covers this.
> You need to re-read chapters 9 and 10 in Jones/Lins. *Carefully*.
> Jones/Lins is the bible of GC. Despite its age, none of the
> information in the book is out of date. Very little has changed in GC
> technique since it was written - some new implementations, a few
> optimizations, but no real new ideas.
> Checking my shelf ... I find:
> Appel's "Modern Compiler Implementation" has a decent discussion of
> basic GC algorithms and includes pointer finding, pointer maps and
> their use in the heap, and for stack frames and registers.
> Implementation is left as an exercise.
> "Topics in Advanced Language Implementation", a collection edited by
> Peter Lee. This has Detlefs paper which is covered in chapter 10 of
> Jones/Lins. Since you have Jones/Lins this one is superfluous wrt GC
> but it has other stuff you might like to read.
> I don't have the latest [purple] dragon book, but given how recent it
> is and its stated emphasis on compiling for more advanced languages, I
> would be very surprised if there wasn't a treatment of pointer finding
> and maps. Perhaps not an implementation, but certainly a discussion.
> As for the rest, "cover" was probably too strong a description. Most
> modern undergrad texts detail the need for pointer finding when using
> GC and mention maps as a method, but they aren't teaching GC and they
> figure anyone who cares will check references.
> I haven't yet seen any advanced compiler books that deal explicitly
> with GC issues. All the books I am familiar with are concerned with
> code optimization and pay very little attention to how the program
> interacts with its runtime environment.
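as an aside, the pointer-map vs. generated-scan-function trade-off described
above can be sketched roughly as below. this is a hypothetical illustration in
C (the struct layout, the word-offset map format, and all the names are made
up here, not taken from Jones/Lins or any of the books mentioned), and it
assumes pointers and longs are the same size:

```c
#include <assert.h>

typedef struct Obj Obj;  /* opaque heap object */

/* hypothetical object layout: two references and one unboxed field */
struct Pair {
    Obj *head;   /* word 0: reference */
    long tag;    /* word 1: unboxed, not scanned */
    Obj *tail;   /* word 2: reference */
};

/* Method 1: a word-offset map listing which words of the object hold
   references, terminated by -1; the collector decodes it in a loop. */
static const int pair_ref_map[] = { 0, 2, -1 };

static void scan_with_map(void *obj, const int *map,
                          void (*visit)(Obj **slot)) {
    for (int i = 0; map[i] >= 0; i++)
        visit((Obj **)obj + map[i]);
}

/* Method 2: a scan function specialized to the type (a compiler would
   generate this per type); no map decoding happens at run time. */
static void scan_pair(struct Pair *p, void (*visit)(Obj **slot)) {
    visit(&p->head);
    visit(&p->tail);
}
```

the collector would find either the map or the scan function through the
object's type descriptor; the generated function avoids the decode loop
entirely, at the cost of extra emitted code per type, which is the
speed/complexity trade-off mentioned above.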
this lack of runtime coverage is probably because, at this point in time,
people focus far more on implementing good old static compilers than on
implementing higher-end VMs.
so, a book on VMs might cover GC, but then assume that the developer is just
writing an interpreter anyways...
and a book on compilers may focus heavily on code generation and
optimization, but very little on what would make for a good VM (and/or fail
to take into account that a few of these nifty optimizations may not mix
ideally with a VM-based design, ...).
but, then again, it is not like I am any real expert on books,
not really having many, and having learned most of my stuff on my
own (though in the past I had read a few I could find in the form of PDFs).
one may need to figure out how to mix and match things, and maybe do some
experimenting.
(actually, implementing a VM is a near-constant battle between trying to
find ways to make things fast, and implementing features and overall designs
which could threaten to make everything terribly slow...).
and, in this quest, intuition is not always the best guide, as often the
approach one would "feel" is best is very likely to make everything
slow, and one may have to navigate a twisting maze of convoluted
thinking, ending up with something a good deal different from what could
be easily imagined up front...
and there is also another good ideal to strive for:
Keep It Simple...
often a simple design scales much better in the long term than a more
complex one, and a more general design is usually far more useful than a
highly specialized one. a design should be relatively simple to
implement, while also keeping watch on practicality and efficiency.
breaking down the problem into small and independent pieces also often helps
(although, one needs to be careful, as dividing a problem in the wrong way
can create far more pain than it is worth...).
so, often, we want a larger number of simple pieces which only do one thing,
but do it well. this is in contrast to the creation of large centralized "do
everything" systems (which may often be far more of a mess than a general,
modular design would be).
so, yes, a rule of thumb: if you start designing something, and start
getting the feeling that this will be the one-true-component which will
unify everything and solve every problem: Stop, step back, and try to
re-examine the problem while keeping oneself free of such notions.
and, if one finds oneself trying to design in all these features which
have little real relation to the core purpose of the component: Stop, step
back, and consider whether the problem may be in need of being partitioned.
IME, centralized designs rarely turn out well, and keep something else in
mind:
almost never has "every problem" been solved by big complicated many-purpose
designs / "do everything machines";
rather, what things come to dominate, to unify nearly everything in their
way, are typically examples of simplicity and minimalism.
they dominate in number, not in their number of designed-in use-cases...
(and a larger system can be viewed more as a large collection of pieces than
as a whole...).
actually, it may well help when designing something, to largely ignore the
details of in how many ways it may be used (or of its "significance"), and
instead focus on the design in relative isolation (focusing primarily on its
internal functioning and external interface, trying to figure out how to
keep it small, simple, and efficient).
AKA: for some things, it is better to see the trees for the forest...
but, alas, there are no real "good" simple definitions of "good design", me
thinking design is a sort of a black art, which is probably why people
almost inevitably botch it up in one way or another, leaving it to future
generations (or revisions) to build on what works, and weed out what
doesn't.
and yet we so often hear people whining and complaining that history
didn't go down the right path (often for weeding out the same centralized
designs which history has largely concluded are unworkable...).