|GC question firstname.lastname@example.org (Paul Rubin) (2007-07-27)|
|Re: GC question email@example.com (Ulf Wiger) (2007-07-27)|
|Re: GC question firstname.lastname@example.org (Gene) (2007-07-28)|
|Re: GC question email@example.com (Ray Dillinger) (2007-07-28)|
|Re: GC question firstname.lastname@example.org (Paul Rubin) (2007-07-30)|
|Re: GC question email@example.com (2007-08-13)|
|From:||Paul Rubin <firstname.lastname@example.org>|
|Date:||27 Jul 2007 03:34:55 -0700|
|Keywords:||GC, performance, question|
|Posted-Date:||27 Jul 2007 09:29:57 EDT|
Suppose you build a big list of cons cells, say a billion of them
(you're on a large machine). This is in a runtime with a traditional
mark-and-sweep or copying GC, no generation scavenging, region
inference, or anything like that. The collector runs every K bytes of
allocation for some fixed K. Yes, I know that's a dumb way to write an
allocator for a big-system implementation, but it's fairly common for
small ones.

It seems to me that the running time to allocate N cells is O(N**2):
the collector runs O(N) times during the allocation (once per K bytes),
and each collection traces the whole live set, which costs O(N) on
average because the list only grows.
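
To make the arithmetic concrete, here is a minimal Python sketch (my
own illustration, not code from any actual runtime) that just counts
the mark work under those assumptions: a full trace triggered at a
fixed allocation interval, over a live set that never shrinks while
the list is being built.

    def simulate(n_cells, k):
        """Count total mark work when allocating n_cells cons cells,
        with a full collection after every k allocations."""
        live = 0       # cells currently live (all of them, for one big list)
        mark_work = 0  # total cells traced across all collections
        for i in range(1, n_cells + 1):
            live += 1
            if i % k == 0:          # fixed-interval trigger
                mark_work += live   # full mark phase scans every live cell
        return mark_work

    for n in (10_000, 20_000, 40_000):
        print(n, simulate(n, k=100))
    # Doubling n roughly quadruples the mark work: quadratic growth.
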
I never realized this before. Is it a well-known phenomenon? Is the
main answer something like generation scavenging?
This relates to a real situation that came up on comp.lang.python.