|GC question firstname.lastname@example.org (Paul Rubin) (2007-07-27)|
|Re: GC question email@example.com (Ulf Wiger) (2007-07-27)|
|Re: GC question firstname.lastname@example.org (Gene) (2007-07-28)|
|Re: GC question email@example.com (Ray Dillinger) (2007-07-28)|
|Re: GC question firstname.lastname@example.org (Paul Rubin) (2007-07-30)|
|Re: GC question email@example.com (2007-08-13)|
Date: Sat, 28 Jul 2007 02:24:40 -0000
On Jul 27, 6:34 am, Paul Rubin <phr-2...@nightsong.com> wrote:
> Suppose you build a big list of cons cells, say a billion of them
> (you're on a large machine). This is in a runtime with traditional
> marking or copying gc, no generation scavenging or region inference or
> anything like that. The collector runs every N bytes of allocation
> for some fixed N. Yes I know that's a dumb way to write an allocator
> for a big-system implementation but it's fairly common for small ones.
> It seems to me that the running time to allocate N cells is O(N**2)
> because you run the collector O(N) times during the allocation, and
> each collection costs O(N) on average.
> I never realized this before. Is it a well-known phenomenon? Is the
> main answer something like generation scavenging?
> This relates to a real situation that came up on comp.lang.python.
You're correct, of course. A dumb policy often leads to bad performance,
and not just in memory management. If instead you grow the heap by a
factor greater than one every time it fills up, the heap sizes at
successive collections form a geometric series, so the total
copying/tracing cost stays O(N).
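To make the difference concrete, here is a small sketch (not from the original
thread) that just counts tracing work under the two policies for a workload
where every allocated cell stays live, as in the big-list example. The
function names and the trigger/growth parameters are made up for the
illustration.

    # Sketch: count cells scanned by the collector under two trigger policies.
    # Assumes every allocated cell stays reachable (the "big list" case).

    def fixed_threshold_work(total_cells, trigger=100_000):
        """Collector runs after every `trigger` allocations; each collection
        traces the whole live heap."""
        work = live = since_gc = 0
        for _ in range(total_cells):
            live += 1
            since_gc += 1
            if since_gc >= trigger:
                work += live          # full trace of the live heap
                since_gc = 0
        return work                    # grows roughly as total_cells**2 / (2 * trigger)

    def growing_heap_work(total_cells, initial=100_000, factor=2.0):
        """Collector runs only when the heap fills; the heap is then grown
        by `factor`, so collection sizes form a geometric series."""
        work = live = 0
        heap = initial
        for _ in range(total_cells):
            live += 1
            if live >= heap:
                work += live          # full trace of the live heap
                heap = int(heap * factor)
        return work                    # bounded by about factor/(factor-1) * total_cells

    if __name__ == "__main__":
        for n in (10**6, 10**7):
            print(n, fixed_threshold_work(n), growing_heap_work(n))

With the fixed trigger the counted work grows roughly with the square of the
number of cells allocated, while with geometric heap growth it stays within a
small constant factor of the number of cells.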