Re: GC question

Gene <>
Sat, 28 Jul 2007 02:24:40 -0000

          From comp.compilers

Related articles
GC question (Paul Rubin) (2007-07-27)
Re: GC question (Ulf Wiger) (2007-07-27)
Re: GC question (Gene) (2007-07-28)
Re: GC question (Ray Dillinger) (2007-07-28)
Re: GC question (Paul Rubin) (2007-07-30)
Re: GC question (2007-08-13)

From: Gene <>
Newsgroups: comp.compilers,comp.lang.functional
Date: Sat, 28 Jul 2007 02:24:40 -0000
Organization: Compilers Central
References: 07-07-098
Keywords: GC, performance
Posted-Date: 28 Jul 2007 15:28:42 EDT

On Jul 27, 6:34 am, Paul Rubin <> wrote:
> Suppose you build a big list of cons cells, say a billion of them
> (you're on a large machine). This is in a runtime with traditional
> marking or copying gc, no generation scavenging or region inference or
> anything like that. The collector runs every N bytes of allocation
> for some fixed N. Yes I know that's a dumb way to write an allocator
> for a big-system implementation but it's fairly common for small ones.
> It seems to me that the running time to allocate N cells is O(N**2)
> because you run the collector O(N) times during the allocation, and
> each collection costs O(N) on average.
> I never realized this before. Is it a well-known phenomenon? Is the
> main answer something like generation scavenging?
> This relates to a real situation that came up on comp.lang.python
> yesterday.

You're correct, of course. Dumb policy often leads to bad performance,
and not just in memory management. If you instead increase the heap
size by a factor greater than one every time it fills up, the amounts
of live data traced at successive collections form a geometric series,
so the total copying/tracing cost stays O(N), i.e., amortized constant
per cell.
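
To make the contrast concrete, here is a rough back-of-envelope sketch
(mine, not anything from the runtime Paul describes); the thresholds,
growth factor, and function names are all illustrative. It just counts
how many cells a full trace visits under each triggering policy,
assuming every allocated cell stays live:

    def traced_fixed(total, step):
        # Collector runs after every `step` cells allocated;
        # each run traces all live cells.
        traced = 0
        live = 0
        while live < total:
            live += step        # allocate another batch; all of it stays reachable
            traced += live      # a full trace visits every live cell
        return traced

    def traced_growing(total, initial, growth=2.0):
        # Heap grows by `growth` each time it fills; one full trace per growth.
        traced = 0
        heap = initial
        live = 0
        while live < total:
            live = min(total, heap)    # fill the current heap with live cells
            traced += live             # trace everything reachable, then grow
            heap = int(heap * growth)
        return traced

    print(traced_fixed(10**6, 10**4))    # ~ total**2 / (2*step), about 5.0e7
    print(traced_growing(10**6, 10**4))  # ~ 2*total,             about 2.3e6

With the fixed threshold the traced counts grow as step, 2*step,
3*step, ..., which sums to roughly total**2 / (2*step); with geometric
growth the traced counts form a geometric series that sums to a small
constant factor times total.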
