Related articles
GC question phr-2007@nightsong.com (Paul Rubin) (2007-07-27)
Re: GC question etxuwig@cbe.ericsson.se (Ulf Wiger) (2007-07-27)
Re: GC question gene.ressler@gmail.com (Gene) (2007-07-28)
Re: GC question bear@sonic.net (Ray Dillinger) (2007-07-28)
Re: GC question phr-2007@nightsong.com (Paul Rubin) (2007-07-30)
Re: GC question torbenm@app-2.diku.dk (2007-08-13)
From: Gene <gene.ressler@gmail.com>
Newsgroups: comp.compilers,comp.lang.functional
Date: Sat, 28 Jul 2007 02:24:40 -0000
Organization: Compilers Central
References: 07-07-098
Keywords: GC, performance
Posted-Date: 28 Jul 2007 15:28:42 EDT
On Jul 27, 6:34 am, Paul Rubin <phr-2...@nightsong.com> wrote:
> Suppose you build a big list of cons cells, say a billion of them
> (you're on a large machine). This is in a runtime with traditional
> marking or copying gc, no generation scavenging or region inference or
> anything like that. The collector runs every N bytes of allocation
> for some fixed N. Yes I know that's a dumb way to write an allocator
> for a big-system implementation but it's fairly common for small ones.
>
> It seems to me that the running time to allocate N cells is O(N**2)
> because you run the collector O(N) times during the allocation, and
> each collection costs O(N) on average.
>
> I never realized this before. Is it a well-known phenomenon? Is the
> main answer something like generation scavenging?
>
> This relates to a real situation that came up on comp.lang.python
> yesterday.
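To make the arithmetic concrete, here is a toy cost model (a quick
sketch in Python, not anything from the thread; the names are made up).
It charges a full trace of all live cells after every fixed-size
allocation quantum, which is the policy described above:

    def total_trace_cost(total_cells, cells_per_gc):
        # Toy model: every cell allocated stays live, and each
        # collection traces everything currently live.
        cost = 0
        live = 0
        while live < total_cells:
            live += cells_per_gc   # allocate one quantum, all of it survives
            cost += live           # collector traces the whole live set
        return cost

    print(total_trace_cost(1_000_000, 1000))   # ~5.0e8
    print(total_trace_cost(2_000_000, 1000))   # ~2.0e9

Doubling the amount allocated roughly quadruples the tracing work,
which is the quadratic behavior described in the post.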
You're correct, of course. Dumb policy often leads to bad performance,
and not just in memory management. If instead you increase the heap
size by a factor greater than one every time it fills up, the amounts
copied or traced at successive collections form a geometric series, so
the total copying/tracing cost remains O(N).
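For contrast, here is the same toy model with the heap grown by a
constant factor whenever it fills (again just a sketch with made-up
names, not any particular collector):

    def total_copy_cost(total_cells, initial_heap=1000, growth=2.0):
        # Toy model: a collection is triggered only when the current
        # heap fills; it copies everything live, then the heap grows.
        cost = 0
        heap = initial_heap
        live = 0
        while live < total_cells:
            live = min(heap, total_cells)   # fill the current heap
            cost += live                    # copy the survivors once
            heap = int(heap * growth)       # grow before allocating more
        return cost

    print(total_copy_cost(1_000_000))   # ~2.0e6
    print(total_copy_cost(2_000_000))   # ~4.0e6

The amounts copied form a geometric series bounded by a small constant
times the final data size, so the total work stays linear in the number
of cells allocated.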