From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Newsgroups: comp.compilers
Date: Fri, 17 Jul 2009 19:32:19 +0000 (UTC)
Organization: California Institute of Technology, Pasadena
References: 09-07-018 09-07-032 09-07-038 09-07-040 09-07-058
Keywords: GC
Posted-Date: 17 Jul 2009 17:05:14 EDT
Hans Aberg <haberg_20080406@math.su.se> wrote:
(snip)
< Does this not suggest that at least a tracing GC should be moved into
< the OS? It should have some way to mark allocated swapped out memory
< unused without swapping back in.
I have watched Win2K do that. When you exit a program that is using a
lot of memory, it has to page it all back in while deallocating the
memory.
< [Seems to me that it would be easy enough to provide system calls
< that allow adequate control from user mode. VM/370 had a user
< interface to the pager in about 1970 to let an OS running in a
< virtual machine avoid double paging. -John]
I always thought of it more as a modification to the guest OS,
but I suppose it does require host support.
You can avoid double paging by giving the full 16M to the guest.
The problem, then, is that a multitasking guest would have to wait
while any one task is paging. As I understand it, control is returned
to the guest even though some pages are not yet available, so other
tasks whose pages are available can still be run.
[Yes, it presented virtual interrupts to the guest OS to say that
a page was unavailable, and later to say it had become available. It
was up to the guest to avoid touching unavailable pages. -John]
There is an interesting and maybe related feature of many current
systems: they do not actually allocate physical pages when memory is
requested, but wait until the allocated memory is first written. All
page table entries for newly allocated memory point to a single page
filled with zeros, with the write-protect bit set. Any write to such a
page then allocates a real page and updates the page tables as needed.
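A minimal sketch of one way to watch this happen on Linux (this assumes
mmap() with MAP_ANONYMOUS, mincore(), and sysconf() for the page size,
so it is not portable): the freshly mapped region shows up as almost
entirely non-resident, and real pages only appear after the memset()
writes to it.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t len = 1024 * page;                /* 1024 pages */
    size_t npages = len / page;
    unsigned char *vec = malloc(npages);     /* residency map for mincore() */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    size_t i, resident;

    if (p == MAP_FAILED || vec == NULL) return 1;

    /* Before any write: the pages are backed by nothing (or the shared
       zero page), so few if any are reported resident. */
    mincore(p, len, vec);
    for (i = resident = 0; i < npages; i++) resident += vec[i] & 1;
    printf("resident before writing: %zu of %zu pages\n", resident, npages);

    memset(p, 1, len);                       /* first write faults in real pages */

    mincore(p, len, vec);
    for (i = resident = 0; i < npages; i++) resident += vec[i] & 1;
    printf("resident after writing:  %zu of %zu pages\n", resident, npages);
    return 0;
}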
As I understand it, the problem with this method comes when the system
is actually short on memory, but there is no way to indicate that to
the program. I tried this one earlier, after reading a post in
another newsgroup related to pointers (but not to virtual storage).
#include <stdio.h>
#include <stdlib.h>
#define X 64                     /* size of each allocation, in megabytes */
int main() {
    int i = 0;                   /* number of successful allocations */
    /* Keep asking for X-megabyte blocks (less a few bytes for allocator
       overhead) until malloc() finally returns a null pointer. */
    while (malloc(X*1024*1024-8)) i++;
    printf("%dM allocated\n", i*X);
    return 0;
}
You may find that the program is given more virtual storage than there
is swap space available, and it might be that malloc() never returns
(void*)0.
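(As far as I know, on Linux this overcommit behaviour is tunable: with
/proc/sys/vm/overcommit_memory set to 2 the kernel does strict commit
accounting, so malloc() starts returning (void*)0 instead of promising
memory that may not be there.)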
-- glen
[This is getting a bit far afield from compilers. Some years ago I spun
off a separate mailing list for GC discussions, which isn't very active
but has over 700 subscribers.
To subscribe: send a message containing subscribe to gclist-request@lists.iecc.com.
-John]