Re: Garbage collection


From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Newsgroups: comp.compilers
Date: 15 Aug 2004 22:16:47 -0400
Organization: Comcast Online
References: 04-08-032 04-08-054 04-08-071 04-08-081
Keywords: GC, architecture
Posted-Date: 15 Aug 2004 22:16:47 EDT

Nick Roberts wrote:
> On 11 Aug 2004 12:58:25 -0400, glen herrmannsfeldt <gah@ugcs.caltech.edu>
> wrote:


>>A segment descriptor cache could speed up such segment load
>>operations. I don't know which, if any, processors had one.
>>I have been told that some did, but have never seen it in any
>>Intel or AMD literature.


> As I understand it, they all have such a cache, except for certain
> early steppings of the original Pentium. When it was discovered that
> the omission of the cache had disastrous effects on the execution
> speed of some legacy (mostly 16-bit) software, Intel were quick to put
> the cache back into their later models.


> The Intel models have a cache of 96 extended descriptors, and the
> AMD models have 128 I think.


They don't seem to advertise it much.


> To be honest, I'm not quite sure how the discussion got onto
> segmentation in this thread, but never mind :-)


>>>All in all I agree with your guess about page table manipulation
>>>for fast block moves. Why move memory around byte by byte, when
>>>moving bigger entities (pages) costs about the same time, per
>>>entity?


>>As I understand it, it is common on many systems for the memory
>>allocation system to fill page tables with pointers to one zero
>>filled page, and then allocate a real page with the first write
>>operation.


> I don't understand why it is necessary for the PTEs to be set to
> point to any frame. Why not just set the P bit to 0?


So that they will read as zero. You might as well fill a
page with zero as with any other value. (I once knew a
system that initialized memory to X'81', though.) A write
to any such page is then handled copy-on-write.


>>There is a story of someone testing the cache memory
>>characteristics of a machine with C code like:


>>char *a, *b;
>>int i;
>>a = malloc(100000000);
>>b = malloc(100000000);
>>for (i = 0; i < 100000000; i++) a[i] = b[i];


>>It would seem that this would require moving enough data to
>>completely flush the cache, but it might not.


> An interesting test, but not really a fair test of a machine's
> speed, since it is somewhat unrepresentative of real workloads.


You mean moving 1e8 bytes isn't fair, or copying the same
page many times, inside the cache?


-- glen

