From: "Nick Roberts" <nick.roberts@acm.org>
Newsgroups: comp.compilers
Date: 13 Aug 2004 17:30:41 -0400
Organization: Compilers Central
References: 04-08-032 04-08-054 04-08-071
Keywords: GC
Posted-Date: 13 Aug 2004 17:30:41 EDT
On 11 Aug 2004 12:58:25 -0400, glen herrmannsfeldt <gah@ugcs.caltech.edu>
wrote:
> A segment descriptor cache could speed up such segment load
> operations. I don't know which, if any, processors had one.
> I have been told that some did, but have never seen it in any
> Intel or AMD literature.
As I understand it, they all have such a cache, except for certain
early steppings of the original Pentium. When it was discovered that
the omission of the cache had a disastrous effect on the execution
speed of some legacy (mostly 16-bit) software, Intel were quick to put
the cache back into their later models.

The Intel models have a cache of 96 extended descriptors, and the
AMD models have 128, I think.
To be honest, I'm not quite sure how the discussion got onto
segmentation in this thread, but never mind :-)
>> All in all I agree with your guess about page table manipulation
>> for fast block moves. Why move memory around byte by byte, when
>> moving bigger entities (pages) costs about the same time, per
>> entity?
>
> As I understand it, it is common on many systems for the memory
> allocation system to fill page tables with pointers to one zero
> filled page, and then allocate a real page with the first write
> operation.
I don't understand why it is necessary for the PTEs to be set to
point to any frame. Why not just set the P bit to 0?
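Either way, the effect glen describes -- no real frame being committed
until the first write to a page -- is easy enough to see from user
space. The following is only a rough sketch (it assumes a Linux/POSIX
system, and uses mincore() to report how many pages of a mapping are
currently backed by physical frames):

    /* Rough sketch only -- assumes Linux/POSIX; mincore() reports which
       pages of a mapping are currently backed by physical frames. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static size_t resident_pages(void *addr, size_t len, size_t pagesz)
    {
        size_t npages = (len + pagesz - 1) / pagesz, i, count = 0;
        unsigned char *vec = malloc(npages);
        if (vec && mincore(addr, len, vec) == 0)
            for (i = 0; i < npages; i++)
                count += vec[i] & 1;
        free(vec);
        return count;
    }

    int main(void)
    {
        size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
        size_t len = 64u * 1024 * 1024;        /* 64 MB anonymous mapping */
        size_t i;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;

        /* Freshly mapped: the kernel has handed out address space, but
           (typically) no physical frames yet. */
        printf("after mmap:    %lu resident pages\n",
               (unsigned long)resident_pages(p, len, pagesz));

        /* The first write to each page is what forces a real, private,
           zero-filled frame to be allocated for it. */
        for (i = 0; i < len; i += pagesz)
            p[i] = 1;
        printf("after writing: %lu resident pages\n",
               (unsigned long)resident_pages(p, len, pagesz));

        munmap(p, len);
        return 0;
    }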
> There is a story of someone testing the cache memory
> characteristics of a machine with C code like:
>
> int *a,*b,i;
> a=malloc(100000000*sizeof(int));
> b=malloc(100000000*sizeof(int));
> for(i=0;i<100000000;i++) a[i]=b[i];
>
> It would seem that this would require moving enough data to
> completely flush the cache, but it might not.
An interesting test, but not really a fair test of a machine's
speed, since it is somewhat unrepresentative of real workloads.
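To make the copy itself honest -- so that it really does stream data
through the cache rather than reading one shared zero-filled page over
and over -- both arrays would have to be written to first. Only a
sketch, using the same sizes as the example above:

    /* Sketch of a fairer version of that test: writing to both arrays
       first forces real page frames to be allocated, so the copy loop
       genuinely streams the data through the cache rather than reading
       one shared zero-filled page repeatedly. */
    #include <stdlib.h>
    #include <string.h>

    #define N 100000000   /* number of ints, as in the example above */

    int main(void)
    {
        int *a = malloc((size_t)N * sizeof *a);
        int *b = malloc((size_t)N * sizeof *b);
        long i;
        volatile int sink;

        if (!a || !b)
            return 1;

        memset(a, 0x00, (size_t)N * sizeof *a);   /* touch every page of a */
        memset(b, 0x5a, (size_t)N * sizeof *b);   /* touch every page of b */

        for (i = 0; i < N; i++)                   /* the copy being timed */
            a[i] = b[i];

        sink = a[N - 1];     /* keep the copy from being optimised away */
        (void)sink;
        free(a);
        free(b);
        return 0;
    }

The memset() calls are what force the OS to back every page with a
real, distinct frame; without them the loop may measure little more
than repeated reads of one page.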
--
Nick Roberts