From: glen herrmannsfeldt <firstname.lastname@example.org>
Date: 11 Aug 2004 12:58:25 -0400
Posted-Date: 11 Aug 2004 12:58:25 EDT
> The Intel (segmented) memory management technology imposes
> penalties on certain operations. The idea of segment registers may
> have been appropriate when moving from 16-bit code and 64 KB memory
> limits to a larger address space (8086), but in later hardware
> generations (80386) segment register manipulation and the
> maintenance of the related (task, segment, ...) tables turned out
> to be too expensive in time. This was when the page-based flat
> memory model was "invented" as a faster memory management model.
A segment descriptor cache could speed up such segment load
operations. I don't know which, if any, processors had one.
I have been told that some did, but have never seen it in any
Intel or AMD literature.
> All in all I agree with your guess about page table manipulation for
> fast block moves. Why move memory around byte by byte, when moving
> bigger entities (pages) costs about the same time per entity?
As I understand it, it is common on many systems for the memory
allocation system to fill page tables with pointers to a single
zero-filled page, and to allocate a real page only on the first write.
There is a story of someone testing the cache memory characteristics
of a machine with C code like:
It would seem that this would require moving enough data to completely
flush the cache, but it might not: if the source buffer has never been
written, all of its pages can map to the same zero-filled page, so the
reads stay cache-resident.