From: George Neuner <gneuner2@comcast.net>
Newsgroups: comp.compilers,comp.arch
Date: Sat, 06 Dec 2008 22:04:33 -0500
Organization: A noiseless patient Spider
References: 08-12-025 08-12-039
Keywords: architecture, history
Posted-Date: 07 Dec 2008 08:33:30 EST
On Sat, 06 Dec 2008 15:35:47 -0600, jgd@cix.compulink.co.uk wrote:
>tony@my.net (Tony) wrote:
>
>> I looked into this 10 years ago and convinced myself that the
>> segmented architecture is still there but that the segments are
>> just now very large (move from 16 to 32 bit?). (Is that the case?).
>
>Yes. As the moderator of comp.compilers pointed out, every x86
>processor, including the 64-bit ones, has a segmentation system that
>nobody uses, except for operating system writers. Segments are 32-bit
>sized in 32-bit mode, and 64-bit sized, really huge, in 64-bit mode.
The 64-bit x86 processors have essentially done away with
segmentation. The registers are still physically there, but in long
mode only FS and GS can be set; limits are ignored for all segments,
and base addresses are ignored for all but FS and GS.
Offset addressing from FS and GS was retained to ease porting existing
operating systems.
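For illustration, a minimal sketch of how that FS/GS offset addressing
typically gets used on x86-64, e.g. for thread-local or per-CPU data
(GCC/Clang inline asm; the offset is arbitrary and not taken from any
particular OS):

  #include <stdint.h>

  /* Sketch: load a qword at a given offset from the FS base.  In long
   * mode only FS and GS still contribute a nonzero base address; the
   * offset here is purely illustrative. */
  static inline uint64_t read_fs_qword(uint64_t offset)
  {
      uint64_t value;
      __asm__ volatile ("movq %%fs:(%1), %0"
                        : "=r"(value)
                        : "r"(offset));
      return value;
  }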
>I've done programming with segments on 16-bit systems, and found it very
>painful, even if one wasn't hitting segment size limits. I've also
>programmed bank-switched memory systems, which have a certain amount in
>common with segmentation, and they're horrible too.
The problem wasn't segmentation per se, but using segmentation as the
_only_ isolation mechanism ... and the segment limits were too small.
Using segments+paging in 32-bit mode on the 386 and 486 was fun. The
Pentium and follow-ons reduced the descriptor cache to just 2 entries
and made general switching of segments painful.
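To make the 32-bit case concrete, here's a rough sketch of packing a
protected-mode segment descriptor (the standard 8-byte GDT/LDT layout;
the helper name is just mine):

  #include <stdint.h>

  /* Sketch: pack base/limit/access/flags into one 8-byte 32-bit
   * protected-mode segment descriptor. */
  static uint64_t make_descriptor(uint32_t base, uint32_t limit,
                                  uint8_t access, uint8_t flags)
  {
      uint64_t d;
      d  = (uint64_t)(limit & 0x0000FFFFu);               /* limit 15:0  */
      d |= (uint64_t)(base  & 0x00FFFFFFu)  << 16;        /* base  23:0  */
      d |= (uint64_t)access                 << 40;        /* type/DPL/P  */
      d |= (uint64_t)((limit >> 16) & 0xFu) << 48;        /* limit 19:16 */
      d |= (uint64_t)(flags & 0xFu)         << 52;        /* AVL/L/D/G   */
      d |= (uint64_t)((base >> 24) & 0xFFu) << 56;        /* base  31:24 */
      return d;
  }

  /* e.g. a flat 4 GiB ring-0 code segment: limit 0xFFFFF, access 0x9A,
   * flags 0xC (G=1, D=1). */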
>It is easy to construct scenarios where segmentation is advantageous,
>but proponents of the idea tend to neglect the cases where it is no use,
>and the ones where it's a complete pain.
>
>Segmentation is fine as a means for isolating processes from each other,
>and that's how it gets used. If a means could be created for it to
>enforce memory protection of objects from bad pointers, without
>requiring large hardware or performance overheads, that would be
>interesting. But it's going to take more cleverness than is so far
>visible to make that work.
>
>Bringing back segmentation for everyday programming because the idea
>appeals ... it's like claiming that airliners should have piston engines
>again because they make a cool noise. The idea only makes any sense if
>you don't understand the issues.
Page tables suffice for isolating processes, but segmentation allows
isolation within processes and (to me) more logical control of access
permissions. Page-level NX (no execute) is too fine-grained - it
quickly becomes a PITA when you're implementing any kind of in-memory
code generator, e.g., a JIT compiler ... particularly in a GC'd system
which may relocate the code (temporarily making it data again). Most
processors will incur a pipeline stall if the code is also in the data
cache and will flush the pipeline if prefetched code is referenced as
data again. With multiple cores, reoptimizing JIT compilers and
concurrent GC, these situations are becoming ever more likely rather
than less.
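Roughly, the page-level dance every such code generator ends up doing
looks like this (POSIX mmap/mprotect with a Linux/BSD-style
MAP_ANONYMOUS; error handling omitted).  Every time the collector
moves or patches the code, the permissions have to be flipped back and
forth again:

  #include <stddef.h>
  #include <string.h>
  #include <sys/mman.h>

  /* Sketch: copy freshly generated machine code into a buffer and make
   * it executable.  While the buffer is writable it is "data"; after
   * the mprotect it is "code". */
  void *emit_code(const unsigned char *code, size_t len)
  {
      size_t sz = (len + 4095) & ~(size_t)4095;     /* round up to pages */
      void *buf = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (buf == MAP_FAILED)
          return NULL;
      memcpy(buf, code, len);                       /* written as data   */
      mprotect(buf, sz, PROT_READ | PROT_EXEC);     /* flipped to code   */
      return buf;
  }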
AFAIK, no existing processor is able to distinguish "executable data"
from regular data and code. ISTM that if it could be recognized,
processors might be able to better streamline some of the gyrations
that occur when code is also in the data cache.
Of course, a page-level XD bit could be implemented (or a multiple-bit
type code), but I vote for a return to logical segments anyway because
they are more convenient for programmers ... users don't care.
George