Compilers for systems programming (was: A C style compiler) firstname.lastname@example.org (Francois-Rene Rideau) (1998-05-04)
Re: Compilers for systems programming (was: A C style compiler) email@example.com (eric dahlman) (1998-05-07)
Re: Compilers for systems programming (was: A C style compiler) firstname.lastname@example.org (Dr Richard A. O'Keefe) (1998-05-07)
Re: Compilers for systems programming (was: A C style compiler) email@example.com (1998-05-12)
Re: Compilers for systems programming (was: A C style compiler) firstname.lastname@example.org (Eric Eide) (1998-05-12)
Re: Compilers for systems programming (was: A C style compiler) email@example.com (William D Clinger) (1998-05-12)
Re: Compilers for systems programming (was: A C style compiler) firstname.lastname@example.org (W. Craig Trader) (1998-05-15)
[3 later articles]
From: Francois-Rene Rideau <email@example.com>
Date: 4 May 1998 23:04:15 -0400
Organization: Ecole Normale Superieure, Paris, France
To return to more on-topic tracks, I wonder whether there are other
languages besides C and its cousins (Pascal, Ada, Modula-3, Oberon)
whose design/implementation combos were fit for systems programming.
This is clearly a compiler problem as well as a language problem,
since it requires a way to interface actual hardware capabilities with
high-level programming constructs, which interferes both with the
high-level language semantics and with the implementation choices of
the compiler, hence with compiler "optimizations".
Because C was designed to be low-level, current processors are
"optimized for C", and the mapping from C semantics to the hardware is
standard and pretty straightforward (a matter of saying which register
is the stack pointer, which hold results and parameters, which are
callee-saved or caller-saved, and how the stack is laid out), C
compilers only have to deal with the way inline assembly interacts
with optimization (see the constraints in GCC inline assembly),
without having to worry much about any kind of typing.
How do other, higher-level languages handle the semantic mismatch
between design and implementation, if at all? How can they allow
global invariants to be broken in the middle of some system-internal
routine, and ensure (or not) that they are restored before the end?
How can they (or not) allow the programmer to specify additional
low-level invariants when manipulating hardware-specific objects? How
do these specifications (or their lack) interact with compiler code
generation and optimization?
As an important example, what language with automatic memory
management could be used to *fully* implement its own (efficient) GC
without hiding lots of details in custom hardware or in arbitrary
software implementation choices?
I know, for instance, that LISP Machines were fully programmed in
LISP, down to the "assembly". How did they handle the situation? It
seems to me they relied on many details and implementation choices
provided by custom hardware. How far can that approach be recycled for
common hardware architectures?
[Hum. Looks like time for me to read and grok CMUCL sources. -- Faré]
[And what if my first name was John? -- Faré]
It looks like not imposing such arbitrary hardwired decisions calls
for a "reflective architecture". Are there any languages/compilers
developed around such a reflective architecture, allowing for
arbitrary reimplementations by the user, without any choice being cast
in iron between the language semantics and the actual hardware? Are
any of these free software, or otherwise available with sources and/or
good descriptions?
Thanks for your attention,
## Faré | VN: Đăng-Vũ Bân | Join the TUNES project! http://www.tunes.org/ ##
## FR: François-René Rideau | TUNES is a Useful, Not Expedient System ##