Related articles
Implementation of C++ exceptions ? mmc@wagner.imada.ou.dk (1994-04-05)
Re: Implementation of C++ exceptions ? vinoski@srv.ch.apollo.hp.com (Steve Vinoski) (1994-04-05)
Re: Implementation of C++ exceptions ? mw@ipx2.rz.uni-mannheim.de (1994-04-05)
Re: Implementation of C++ exceptions ? chase@Think.COM (1994-04-05)
Re: Implementation of C++ exceptions ? schmidt@tango.ICS.UCI.EDU (Douglas C. Schmidt) (1994-04-06)
Re: Implementation of C++ exceptions ? mmc@wagner.imada.ou.dk (1994-04-07)
Implementation of C++ exceptions ? ssimmons@convex.com (1994-04-08)
Re: Implementation of C++ exceptions ? davis@ilog.ilog.fr (1994-04-11)
Re: Implementation of C++ exceptions ? chase@Think.COM (1994-04-11)
Re: Implementation of C++ exceptions ? sean@PICARD.TAMU.EDU (1994-04-12)
Newsgroups: comp.compilers
From: chase@Think.COM (David Chase)
Keywords: C++, design
Organization: Thinking Machines Corporation, Cambridge MA, USA
References: 94-04-019
Date: Tue, 5 Apr 1994 20:25:29 GMT
I know more about this than I can conveniently write in one posting, or on
my employer's time. I'll be brief now, and try to write something at
greater length when possible. I've already checked with my former
employer (Sun/SunPro) about discussing details of their implementation,
and it is apparently ok with them (I participated in the design of the
exception handler in the SunPro C++ 4.0 compiler) within certain limits.
(My assumption is that anyone who cares could reverse-engineer it, and
other compilers for Solaris/Sparc will want to use the same encoding for
compatibility purposes.)
Keep in mind that C++ exception-handling is synchronous; that is,
exceptional transfers of control occur only at call sites. Asynchronous
exception-handling imposes additional requirements. Also note that C++
destructors for locals can be translated (more or less) into appropriate
try/catch blocks -- assume that this transformation occurs to simplify
life for the compiler.
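For instance, here is a minimal sketch of that transformation (the names
Lock, do_work, and f_lowered are mine, and a real compiler does this in its
intermediate form rather than in source):

#include <new>

struct Lock { Lock(); ~Lock(); };
void do_work();                       // may throw

// Source as written:
void f() {
    Lock guard;
    do_work();
}                                     // ~Lock() must run even if do_work() throws

// Roughly the form the compiler can be assumed to work with.  The lowered
// form has no automatic destruction left, so it is spelled here with
// explicit construction and destruction:
void f_lowered() {
    alignas(Lock) unsigned char storage[sizeof(Lock)];
    Lock* guard = new (storage) Lock; // construct the local
    try {
        do_work();
    } catch (...) {
        guard->~Lock();               // exceptional path: run the pending destructor
        throw;                        // re-raise to the next enclosing handler
    }
    guard->~Lock();                   // normal path: destructor at scope exit
}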
The low overhead techniques that I know of include:
1. encoding of exceptional come-froms (call sites) in a hash
table. Used to implement CLU, I think.
advantages: rapid lookup, probably compact.
disadvantages: doesn't extend to async exceptions, probably
requires initialization code.
2. encoding of exceptional come-froms in a sorted (non-nested,
for now) PC-range table. Used to implement Modula-2+. (A lookup
sketch appears after this list.)
advantages: very little initialization required (if done
properly), extends to async exceptions.
disadvantages: lookup more expensive, perhaps not so compact.
3. placement of code-words in the instruction stream (that are
skipped, either before or after the call).
advantages: zero initialization, rapid lookup.
disadvantages: doesn't extend to async exceptions, fluffs
up instruction space (cache lines), imposes
overhead to branch around it. May give some
hardware (dataflow, super-duper-scalar) digestive
problems (so I've been told).
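To make technique 2 concrete, here is a rough sketch (the types, names, and
layout are mine, not any particular compiler's) of a sorted, non-overlapping
PC-range table and the lookup the runtime would do when unwinding:

#include <cstddef>
#include <cstdint>

// One entry per call site (or per coalesced run of call sites).
struct RangeEntry {
    std::uintptr_t start;       // first PC covered by this entry
    std::uintptr_t length;      // length of the covered range in bytes
    std::uintptr_t handler;     // PC of the catch clause to branch to
    const void*    type_info;   // describes which exception types match
};

// Binary-search the table (sorted by 'start', ranges non-overlapping)
// for the entry whose half-open range [start, start+length) covers pc.
const RangeEntry* find_handler(const RangeEntry* table, std::size_t n,
                               std::uintptr_t pc) {
    std::size_t lo = 0, hi = n;
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        if (pc < table[mid].start)
            hi = mid;
        else if (pc - table[mid].start >= table[mid].length)
            lo = mid + 1;
        else
            return &table[mid];     // pc falls inside this range
    }
    return nullptr;                 // no handler in this frame's table
}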
The SunPro C++ 4.0 compiler uses method 2. To avoid initialization costs,
the table is all PC-relative. There are 5 fields in each entry (I think I
got the order right).
.word callsite+8 - .
.word length_of_range_in_bytes (0 is usual)
.word catch_clause - (callsite+8)
.word type_information+12-.
.word 0 ! reserved
This is not legal assembly language (because the differences are between
symbols in different sections), and the relocation used to do this is
not supported in pre-Solaris-2.3 assemblers (and maybe not in that one).
I think there is assembler support for this relocation in the tools that
come with C++ 4.0. The offending relocation is a SPARC_RDISP_32, I
believe.
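In C++ terms, each entry could be pictured roughly like this (a sketch only;
the field names are mine, and the stored values are the self-relative
differences shown above, which is what keeps the table free of run-time
relocation):

#include <cstdint>

// Sketch of one .exception_ranges entry as laid out above.  Every
// "address" is stored as a 32-bit difference, so the table contains no
// absolute pointers and needs no relocation when the library is loaded.
struct ExceptionRangeEntry {
    std::int32_t callsite_offset;    // (callsite + 8) minus the address of this word
    std::int32_t range_length;       // length of the range in bytes (0 is usual)
    std::int32_t catch_offset;       // catch_clause minus (callsite + 8)
    std::int32_t type_info_offset;   // (type_information + 12) minus the address of this word
    std::int32_t reserved;           // currently zero
};

// Fields that are relative to their own word can be turned back into
// absolute addresses like this; catch_offset is instead added to the
// already-recovered callsite + 8.
inline std::uintptr_t resolve(const std::int32_t* field) {
    return reinterpret_cast<std::uintptr_t>(field) + *field;
}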
Anyhow, the code generator emits these in sorted order for each file that
it processes, in a special section (".exception_ranges", I believe). ELF
linker semantics cause all the .exception_ranges sections to be
concatenated when you build a shared library or an executable, and the C++
compiler driver brackets the collected tables in each shared library or
binary with a call (in the .init section) to register that shared
library's exception tables, when it is run or dynamically loaded. The
registration code does not actually touch the tables -- it just knows
where the endpoints are, as well as the begin and the end of the text for
that library. Furthermore, the exception tables are free of any run-time
relocation, so they are not touched by the dynamic linker, and may never
be paged in if exceptions are not raised. There are no restrictions on
how many ranges may exist for a particular function, except that they
cannot overlap.
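A hedged sketch of what that per-library registration might look like (the
symbol and function names here are my own invention, not the actual
runtime's):

#include <cstdint>

// Bounds of this library's concatenated .exception_ranges section and of
// its text segment.  In the real scheme these come from linker-provided
// symbols; the names here are made up.
extern "C" const char __exception_ranges_start[];
extern "C" const char __exception_ranges_end[];
extern "C" const char __text_start[];
extern "C" const char __text_end[];

// Hypothetical runtime hook: it records only the endpoints; the tables
// themselves are not touched, so they need not be paged in unless an
// exception is actually raised.
extern "C" void __register_exception_ranges(const void* ranges_begin,
                                            const void* ranges_end,
                                            const void* text_begin,
                                            const void* text_end);

// Called from this shared library's .init section when it is loaded.
extern "C" void __init_exceptions_for_this_library() {
    __register_exception_ranges(__exception_ranges_start,
                                __exception_ranges_end,
                                __text_start, __text_end);
}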
There are additional considerations which influenced how the start of the
range was chosen, which registers data should be passed to the catcher in,
and how the code generators and optimizers deal with this information.
There are some caveats and some gotchas, but to the machine-independent
optimizer, exceptional transfers of control look pretty much like "just
another branch". The code generator also treats them (nearly) like "just
another branch" and has final responsibility for emitting the tables
correctly (this choice was made to allow the code generator to manipulate
branches to catch clauses, either to insert code along those branches or
move them out of loops). A table compaction optimization allows the code
generator to coalesce adjacent entries provided that they have matching
catch_clause and type_information, and provided that there are no
intervening calls.
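A rough sketch of that compaction (again, the types and names are mine):
adjacent entries fold together when their handler and type information
match. The adjacency test below is a conservative stand-in for the "no
intervening calls" condition, since a PC between two call sites never
consults the table when exceptions are synchronous.

#include <vector>

// Simplified, absolute-offset form of an entry, for the sketch only.
struct Entry {
    unsigned start;         // offset of the first covered PC
    unsigned length;        // length of the covered range in bytes
    unsigned catch_clause;  // offset of the handler
    unsigned type_info;     // index of the type information
};

// Fold runs of already-sorted entries that share a catch clause and type
// information and that abut each other.  A real code generator could also
// merge across a gap, provided the gap contains no call sites.
std::vector<Entry> coalesce(const std::vector<Entry>& in) {
    std::vector<Entry> out;
    for (const Entry& e : in) {
        if (!out.empty()) {
            Entry& last = out.back();
            bool same_handler = last.catch_clause == e.catch_clause &&
                                last.type_info   == e.type_info;
            bool abuts = last.start + last.length == e.start;
            if (same_handler && abuts) {
                last.length += e.length;    // extend the previous entry
                continue;
            }
        }
        out.push_back(e);
    }
    return out;
}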
This was all made much easier because of ELF -- the tricks used here do
not translate well at all to a.out format.
Asynchronous exceptions are much more involved, more difficult to verify,
and probably require some collusion with parts of what is often called the
"OS" (the trampoline code used in signal handling has to be aware of
exception handling, as do setjmp and longjmp, for some definition of
"aware").
Is this the sort of information that was desired?
David Chase, speaking for myself
Thinking Machines Corporation