Related articles
Quality of VAX compilers  firefly@diku.dk (Peter "Firefly" Lund) (2006-06-03)
Re: Quality of VAX compilers  lkrupp@pssw.nospam.com.invalid (Louis Krupp) (2006-06-05)
Re: Quality of VAX compilers  firefly@diku.dk (Peter "Firefly" Lund) (2006-06-07)
Re: Quality of VAX compilers  jafred@verizon.net (John Fredrickson) (2006-06-07)
Re: Quality of VAX compilers  gah@ugcs.caltech.edu (glen herrmannsfeldt) (2006-06-11)
Re: Quality of VAX compilers  tom@kednos.com (Tom Linden) (2006-06-11)
Re: Quality of VAX compilers  jvorbrueggen@mediasec.de (Jan Vorbrüggen) (2006-06-12)
Re: Quality of VAX compilers  tom@kednos.com (Tom Linden) (2006-06-15)
Re: Quality of VAX compilers  kenrose@tfb.com (Ken Rose) (2006-06-15)
[1 later articles]
From: | "Peter \"Firefly\" Lund" <firefly@diku.dk> |
Newsgroups: | comp.compilers |
Date: | 7 Jun 2006 23:17:26 -0400 |
Organization: | Department of Computer Science, University of Copenhagen |
References: | 06-06-009 06-06-016 |
Keywords: | architecture, history |
Posted-Date: | 07 Jun 2006 23:17:26 EDT |
On Tue, 5 Jun 2006, Louis Krupp wrote:
> You might find some of what you want in this book about DEC's PL/I compiler:
Thank you. It's on my wishlist now :)
> [I have both books. The first one tells you all about code generation
> for the Vax as done by their PL/I and C compiler. It was a pretty
> normal compiler for the late 70s, reasonable common subexpression
> handling and pre-graph-coloring register allocation. They spent a lot
Which doesn't have to be bad at all.
> of effort in instruction selection, to use all of the Vax instructions
> and address modes.
Interesting.
I found this paper:
Cheryl A. Wiecek, "A Case Study of VAX-11 Instruction Set Usage For
Compiler Execution", 1982.
She instrumented six different compilers while they compiled to see what
their dynamic instruction stream looked like.
Table 4 in the paper lists the frequency of the various addressing modes
across the six compilers. One thing that is abundantly clear from that
table is that the VAX had many more addressing modes than it could
reasonably use (some are not used at all, or by only 0.1% of the
instructions).
Another thing is that the PL/I compiler (itself implemented in PL/I) had a
sweet tooth for word displacements (13.9%), whereas the BLISS, COBOL, and
FORTRAN compilers (all implemented in BLISS) barely used it (0.9-2.0%).
Autoincrement is not used much, except by the BASIC compiler (implemented
in BASIC) where it is a whopping 17.4% (2.6-4.8% for the others).
Autodecrement is barely used by anybody.
Register deferred (i.e. load/store of a datum at an address held in a
register, with no displacement) is barely used by the Pascal (2.0%) and
PL/I (2.5%) compilers, but is used 4-5 times as often by the BASIC
(9.0%), BLISS (8.6%), and FORTRAN (9.1%) compilers. Surprisingly, COBOL
uses it almost twice as often (15.3%) as the other compilers implemented
in BLISS.
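For reference, the modes mentioned above look roughly like this in VAX
MACRO syntax (the register numbers are picked arbitrarily):

        MOVL    W^300(R5), R0   ; word displacement: 16-bit offset from R5
        MOVL    (R5), R0        ; register deferred: address is in R5, no offset
        MOVL    (R5)+, R0       ; autoincrement: use R5, then bump it by 4 (longword)
        MOVL    -(R5), R0       ; autodecrement: drop R5 by 4, then use it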
Two-address instructions are used a lot more than I thought, which makes
me very happy (47.2%).
(src, src, dst) three-address instructions were rare (3.2%!), which also
makes me very happy. 23.4% of those were (literal, register, register).
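Concretely, the two shapes look like this:

        ADDL2   R1, R2          ; two-address: R2 <- R2 + R1
        ADDL3   #4, R1, R2      ; three-address: R2 <- R1 + 4
                                ; a (literal, register, register) instance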
Something that makes me less happy is the very high frequency of branches
(34.7% including calls and returns) and the short sequences between those
branch-type instructions (3.9 instructions on average, including the branch!).
Actually, the difference between the implementation languages is very
clear here: the compilers written in BLISS have almost identical sequence
lengths (3.3-3.6 instructions / 12.2-13.0 bytes), whereas the others have
longer sequences, both in terms of instructions and in terms of bytes.
PASCAL and PL/I are similar for some reason (4.5-4.7 instructions /
18.7-19.0 bytes).
The paper talks a little bit about how the registers are used (about
40 lines) but it says nothing about how many registers were live at
a time or about whether some of the loads and stores could have been
eliminated.
I think an analysis of floating-point code could have been particularly
revealing. She also doesn't say why so many branch-type instructions were
necessary, nor does she look into how to reduce the marked overhead of
using the CALLG/CALLS/RET instructions instead of the more lightweight
BSB/JSB/RSB.
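The caller-side difference is roughly this (the register-argument
convention in the JSB example is made up purely for illustration):

        ; general convention: arguments on the stack; CALLS builds a
        ; full call frame and saves registers per the callee's entry
        ; mask; the callee returns with RET
        PUSHL   R0              ; push the single argument
        CALLS   #1, FOO

        ; lightweight convention: JSB pushes only the return PC and the
        ; callee returns with RSB; argument passing is whatever the two
        ; sides privately agree on (here, a made-up "argument in R1")
        MOVL    R0, R1
        JSB     FOO_FAST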
Does anybody know whether it was practical to use two entry-points for
exported functions, a fast one and a slow one?
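What I have in mind is something along these lines (only a sketch; FOO
and FOO_FAST are made-up names, and the real work sits behind the fast
entry):

        .ENTRY  FOO, ^M<R2,R3>  ; "slow" entry for CALLS/CALLG callers
        JSB     FOO_FAST        ; do the actual work
        RET                     ; unwind the call frame

FOO_FAST:                       ; "fast" entry for JSB callers that
                                ; already follow the private conventions
        ; ...body...
        RSB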
-Peter