Re: Fast compilation

rcg@lpi.liant.com (Rick Gorton)
Fri, 09 Aug 91 03:45:59 GMT

Newsgroups: comp.compilers
From: rcg@lpi.liant.com (Rick Gorton)
Keywords: performance, optimize
Organization: Compilers Central
References: 91-08-022
Date: Fri, 09 Aug 91 03:45:59 GMT

I missed the initial posting by Patrick C. Beard, so I have to wonder
whether I missed other articles covering the subject as well.


Any definition of "fast compilation" is language-sensitive. Strange-sounding
statement, but true. Compiling any given source file requires that the
source be analyzed for syntactic and semantic correctness; optimization of
the "front-end" output is (usually) optional; code generation occurs (with
lots of room here for all the fancy stuff like graph coloring, scheduling,
etc.); and an output file is then created.


That leaves us with four phases to try to make fast:


1) The front-end.
2) Optimization.
3) Code generation.
4) Code output. (Possibly tightly coupled to #3.)


Optimization cannot be discounted in the overall scheme of things. For a set
of compilers I delivered to a customer a couple of years ago, one of the
contractual obligations was compile-time performance. It turned out that
turning the optimizer on at level -opt 2 reduced the number of "ops"
(Intermediate Language operators) enough that less code had to be generated,
and the compilation rate was actually faster than at -noopt (opt level 0)
for the FORTRAN files being compiled.


Some of the fancy code generation techniques suck up lots of memory and
require all sorts of analysis, so if these can be turned off, the compilation
rate can go up. A perverse exception is code characterized by very large
basic blocks: on architectures with heavy emphasis on register usage (RISC),
very large basic blocks force the poor register assigner to do a LOT of work
that could have been avoided by removing some of the ops first.


The amount of syntactic and semantic analysis a front end needs to do is
highly language-sensitive. For example, C doesn't permit the use of
variables before they are declared, which means that C front ends don't need
to scan the source multiple times. In PL/I (a kitchen-sink language) it is
possible to declare a variable AFTER it is used, which means that a PL/I
front end has to scan the source file again, fill in all the information
that was unknown in the first pass, and then verify that the result is
semantically correct. Twice the work, but required by the semantics.


If the language you are compiling is relatively "small" and has a very
tightly defined set of semantics, then the amount of work the front end has
to do is much less than for a "large" language. C is a relatively small
language; C++, COBOL, FORTRAN, PL/I, and Ada are not. The language
characteristics of C imply lots of function calls, so large chunks of
straight-line code are unlikely (the very-large-basic-block problem). Not so
for FORTRAN: the SPEC benchmark FPPPP has a file with a very large basic
block, fpppp.f, containing a basic block of 673 source lines.


The more abstract ("high-level") a language is, the more the compiler for
that language has to do. If implicit datatype conversions or loops occur
(COBOL does this a lot), the front end has to emit the ops to do the
conversions and loops, and the back end has to figure out the best way to
accomplish them. For the following data items and MULTIPLY statement:


01 MULTIPLY-DATA.
   02 MULT1 PICTURE IS 999V99.
   02 MULT5 PICTURE IS 9 VALUE IS 4.


MULTIPLY MULT5 BY MULT1.


The code generator has to figure out how to multiply two decimal items
together. For the COBOL compiler (for SPARC) I had on hand, this generated
31 instructions, mostly because 4 runtime calls were made for this context:
the first two were conversions to a (common) datatype for which there exists
a specific fast multiply routine, and the last one converted back to the
target datatype. In C, datatype conversions, when they happen, are not as
complicated. Part of that is architecture-specific (decimal instructions in
an architecture help COBOL), and part of the problem is simply due to
language semantics.


Incremental compilation can improve compile-time performance, and
interpreters will have even better compile-time performance. I know that
there are single-pass C compilers on the market, and they are fast in terms
of compilation rate; it would be interesting to find out what the
compilation rates (in lines per minute) are for the fastest C, FORTRAN, and
Ada compilers on a given platform. I suspect that C will be the fastest,
followed by FORTRAN, followed by Ada.