|From:||email@example.com (Pohua P. Chang)|
|Summary:||The IMPACT-I C Compiler|
|Organization:||Center for Reliable and High-Performance Computing, University of Illinois|
|Date:||Fri, 9 Nov 90 04:37:48 GMT|
I have been involved with the IMPACT-I C compiler project for the past 3+
years. Exact dates have not been recorded, but here is a very imprecise
accounting.

The front-end, including semantic analysis, took about 4 man-months to code,
but subsequently more than another 6 months to accommodate some
program/system specifics, e.g. initializing a typedef'd struct.
A friend of mine coded a code generator for the DECstation (MIPS-R2000) in
about 6 months. He spent much of his time on instruction-set-specific
optimizations, e.g. constant preloading, and converting some compare-and-branch
sequences to compare-against-zero forms and faster branches (beq, bne).
Fine-tuning the register allocator was also a very difficult task, because it
required us to modify the decision component of some code optimizations.
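The compare-against-zero rewrite mentioned above can be sketched as a peephole
pass over a toy three-address IR. The opcode names and struct layout here are
invented for illustration and are not IMPACT's actual representation:

```c
/* A toy three-address IR (hypothetical).  A pair like
   "slt t, a, 0; bne t, zero, L" can be folded into the single
   branch "bltz a, L", saving the compare instruction. */
typedef enum { OP_SLT, OP_BNE, OP_BLTZ, OP_NOP } Opcode;

typedef struct {
    Opcode op;
    int dst, src1, src2;   /* register numbers; -1 means "unused" */
    int is_const_zero;     /* src2 is the literal constant 0 */
} Insn;

/* Rewrite slt/bne pairs comparing against zero into a single bltz.
   Returns the number of pairs rewritten. */
int peephole_cmp_zero(Insn *code, int n) {
    int rewritten = 0;
    for (int i = 0; i + 1 < n; i++) {
        if (code[i].op == OP_SLT && code[i].is_const_zero &&
            code[i + 1].op == OP_BNE &&
            code[i + 1].src1 == code[i].dst) {
            code[i].op = OP_BLTZ;    /* bltz tests the slt's src1 */
            code[i].dst = -1;        /* no temporary needed any more */
            code[i + 1].op = OP_NOP; /* branch folded into the bltz */
            rewritten++;
        }
    }
    return rewritten;
}
```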
So, if we add up the time to implement the front-end and a code generator,
it takes about 1 man-year to establish a reasonable compiler framework.
Implementing local code optimization is a simple task and may be done in just
a few weeks. Implementing global code optimization is quite time-consuming
because of the need to build a large set of FAST dataflow analysis modules.
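As a sketch of what one such dataflow module looks like, here is a minimal
iterative liveness analysis over a small fixed-size CFG, using one bitvector
word per block. The block limit and data layout are assumptions for
illustration, not the IMPACT design:

```c
#include <stdint.h>

/* Iterative liveness analysis to a fixed point.  Each block has a
   use set (variables read before written) and a def set (variables
   written), one bit per variable, packed into a 32-bit word. */
#define NBLOCKS 4

uint32_t use_[NBLOCKS], def_[NBLOCKS];   /* per-block use/def sets */
uint32_t live_in[NBLOCKS], live_out[NBLOCKS];
int succ[NBLOCKS][2];                    /* up to 2 successors, -1 = none */

void liveness(void) {
    int changed = 1;
    while (changed) {                    /* iterate until nothing moves */
        changed = 0;
        for (int b = NBLOCKS - 1; b >= 0; b--) {  /* backward problem */
            uint32_t out = 0;
            for (int s = 0; s < 2; s++)
                if (succ[b][s] >= 0)
                    out |= live_in[succ[b][s]];
            /* live-in = use  U  (live-out minus def) */
            uint32_t in = use_[b] | (out & ~def_[b]);
            if (in != live_in[b] || out != live_out[b]) {
                live_in[b] = in;
                live_out[b] = out;
                changed = 1;
            }
        }
    }
}
```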
If getting the compiler out quickly is the main concern, I would suggest that
you skip global code optimization and go straight to loop optimizations. Given
more time, spend some time on machine-dependent code optimizations. Be
prepared to spend as much time on debugging as on coding the code optimizer.
Debugging assembly code can be a great pain, especially after midnight. Six
man-months is a very conservative estimate of the time needed to build a
naive code optimizer that does not produce embarrassing code.
If your project requires your compiler to produce high-quality code, I would
recommend that the register allocator be carefully fine-tuned. On top of a
graph-coloring algorithm, reuse parameter registers, the frame pointer, spill
registers, and other reserved registers whenever possible. Also, take the
most advantage of the caller/callee-save convention. Registers are a limiting
resource (at least on the MIPS-R2000). Many code optimizations have to be
toned down so they don't introduce too many live ranges. If possible,
integrate the constant-preloading optimization into the register allocator.
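The graph-coloring base assumed above can be sketched as a greedy coloring of
an interference graph: each live range gets the lowest-numbered register not
already claimed by a colored neighbor. Real Chaitin-style allocators add
simplification, spill-cost heuristics, and the reserved-register reuse just
described; this toy version merely counts spills:

```c
/* Greedy interference-graph coloring sketch.
   adj[v][u] != 0 means live ranges v and u interfere.
   reg[] must be initialized to -1 by the caller; on return, reg[v]
   is a register number in 0..k-1 or -1 if v must be spilled.
   Assumes n <= MAXN and k <= MAXN. */
#define MAXN 16

int color(int n, int k, int adj[MAXN][MAXN], int reg[MAXN]) {
    int spills = 0;
    for (int v = 0; v < n; v++) {
        int used[MAXN] = {0};
        /* mark registers taken by already-colored neighbors */
        for (int u = 0; u < n; u++)
            if (adj[v][u] && reg[u] >= 0)
                used[reg[u]] = 1;
        reg[v] = -1;
        for (int c = 0; c < k; c++)
            if (!used[c]) { reg[v] = c; break; }
        if (reg[v] < 0)
            spills++;   /* no free register: spill this live range */
    }
    return spills;
}
```

With a triangle of three mutually interfering live ranges and only two
registers, one of the three must spill; a third register makes the graph
colorable.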
In order to achieve a performance level that is close to the best commercial
C compilers, such as the MIPS C compiler, many other implicit costs become
apparent. For example, without a fairly powerful memory disambiguator,
accesses to some global scalar variables and fields cannot be moved out of
loops. For another example, general loop unrolling (non-constant loop bounds)
can introduce some overhead cost, and should be applied only if the number of
iterations is large enough. But, how do we get that information at
compile-time? A code optimizer that is as powerful as that of the MIPS C
compiler would take man-years.
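The trip-count guard described above for general unrolling amounts to
something like this hand-written sketch of an unroll-by-4 summation loop: the
unrolled body runs only while at least four iterations remain, and a scalar
remainder loop cleans up the leftovers.

```c
/* Unroll-by-4 of "for (i = 0; i < n; i++) s += a[i]" with a
   runtime trip-count guard; n is not known at compile time. */
long sum_unrolled(const int *a, int n) {
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i = 0;
    for (; i + 4 <= n; i += 4) {     /* main unrolled loop */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    long s = s0 + s1 + s2 + s3;
    for (; i < n; i++)               /* remainder: 0..3 iterations */
        s += a[i];
    return s;
}
```

The guard is exactly the overhead the paragraph worries about: when n is
small, only the remainder loop runs and the unrolled copies are dead weight
in the instruction cache.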
Other than classical code optimizations, there are some interesting
features that may be useful.
1. function inline expansion (make sure the register allocator is good enough).
2. integrate a feedback loop in the compiler, in the form of an
automatic profiler.
3. selective code duplication that allows frequently executed regions
to be customized.
We have implemented an automatic profiler in three forms: 1. source code
preprocessing, 2. high-level intermediate form, and 3. assembly code level.
They are equally effective. For a person who fully understands the
structure of the compiler, it takes just weeks to implement the profiler and
integrate it into the compiler. However, modifying the decision components of
the code optimizer to use the profile information, as well as that from loop
analysis, can be tricky.
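The source-preprocessing form of the profiler can be sketched as counters
bumped at instrumented points; a real preprocessing pass would insert the
probes at every basic-block head automatically and dump the counts at exit
for the optimizer's decision components to read. All names here are
illustrative, not IMPACT's:

```c
#include <stdio.h>

/* Source-level profiling sketch: one counter per probe point. */
#define NPROBES 8
unsigned long prof_count[NPROBES];
#define PROF(id) (prof_count[(id)]++)

/* Example of an instrumented function: a probe at entry and one
   per branch arm, so arc frequencies can be recovered. */
int abs_val(int x) {
    PROF(0);                             /* function entry */
    if (x < 0) { PROF(1); return -x; }   /* "negative" arm */
    PROF(2);                             /* "non-negative" arm */
    return x;
}

/* A real profiler would register this with atexit() and write a
   file the compiler re-reads on the next compilation. */
void prof_dump(void) {
    for (int i = 0; i < NPROBES; i++)
        if (prof_count[i])
            printf("probe %d: %lu\n", i, prof_count[i]);
}
```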
The time that is required to implement function inline expansion depends on
whether you want an intra-file or an inter-file expander. Implementing an
inter-file function inline expansion optimization is difficult. The benefit
from function inline expansion for conventional processors is not dramatic.
So don't try this early in your project.
Well, that's my 2 cents worth.
Pohua Paul Chang