Re: Writing a FAST compiler.

Tom Wicklund <ames!intellistor.com!wicklund@harvard.edu>
Thu, 8 Aug 91 09:13:21 -0600


Related articles
Writing a FAST compiler. beard@ux5.lbl.gov (1991-08-05)
Re: Writing a FAST compiler. pardo@gar.cs.washington.edu (1991-08-07)
Re: Writing a FAST compiler. rekers@cwi.nl (1991-08-07)
Re: Writing a FAST compiler. clark@gumby.cs.caltech.edu (1991-08-07)
Re: Writing a FAST compiler. kend@data.uucp (1991-08-07)
Re: Writing a FAST compiler. ames!intellistor.com!wicklund@harvard.edu (Tom Wicklund) (1991-08-08)

Newsgroups: comp.compilers
From: Tom Wicklund <ames!intellistor.com!wicklund@harvard.edu>
Keywords: performance, C
Organization: Compilers Central
References: 91-08-022
Date: Thu, 8 Aug 91 09:13:21 -0600

>Most compiler efficiency issues revolve around creating highly optimized
>code. However, good code generation with fast turn-around time is often
>more important for the software engineer. Who cares how good the code
>generated is if it's just going to change as the development process
>proceeds? Therefore, my question is: how do I go about creating a compiler/
>interpreter that goes as fast as possible, uses as little memory as possible,
>and produces reasonably efficient code? The obvious application
>is for a compiler that runs on a microcomputer such as the Macintosh, with
>limited resources. What are some general approaches I can follow to create
>a compiler that generates slow to moderate code very quickly?


I recall seeing a figure about 8 years ago (I have no idea where) claiming that
a typical multi-pass compiler spends about 50% of its time translating in-memory
structures into disk files, writing those files, then reading them back and
rebuilding the memory structures for the next pass.
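

To make that round trip concrete, here is a rough sketch in C (not taken from
any particular compiler; emit_ir, read_ir, and the one-record "IR" are invented
for illustration) of the work a file-based pass boundary forces on both sides:

    #include <stdio.h>
    #include <stdlib.h>

    struct node { int op; int value; };    /* toy fixed-size "IR" record */

    /* End of pass 1: flatten each in-memory node and write it to the
       intermediate file. */
    void emit_ir(FILE *f, const struct node *n)
    {
        fwrite(n, sizeof *n, 1, f);
    }

    /* Start of pass 2: read a record back and rebuild the structure that
       pass 1 had in memory a moment ago. */
    struct node *read_ir(FILE *f)
    {
        struct node *n = malloc(sizeof *n);
        if (n != NULL && fread(n, sizeof *n, 1, f) != 1) {
            free(n);
            n = NULL;
        }
        return n;
    }

A single-pass or overlaid compiler skips both functions and simply hands pass 2
the pointer that pass 1 already built.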


The BDS C compiler for 8080 CP/M took the approach of passing the compiler over
the source rather than the source through the compiler: pass 1 read and parsed
the source code, then pass 2 was loaded on top of pass 1, with the parsed
program left in memory. This resulted in a very fast compiler, even compared to
some minis and mainframes of the time.
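

A minimal sketch of that structure, assuming a bump allocator for the parse
tree and two stubbed-out passes (pass1_parse, pass2_codegen, and the arena are
names I made up; on the real thing the pass-2 code was literally loaded over
the pass-1 code, and only the data area survived):

    #include <stddef.h>

    static char   arena[60000];     /* parse tree lives here and survives
                                       the overlay of the pass-2 code     */
    static size_t arena_used;

    static void *arena_alloc(size_t n)
    {
        void *p = &arena[arena_used];
        arena_used += n;
        return p;
    }

    struct stmt { int kind; struct stmt *next; };

    /* Pass 1: read and parse the source, building the tree in the arena.
       (Stubbed out here.) */
    static struct stmt *pass1_parse(const char *src)
    {
        struct stmt *s = arena_alloc(sizeof *s);
        (void)src;
        s->kind = 0;
        s->next = NULL;
        return s;
    }

    /* Pass 2: walk the tree straight out of memory and emit code.
       (Also stubbed.) */
    static void pass2_codegen(struct stmt *tree)
    {
        while (tree != NULL)
            tree = tree->next;
    }

    int compile(const char *src)
    {
        struct stmt *tree = pass1_parse(src);   /* no intermediate file... */
        pass2_codegen(tree);                    /* ...pass 2 reads memory  */
        return 0;
    }

The pass boundary is just a pointer handoff instead of a file format.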


Borland created the original Turbo C based on the Wizard C compiler. They
seem to have patched the passes together and eliminated the intermediate file
read/write. The Turbo C 1.0 compiler executable is a bit smaller than the
sum of the Wizard C executables, and I seem to remember roughly a 30-50% speedup
(if somebody is really interested, I could check this as I have copies of
both compilers). Another factor is the integrated environment in Turbo C --
the compiler, editor, and at least part of the source are all in memory,
avoiding disk accesses.


The cost of going back to disk can be seen in Turbo C++, where Borland got
around the DOS 640K limit using an overlay scheme (VROOM). Turbo C++ is much,
much slower than Turbo C, more on the order of traditional multi-pass compilers.


A final factor is the language. For example, Turbo Pascal has always been
smaller and faster than Turbo C. Part of this is because Turbo Pascal was
originally written in assembler (I think), but part is that Pascal is a
simpler language. Handling the ANSI C preprocessing steps quickly, efficiently,
and correctly is very difficult; handling it correctly without regard to
execution time is straightforward in comparison.
Languages without preprocessors avoid this complication.
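

Two standard corner cases (my own examples, not from the original article) give
a flavor of why getting the preprocessor both right and fast is painful:

    /* 1. Operands of ## are not macro-expanded before pasting. */
    #define A          1
    #define CAT(a, b)  a ## b
    int x = CAT(2, 3);     /* pastes to the single token 23              */
    /* CAT(A, 2) would paste to the token A2, not 12, because A is not
       expanded before the ## is applied.                                */

    /* 2. A macro name is not re-expanded inside its own expansion.      */
    #define f(n)  n + f(n)
    /* f(1) expands to:  1 + f(1)  -- the inner f(1) is left alone, so a
       conforming preprocessor has to track which macros are "active"
       during every rescan.                                              */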


--

