Re: Fast code vs. fast compile (Thomas Charles CONWAY)
22 Jan 1997 23:17:16 -0500

          From comp.compilers

Related articles
Beginner's Question... (Mihai Christodorescu) (1997-01-16)
Fast code vs. fast compile (1997-01-16)
Re: Fast code vs. fast compile (1997-01-19)
Re: Fast code vs. fast compile (Darius Blasband) (1997-01-22)
Re: Fast code vs. fast compile (John Lilley) (1997-01-22)
Re: Fast code vs. fast compile (1997-01-22)
Re: Fast code vs. fast compile (1997-01-22)
Re: Fast code vs. fast compile (1997-01-22)
Re: Fast code vs. fast compile (Darius Blasband) (1997-01-25)
Re: Fast code vs. fast compile (Walter Spector) (1997-01-25)
Re: Fast code vs. fast compile (1997-02-16)

From: (Thomas Charles CONWAY)
Newsgroups: comp.compilers
Date: 22 Jan 1997 23:17:16 -0500
Organization: Comp Sci, University of Melbourne
References: 97-01-122 97-01-137 97-01-159 97-01-165
Keywords: performance, practice, comment

Darius Blasband <> writes:

>Dennis Yelle <> wrote:
>As far as my own experience is concerned, compilation speed remains
>very important, especially in cases where a low-level language such as
>C is used as an intermediate language in a large compilation context.
>In such cases, the amount of code that must be compiled can reach
>1,000,000 lines (that is approximately the amount of C code that
>is recompiled here every night. It only represents 140,000 lines of
>YAFL code, compiled twice, once with full debugging support, once with
>full optimization enabled).

Indeed. Mercury also uses C (GNU C, to be more precise) as an
intermediate language. The Mercury source for the library and compiler
comes to about 120,000 lines. The generated C code is about 650,000
lines. To bootstrap-check the compiler each night we do three builds:
use the installed compiler to compile the up-to-date sources (stage 1),
then use that to compile a stage 2 compiler, and use the stage 2
compiler to build the stage 3 .c files - we compare the stage 2 and
stage 3 .c files to check that they are the same. (Nothing startling
about this - it's just simple bootstrapping.) Thus, we end up compiling
roughly 2,000,000 lines of C every night (we compile the library a few
more times for different configurations, such as without GC support,
with profiling support, etc.).

On typical inputs, the Mercury compiler takes roughly the same amount
of time to compile Mercury source to C as gcc takes to compile the C
to object code. However, on large tables of facts the Mercury compiler
performs poorly, because input of this kind triggers some expensive
algorithms, so the Mercury->C compile time increases dramatically. To
fix this problem we have a student adding a facility to the compiler
to treat tables of facts specially. This has drastically reduced the
Mercury->C compile time, and he has compiled tables of ~500,000 facts.
The C that they compile to consists of large static arrays containing
the data, plus small bits of code to index them. Interestingly enough,
gcc doesn't cope with these too well. We have observed, for a large
table containing about 500,000 facts, the Mercury->C stage take about
3 minutes, C->asm about 3-4 minutes, and asm->object about 7 minutes.
Moreover, the assembler consumed very large amounts of memory. (I
forget the exact figure now, but it was something like 100-200Mb.
Fortunately, we were doing it on an Alpha with 8Gb of main memory.) I
was a bit surprised that the assembler should take so long and use so
much memory. Does anyone have any experience with this kind of thing?
Perhaps there is some other way of storing that data that has similar
access times (i.e. O(1)) but is more C-compiler/assembler friendly.

Thomas Conway MAKE BIGNUMS FAST!!!!
AD DEUM ET VINUM Use Gödel encoding.
[Yeah, I've had trouble with large data tables in GCC as well. I circumvented
it by leaving the data in a file and reading it in at runtime. -John]
