From: conway@mundook.cs.mu.OZ.AU (Thomas Charles CONWAY)
Newsgroups: comp.lang.scheme,comp.compilers
Date: 22 May 1997 22:22:07 -0400
Organization: Comp Sci, University of Melbourne
References: 97-05-183 97-05-193
Keywords: C, assembler
fjh@mundook.cs.mu.OZ.AU (Fergus Henderson) writes:
>Although we have encountered a few bugs in various versions of gcc
>that caused gcc to abort with internal errors, they have generally
>been easy to avoid -- typically all you need to do is to compile the
>offending file at a lower optimization level. We've carefully tested
>more than a million lines of generated C code on four architectures
>(and ported to several more architectures), and as far as I can recall
>we've only encountered one gcc bug that required us to change the way
>we generate code.
Several people have mentioned that optimization is a problem -
especially for compile time. We did have to experiment to get
reasonable compile times with optimization enabled.
As I recall, gcc uses algorithms that are superlinear (O(n^2)?) in the
"size" of the function. Therefore, to get reasonable compile-time
performance, it is necessary to split the generated C code into
several functions. On the other hand, if you generate too many
functions, the assembler seems to be really slow - presumably the
extra function prologues and epilogues, as well as the other bits and
pieces, expand the size of the assembly code significantly.
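To make the trade-off concrete, here is a minimal sketch (not the
Mercury compiler's actual emitter) of how a code generator might cap
the number of statements per emitted C function and start a fresh
function once a budget is exceeded. The names emit_stmt, begin_function,
module_NN, and the MAX_STMTS value are all hypothetical, chosen only to
illustrate the idea.

	/* Sketch: split generated C into several medium-sized functions so
	 * that gcc's superlinear per-function passes stay cheap, without
	 * producing so many tiny functions that prologue/epilogue overhead
	 * bloats the assembly. */
	#include <stdio.h>

	#define MAX_STMTS 200           /* assumed per-function budget */

	static int stmt_count = 0;      /* statements in the current function */
	static int func_count = 0;      /* functions emitted so far */

	static void begin_function(FILE *out)
	{
	    fprintf(out, "static void module_%d(void)\n{\n", func_count++);
	    stmt_count = 0;
	}

	static void end_function(FILE *out)
	{
	    fprintf(out, "}\n\n");
	}

	/* Emit one statement, starting a new function when the current one
	 * has grown past the budget. */
	static void emit_stmt(FILE *out, const char *stmt)
	{
	    if (stmt_count >= MAX_STMTS) {
	        end_function(out);
	        begin_function(out);
	    }
	    fprintf(out, "    %s\n", stmt);
	    stmt_count++;
	}

	int main(void)
	{
	    begin_function(stdout);
	    for (int i = 0; i < 1000; i++)
	        emit_stmt(stdout, "x = x + 1;");
	    end_function(stdout);
	    return 0;
	}

Tuning the budget (and how the pieces get chained back together) is
where the experimentation mentioned above comes in.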
The importance of optimization from the C compiler will depend on the
C code you generate. For Mercury, the nature of the source language
(pure, single-assignment, etc.) means that there are lots of available
optimizations that the C compiler won't make anyway, but such things
as more careful instruction selection/scheduling are still important.
Thomas
--
Thomas Conway conway@cs.mu.oz.au