From: BartC <bc@freeuk.com>
Newsgroups: comp.compilers
Date: Wed, 28 Sep 2016 18:16:54 +0100
Organization: A noiseless patient Spider
References: 16-09-001 16-09-033 16-09-034 16-09-035
Keywords: C, practice, comment
Posted-Date: 28 Sep 2016 22:28:00 EDT
On 28/09/2016 00:36, rugxulo@gmail.com wrote:
> TCC hasn't had a release in recent years either. It's fast because
> it does everything in one pass, and it does include its own
> assembler and linker. It doesn't optimize well, though.
Compilers can be fast anyway; I think we've just got used to very slow
ones such as gcc. My own compiler, even running /interpreted/, was
still double the speed of gcc!
I was reminded of how fast they can be when I recently developed a new
byte-code compiler and managed compilation speeds of up to a million
lines per second (one test hit 1.5Mlps). And that was on a low-end
five-year-old PC, running on a single core.
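(For anyone wanting to check such figures: lines per second is simple
to measure by counting newlines and timing the compile. A minimal
sketch in C, where compile_buffer() is a hypothetical stand-in for
whatever front end is being timed:

  #include <stddef.h>
  #include <time.h>

  /* compile_buffer() is hypothetical: substitute the entry point of
     the compiler actually being benchmarked. */
  extern void compile_buffer(const char *src, size_t len);

  double lines_per_second(const char *src, size_t len)
  {
      size_t lines = 0;
      for (size_t i = 0; i < len; i++)    /* count newlines */
          if (src[i] == '\n')
              lines++;

      clock_t t0 = clock();
      compile_buffer(src, len);
      double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

      return (double)lines / secs;
  }

At 1.5Mlps, a 150K-line project compiles in about 0.1 seconds.)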
The byte-code compiler used two passes; a compiler to native code
would need an extra pass, and the code generation is probably a bit
fiddlier. But I would estimate source to in-memory native code at
500K lines per second at least on the same machine, if I were to give
that compiler the same treatment.
(And because this is not for C, I can use all of that throughput instead
of wasting it repeatedly compiling the same header files.)
However, it is so fast that it would be necessary to consider carefully
what to do with the output: invoking an external assembler or linker
would be like hitting a brick wall. So it would need to generate an
entire executable directly, or prepare code to run in-memory.
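Running generated code in memory is straightforward, at least on
Windows: allocate an executable buffer with VirtualAlloc, copy the
machine code in, and call it through a function pointer. A minimal
sketch, with the x64 bytes for "mov eax, 42; ret" hard-coded as a
stand-in for real compiler output:

  #include <windows.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      /* x64 machine code for: mov eax, 42 ; ret */
      unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

      void *buf = VirtualAlloc(NULL, sizeof code,
                               MEM_COMMIT | MEM_RESERVE,
                               PAGE_EXECUTE_READWRITE);
      if (buf == NULL)
          return 1;

      memcpy(buf, code, sizeof code);

      int (*fn)(void) = (int (*)(void))buf;
      printf("%d\n", fn());             /* prints 42 */

      VirtualFree(buf, 0, MEM_RELEASE);
      return 0;
  }

(A production JIT would write the code under PAGE_READWRITE and then
flip the page to PAGE_EXECUTE_READ with VirtualProtect, but the
principle is the same.)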
These fast compilers (the byte-code one I've completed, and the
possible native-code one) will only compile an entire project at once.
(Because of dependencies, parallelising would be trickier. But at the
moment it's fast enough: the new byte-code compiler can entirely
re-compile my current native-code compiler from scratch in 0.03
seconds.)
But with C projects, yes, the range of fast tools is limited. It
doesn't help if you also become dependent on things such as 'configure'
or 'make' (I work on Windows, where there is less of that).
Maybe it needs people with experience of fast graphics or rendering,
who know how to speed things up, to apply those techniques to C
compilers! The C language does put some obstacles in the way (the same
header needs to be processed for the 50th time in case something comes
out different), but I think there is plenty that can be done.
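Include guards don't really help with that: they stop a header's body
being expanded twice within one translation unit, but the file still
has to be read and re-processed for every .c file that includes it,
precisely because macros defined earlier can change what it means. A
hypothetical example:

  /* util.h -- hypothetical header */
  #ifndef UTIL_H
  #define UTIL_H

  /* This is the "something that comes out different": the header's
     meaning depends on the macros in effect at each inclusion. */
  #ifdef USE_DOUBLE
  typedef double real;
  #else
  typedef float real;
  #endif

  real square(real x);

  #endif /* UTIL_H */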
TCC does a good job, but it's a shame about the code generation. (My
own native-code compiler has a naive, non-optimising code generator,
but its output is still much better than TCC's.)
(On one test of the fast compiler, my own generated code manages
800Klps. Converting to intermediate C and putting that through gcc -O3
gets it up to 1Mlps, but compiling it with TCC drops it to 350Klps (TCC
is not good with switch statements). Not so bad, but still ...)
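A byte-code dispatch loop is the classic case: an optimising compiler
lowers a dense switch to a single indexed jump, while a naive one emits
a chain of compares, so every dispatch pays for a run of tests. A
sketch with a hypothetical four-opcode instruction set:

  /* Hypothetical interpreter loop. With a jump table, dispatch is
     one indexed branch; as a compare chain, each opcode costs a
     series of tests. */
  int run(const unsigned char *pc)
  {
      int acc = 0;
      for (;;) {
          switch (*pc++) {
          case 0: acc += *pc++; break;    /* ADDI */
          case 1: acc -= *pc++; break;    /* SUBI */
          case 2: acc = -acc;   break;    /* NEG  */
          case 3: return acc;             /* HALT */
          default: return -1;             /* bad opcode */
          }
      }
  }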
--
Bartc
[Back when I was publishing the Journal of C Language Translation,
people did some interesting stuff with C header files, saving a
precompiled version for the usual case that subsequent runs don't
change any preprocessor stuff that would affect the code. -John]