Re: Alternative C compilers on x86_64 Linux?


From: BartC <bc@freeuk.com>
Newsgroups: comp.compilers
Date: Sat, 1 Oct 2016 22:17:10 +0100
Organization: A noiseless patient Spider
References: 16-09-001 16-09-033 16-09-034 16-09-035 16-09-037 16-09-042 16-09-044 16-09-047 16-10-001
Keywords: C, performance

On 30/09/2016 19:15, Hans-Peter Diettrich wrote:
> BartC wrote:
>> I seem to remember some comment in this group that tokenising accounts
>> for a big chunk of a compiler's runtime (50% or something).
>
> This seems to be a reasonable figure for C, including all those nasty
> tasks which have to be done before a token can be passed on to the parser.
>
>> While it is true that doing a full compile will take longer than just
>> raw tokenising, should that factor be of the order of 1000 times longer,
>> or three orders of magnitude?
>
> Find out yourself.


I already know the answer. It /doesn't/ take that much longer.


> Replace the grammar of your tokenizer by the C
> grammar, and test again. I'd wonder if it would not reach the speed of
> your tokenizer. Then add the preprocessor with file inclusion, macro
> definition, recognition and expansion, conditional compilation, and test
> again. Then add Ansi and Unicode string literals and symbol tables, and
> test again.


I don't need to do this work because it's already been done by Tiny C.


Using a monolithic, working test program of 22K lines (25K lines
including standard headers for tcc; 27K with gcc), Tiny C compiled it in
no more than 0.07 seconds: about 360K lines per second (lps).


gcc took from 2.2 seconds (unoptimised) to over 8 seconds (optimised):
3-12K lps. (The single large module probably strained the global
optimiser; usually the difference between -O0 and -O3 is smaller.)
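
For anyone who wants to repeat this sort of measurement, here is a
rough sketch of the kind of harness I mean: wall-clock time around a
compiler run started with system(). The default command string, the
line count and the file name test.c are just placeholders for whatever
you are testing, not my actual setup:

/* Rough timing harness sketch: wall-clock time around a compiler run.
   Build: gcc timeit.c -o timeit   (older glibc may need -lrt) */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv)
{
    const char *cmd = (argc > 1) ? argv[1] : "tcc -c test.c";
    double lines = (argc > 2) ? atof(argv[2]) : 25000.0;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (system(cmd) != 0) {            /* run the compiler as a child */
        fprintf(stderr, "command failed: %s\n", cmd);
        return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%s: %.3f s, %.0f lines/sec\n", cmd, secs, lines / secs);
    return 0;
}

Then e.g.  ./timeit "gcc -O3 -c test.c" 27000  gives the elapsed time
and the lps figure directly.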


Tiny C must be parsing the same C grammar and expanding the same macros
as gcc. So whatever that takes, it can evidently be done at upwards of
350K lps.
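
For comparison, raw tokenising throughput can be estimated with a
deliberately crude scanner like the sketch below. It only recognises
identifiers, numbers, literals, comments and punctuation well enough
for a speed test (no preprocessing, no keyword lookup), and the file
name test.c is again a placeholder:

/* Crude lexer sketch, for estimating raw tokenising speed only. */
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <time.h>

int main(void)
{
    FILE *f = fopen("test.c", "rb");
    if (!f) { perror("test.c"); return 1; }

    /* slurp the whole file so I/O doesn't dominate the timing */
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    rewind(f);
    char *src = malloc(n + 1);
    fread(src, 1, n, f);
    src[n] = '\0';
    fclose(f);

    clock_t t0 = clock();
    long tokens = 0;
    for (char *p = src; *p; ) {
        if (isspace((unsigned char)*p)) { p++; }
        else if (p[0] == '/' && p[1] == '/') {          /* line comment */
            while (*p && *p != '\n') p++;
        } else if (p[0] == '/' && p[1] == '*') {        /* block comment */
            p += 2;
            while (*p && !(p[0] == '*' && p[1] == '/')) p++;
            if (*p) p += 2;
        } else if (isalpha((unsigned char)*p) || *p == '_') {   /* name */
            while (isalnum((unsigned char)*p) || *p == '_') p++;
            tokens++;
        } else if (isdigit((unsigned char)*p)) {        /* number, crudely */
            while (isalnum((unsigned char)*p) || *p == '.') p++;
            tokens++;
        } else if (*p == '"' || *p == '\'') {           /* string/char */
            char q = *p++;
            while (*p && *p != q) { if (*p == '\\' && p[1]) p++; p++; }
            if (*p) p++;
            tokens++;
        } else { p++; tokens++; }                       /* punctuation */
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    printf("%ld tokens in %.3f s\n", tokens, secs);
    free(src);
    return 0;
}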

