Related articles
[22 earlier articles]
Re: Alternative C compilers on x86_64 Linux? arnold@skeeve.com (2016-09-29)
Re: Alternative C compilers on x86_64 Linux? arnold@skeeve.com (2016-09-29)
Re: Alternative C compilers on x86_64 Linux? bc@freeuk.com (BartC) (2016-09-29)
Re: Alternative C compilers on x86_64 Linux? gneuner2@comcast.net (George Neuner) (2016-09-29)
Re: Alternative C compilers on x86_64 Linux? DrDiettrich1@netscape.net (Hans-Peter Diettrich) (2016-09-30)
Re: Alternative C compilers on x86_64 Linux? arnold@skeeve.com (2016-09-30)
Re: Alternative C compilers on x86_64 Linux? bc@freeuk.com (BartC) (2016-09-30)
Re: Alternative C compilers on x86_64 Linux? bc@freeuk.com (BartC) (2016-09-30)
Re: Alternative C compilers on x86_64 Linux? DrDiettrich1@netscape.net (Hans-Peter Diettrich) (2016-09-30)
Re: Alternative C compilers on x86_64 Linux? bc@freeuk.com (BartC) (2016-10-01)
Re: Alternative C compilers on x86_64 Linux? bc@freeuk.com (BartC) (2016-10-17)
From: BartC <bc@freeuk.com>
Newsgroups: comp.compilers
Date: Fri, 30 Sep 2016 12:00:09 +0100
Organization: A noiseless patient Spider
References: 16-09-001 16-09-033 16-09-034 16-09-035 16-09-037 16-09-042 16-09-044
Keywords: C, performance
Posted-Date: 30 Sep 2016 11:30:11 EDT
On 30/09/2016 04:03, Hans-Peter Diettrich wrote:
> BartC wrote:
>
>> (I can tokenise C source code at some 10M lines per second on my PC ...
>
> IMO it's not the header files which slow down compilation, but the
> preprocessor macros, which require looking up and optionally expanding
> every token. ...
> So I don't think that it's fair or meaningful to compare a full-blown
> compiler with a bare tokenizer.
I seem to remember some comment in this group that tokenising accounts
for a big chunk of a compiler's runtime (50% or something).
While it is true that a full compile will take longer than raw
tokenising, should that factor really be of the order of 1000 times,
i.e. three orders of magnitude?
I was investigating whether a reasonable working compiler could be
developed that runs between one and two orders of magnitude slower than
the raw tokenising speed.
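Taking the 10M lines per second tokenising figure above as a baseline,
purely for illustration:

  1 order of magnitude slower  ->  ~1,000,000 lines/sec compiled
  2 orders of magnitude slower ->    ~100,000 lines/sec compiled
  3 orders of magnitude slower ->     ~10,000 lines/sec compiled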
One order of magnitude is probably a little optimistic, for C source
anyway (since preprocessing and so on is also needed), but two is easily
on the cards. I think Tiny C is within that range.
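For anyone who wants to establish a raw-tokenising baseline of their
own, here is a minimal sketch of the kind of timing harness I have in
mind. The trivial scanner below is only a stand-in (it lumps all
punctuation into single-character tokens and does no preprocessing);
it is not the tokeniser whose figures I quoted above, and the input
file is whatever you feed it.

/* Minimal sketch: time a trivial scan over a source file and report
   lines per second. The "tokeniser" only classifies identifiers,
   numbers and single-character punctuation - a stand-in, not a real
   C lexer. */
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <time.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file.c\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    rewind(f);
    char *src = malloc(n + 1);
    fread(src, 1, n, f);
    src[n] = 0;
    fclose(f);

    clock_t t0 = clock();
    long lines = 0, tokens = 0;
    for (char *p = src; *p; ) {
        if (*p == '\n') { ++lines; ++p; }
        else if (isspace((unsigned char)*p)) ++p;
        else if (isalpha((unsigned char)*p) || *p == '_') {
            while (isalnum((unsigned char)*p) || *p == '_') ++p;
            ++tokens;
        }
        else if (isdigit((unsigned char)*p)) {
            while (isalnum((unsigned char)*p) || *p == '.') ++p;
            ++tokens;
        }
        else { ++p; ++tokens; }   /* punctuation: one char per token */
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("%ld lines, %ld tokens in %.3f s (%.1f Mlines/s)\n",
           lines, tokens, secs, secs > 0 ? lines / secs / 1e6 : 0.0);
    free(src);
    return 0;
}

Build it with any C compiler, run it over a large amalgamated source
file, and compare the reported lines/sec against a full "gcc -c" or
"tcc -c" of the same file to get the slow-down factor being discussed.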
--
Bartc