From: | BartC <bc@freeuk.com> |
Newsgroups: | comp.compilers |
Date: | Fri, 30 Sep 2016 12:28:50 +0100 |
Organization: | A noiseless patient Spider |
References: | 16-09-001 16-09-033 16-09-034 16-09-035 16-09-037 16-09-042 |
Keywords: | C, performance |
Posted-Date: | 30 Sep 2016 11:30:48 EDT |
On 29/09/2016 14:03, BartC wrote:
> [Tokenizing 10M lines/sec is pretty impressive. In compilers that don't
> do heavy optimization, the lexer is usually the slowest part, since it's
> the only thing that has to touch each character of the source code
> individually. -John]
This is what I was putting to the test. I actually struggled to recreate
that benchmark, but in the end managed to process actual C source (a
monolithic file containing all the CPython sources) at some 9.7 Mlps
(million lines per second). Figures do depend on the source style.
On another desktop (also cheap, but with Intel rather than AMD), it
managed 11.5 Mlps. With non-C sources, which have less 'busy' syntax, it
reached nearly 13 Mlps.
Figures exclude file-loading (although that made little difference on
the second machine).
Some more throughput could probably be squeezed out by applying some ASM
(or by writing in native C with more optimisation options, rather than
going through atrocious-looking intermediate code), but at the moment
this isn't a bottleneck.
(This test has some bits missing; for example, it parses floating-point
numbers but doesn't convert the character sequences to actual values.
On the input I used, though, floating point figured very little.

As for the 1 Mlps I quoted for an actual working compiler (although one
compiling to in-memory byte-code): that includes file-loading, but
assumes the OS has cached the file, as will normally be the case.

This compiler is case-insensitive, which slows down the tokeniser a
tiny bit. That's one tiny advantage of compiling C!)
--
Bartc