From: Hans-Peter Diettrich <DrDiettrich1@netscape.net>
Newsgroups: comp.compilers
Date: Fri, 30 Sep 2016 05:03:53 +0200
Organization: Compilers Central
References: 16-09-001 16-09-033 16-09-034 16-09-035 16-09-037 16-09-042
Injection-Info: miucha.iecc.com; posting-host="news.iecc.com:2001:470:1f07:1126:0:676f:7373:6970"; logging-data="24836"; mail-complaints-to="abuse@iecc.com"
Keywords: C, performance
Posted-Date: 29 Sep 2016 23:22:51 EDT
BartC wrote:
> I can tokenise C source code at some 10M lines per second on my PC
> (this excludes symbol table lookups; just raw tokenising). But gcc might
> process the same source at only 10K lines per second, even excluding
> dealing with headers.
IMO it's not the header files that slow down compilation, but the
preprocessor macros, which require looking up (and possibly expanding)
every single token. Almost every language has to allow for external
references that are read from some shared file. Next comes the sheer
number of declarations in the standard C header files, which consume a
lot of memory and can cause swapping, even if only a very small subset
of those declarations is actually used in the source code. So I don't
think it's fair or meaningful to compare a full-blown compiler with a
bare tokenizer.
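To make the per-token cost concrete, here is a minimal sketch of that
macro lookup, assuming a simple chained hash table. The names (struct
macro, lookup_macro, must_expand) are made up for illustration and are
not gcc's actual data structures; a real preprocessor also stores the
replacement list and parameter info.

    #include <string.h>

    /* Hypothetical macro-table entry. */
    struct macro {
        const char *name;
        const char *replacement;
        struct macro *next;     /* hash-bucket chain */
    };

    #define NBUCKETS 1024
    static struct macro *macro_table[NBUCKETS];

    static unsigned hash(const char *s)
    {
        unsigned h = 0;
        while (*s)
            h = h * 31u + (unsigned char)*s++;
        return h % NBUCKETS;
    }

    /* Every identifier token pays for this lookup, macro or not. */
    static struct macro *lookup_macro(const char *name)
    {
        struct macro *m;
        for (m = macro_table[hash(name)]; m; m = m->next)
            if (strcmp(m->name, name) == 0)
                return m;
        return NULL;
    }

    /* Called once per identifier by the scanner; a bare tokenizer
       would skip this step entirely. */
    int must_expand(const char *ident)
    {
        return lookup_macro(ident) != NULL;
    }

A bare tokenizer never calls anything like lookup_macro() at all, which
is part of why the 10M-vs-10K comparison above is so lopsided.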
DoDi