From: Hans-Peter Diettrich <DrDiettrich1@aol.com>
Newsgroups: comp.compilers
Date: 3 Aug 2006 11:03:22 -0400
Organization: Compilers Central
References: 06-06-044 06-06-055 06-07-023 06-07-031 06-07-109 06-07-114
Keywords: parse, performance
Posted-Date: 03 Aug 2006 11:03:21 EDT
Gabriel Dos Reis wrote:
> From experience, the performance of the GCC/g++ *parser* has worried,
> and continues to worry, users and corporations that base their system
> compilers on it.
IMO it's neither the parser nor the lexer; it's the time spent
preprocessing the source files.
Has anybody ever created a "stripped" preprocessed source file from a
C/C++ source file, with all #includes and macro invocations expanded
and all unused declarations removed? It would be interesting to compare
the compile time of such a preprocessed file against the compile time
of the raw source file.
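
One rough way to run that experiment, assuming GCC and made-up file
names, is to let the driver emit the preprocessed translation unit and
then time both compilations separately. Note that this only expands
#includes and macros; stripping unused declarations would still have to
be done by hand or by a separate tool:

  g++ -E big_source.cpp -o big_source.ii   (preprocess only)
  time g++ -c big_source.cpp               (raw source: preprocessing + compilation)
  time g++ -c big_source.ii                (.ii is taken as already-preprocessed C++,
                                            so this times compilation alone)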
Perhaps windows.h is an extreme example, because it adds more than 2 MB
to a preprocessed source file, even with comments and #ifs properly
removed and only file and line information retained. Now consider the
impact of the memory allocations resulting from that amount of text,
and the growth of the symbol and other tables, possibly slowing down
the further processing of the text.
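
As a side note on how much of that text is avoidable at all, windows.h
itself has a well-known switch for leaving out a large set of rarely
used sub-headers. A minimal sketch (the macro is real; the actual size
saving depends on the SDK version and isn't measured here):

  // Hypothetical Windows-only translation unit.
  #define WIN32_LEAN_AND_MEAN   // skip rarely used parts of the Windows headers
  #include <windows.h>

  int main()
  {
      // winuser.h is still part of the lean subset, so this still compiles
      MessageBoxA(NULL, "hello", "demo", MB_OK);
      return 0;
  }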
I also don't know what impact namespaces have on the compilation of
such a source text. Even if namespaces can be selectively included in
or excluded from the symbol tables, I'm not sure how the tables for
such a namespace are actually created. How does a compiler know which
sources to inspect for contributions to a namespace?
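
For what it's worth, the part that makes this hard is that C++
namespaces are open: any header the translation unit happens to include
may reopen a namespace and add to it, so the compiler only ever sees
the contributions the preprocessor has already pulled in. A small
sketch with made-up names:

  // lib_a.h (hypothetical)
  namespace util { int parse(const char *s); }

  // lib_b.h (hypothetical)
  namespace util { double parse_double(const char *s); }  // reopens the same namespace

  // client.cpp
  #include "lib_a.h"
  #include "lib_b.h"
  // Here the symbol table entry for 'util' holds both declarations; a header
  // that was never included contributes nothing, so the compiler never has to
  // go hunting through other sources on its own.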
DoDi