Re: Using C as a back end

vbdis@aol.com (VBDis)
4 Nov 2000 01:38:42 -0500

          From comp.compilers

From: vbdis@aol.com (VBDis)
Newsgroups: comp.compilers
Date: 4 Nov 2000 01:38:42 -0500
Organization: AOL Bertelsmann Online GmbH & Co. KG http://www.germany.aol.com
References: 00-10-234
Keywords: C, performance
Posted-Date: 04 Nov 2000 01:38:42 EST

In article 00-10-234, "Joachim Durchholz" <joachim_d@gmx.de> writes:


>OTOH when compared to, say, Turbo Pascal, half a minute for
>recompiling just a few thousand lines of code is still ridiculously
>slow. I compiled 100.000s of lines in a minute on a 16-MHz machine
>years ago, it should be a snap on today's multi-100MHz machines. I
>don't know what's eating up all this time; there are two potential
>sources: 1) Language differences 2) Optimization


I have outlined these problems many times before, but I'll add my 2c once
again:


In Pascal every unit has a fixed interface, which must be parsed only once.
Afterwards the compiled interface can be referred to from any other unit,
without any further parsing of the "used" module.


In C the nesting of #included header files can be different for every
module, and the same header file can be included multiple times, with
different conditions (settings of preprocessor symbols). Therefore every
#include directive *must* result in another parse of the header file and
all of its #included files.


A "precompiled" header system must track not only the sequence of directly
#included files, but also the settings of all symbols at every #include.
When another #include is encountered, a check is required whether the
requested header file was already #included in a known environment, where
the environment consists of the values of all defined and undefined symbols.
You can imagine how expensive such a check of the environment can be, with a
look at all those #defines, typedefs and other declarations in your header
files.


Finally, an example to hopefully clarify the complexity of the environment:


#if CondX == 0
    typedef Tx...
    #define MacroZ ...
#elif CondY...
    typedef Tx...
    #define MacroZ ...
#else
    #define CondX 1
    #undef CondY
    #define MacroZ ...
#endif
Tx v1, v2, MacroZ(...), ...


Here the symbols Tx, MacroZ, CondY, and consequently the types of v1, v2
etc., can depend on any number of preconditions, and these #defines and
typedefs remain valid even after leaving this module or header file. That's
why it's easier and (in most cases) faster to parse the #included files once
more, instead of checking whether every single definition is equivalent to
its definition in some other (known) context.


DoDi
[There are some fairly simple heuristics that identify most of the cases where it's
safe to skip rereading an included file, but I agree that it's a mess. -John]




