Compile time of C++ vs C# email@example.com (Shirsoft) (2009-09-01)
Re: Compile time of C++ vs C# firstname.lastname@example.org (Marco van de Voort) (2009-09-02)
Re: Compile time of C++ vs C# DrDiettrich1@aol.com (Hans-Peter Diettrich) (2009-09-02)
Re: Compile time of C++ vs C# email@example.com (BGB / cr88192) (2009-09-02)
Re: Compile time of C++ vs C# firstname.lastname@example.org (Stephen Horne) (2009-09-03)
Re: Compile time of C++ vs C# email@example.com (BGB / cr88192) (2009-09-03)
Re: Compile time of C++ vs C# firstname.lastname@example.org (Olivier Lefevre) (2009-09-04)
From: "BGB / cr88192" <email@example.com>
Date: Thu, 3 Sep 2009 09:55:30 -0700
Posted-Date: 03 Sep 2009 13:13:02 EDT
"Marco van de Voort" <firstname.lastname@example.org> wrote in message
> On 2009-09-01, Shirsoft <email@example.com> wrote:
>> [The optimizer is usually the slowest part of a compiler and I would
>> guess that
>> MSIL offers fewer opportunities than native code. -John]
> I'd bet on not parsing include files and not restarting the compiler
> binary for every compilation unit.
this is related to a recent idea I had for speeding up C compile times in my
framework (not yet implemented though):
I can compile groups of files at a time, essentially lumping them together
so that they all share more or less the same state (i.e., trying to include
the same header again will almost invariably be pruned away by its include
guard...).
this way, the several MB of random crap typically pulled in from headers is
parsed only once for all of the files in a particular grouping (leaving
instead a combined lump of maybe several hundred kB for everything in the
group).
this "should" allow compiling a group of files in similar time to compiling
a single file.
however, either way, I still need to wait several seconds or more for all of
this to compile, so this would be more a means of improving performance
scalability than of reducing the total cost of compilation.
the other option, which I have been perpetually too lazy to mess with, is to
clean up and reorganize my project headers so that compiling things pulls in
far less random stuff.
there is a possible cost though:
compiling files in a group is not strictly semantically equivalent to
compiling them individually, which "could" raise issues in some rare cases
(or bring up standards-conformance concerns).
for example, stuff which technically shouldn't be visible in a given
compilation unit may become visible.
another possible risk is that of compilation units containing conflicting
definitions, but I think this should be rare in practice, and is unlikely to
pop up in situations in which grouping would be used (most likely for
collections of files representing a common library or subsystem).
from observations, I suspect MSVC may be doing something vaguely similar
when one passes a whole bunch of files on the command line to produce a DLL
(I've noticed that MSVC goes through the files much quicker than GCC does,
typically several per second versus one every few seconds, and also that it
seems to emit all of its warning/error messages at once, apparently
intermixed across the various files passed on the command line).
however, I don't know the details, such as whether MSVC has a mechanism to
avoid cross-visibility between compilation units, or what exactly it is
doing, ...
so, amusingly, it seems that in rebuilding my project, much more of the time
goes into 'make' than into 'cl'.
'make', however, does not seem to use much processor time (CPU load stays
very low while it is doing its thing), making me suspect it is mostly
IO-bound...
so, yeah, MSVC has its good points and its bad points...