Related articles
  Compile time of C++ vs C# shirsoft@gmail.com (Shirsoft) (2009-09-01)
  Re: Compile time of C++ vs C# marcov@stack.nl (Marco van de Voort) (2009-09-02)
  Re: Compile time of C++ vs C# DrDiettrich1@aol.com (Hans-Peter Diettrich) (2009-09-02)
  Re: Compile time of C++ vs C# cr88192@hotmail.com (BGB / cr88192) (2009-09-02)
  Re: Compile time of C++ vs C# sh006d3592@blueyonder.co.uk (Stephen Horne) (2009-09-03)
  Re: Compile time of C++ vs C# cr88192@hotmail.com (BGB / cr88192) (2009-09-03)
  Re: Compile time of C++ vs C# lefevrol@yahoo.com (Olivier Lefevre) (2009-09-04)
From: "BGB / cr88192" <cr88192@hotmail.com>
Newsgroups: comp.compilers
Date: Thu, 3 Sep 2009 09:55:30 -0700
Organization: albasani.net
References: 09-09-009 09-09-011
Keywords: C++, performance
Posted-Date: 03 Sep 2009 13:13:02 EDT
"Marco van de Voort" <marcov@stack.nl> wrote in message
> On 2009-09-01, Shirsoft <shirsoft@gmail.com> wrote:
<snip>
>> [The optimizer is usually the slowest part of a compiler and I would
>> guess that MSIL offers fewer opportunities than native code. -John]
>
> I'd bet on not parsing include files and not restarting the compiler
> binary for every compilation unit.
agreed...
this is related to a recent idea I had for speeding up C compile times in my
framework (not yet implemented though):
I can compile groups of files at a time, essentially lumping them together
so that they all share more or less the same state (meaning that trying to
include the same header again will almost invariably have its contents
pruned away by the include guard...).
this way, the several MB of random crap typically pulled in from headers
is only parsed once for all of the files in the particular grouping
(leaving instead a combined lump of maybe several hundred kB for all of
the stuff in the grouping).
this "should" allow compiling a group of files in about the same time as
compiling a single file.
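something similar can be approximated with a stock compiler by textually
lumping the members together (a minimal sketch; the file names here are
hypothetical):

    /* group_gfx.c -- hypothetical combined unit for one grouping.
       each shared header is include-guarded, so it is parsed once
       for the whole group rather than once per member file. */
    #include "gfx_common.h"  /* the big shared headers, parsed once */

    #include "gfx_draw.c"    /* the member files, lumped into one unit */
    #include "gfx_blit.c"
    #include "gfx_text.c"

the compiler then runs once over group_gfx.c instead of three times.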
however, either way, I still need to wait the several seconds or so for
all of this to compile, so this would be more a means of improving
scalability than of reducing the total cost of compilation.
the other option, which I have been perpetually too lazy to mess with, is
to clean up and reorganize my project headers so that compiling things
pulls in far smaller amounts of random stuff.
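one common form of this cleanup (again with hypothetical names) is
replacing a heavyweight include with a forward declaration wherever only a
pointer to the type is needed:

    /* renderer.h, before: drags in all of texture.h just for a pointer */
    #include "texture.h"
    void renderer_bind(struct Texture *tex);

    /* renderer.h, after: a forward declaration is enough here, and only
       renderer.c still needs the full texture.h */
    struct Texture;
    void renderer_bind(struct Texture *tex);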
there is a possible cost though:
compiling files in a group is not strictly semantically equivalent to
compiling them individually, which "could" raise issues in some rare cases
(or bring up standards-conformance concerns).
for example, stuff which technically shouldn't be visible in a given
compilation unit may be visible.
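for instance (hypothetical files), something like this fails to link when
the files are compiled separately (clampi has internal linkage), but
quietly works when they are lumped into one unit:

    /* util.c */
    static int clampi(int x, int lo, int hi)
        { return (x < lo) ? lo : ((x > hi) ? hi : x); }

    /* draw.c -- never declares clampi, yet sees it when grouped */
    int brightness(int x) { return clampi(x, 0, 255); }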
another possible risk is that of compilation units containing conflicting
stuff, but I think this should be rare in practice, and is unlikely to pop
up in the situations where grouping would be used (most likely collections
of files representing a common library or subsystem).
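the converse case (again hypothetical files): code that is fine compiled
separately but breaks when grouped:

    /* a.c */
    static int count = 0;   /* internal linkage: private to a.c */

    /* b.c -- fine on its own, but a redefinition error (conflicting
       types for 'count') if a.c and b.c land in the same unit */
    static float count = 0;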
from observations, I suspect MSVC may be doing something vaguely similar
when one passes a whole bunch of files on the command line to produce a
DLL (I have noticed that MSVC goes through the files much quicker than GCC
does, typically several per second vs one every few seconds, and also that
it seems to emit all of its warnings/errors at once, apparently mishmashed
between the various files passed on the command line).
however, I don't know the details, such as, for example, whether MSVC has
a mechanism to avoid cross-visibility between compilation units, or even
what exactly it is doing, ...
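for reference, the kind of invocation I mean (stock flags, hypothetical
file names):

    cl /LD a.c b.c c.c    (one compiler process builds the whole DLL)

    gcc -c a.c            (vs. the usual one-process-per-file pattern)
    gcc -c b.c
    gcc -c c.c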
so, amusingly, it seems that in rebuilding my project, much more of the
time is apparently going into 'make' than into 'cl'.
'make', however, does not seem to be a significant user of processor time
(CPU load is very low while make is doing its thing), making me suspect it
is mostly IO-bound...
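a crude way to check the split (assuming a Unix-style shell such as
Cygwin's, and noting that cl expands wildcards itself):

    time make         (full rebuild: make's own overhead plus the compiles)
    time cl /c *.c    (roughly the compiler's share by itself)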
so, yeah, MSVC has its good points and its bad points...