|Compile time of C++ vs C# firstname.lastname@example.org (Shirsoft) (2009-09-01)|
|Re: Compile time of C++ vs C# email@example.com (Marco van de Voort) (2009-09-02)|
|Re: Compile time of C++ vs C# DrDiettrich1@aol.com (Hans-Peter Diettrich) (2009-09-02)|
|Re: Compile time of C++ vs C# firstname.lastname@example.org (BGB / cr88192) (2009-09-02)|
|Re: Compile time of C++ vs C# email@example.com (Stephen Horne) (2009-09-03)|
|Re: Compile time of C++ vs C# firstname.lastname@example.org (BGB / cr88192) (2009-09-03)|
|Re: Compile time of C++ vs C# email@example.com (Olivier Lefevre) (2009-09-04)|
From: "BGB / cr88192" <firstname.lastname@example.org>
Date: Wed, 2 Sep 2009 13:16:10 -0700
Posted-Date: 02 Sep 2009 23:41:50 EDT
"Shirsoft" <email@example.com> wrote in message
> I am curious to know why C# code compiles much faster than similar-sized
> C++ code. How does MSIL help? Does having a common base class like object
> help in reducing compile times?
drawing on my experience as a compiler writer, I can give a fairly solid guess.
basically, in C and C++, a non-trivial amount of time and effort goes into
getting the code parsed (in my case, parser-related tasks are the majority
of the total time).
the main reason for this: headers.
now, in a language like C#, there are no headers or textual inclusion.
instead, it is possible to pool everything and essentially process the
entire assembly at once, and the parser manages to work, initially, with an
incomplete understanding of the program (admittedly, I am not entirely
certain how it works though...).
my guess is that it potentially uses 2 passes to do parsing:
a first pass to basically figure out all of the declarations (the top-level
structure for the assembly, skipping over function contents, ...);
a second pass to basically parse all of the code (at this point, it knows
the top-level structure, so it can employ an exact parser).
the first pass may not actually do a full parse (just guessing here), but
may rather use a heuristic/approximate parser, where it looks for items of
interest (via pattern matching) but ignores everything else.
I partially guess this by noting that C# imposes a few restrictions on the
syntax similar to those I employ for my automated header-generation tools,
which similarly use a pattern-matching parser strategy (it does not parse
the full syntax, rather it looks for things which look like declarations,
and I constrain the declarations such that the tool finds them).
given that MS's stuff uses pattern-matching at the machine-code level,
there is little to say they don't do something similar for processing C#
syntax as well.
hence, much faster parsing.
at this point, it can do whatever, and spew out the bytecode.
I don't suspect all that much work is done at this point (judging from my
compiler, it is very possibly just a few sorts of micro-optimizations, and
more or less flattening the AST's into bytecode).
> [The optimizer is usually the slowest part of a compiler and I would guess
> MSIL offers fewer opportunities than native code. -John]
judging from MSVC's ASM output on x64, well, their optimizer is not exactly
the best around...
basically, I infer that it doesn't even figure out some really basic/obvious
things, so I have some doubts that their optimizer is exactly what is using
up a lot of the time.
what gives me this impression:
movss [rsp+20h], xmm0
movss xmm0, [rsp+20h]
be ready to see things like this, a lot...
I suspect their optimizer is unable to figure out things like, for example,
that the value of a given variable is still located in a register, ...
similarly, they seem to copy structs around via this particular gold nugget:
mov rsi, ...
mov rdi, ...
mov rcx, ...
rep movsb
as opposed, to, say, moving the data around using registers.
granted, I have done little to test or prove that register-based moving is
faster, but I would think it would be.
for example, for a 16 byte struct:
movups xmm0, [...]
movups [...], xmm0
(or 'movaps', if it can be determined that the memory is aligned).
actually, apparently many of their optimizations are not so much about
generating highly optimized code, but rather about turning off things that
are rather questionable and serve to do little more than eat clock cycles
("well, who knows, maybe a user will do a division by zero and expect to get
a nice little status code from errno...").
my guess is that, for whatever reason, MS is just not "that" concerned about
performance on x64.
(and, I have noticed that my code manages to run maybe 2x-3x slower than
the same code built for x86 with gcc, from what I can tell...).
so, yeah, I am left doubting that this is where most of the compile time is
going.