From: Joachim Durchholz <email@example.com>
Date: 20 Jan 2003 23:57:58 -0500
MICHAEL DILLON wrote:
> I've been developing for twenty years, and ever since I've been
> allowed to have an opinion I've insisted that code ready for final
> testing and deployment be optimized. I'm currently responsible for my
> program's development strategy, and was recently blindsided by
> resistance to this approach. Developers are stating that optimized
> code produces errors and makes debugging more difficult.
It's a question of circumstances. If you're compiling with a single
compiler, for a single platform, the sentiment is true. In that
situation, you couldn't care less whether your code is strictly valid
C++; all you care about is whether the code does the Right Thing on
your target platform.
An example from personal experience: an application we were using
crashed when we moved to (optimized) production code. It turned out
that a callback had been declared with the wrong signature, which
corrupted the stack in the production build while running perfectly
well in debug mode. The callback didn't use its parameters, so the
mismatch never mattered to the program's logic.
The point is: I spent an entire week debugging at assembly level to
fix a bug that was entirely irrelevant to the logic of the
application. From this perspective, it was a total waste of time; the
decision to run the application with optimizations turned on didn't do
the project any good (since the final application turned out to be
fast enough even with debug code inside) but wasted a man-week of time
(and added a lot of grey hairs to my project leader's head).
Michael's project is, of course, different: it's using two compilers
on two platforms, with a multiplicatively increased probability that
compiler and/or platform will change over the lifetime of the project.
Under these assumptions, it's probably worth the effort to make the
code clean enough to work whether it's optimized or not.
I agree with Michael's colleagues that performance is rarely a reason
to turn optimization on. And other than that, I see no real reason to
switch it on. One can use it as a bug detector, but I'd think that
there are better ways - letting the optimizer uncover bugs is a very
inefficient way to pinpoint dubious practices. One can use it as a
quality checker - if the optimized program produces
bit-for-bit-compatible results with the unoptimized version, you have
a higher confidence that it's correct. (Unfortunately, bit-for-bit
compatibility is difficult to check. File and other operating system
object handles are beyond your ability to reproduce. If the
application is interactive, it's difficult to produce equivalent input
streams for the two program versions.)
I agree with John that bugs in a compiler's optimizer are rare.
Actually, I have never encountered one (which is partly because I had
no access to optimizing compilers when they were new, buggy, and
expensive *g*).
Just my 5c, YMMV, and all that.